Sample records for optimal tuner selection

  1. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
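
    The first four records revolve around the same core computation: for a candidate choice of tuning parameters, derive the theoretical steady-state Kalman filter estimation error, then pick the candidate that minimizes it. The sketch below illustrates this in a deliberately simplified form. It assumes random-walk health-parameter dynamics and brute-forces subsets of parameters instead of the papers' iterative search over general linear combinations, and every matrix in it is invented for the demonstration.

        # Simplified tuner selection: choose m of p health parameters so that a
        # steady-state estimation-error proxy is minimized. Illustrative only;
        # not the papers' exact derivation.
        from itertools import combinations

        import numpy as np
        from scipy.linalg import solve_discrete_are

        rng = np.random.default_rng(0)
        p, m = 6, 3                     # p health parameters, m sensors (m < p)
        H = rng.normal(size=(m, p))     # sensor sensitivity to health parameters
        Qh = np.diag([1.0, 0.8, 0.6, 0.5, 0.4, 0.3])  # parameter variances
        R = 0.05 * np.eye(m)            # sensor noise covariance

        def mse_proxy(subset):
            """Steady-state error covariance trace for the estimated subset plus
            the variance of the neglected parameters (held at zero by the
            reduced filter)."""
            S = list(subset)
            A = np.eye(len(S))          # random-walk dynamics for the tuners
            P = solve_discrete_are(A.T, H[:, S].T, Qh[np.ix_(S, S)], R)
            rest = [i for i in range(p) if i not in subset]
            return np.trace(P) + np.trace(Qh[np.ix_(rest, rest)])

        best = min(combinations(range(p), m), key=mse_proxy)
        print("selected tuners:", best, "proxy MSE:", round(mse_proxy(best), 4))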

  2. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  3. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  4. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.

  5. Test of a coaxial blade tuner at HTS FNAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pischalnikov, Y.; Barbanotti, S.; Harms, E.

    2011-03-01

    A coaxial blade tuner has been selected for the 1.3 GHz SRF cavities of the Fermilab SRF Accelerator Test Facility. Results from tuner cold tests in the Fermilab Horizontal Test Stand are presented. Fermilab is constructing the SRF Accelerator Test Facility, a facility for accelerator physics research and development. This facility will contain a total of six cryomodules, each containing eight 1.3 GHz nine-cell elliptical cavities. Each cavity will be equipped with a Slim Blade Tuner designed by INFN Milan. The blade tuner incorporates both a stepper motor and piezo actuators to allow for both slow and fast cavity tuning. The stepper motor allows the cavity frequency to be statically tuned over a range of 500 kHz with an accuracy of several Hz. The piezos provide up to 2 kHz of dynamic tuning for compensation of Lorentz force detuning and variations in the He bath pressure. The first eight blade tuners were built at INFN Milan, but the remainder are being manufactured commercially following the INFN design. To date, more than 40 of the commercial tuners have been delivered.

  6. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya

    2012-02-15

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). Its first stage can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I_FC with a mobile plate tuner: I_FC is affected by the position of the mobile plate tuner because the chamber behaves like a circular cavity resonator. We aim to clarify the relation between I_FC and the ion saturation current in the ECRIS as a function of the position of the mobile plate tuner. We found that variation of the plasma density contributes largely to the variation of I_FC when the position of the mobile plate tuner is changed.

  7. Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay

    2012-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.

  8. Tuner design and RF test of a four-rod RFQ

    NASA Astrophysics Data System (ADS)

    Zhou, QuanFeng; Zhu, Kun; Guo, ZhiYu; Kang, MingLei; Gao, ShuLi; Lu, YuanRong; Chen, JiaEr

    2011-12-01

    A mini-vane four-rod radio frequency quadrupole (RFQ) accelerator has been built for neutron imaging. The RFQ will operate at 201.5 MHz, and its length is 2.7 m. The original electric field distribution along the electrodes is not flat, the resonant frequency needs to be tuned to the operating value, and the frequency must be compensated for temperature changes during high-power RF tests and beam tests. Because tuning such an RFQ is difficult, plate tuners and stick tuners were designed. This paper presents the tuner design, the tuning procedure, and the RF properties of the RFQ.

  9. Testing of the new tuner design for the CEBAF 12 GeV upgrade SRF cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edward Daly; G. Davis; William Hicks

    2005-05-01

    The new tuner design for the 12 GeV Upgrade SRF cavities consists of a coarse mechanical tuner and a fine piezoelectric tuner. The mechanism provides a 30:1 mechanical advantage, is pre-loaded at room temperature and tunes the cavities in tension only. All of the components are located in the insulating vacuum space and attached to the helium vessel, including the motor, harmonic drive and piezoelectric actuators. The requirements and detailed design are presented. Measurements of range and resolution of the coarse tuner are presented and discussed.

  10. Proof-of-principle Experiment of a Ferroelectric Tuner for the 1.3 GHz Cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, E. M.; Hahn, H.; Shchelkunov, S. V.

    2009-01-01

    A novel tuner has been developed by the Omega-P company to achieve fast control of the accelerator RF cavity frequency. The tuner is based on a ferroelectric material whose dielectric constant varies as a function of applied voltage. Tests using a Brookhaven National Laboratory (BNL) 1.3 GHz electron gun cavity have been carried out as a proof-of-principle experiment for the ferroelectric tuner. Two different methods were used to determine the frequency change achieved with the ferroelectric tuner (FT). The first method is based on an S11 measurement at the tuner port to find the reactive impedance change when the voltage is applied; the reactive impedance change is then used to estimate the cavity frequency shift. The second method is a direct S21 measurement of the frequency shift in the cavity with the tuner connected. The estimated frequency change from the reactive impedance measurement due to 5 kV is in the range between 3.2 kHz and 14 kHz, while 9 kHz is the result from the direct measurement. The two methods are in reasonable agreement. A detailed description of the experiment and the analysis is given in the paper.

  11. Inductive tuners for microwave driven discharge lamps

    DOEpatents

    Simpson, James E.

    1999-01-01

    An RF powered electrodeless lamp utilizing an inductive tuner in the waveguide which couples the RF power to the lamp cavity, for reducing reflected RF power and causing the lamp to operate efficiently.

  12. Coaxial stub tuner

    NASA Technical Reports Server (NTRS)

    Chern, Shy-Shiun (Inventor)

    1981-01-01

    A coaxial stub tuner assembly is comprised of a short circuit branch diametrically opposite an open circuit branch. The stub of the short circuit branch is tubular, and the stub of the open circuit branch is a rod which extends through the tubular stub into the open circuit branch. The rod is threaded at least at its outer end, and the tubular stub is internally threaded to receive the threads of the rod. The open circuit branch can be easily tuned by turning the threaded rod in the tubular stub to adjust the length of the rod extending into the open circuit branch.
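
    The tuning action described in this patent abstract reduces to standard transmission-line relations: a short-circuited stub of length l presents a reactance of +Z0*tan(beta*l), and an open-circuited stub presents -Z0/tan(beta*l), so turning the threaded rod changes the effective lengths and therefore the reactances. A small sketch of that calculation; the impedance, frequency, and target susceptance are illustrative, not taken from the patent.

        # Single-stub susceptance cancellation with textbook coax relations.
        import numpy as np

        Z0 = 50.0                  # characteristic impedance, ohms
        f = 2.0e9                  # operating frequency, Hz
        vp = 2.0e8                 # phase velocity in the coax dielectric, m/s
        beta = 2 * np.pi * f / vp  # propagation constant, rad/m

        # Find the shortest shorted-stub length whose susceptance cancels a
        # given shunt susceptance B_load (0.01 S here, made up for the demo).
        B_load = 0.01
        lengths = np.linspace(1e-4, vp / (2 * f), 20001)  # scan half a wavelength
        B_stub = -1.0 / (Z0 * np.tan(beta * lengths))     # shorted-stub susceptance
        l_star = lengths[np.argmin(np.abs(B_stub + B_load))]
        print(f"shorted-stub length = {l_star * 1000:.1f} mm")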

  13. Broadband power amplifier tube: Klystron tube 5K70SK-WBT and step tuner VA-1470S

    NASA Technical Reports Server (NTRS)

    Cox, H. R.; Johnson, J. O.

    1974-01-01

    The design concept, the fabrication, and the acceptance testing of a wide band Klystron tube and a remotely controlled step tuner for channel selection are discussed. The equipment was developed for the modification of an existing 20 kW Power Amplifier System which was provided to the contractor as GFE. The replacement Klystron covers a total frequency range of 2025 to 2120 MHz and is tunable to six (6) channels, each with a bandwidth of 22 MHz or greater. A 5 MHz overlap is provided between channels. Channels are selected at the control panel located in the front of the Klystron magnet or from one of three remote control stations connected in parallel with the step tuner. Included in this final report are the results of acceptance tests conducted at the vendor's plant and of the integrated system tests.

  14. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.

  15. Enhanced production of electron cyclotron resonance plasma by exciting selective microwave mode on a large-bore electron cyclotron resonance ion source with permanent magnet.

    PubMed

    Kimura, Daiju; Kurisu, Yosuke; Nozaki, Dai; Yano, Keisuke; Imai, Youta; Kumakura, Sho; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2014-02-01

    We are constructing a tandem-type ECRIS. The first stage is large-bore with a cylindrically comb-shaped magnet. We optimize the ion beam current and ion saturation current using a mobile plate tuner; both vary with the position of the plate tuner for 2.45 GHz, 11-13 GHz, and multi-frequency operation. Their peaks occur close to the position where the microwave mode forms a standing wave between the plate tuner and the extractor. The absorbed powers are estimated for each mode. We propose a new guiding principle: the excited microwave mode should be selected so that its mode number fits the multipole number of the comb-shaped magnets. We achieved excitation of the selected modes using the new mobile plate tuner to enhance ECR efficiency.

  16. Tuner control system of Spoke012 SRF cavity for C-ADS injector I

    NASA Astrophysics Data System (ADS)

    Liu, Na; Sun, Yi; Wang, Guang-Wei; Mi, Zheng-Hui; Lin, Hai-Ying; Wang, Qun-Yao; Liu, Rong; Ma, Xin-Peng

    2016-09-01

    A new tuner control system for spoke superconducting radio frequency (SRF) cavities has been developed and applied to cryomodule I of the C-ADS injector I at the Institute of High Energy Physics, Chinese Academy of Sciences. We have successfully implemented the tuner controller based on a Programmable Logic Controller (PLC) for the first time and achieved a cavity tuning phase error of ±0.7° (about ±4 Hz peak to peak) in the presence of electromechanical coupled resonance. This paper presents preliminary experimental results based on the PLC tuner controller under proton beam commissioning. Supported by Proton linac accelerator I of China Accelerator Driven sub-critical System (Y12C32W129).

  17. State-space self-tuner for on-line adaptive control

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.

    1994-01-01

    Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported previously. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost and high-performance microprocessors. Recently, we have developed a new hybrid state-space self-tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.

  18. Results of Accelerated Life Testing of LCLS-II Cavity Tuner Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huque, Naeem; Daly, Edward; Pischalnikov, Yuriy

    An Accelerated Life Test (ALT) of the Phytron stepper motor used in the LCLS-II cavity tuner has been conducted at JLab. Since the motor will reside inside the cryomodule, any failure would lead to a very costly and arduous repair. As such, the motor was tested for the equivalent of 30 lifetimes before being approved for use in the production cryomodules. The 9-cell LCLS-II cavity is simulated by disc springs with an equivalent spring constant. Plots of motor position vs. tuner position, measured via an installed linear variable differential transformer (LVDT), are used to measure motor motion. The titanium spindle was inspected for loss of lubrication. The motor passed the ALT, and is set to be installed in the LCLS-II cryomodules.

  19. Feedback control impedance matching system using liquid stub tuner for ion cyclotron heating

    NASA Astrophysics Data System (ADS)

    Nomura, G.; Yokota, M.; Kumazawa, R.; Takahashi, C.; Torii, Y.; Saito, K.; Yamamoto, T.; Takeuchi, N.; Shimpo, F.; Kato, A.; Seki, T.; Mutoh, T.; Watari, T.; Zhao, Y.

    2001-10-01

    A long-pulse discharge of more than 2 minutes was achieved using Ion Cyclotron Range of Frequency (ICRF) heating alone on the Large Helical Device (LHD). The final goal is steady-state operation (30 minutes) at the MW level. A liquid stub tuner was newly developed to cope with long-pulse discharges: its liquid surface level can be shifted under high RF voltage without breakdown. In long-pulse discharges the reflected power was observed to increase gradually, so shifting the liquid surface is expected to be indispensable for still longer discharges. An ICRF heating system incorporating a liquid stub tuner was fabricated to demonstrate feedback-controlled impedance matching. The required shift of the liquid surface was predicted using the forward and reflected RF powers as well as the phase difference between them. The liquid stub tuner was controlled by a multiprocessing computer system with CINOS (CHS Integration No Operating System) methods; the prime objective was to improve the performance of data processing and the response to control signals, and this method markedly reduced the number of program steps. Real-time feedback control was demonstrated in the system using a time-varying electric resistance.
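
    The prediction step mentioned in this abstract starts from a standard calculation: the complex reflection coefficient follows from the forward power, the reflected power, and the phase difference between them, and the load impedance follows from the reflection coefficient. A minimal sketch with illustrative values; the mapping from impedance to the required liquid-column shift is device-specific and omitted.

        # Load reflection coefficient and impedance from directional-coupler
        # readings. All numbers are invented for the demonstration.
        import numpy as np

        Z0 = 50.0                        # feed-line characteristic impedance, ohms
        P_fwd = 1.0e6                    # forward power, W
        P_ref = 4.0e4                    # reflected power, W
        phase = np.deg2rad(130.0)        # phase of reflected vs. forward wave

        gamma = np.sqrt(P_ref / P_fwd) * np.exp(1j * phase)  # reflection coeff.
        Z_load = Z0 * (1 + gamma) / (1 - gamma)              # load impedance
        vswr = (1 + abs(gamma)) / (1 - abs(gamma))
        print(f"|gamma| = {abs(gamma):.3f}, Z_load = {Z_load:.1f} ohm, VSWR = {vswr:.2f}")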

  1. Tuner of a Second Harmonic Cavity of the Fermilab Booster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terechkine, I.; Duel, K.; Madrak, R.

    2015-05-17

    Introducing a second harmonic cavity in the accelerating system of the Fermilab Booster promises significant reduction of the particle beam loss during the injection, transition, and extraction stages. To follow the changing energy of the beam during acceleration cycles, the cavity is equipped with a tuner that employs perpendicularly biased AL800 garnet material as the frequency tuning medium. The required tuning range of the cavity is from 75.73 MHz at injection to 105.64 MHz at extraction. This large range necessitates the use of a relatively low bias magnetic field at injection, which could lead to high RF loss power density in the garnet, or a strong bias magnetic field at extraction, which could result in high power consumption in the tuner's bias magnet. The required 15 Hz repetition rate of the device and the high sensitivity of the local RF power loss to the level of the magnetic field added to the challenges of the bias system design. In this report, the main features of a proposed prototype of the second harmonic cavity tuner are presented.

  2. A hydrogen maser with cavity auto-tuner for timekeeping

    NASA Technical Reports Server (NTRS)

    Lin, C. F.; He, J. W.; Zhai, Z. C.

    1992-01-01

    A hydrogen maser frequency standard for timekeeping has been developed at the Shanghai Observatory. The maser employs a fast cavity auto-tuner, which can detect and compensate for the frequency drift of the high-Q resonant cavity with a short time constant by means of a signal injection method, so that the long-term frequency stability of the maser standard is greatly improved. The cavity auto-tuning system and some maser data obtained from atomic time comparisons are described.

  3. Characterization of CNRS Fizeau wedge laser tuner

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  4. Tunable biasing magnetic field design of ferrite tuner for ICRF heating system in EAST

    NASA Astrophysics Data System (ADS)

    Manman, XU; Yuntao, SONG; Gen, CHEN; Yanping, ZHAO; Yuzhou, MAO; Guang, LIU; Zhen, PENG

    2017-11-01

    Ion cyclotron range of frequency (ICRF) heating has been used in tokamaks as one of the most successful auxiliary heating tools and has been adopted in EAST. However, the antenna load fluctuates with changing plasma parameters during ICRF heating. To ensure steady operation of the ICRF heating system in EAST, a fast ferrite tuner (FFT) has been developed to achieve real-time impedance matching. To meet the requirements of the FFT impedance matching system, the magnet system of the ferrite tuner (FT) was designed through numerical simulation and experimental analysis, the biasing magnetic circuit and the alternating magnetic circuit being the key parts studied. The overall design goal of the FT magnetic circuit is a DC bias magnetic field of 2000 Gs and an alternating magnetic field of ±400 Gs. An E-type magnetic circuit was adopted for the FT. The permanent magnet material is NdFeB with a thickness of 30 mm, chosen by setting the working point of NdFeB, and the excitation coil requires 25 ampere-turns according to theoretical calculation and simulation analysis. The coil inductance needed to generate the alternating magnetic field is about 7 mH. The eddy-current effect has been analyzed, and the magnetic field distribution has been measured by a Hall probe in the median plane of the biasing magnet. The test results show good performance of the biasing magnet, satisfying the design and operating requirements of the FFT.

  5. Selectivity optimization in green chromatography by gradient stationary phase optimized selectivity liquid chromatography.

    PubMed

    Chen, Kai; Lynen, Frédéric; De Beer, Maarten; Hitzel, Laure; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-11-12

    Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation by using a combination of different stationary phases. Previous work has shown that SOSLC offers excellent possibilities for method development, especially after the recent modification towards linear gradient SOSLC. The present work is aimed at developing and extending the SOSLC approach towards selectivity optimization and method development for green chromatography. Contrary to current LC practices, a green mobile phase (water/ethanol/formic acid) is hereby preselected and the composition of the stationary phase is optimized under a given gradient profile to obtain baseline resolution of all target solutes in the shortest possible analysis time. With the algorithm adapted to the high viscosity of ethanol, the principle is illustrated with a fast, full baseline resolution of a randomly selected mixture composed of sulphonamides, xanthine alkaloids and steroids.

  6. Self-extinction through optimizing selection.

    PubMed

    Parvinen, Kalle; Dieckmann, Ulf

    2013-09-21

    Evolutionary suicide is a process in which selection drives a viable population to extinction. So far, such selection-driven self-extinction has been demonstrated in models with frequency-dependent selection. This is not surprising, since frequency-dependent selection can disconnect individual-level and population-level interests through environmental feedback. Hence it can lead to situations akin to the tragedy of the commons, with adaptations that serve the selfish interests of individuals ultimately ruining a population. For frequency-dependent selection to play such a role, it must not be optimizing. Together, all published studies of evolutionary suicide have created the impression that evolutionary suicide is not possible with optimizing selection. Here we disprove this misconception by presenting and analyzing an example in which optimizing selection causes self-extinction. We then take this line of argument one step further by showing, in a further example, that selection-driven self-extinction can occur even under frequency-independent selection.

  7. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm, and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved controls over isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
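
    A toy sketch of the "selective" idea from the Methods section: each term of the objective is computed from either the worst-case dose or the nominal dose. Here the target-coverage term uses the voxel-wise worst case (the coldest dose over scenarios) while the normal-tissue term uses the nominal scenario only, and the weights are found by projected subgradient descent. The influence matrices, scenario count, and prescription are all invented; a clinical system would use physical dose-influence matrices and a production optimizer.

        # Selective worst-case/nominal objective, minimized by projected
        # subgradient descent on nonnegative spot weights. Toy data only.
        import numpy as np

        rng = np.random.default_rng(1)
        n_spots, n_tgt, n_oar, n_scen = 40, 60, 60, 5
        A_tgt = [rng.uniform(0, 1, (n_tgt, n_spots)) for _ in range(n_scen)]
        A_oar = [rng.uniform(0, 0.5, (n_oar, n_spots)) for _ in range(n_scen)]
        d_presc = 1.0                        # prescribed target dose (arbitrary)

        x = np.full(n_spots, 0.05)           # spot weights, constrained >= 0
        lr = 1e-3
        for _ in range(2000):
            doses_t = np.stack([A @ x for A in A_tgt])  # scenario target doses
            worst_t = doses_t.min(axis=0)               # worst case: coldest dose
            s_star = doses_t.argmin(axis=0)             # active scenario per voxel
            nominal_o = A_oar[0] @ x                    # nominal-scenario OAR dose
            # subgradient of sum((worst_t - d)^2) + sum(nominal_o^2)
            g = 2 * A_oar[0].T @ nominal_o
            for v in range(n_tgt):
                g += 2 * (worst_t[v] - d_presc) * A_tgt[s_star[v]][v]
            x = np.maximum(x - lr * g, 0.0)             # project onto x >= 0

        print("worst-case target dose range:",
              round(float(worst_t.min()), 3), "to", round(float(worst_t.max()), 3))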

  8. Optimal Item Selection with Credentialing Examinations.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…

  9. Selective Optimization

    DTIC Science & Technology

    2015-07-06

    Ahmed, Shabbir; Dey, Santanu S. (AFOSR grant FA9550-12-1-0154). Abstract fragment: …standard mixed-integer programming (MIP) formulations of selective optimization problems. While such formulations can be attacked by commercial…

  10. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, G. M.; Fitzgerald, E.; Johnson, D. K.

    2014-02-12

    Active stub tuning with a fast ferrite tuner (FFT) allows the system to respond dynamically to changes in the plasma impedance, such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the “forbidden region.” The relative phase shift between adjacent columns of an LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave, particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma, provided by a linear coupling model, with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.
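
    The cascading idea in this abstract, chaining network descriptions of stubs, lines, and load, can be illustrated with textbook ABCD matrices. The sketch below evaluates the input reflection of a shunt-stub / line / shunt-stub network and grid-searches the stub lengths for a match. The frequency, geometry, and load are invented, and the real system described above would cascade measured scattering matrices instead.

        # Double-stub match evaluated by cascading ABCD matrices.
        import numpy as np

        Z0 = 50.0
        lam = 0.066                      # guide wavelength, m (illustrative)
        beta = 2 * np.pi / lam           # propagation constant, rad/m

        def line(l):                     # ABCD matrix of a lossless line
            return np.array([[np.cos(beta * l), 1j * Z0 * np.sin(beta * l)],
                             [1j * np.sin(beta * l) / Z0, np.cos(beta * l)]])

        def stub(l):                     # ABCD matrix of a shunt shorted stub
            Y = 1.0 / (1j * Z0 * np.tan(beta * l))
            return np.array([[1, 0], [Y, 1]])

        def gamma_in(l1, d, l2, Z_L):
            """Reflection at the input of: stub(l1) -> line(d) -> stub(l2) -> load."""
            A, B, C, D = (stub(l1) @ line(d) @ stub(l2)).ravel()
            Z_in = (A * Z_L + B) / (C * Z_L + D)
            return (Z_in - Z0) / (Z_in + Z0)

        Z_L = 20 - 15j                   # mismatched load (illustrative)
        d = lam / 4                      # stub spacing
        ls = np.linspace(1e-3, 0.032, 200)
        best = min(((l1, l2) for l1 in ls for l2 in ls),
                   key=lambda s: abs(gamma_in(s[0], d, s[1], Z_L)))
        print(f"stub lengths: {best[0]:.4f} m, {best[1]:.4f} m; "
              f"residual |gamma| = {abs(gamma_in(best[0], d, best[1], Z_L)):.4f}")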

  11. A fully integrated direct-conversion digital satellite tuner in 0.18 μm CMOS

    NASA Astrophysics Data System (ADS)

    Si, Chen; Zengwang, Yang; Mingliang, Gu

    2011-04-01

    A fully integrated direct-conversion digital satellite tuner for DVB-S/S2 and ABS-S applications is presented. A broadband noise-canceling Balun-LNA and passive quadrature mixers provide a high-linearity, low-noise RF front-end, while the synthesizer integrates the loop filter to reduce solution cost and system debug time. Fabricated in 0.18 μm CMOS, the chip achieves a less than 7.6 dB noise figure over the 900-2150 MHz L-band, while the measured sensitivity for the 4.42 MS/s QPSK-3/4 mode is -91 dBm at the PCB connector. The fully integrated integer-N synthesizer operating from 2150 to 4350 MHz achieves less than 1° of integrated phase error. The chip consumes about 145 mA from a 3.3 V supply with internal integrated LDOs.

  12. Optimel: Software for selecting the optimal method

    NASA Astrophysics Data System (ADS)

    Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina

    Optimel automates the process of selecting a solution method from the domain of optimization methods. Its practical novelty is that it saves time and money in exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. Its theoretical novelty is that a new method of knowledge structuring was used to obtain the domain. The Optimel domain covers an extended set of methods and their properties, which makes it possible to identify the level of existing scientific studies, enhance the user's expertise, expand the prospects the user faces, and open up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.

  13. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection with two objective functions, maximizing profit and minimizing cost, under limitations of resource availability and time, so that resources must be allocated across the projects. These resources are human resources, machine resources, and raw material resources, and the allocation must not exceed the predetermined budget. The problem can therefore be formulated mathematically as a multi-objective function with constraints to be fulfilled. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one multi-objective combinatorial optimization method that simplifies the project selection process at a large scale.
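
    A minimal sketch of the genetic-algorithm approach the abstract describes, with the two objectives scalarized into one penalized fitness (a common simplification; a faithful multi-objective treatment would keep profit and cost separate and track a Pareto front). The profits, costs, and budget are invented.

        # Genetic algorithm for 0/1 project selection under a budget.
        import random

        random.seed(7)
        profits = [90, 60, 140, 75, 55, 120, 80, 65]
        costs = [40, 25, 70, 30, 20, 60, 35, 30]   # one resource for brevity
        budget = 150

        def fitness(bits):
            cost = sum(c for b, c in zip(bits, costs) if b)
            profit = sum(p for b, p in zip(bits, profits) if b)
            return profit - 10 * max(0, cost - budget)  # penalize overruns

        def crossover(a, b):                  # one-point crossover
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(bits, rate=0.1):           # independent bit flips
            return [b ^ (random.random() < rate) for b in bits]

        pop = [[random.randint(0, 1) for _ in profits] for _ in range(30)]
        for _ in range(100):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:10]                  # elitist survivor selection
            pop = elite + [mutate(crossover(random.choice(elite),
                                            random.choice(elite)))
                           for _ in range(20)]

        best = max(pop, key=fitness)
        print("selected projects:", [i for i, b in enumerate(best) if b],
              "profit:", sum(p for b, p in zip(best, profits) if b))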

  14. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    DTIC Science & Technology

    2016-09-01

    By Jason A. Schwartz. Abstract fragment: …describing how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost…

  15. Optimized Periocular Template Selection for Human Recognition

    PubMed Central

    Sa, Pankaj K.; Majhi, Banshidhar

    2013-01-01

    A novel approach is proposed for selecting a rectangular template around the periocular region that is optimally potent for human recognition. A template somewhat larger than the optimal one can be slightly more potent for recognition, but it heavily slows down the biometric system by making feature extraction computationally intensive and by increasing the database size. A smaller template, on the contrary, cannot yield desirable recognition, though it performs faster due to the lower computation required for feature extraction. These two contradictory objectives (namely, (a) minimizing the size of the periocular template and (b) maximizing recognition through the template) are jointly optimized in the proposed research. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases, and satisfactory results have been achieved. The template thus obtained can be used for recognition of individuals in an organization and can be generalized to recognize every citizen of a nation. PMID:23984370

  16. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
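
    The sketch below is not the paper's asymptotically optimal allocation rule, only a simplified sequential heuristic in the same spirit: after a first round of replications, each additional replication goes to the design whose sample mean sits closest to the top-m boundary relative to how often it has already been sampled. The designs and noise model are synthetic.

        # Sequential simulation-budget allocation for top-m subset selection
        # (simplified heuristic, not the OCBA-m rule from the paper).
        import numpy as np

        rng = np.random.default_rng(3)
        true_means = np.array([1.0, 1.2, 1.5, 1.6, 2.0, 2.3, 2.4, 3.0])
        k, m, budget, n0 = len(true_means), 3, 2000, 10

        def simulate(i):                 # one noisy observation of design i
            return rng.normal(true_means[i], 1.0)

        counts = np.full(k, n0)
        sums = np.array([sum(simulate(i) for _ in range(n0)) for i in range(k)])
        for _ in range(budget - n0 * k):
            means = sums / counts
            boundary = np.sort(means)[-m]          # m-th best sample mean
            # designs that are hard to separate from the boundary get priority
            priority = 1.0 / (np.abs(means - boundary) * np.sqrt(counts) + 1e-9)
            i = int(np.argmax(priority))
            sums[i] += simulate(i)
            counts[i] += 1

        top_m = np.argsort(sums / counts)[-m:]
        print("estimated top-m designs:", sorted(top_m.tolist()))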

  17. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Wang, J

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules; the solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm is used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters, each Pareto set containing 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning models using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  18. An experimental and theoretical investigation of a fuel system tuner for the suppression of combustion driven oscillations

    NASA Astrophysics Data System (ADS)

    Scarborough, David E.

    Manufacturers of commercial, power-generating, gas turbine engines continue to develop combustors that produce lower emissions of nitrogen oxides (NOx) in order to meet the environmental standards of governments around the world. Lean, premixed combustion technology is one technique used to reduce NOx emissions in many current power and energy generating systems. However, lean, premixed combustors are susceptible to thermo-acoustic oscillations, which are pressure and heat-release fluctuations that occur because of a coupling between the combustion process and the natural acoustic modes of the system. These pressure oscillations lead to premature failure of system components, resulting in very costly maintenance and downtime. Therefore, a great deal of work has gone into developing methods to prevent or eliminate these combustion instabilities. This dissertation presents the results of a theoretical and experimental investigation of a novel Fuel System Tuner (FST) used to damp detrimental combustion oscillations in a gas turbine combustor by changing the fuel supply system impedance, which controls the amplitude and phase of the fuel flowrate. When the FST is properly tuned, the heat release oscillations resulting from the fuel-air ratio oscillations damp, rather than drive, the combustor acoustic pressure oscillations. A feasibility study was conducted to prove the validity of the basic idea and to develop some basic guidelines for designing the FST. Acoustic models for the subcomponents of the FST were developed, and these models were experimentally verified using a two-microphone impedance tube. Models useful for designing, analyzing, and predicting the performance of the FST were developed and used to demonstrate the effectiveness of the FST. Experimental tests showed that the FST reduced the acoustic pressure amplitude of an unstable, model, gas-turbine combustor over a wide range of operating conditions and combustor configurations. Finally, combustor

  19. Optimization methods for activities selection problems

    NASA Astrophysics Data System (ADS)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, students can learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the three chosen criteria, which are soft skills, interesting activities, and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0. There are two priorities to be considered. The first priority, minimizing the budget for the activities, is achieved, since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: the results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activities selection problems.
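
    The AHP weighting step described above reduces to a small linear-algebra computation: build a pairwise-comparison matrix from the questionnaire answers, take its principal eigenvector as the weight vector, and check the consistency ratio. A sketch with invented comparison values for the three criteria (soft skills, interesting activities, performance):

        # AHP criterion weights from a pairwise-comparison matrix.
        import numpy as np

        # A[i, j] = how much more important criterion i is than criterion j
        A = np.array([[1.0, 3.0, 5.0],
                      [1 / 3, 1.0, 2.0],
                      [1 / 5, 1 / 2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        i = int(np.argmax(eigvals.real))
        w = np.abs(eigvecs[:, i].real)
        w /= w.sum()                           # normalized priority weights

        n = A.shape[0]
        ci = (eigvals.real[i] - n) / (n - 1)   # consistency index
        cr = ci / 0.58                         # Saaty's random index for n = 3
        print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))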

  1. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  2. Training set optimization under population structure in genomic selection.

    PubMed

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all traits except test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  3. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2017-04-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples were selected from mammogram images of the DDSM database, and a total of 50 features extracted from these samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier, and a support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.

  4. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  5. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    PubMed

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography-based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Second, to make biogeography-based optimization suitable for the feature selection problem, discrete migration and discrete mutation models are proposed to balance exploration and exploitation. Discrete biogeography-based optimization (DBBO) is then obtained by integrating these discrete migration and mutation models. Finally, DBBO is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. In comparison with a genetic algorithm, particle swarm optimization, a differential evolution algorithm and hybrid biogeography-based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature in terms of the quality of the solutions obtained.

  6. A Rational Analysis of the Selection Task as Optimal Data Selection.

    ERIC Educational Resources Information Center

    Oaksford, Mike; Chater, Nick

    1994-01-01

    Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)

  7. Optimizing event selection with the random grid search

    NASA Astrophysics Data System (ADS)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip

    2018-07-01

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
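
    The core of RGS is simple to state: take the observed variable values of signal events themselves as the candidate cut grid, evaluate each candidate cut on signal and background samples, and keep the best by some figure of merit. A one-variable Python sketch with synthetic data (the Gaussian samples and the s/sqrt(s+b) figure of merit are illustrative assumptions):

      import numpy as np

      rng = np.random.default_rng(2)
      sig = rng.normal(2.0, 1.0, 5000)    # synthetic signal variable
      bkg = rng.normal(0.0, 1.0, 50000)   # synthetic background variable

      def significance(cut):
          s = (sig > cut).sum()           # signal passing the cut
          b = (bkg > cut).sum()           # background passing the cut
          return s / np.sqrt(s + b) if s + b > 0 else 0.0

      # RGS: the cut grid is a random sample of the signal-event values.
      cuts = rng.choice(sig, size=1000, replace=False)
      best = max(cuts, key=significance)
      print(f"best cut: {best:.3f}, significance: {significance(best):.1f}")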

  8. Adaptive feature selection using v-shaped binary particle swarm optimization.

    PubMed

    Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
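
    A minimal sketch of the V-shaped transfer step in binary PSO (generic Python; |tanh(v)| is one common V-shaped choice, and a toy objective stands in for the correlation information entropy fitness):

      import numpy as np

      rng = np.random.default_rng(3)
      n_part, n_bits, n_iter = 20, 40, 60
      target = rng.random(n_bits) < 0.5

      def fitness(x):                          # toy objective
          return (x == target).mean()

      pos = rng.random((n_part, n_bits)) < 0.5
      vel = rng.normal(0, 1, (n_part, n_bits))
      pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[np.argmax(pbest_f)].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_part, n_bits))
          vel = (0.7 * vel + 1.5 * r1 * (pbest.astype(float) - pos)
                           + 1.5 * r2 * (gbest.astype(float) - pos))
          # V-shaped transfer: flip a bit with probability |tanh(v)|.
          flip = rng.random((n_part, n_bits)) < np.abs(np.tanh(vel))
          pos = pos ^ flip
          f = np.array([fitness(p) for p in pos])
          better = f > pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[np.argmax(pbest_f)].copy()
      print("best fitness:", pbest_f.max())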

  9. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850

  10. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-11-01

    Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.

  11. Optimizing event selection with the random grid search

    DOE PAGES

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...

    2018-02-27

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  12. Optimizing Event Selection with the Random Grid Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  13. Optimizing event selection with the random grid search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  14. Mode perturbation method for optimal guided wave mode and frequency selection.

    PubMed

    Philtron, J H; Rose, J L

    2014-09-01

    With a thorough understanding of guided wave mechanics, researchers can predict which guided wave modes will have a high probability of success in a particular nondestructive evaluation application. However, work continues to find optimal mode and frequency selection for a given application. This "optimal" mode could give the highest sensitivity to defects or the greatest penetration power, increasing inspection efficiency. Since material properties used for modeling work may be estimates, in many cases guided wave mode and frequency selection can be adjusted for increased inspection efficiency in the field. In this paper, a novel mode and frequency perturbation method is described and used to identify optimal mode points based on quantifiable wave characteristics. The technique uses an ultrasonic phased array comb transducer to sweep in phase velocity and frequency space. It is demonstrated using guided interface waves for bond evaluation. After searching nearby mode points, an optimal mode and frequency can be selected which has the highest sensitivity to a defect, or gives the greatest penetration power. The optimal mode choice for a given application depends on the requirements of the inspection. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    PubMed

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, this mathematical tool must be adjusted for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.
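
    Schematically, the parameter optimization amounts to a grid search that correlates the per-patient index with outcome and keeps the best-scoring parameter set. A toy Python sketch with synthetic data (the actual windowed-coherence SCP computation is not reproduced; the grid values are illustrative):

      import numpy as np
      from itertools import product
      from scipy.stats import pearsonr

      rng = np.random.default_rng(4)
      gos = rng.integers(1, 6, 25)                   # outcomes for 25 patients

      def scp_index(window, threshold, patient_gos):
          # Stand-in for the windowed-coherence SCP computation; synthetic
          # value weakly linked to outcome so the search has a signal.
          return -0.1 * threshold * patient_gos + rng.normal(0, 1.0 / window)

      best = None
      for window, threshold in product([60, 120, 300], [0.3, 0.5, 0.7]):
          scp = np.array([scp_index(window, threshold, g) for g in gos])
          r, _ = pearsonr(scp, gos)
          if best is None or abs(r) > abs(best[0]):
              best = (r, window, threshold)
      print(f"best |r|={abs(best[0]):.2f} at window={best[1]}s, threshold={best[2]}")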

  16. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    PubMed Central

    Faltermeier, Rupert; Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, this mathematical tool must be adjusted for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250

  17. Quantum dot laser optimization: selectively doped layers

    NASA Astrophysics Data System (ADS)

    Korenev, Vladimir V.; Konoplev, Sergey S.; Savelyev, Artem V.; Shernyakov, Yurii M.; Maximov, Mikhail V.; Zhukov, Alexey E.

    2016-08-01

    Edge emitting quantum dot (QD) lasers are discussed. It has been recently proposed to use modulation p-doping of the layers that are adjacent to QD layers in order to control the QDs' charge state. Experimentally it has been proven useful to enhance ground state lasing and suppress the onset of excited state lasing at high injection. These results have also been confirmed with numerical calculations involving solution of drift-diffusion equations. However, deep understanding of the physical reasons for such behavior and laser optimization requires analytical approaches to the problem. In this paper, under a set of assumptions we provide an analytical model that explains the major effects of selective p-doping. Capture rates of electrons and holes can be calculated by solving Poisson equations for electrons and holes around the charged QD layer. The charge itself is governed by the capture rates and the selective doping concentration. We analyzed this self-consistent set of equations and showed that it can be used to optimize QD laser performance and to explain the underlying physics.

  18. A parallel optimization method for product configuration and supplier selection based on interval

    NASA Astrophysics Data System (ADS)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce procurement risk and maximize enterprise profits, this study proposes to combine product configuration and supplier selection, expressing the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was applied to locate the Pareto-optimal solutions of this interval multiobjective optimization model.

  19. [Academic burnout and selection-optimization-compensation strategy in medical students].

    PubMed

    Chun, Kyung Hee; Park, Young Soon; Lee, Young Hwan; Kim, Seong Yong

    2014-12-01

    This study was conducted to examine the relationship between academic demand, academic burnout, and the selection-optimization-compensation (SOC) strategy in medical students. A total of 317 students at Yeungnam University, comprising 90 premedical course students, 114 medical course students, and 113 graduate course students, completed a survey that addressed the factors of academic burnout and the selection-optimization-compensation strategy. We analyzed variances in burnout and SOC strategy use by group, and stepwise multiple regression analysis was conducted. There were significant differences in emotional exhaustion and cynicism across groups and years in school. In the SOC strategy, there were no significant differences between groups except for elective selection. The second-year medical and graduate students experienced significantly greater exhaustion (p<0.001), and first-year premedical students experienced significantly higher cynicism (p<0.001). By multiple regression analysis, the academic burnout subfactor of emotional exhaustion was significantly affected by academic demand (p<0.001), with 46% of the variance explained. Cynicism was significantly affected by elective selection (p<0.05), and inefficacy was significantly influenced by optimization (p<0.001). To improve adaptation, prescriptive strategies and preventive support should be implemented with regard to academic burnout in medical school. Longitudinal and qualitative studies on burnout must be conducted.

  20. Opposing selection and environmental variation modify optimal timing of breeding.

    PubMed

    Tarwater, Corey E; Beissinger, Steven R

    2013-09-17

    Studies of evolution in wild populations often find that the heritable phenotypic traits of individuals producing the most offspring do not increase proportionally in the population. This paradox may arise when phenotypic traits influence both fecundity and viability and when there is a tradeoff between these fitness components, leading to opposing selection. Such tradeoffs are the foundation of life history theory, but they are rarely investigated in selection studies. Timing of breeding is a classic example of a heritable trait under directional selection that does not result in an evolutionary response. Using a 22-y study of a tropical parrot, we show that opposing viability and fecundity selection on the timing of breeding is common and affects optimal breeding date, defined by maximization of fitness. After accounting for sampling error, the directions of viability (positive) and fecundity (negative) selection were consistent, but the magnitude of selection fluctuated among years. Environmental conditions (rainfall and breeding density) primarily and breeding experience secondarily modified selection, shifting optimal timing among individuals and years. In contrast to other studies, viability selection was as strong as fecundity selection, late-born juveniles had greater survival than early-born juveniles, and breeding later in the year increased fitness under opposing selection. Our findings provide support for life history tradeoffs influencing selection on phenotypic traits, highlight the need to unify selection and life history theory, and illustrate the importance of monitoring survival as well as reproduction for understanding phenological responses to climate change.

  1. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  2. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest gap method and an exhaustive brute-force search technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted with good accuracy.
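
    The brute-force variant can be sketched directly: enumerate sensor subsets of a given size and score each by how well-conditioned the corresponding rows of the sensitivity matrix are, for instance by the smallest singular value (a generic Python sketch; the paper's exact metric, which also weighs noise resistance, may differ):

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(5)
      n_sensors, n_params, k = 10, 4, 5
      S = rng.normal(size=(n_sensors, n_params))   # toy sensitivity matrix

      def score(rows):
          # Smallest singular value of the sub-matrix: larger means the
          # health parameters are better distinguished by these sensors.
          return np.linalg.svd(S[list(rows)], compute_uv=False).min()

      best = max(combinations(range(n_sensors), k), key=score)
      print("optimal sensor set:", best, "score: %.3f" % score(best))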

  3. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  4. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model has changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
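
    For reference, the Expected Improvement criterion at a candidate point with surrogate mean mu and standard deviation sigma, relative to the best observed value f_best (minimization convention), is EI = (f_best - mu) Phi(z) + sigma phi(z) with z = (f_best - mu)/sigma. A direct Python transcription of this standard formula (not SAGE III-specific code):

      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, f_best):
          """EI for minimization, given the surrogate mean/std at a point."""
          sigma = np.maximum(sigma, 1e-12)       # guard against zero variance
          z = (f_best - mu) / sigma
          return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

      # Example: surrogate predicts 2.0 +/- 0.5 where the best run so far is 1.8.
      print(expected_improvement(2.0, 0.5, 1.8))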

  5. Optimization of selection for growth in Menz Sheep while minimizing inbreeding depression in fitness traits

    PubMed Central

    2013-01-01

    The genetic trends in fitness (inbreeding, fertility and survival) of a closed nucleus flock of Menz sheep under selection during ten years for increased body weight were investigated to evaluate the consequences of selection for body weight on fitness. A mate selection tool was used to optimize in retrospect the actual selection and matings conducted over the project period to assess if the observed genetic gains in body weight could have been achieved with a reduced level of inbreeding. In the actual selection, the genetic trends for yearling weight, fertility of ewes and survival of lambs were 0.81 kg, –0.00026% and 0.016% per generation. The average inbreeding coefficient remained zero for the first few generations and then tended to increase over generations. The genetic gains achieved with the optimized retrospective selection and matings were highly comparable with the observed values, the correlation between the average breeding values of lambs born from the actual and optimized matings over the years being 0.99. However, the level of inbreeding with the optimized mate selections remained zero until late in the years of selection. Our results suggest that an optimal selection strategy that considers both genetic merits and coancestry of mates should be adopted to sustain the Menz sheep breeding program. PMID:23783076

  6. Gradient stationary phase optimized selectivity liquid chromatography with conventional columns.

    PubMed

    Chen, Kai; Lynen, Frédéric; Szucs, Roman; Hanna-Brown, Melissa; Sandra, Pat

    2013-05-21

    Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation. By combining different stationary phases, SOSLC offers excellent possibilities for method development under both isocratic and gradient conditions. The commercial SOSLC protocol available so far utilizes dedicated column cartridges and corresponding cartridge holders to build up the combined column of different stationary phases. The present work is aimed at developing and extending the gradient SOSLC approach towards coupling conventional columns. Generic tubing was used to connect short commercially available LC columns. Fast, baseline separation of a mixture of 12 compounds comprising phenones, benzoic acids and hydroxybenzoates under both isocratic and linear gradient conditions was selected to demonstrate the potential of SOSLC. The influence of the connecting tubing on the accuracy of the predictions is also discussed.
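
    Under the additive isocratic SOSLC model, a compound's retention on a combined column follows from its retention factor on each stationary phase weighted by that segment's share of the column length. A hedged Python sketch of the prediction step (the values are illustrative):

      def retention_time(k_per_phase, phi, t0):
          """k_per_phase: retention factors on each phase; phi: length
          fractions of the segments; t0: dead time of the combined column.
          Additive isocratic SOSLC model."""
          k_combined = sum(k * f for k, f in zip(k_per_phase, phi))
          return t0 * (1.0 + k_combined)

      # Compound with k = 2.0 on C18 and 5.0 on phenyl, column 60% C18 / 40% phenyl.
      print(retention_time([2.0, 5.0], [0.6, 0.4], t0=1.2))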

  7. An opinion formation based binary optimization approach for feature selection

    NASA Astrophysics Data System (ADS)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics human-human interaction mechanisms based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact over an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high dimensional datasets reveal that the proposed algorithm outperforms the others.

  8. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
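
    A minimal discrete-time EKF skeleton in Python, with Jacobians taken numerically so that any nonlinear model f and output map h can be dropped in (a generic sketch, not the C-MAPSS40k implementation; the toy system at the end is an assumption for demonstration):

      import numpy as np

      def num_jac(func, x, eps=1e-6):
          """Numerical Jacobian of func at x."""
          fx = func(x)
          J = np.zeros((len(fx), len(x)))
          for i in range(len(x)):
              dx = np.zeros_like(x); dx[i] = eps
              J[:, i] = (func(x + dx) - fx) / eps
          return J

      def ekf_step(x, P, z, f, h, Q, R):
          # Predict: propagate the state through the nonlinear model.
          F = num_jac(f, x)
          x_pred, P_pred = f(x), F @ P @ F.T + Q
          # Update: correct with the measurement via the linearized output map.
          H = num_jac(h, x_pred)
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.solve(S, np.eye(len(z)))
          x_new = x_pred + K @ (z - h(x_pred))
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Toy example: mildly nonlinear scalar dynamics and quadratic measurement.
      f = lambda x: np.array([0.9 * x[0] + 0.1 * np.sin(x[0])])
      h = lambda x: np.array([x[0] ** 2])
      x, P = np.array([1.0]), np.eye(1)
      x, P = ekf_step(x, P, np.array([1.1]), f, h, 0.01 * np.eye(1), 0.1 * np.eye(1))
      print(x, P)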

  9. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.

  10. A novel channel selection method for optimal classification in different motor imagery BCI paradigms.

    PubMed

    Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin

    2015-10-21

    For sensorimotor rhythm based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work aims to investigate optimal channel selection in MI task paradigms with real-time feedback (two-class and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI tasks experiment and from two-class and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen was constructed based on the Relief algorithm, but was enhanced in two respects: a changed target sample selection strategy and the adoption of iterative computation, and thus performed more robustly in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The least number of channels that yielded the best classification accuracy were considered the optimal channels. One-way ANOVA was employed to test the significance of performance improvement among using the optimal channels, all channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods by achieving average classification accuracies of 85.2%, 94.1%, and 83.2% for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection. In addition, the results have shown that the numbers of optimal

  11. Simple summation rule for optimal fixation selection in visual search.

    PubMed

    Najemnik, Jiri; Geisler, Wilson S

    2009-06-01

    When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W.S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):4, 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for the Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to human.
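
    The ELM rule is a one-liner given the current posterior map and the retinotopic detectability map: convolve the posterior with the squared detectability map and fixate the maximum of the result. A small Python sketch with synthetic maps:

      import numpy as np
      from scipy.signal import convolve2d

      rng = np.random.default_rng(6)
      posterior = rng.random((64, 64))
      posterior /= posterior.sum()               # posterior over target location

      yy, xx = np.mgrid[-32:32, -32:32]
      dmap = np.exp(-(xx**2 + yy**2) / (2 * 12.0**2))   # toy detectability falloff

      # ELM rule: filter the posterior with the *squared* detectability map,
      # then fixate the maximum of the filtered map.
      score = convolve2d(posterior, dmap**2, mode="same")
      fix = np.unravel_index(np.argmax(score), score.shape)
      print("next fixation (row, col):", fix)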

  12. Application’s Method of Quadratic Programming for Optimization of Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro

    Investors or fund managers face the optimization of portfolio selection, which means determining the kind and the quantity of investment among several brands. We have developed a method that obtains optimal stock portfolios two to three times faster than the conventional method, using efficient universal optimization. The method is characterized by the quadratic matrix of the utility function and constraint matrices divided into several sub-matrices, exploiting the structure of these matrices.
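
    The underlying quadratic program is the classical mean-variance problem; with only the budget equality constraint it reduces to a single linear (KKT) solve, as the generic Python sketch below shows (this illustrates the problem class, not the authors' structured sub-matrix algorithm):

      import numpy as np

      # Minimize (1/2) w' S w - gamma * m' w   subject to   1' w = 1.
      def mean_variance_weights(S, m, gamma):
          n = len(m)
          # KKT system: [S, 1; 1', 0] [w; lam] = [gamma*m; 1]
          A = np.block([[S, np.ones((n, 1))],
                        [np.ones((1, n)), np.zeros((1, 1))]])
          b = np.append(gamma * m, 1.0)
          return np.linalg.solve(A, b)[:n]

      S = np.array([[0.04, 0.01], [0.01, 0.09]])   # covariance of two brands
      m = np.array([0.08, 0.12])                   # expected returns
      print(mean_variance_weights(S, m, gamma=1.0))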

  13. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    ERIC Educational Resources Information Center

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  14. WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization

    NASA Astrophysics Data System (ADS)

    Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry

    2018-01-01

    We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission with a fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
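
    At its simplest, the greedy step repeatedly picks whichever remaining target yields the most completeness per unit of observing time (integration plus overhead) until the mission time is spent. A hedged Python sketch (the target names and numbers are hypothetical, and the real AYO-based optimizer is more elaborate):

      def greedy_schedule(targets, budget_days):
          """targets: list of (name, completeness, int_time + overhead in days)."""
          # Rank by completeness gained per day of telescope time.
          ranked = sorted(targets, key=lambda t: t[1] / t[2], reverse=True)
          plan, used = [], 0.0
          for name, comp, cost in ranked:
              if used + cost <= budget_days:
                  plan.append(name)
                  used += cost
          return plan

      stars = [("HIP-A", 0.30, 2.0), ("HIP-B", 0.25, 0.8), ("HIP-C", 0.10, 0.2)]
      print(greedy_schedule(stars, budget_days=1.5))   # hypothetical targets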

  15. Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs)

    PubMed Central

    2015-01-01

    A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%). PMID:26819671
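
    LLE here is the usual lipophilic ligand efficiency, LLE = pIC50 (or pKi) minus cLogP, so a gain of three log units means potency rose much faster than lipophilicity. Trivially, in Python (the example values are illustrative, not compound 43's data):

      def lle(p_activity, clogp):
          """Lipophilic ligand efficiency: potency (-log10 IC50) minus cLogP."""
          return p_activity - clogp

      # Illustrative values only: a nanomolar compound (pIC50 = 9) with cLogP = 3.
      print(lle(9.0, 3.0))   # LLE = 6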

  16. Ant-cuckoo colony optimization for feature selection in digital mammogram.

    PubMed

    Jona, J B; Nagaveni, N

    2014-01-15

    Digital mammography is the only effective screening method to detect breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram. Not all of the features are essential for classifying the mammogram; therefore, identifying the relevant features is the aim of this work. Feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that the ants walk through paths where the pheromone density is high, which makes the whole process slow; hence CS is employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used along with the ACO to classify normal mammograms from abnormal ones. Experiments are conducted on the mini-MIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms. The results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques.

  17. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in the field of EEG signals based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages involved are: lowering the computational burden so as to speed up the learning procedure and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing, feature extraction by autoregressive model and log-variance, Kullback-Leibler divergence based optimal feature and time segment selection, and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signals classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segment, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
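
    The divergence-based ranking can be sketched compactly: fit a Gaussian to each feature per class and rank features by the closed-form KL divergence between the two class-conditional distributions (a generic Python sketch of the criterion, not the paper's full CSP/autoregressive pipeline):

      import numpy as np

      def gauss_kl(m0, s0, m1, s1):
          """KL( N(m0, s0^2) || N(m1, s1^2) ), closed form."""
          return np.log(s1 / s0) + (s0**2 + (m0 - m1) ** 2) / (2 * s1**2) - 0.5

      def rank_features(X0, X1):
          """X0, X1: trials x features arrays for the two MI classes."""
          kl = np.array([gauss_kl(X0[:, j].mean(), X0[:, j].std() + 1e-9,
                                  X1[:, j].mean(), X1[:, j].std() + 1e-9)
                         for j in range(X0.shape[1])])
          return np.argsort(kl)[::-1]             # most discriminative first

      rng = np.random.default_rng(7)
      X0 = rng.normal(0, 1, (50, 8)); X1 = rng.normal(0.5, 1, (50, 8))
      print(rank_features(X0, X1))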

  18. Training set optimization under population structure in genomic selection

    USDA-ARS?s Scientific Manuscript database

    The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...

  19. Pipe degradation investigations for optimization of flow-accelerated corrosion inspection location selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandra, S.; Habicht, P.; Chexal, B.

    1995-12-01

    A large amount of piping in a typical nuclear power plant is susceptible to Flow-Accelerated Corrosion (FAC) wall thinning to varying degrees. A typical FAC monitoring program includes the wall thickness measurement of a select number of components in order to judge the structural integrity of entire systems. In order to appropriately allocate resources and maintain an adequate FAC program, it is necessary to optimize the selection of components for inspection by focusing on those components which provide the best indication of system susceptibility to FAC. A better understanding of system FAC predictability and the types of FAC damage encountered can provide some of the insight needed to better focus and optimize the inspection plan for an upcoming refueling outage. Laboratory examination of FAC damaged components removed from service at Northeast Utilities' (NU) nuclear power plants provides a better understanding of the damage mechanisms involved and contributing causes. Selected results of this ongoing study are presented with specific conclusions which will help NU to better focus inspections and thus optimize the ongoing FAC inspection program.

  20. Mode selective generation of guided waves by systematic optimization of the interfacial shear stress profile

    NASA Astrophysics Data System (ADS)

    Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice

    2015-01-01

    Piezoelectric transducers are commonly used in structural health monitoring systems to generate and measure ultrasonic guided waves (GWs) by applying interfacial shear and normal stresses to the host structure. In most cases, in order to perform damage detection, advanced signal processing techniques are required, since a minimum of two dispersive modes are propagating in the host structure. In this paper, a systematic approach for mode selection is proposed by optimizing the interfacial shear stress profile applied to the host structure, representing the first step of a global optimization of selective mode actuator design. This approach has the potential of reducing the complexity of signal processing tools as the number of propagating modes could be reduced. Using the superposition principle, an analytical method is first developed for GWs excitation by a finite number of uniform segments, each contributing with a given elementary shear stress profile. Based on this, cost functions are defined in order to minimize the undesired modes and amplify the selected mode, and the optimization problem is solved with a parallel genetic algorithm optimization framework. Advantages of this method over more conventional transducer tuning approaches are that (1) the shear stress can be explicitly optimized to both excite one mode and suppress other undesired modes, (2) the size of the excitation area is not constrained and mode-selective excitation is still possible even if the excitation width is smaller than all excited wavelengths, and (3) the selectivity is increased and the bandwidth extended. The complexity of the optimal shear stress profile obtained is shown considering two cost functions with various optimal excitation widths and numbers of segments. Results illustrate that the desired mode (A0 or S0) can be excited dominantly over other modes up to a wave power ratio of 10^10 using an optimal shear stress profile.

  1. Stationary-phase optimized selectivity liquid chromatography: development of a linear gradient prediction algorithm.

    PubMed

    De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-03-01

    Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.

  2. Optimal Portfolio Selection Under Concave Price Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn

    2013-06-15

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  3. Selective waste collection optimization in Romania and its impact to urban climate

    NASA Astrophysics Data System (ADS)

    Șercăianu, Mihai; Iacoboaea, Cristina; Petrescu, Florian; Aldea, Mihaela; Luca, Oana; Gaman, Florian; Parlow, Eberhard

    2016-08-01

    According to European Directives, transposed into national legislation, the Member States should organize separate collection systems, at least for paper, metal, plastic, and glass, by 2015. In Romania, since 2011 only 12% of collected municipal waste has been recovered, the rest being stored in landfills, although storage is considered the last option in the waste hierarchy. At the same time, only 4% of municipal waste has been collected selectively. Surveys have shown that Romanian people do not have selective collection bins close to their residences. The article aims to analyze the current situation in Romania in the field of waste collection and management and to propose a layout for selective collection containers, using geographic information systems tools, for a case study in Romania. Route optimization is performed based on remote sensing technologies and network analyst protocols. By optimizing the selective collection system, emissions of greenhouse gases, particulates and dust can be reduced.

  4. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    NASA Astrophysics Data System (ADS)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of Greenhouse Gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission of the AS/RS. A two stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields better results than the others.

  5. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    PubMed

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
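
    The greedy step inside the hybrid PSCP solver is essentially the textbook set-cover heuristic: repeatedly pick the candidate that covers the most still-uncovered cells. A minimal Python sketch (the site names and coverage sets are hypothetical):

      def greedy_set_cover(universe, candidates):
          """candidates: dict name -> set of covered grid cells."""
          uncovered, chosen = set(universe), []
          while uncovered:
              # Pick the candidate covering the most uncovered cells.
              best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
              gain = candidates[best] & uncovered
              if not gain:
                  break                     # remaining cells cannot be covered
              chosen.append(best)
              uncovered -= gain
          return chosen

      cells = range(6)
      sites = {"rx1": {0, 1, 2}, "rx2": {2, 3}, "rx3": {3, 4, 5}, "rx4": {0, 5}}
      print(greedy_set_cover(cells, sites))   # e.g. ['rx1', 'rx3']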

  6. OPTIMAL TIME-SERIES SELECTION OF QUASARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Nathaniel R.; Bloom, Joshua S.

    2011-03-15

    We present a novel method for the optimal selection of quasars using time-series observations in a single photometric bandpass. Utilizing the damped random walk model of Kelly et al., we parameterize the ensemble quasar structure function in Sloan Stripe 82 as a function of observed brightness. The ensemble model fit can then be evaluated rigorously for and calibrated with individual light curves with no parameter fitting. This yields a classification in two statistics, one describing the fit confidence and the other describing the probability of a false alarm, which can be tuned, a priori, to achieve high quasar detection fractions (99% completeness with default cuts), given an acceptable rate of false alarms. We establish the typical rate of false alarms due to known variable stars as ≲3% (high purity). Applying the classification, we increase the sample of potential quasars relative to those known in Stripe 82 by as much as 29%, and by nearly a factor of two in the redshift range 2.5 < z < 3, where selection by color is extremely inefficient. This represents 1875 new quasars in a 290 deg² field. The observed rates of both quasars and stars agree well with the model predictions, with >99% of quasars exhibiting the expected variability profile. We discuss the utility of the method at high redshift and in the regime of noisy and sparse data. Our time-series selection complements well-independent selection based on quasar colors and has strong potential for identifying high-redshift quasars for Baryon Acoustic Oscillations and other cosmology studies in the LSST era.

  7. Cartridge Casing Catcher With Reduced Firearm Ejection Port Flash and Noise

    DTIC Science & Technology

    2009-05-26

    acoustic tuner structure comprises at least one of a quarter wave tuner, a Quincke tuner, and a Helmholtz tuner. The magnetic material comprises magnetic...of noise) will be attenuated. FIG. 2B illustrates a Herschel-Quincke (usually simply called Quincke) or interference tuner 10'. The Quincke tuner... Quincke tuner, and a Helmholtz resonator similar to the acoustic tuners illustrated in FIGS. 2(A-C), respectively. The acoustic tuner structure 240 of

  8. To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom

    ERIC Educational Resources Information Center

    Ray, Darrell L.

    2010-01-01

    Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…
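
    The classical prey-choice rule behind such simulations is easy to code: rank prey types by profitability e/h and keep adding types while the next type's profitability exceeds the intake rate of the diet assembled so far. A short Python sketch (the encounter rates, energies, and handling times are illustrative):

      def optimal_diet(prey):
          """prey: list of (name, encounter_rate, energy, handling_time)."""
          # Rank by profitability e/h, then apply the prey-choice criterion.
          ranked = sorted(prey, key=lambda p: p[2] / p[3], reverse=True)
          diet, rate = [], 0.0
          sum_le, sum_lh = 0.0, 0.0
          for name, lam, e, h in ranked:
              if e / h <= rate:        # less profitable than current intake rate,
                  break                # so exclude it and everything below
              diet.append(name)
              sum_le += lam * e
              sum_lh += lam * h
              rate = sum_le / (1.0 + sum_lh)   # Holling-disc intake rate
          return diet

      prey = [("worm", 0.4, 10, 2), ("beetle", 0.2, 6, 1), ("snail", 0.1, 3, 6)]
      print(optimal_diet(prey))   # snail is dropped from the optimal diet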

  9. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend in automation of the modern manufacturing industry, human intervention in routine, repetitive and data specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in the selection of optimal cutting tools and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tools and parameters in metal cutting is carried out by an experienced technician or cutting tool expert, based on his knowledge base or an extensive search of a huge cutting tool database. The present proposed approach replaces the existing practice of physically searching for tools in data books and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence based optimal tool selection strategy was developed and implemented using MathWorks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of appropriate cutting tools and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  10. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step obtained from the proposed approach can reduce the computational burden while still achieving accurate simulation responses. Finally, the performance of the proposed method is demonstrated on the GSO 37-bus system.
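
    A rough illustration of the idea, not the paper's algorithm: in a toy linearized machine model, the fastest oscillatory mode bounds the usable integration step, and a subinterval count follows from resolving that mode. The state matrix, outer step, and points-per-period heuristic below are all hypothetical.

      import numpy as np

      # Toy linearized machine dynamics (SMIB-like), x = [delta, omega]; the
      # fastest oscillatory mode bounds the usable integration step.
      A = np.array([[0.0, 1.0],
                    [-4.0e4, -10.0]])          # hypothetical state matrix

      f_max = np.max(np.abs(np.linalg.eigvals(A).imag)) / (2 * np.pi)

      # Heuristic: resolve the fastest mode with ~10 points per period.
      main_step = 0.02                          # outer simulation step (s)
      sub_step = 1.0 / (10.0 * f_max)
      n_sub = max(1, int(np.ceil(main_step / sub_step)))
      print(f"fastest mode {f_max:.1f} Hz -> {n_sub} subintervals per step")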

  11. Nursing performance under high workload: a diary study on the moderating role of selection, optimization and compensation strategies.

    PubMed

    Baethge, Anja; Müller, Andreas; Rigotti, Thomas

    2016-03-01

    The aim of this study was to investigate whether selective optimization with compensation constitutes an individualized action strategy for nurses wanting to maintain job performance under high workload. High workload is a major threat to healthcare quality and performance. Selective optimization with compensation is considered to enhance the efficient use of intra-individual resources and, therefore, is expected to act as a buffer against the negative effects of high workload. The study applied a diary design. Over five consecutive workday shifts, self-report data on workload was collected at three randomized occasions during each shift. Self-reported job performance was assessed in the evening. Self-reported selective optimization with compensation was assessed prior to the diary reporting. Data were collected in 2010. Overall, 136 nurses from 10 German hospitals participated. Selective optimization with compensation was assessed with a nine-item scale that was specifically developed for nursing. The NASA-TLX scale indicating the pace of task accomplishment was used to measure workload. Job performance was assessed with one item each concerning performance quality and forgetting of intentions. There was a weaker negative association between workload and both indicators of job performance in nurses with a high level of selective optimization with compensation, compared with nurses with a low level. Considering the separate strategies, selection and compensation turned out to be effective. The use of selective optimization with compensation is conducive to nurses' job performance under high workload levels. This finding is in line with calls to empower nurses' individual decision-making. © 2015 John Wiley & Sons Ltd.

  12. The Application of Optimal Defaults to Improve Elementary School Lunch Selections: Proof of Concept

    ERIC Educational Resources Information Center

    Loeb, Katharine L.; Radnitz, Cynthia; Keller, Kathleen L.; Schwartz, Marlene B.; Zucker, Nancy; Marcus, Sue; Pierson, Richard N.; Shannon, Michael; DeLaurentis, Danielle

    2018-01-01

    Background: In this study, we applied behavioral economics to optimize elementary school lunch choices via parent-driven decisions. Specifically, this experiment tested an optimal defaults paradigm, examining whether strategically manipulating the health value of a default menu could be co-opted to improve school-based lunch selections. Methods:…

  13. Parallel algorithms for islanded microgrid with photovoltaic and energy storage systems planning optimization problem: Material selection and quantity demand optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yang; Liu, Chun; Huang, Yuehui; Wang, Tieqiang; Sun, Chenjun; Yuan, Yue; Zhang, Xinsong; Wu, Shuyun

    2017-02-01

    With the development of rooftop photovoltaic (PV) generation technology and the increasingly urgent need to improve supply reliability in remote areas, islanded microgrids with photovoltaic and energy storage systems (IMPE) are developing rapidly. The high costs of photovoltaic panel material and energy storage battery material have become the primary factors hindering the development of IMPE. The advantages and disadvantages of different types of photovoltaic panel materials and energy storage battery materials are analyzed in this paper, and guidance is provided on material selection for IMPE planners. The time-sequential simulation method is applied to optimize the material demands of the IMPE. The model is solved by parallel algorithms provided by the commercial solver CPLEX. Finally, to verify the model, an actual IMPE is selected as a case system. Simulation results for the case system indicate that the optimization model and corresponding algorithm are feasible. Guidance for material selection and quantity demand for IMPEs in remote areas is provided by this method.

  14. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
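
    A loose Python sketch of the WBMS/MPA loop described above, with ridge regression standing in for the PLS calibration the authors apply to NIR data; the population sizes, shrinkage schedule, and synthetic data are illustrative assumptions, not the authors' settings.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(6)
      n, p = 80, 30
      X = rng.normal(size=(n, p))
      y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + 0.3 * rng.normal(size=n)

      weights = np.full(p, 0.5)              # inclusion probability per variable
      for _ in range(10):                    # iterative space shrinkage
          # Weighted binary matrix sampling: each row is a candidate sub-model.
          B = rng.random((100, p)) < weights
          scores = np.array([
              cross_val_score(Ridge(), X[:, b], y, cv=5).mean() if b.any() else -np.inf
              for b in B])
          top = B[np.argsort(scores)[-10:]]  # keep the best 10% of sub-models
          weights = top.mean(axis=0)         # new space: inclusion frequencies
      print("selected variables:", np.where(weights > 0.5)[0])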

  15. General equations for optimal selection of diagnostic image acquisition parameters in clinical X-ray imaging.

    PubMed

    Zheng, Xiaoming

    2017-12-01

    The purpose of this work was to examine the effects of the relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure (milliampere-seconds, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of the image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for the optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used to guide clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could thereby be reduced from current levels in medical X-ray imaging.

  16. Optimizing the sequence of diameter distributions and selection harvests for uneven-aged stand management

    Treesearch

    Robert G. Haight; J. Douglas Brodie; Darius M. Adams

    1985-01-01

    The determination of an optimal sequence of diameter distributions and selection harvests for uneven-aged stand management is formulated as a discrete-time optimal-control problem with bounded control variables and free-terminal point. An efficient programming technique utilizing gradients provides solutions that are stable and interpretable on the basis of economic...

  17. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of the evolution operator model. Since we usually deal with strongly high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of the optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Finding the optimal projection is actually a significant part of model selection because, on the one hand, the transformation of data to some phase variable vector can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e., the representation of the evolution operator, so we should find an optimal structure of the model together with the phase variable vector. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDE) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P 046207, 2009; Kravtsov S, Kondrashov D, Ghil M, 2005: Multilevel regression
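
    The two-part description length underlying the principle can be shown schematically: the code length is the negative log-likelihood of the data plus a parameter-cost term, and the model order minimizing it is selected. The polynomial toy problem below is an assumption for illustration, not the EMR setup.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 200
      t = np.linspace(-1, 1, N)
      y = 1.0 - 2.0 * t + 0.5 * t**3 + 0.2 * rng.normal(size=N)   # toy series

      def description_length(order):
          # Two-part code: data cost (negative log-likelihood up to constants)
          # plus a parameter cost of (k/2) log N.
          coef = np.polyfit(t, y, order)
          resid = y - np.polyval(coef, t)
          k = order + 1
          return 0.5 * N * np.log(np.mean(resid**2)) + 0.5 * k * np.log(N)

      best_order = min(range(1, 10), key=description_length)
      print("MDL-optimal polynomial order:", best_order)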

  18. Optimized bioregenerative space diet selection with crew choice

    NASA Technical Reports Server (NTRS)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
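
    The abstract describes a cost-minimizing diet selection subject to nutritional constraints; a minimal linear-programming sketch of that structure follows, with entirely hypothetical foods, ESM costs, and nutrient bounds (the paper's actual model is richer and includes choice constraints).

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical per-serving data: ESM cost (kg), energy (MJ), sodium (g).
      foods = ["wheat", "soy", "potato", "dessert"]
      esm = np.array([0.8, 1.1, 0.6, 1.5])      # objective: minimize total ESM
      energy = np.array([1.2, 1.5, 0.9, 1.8])
      sodium = np.array([0.1, 0.2, 0.05, 0.4])

      # At least 12 MJ/day of energy, at most 2.3 g/day of sodium.
      res = linprog(c=esm,
                    A_ub=np.vstack([-energy, sodium]),
                    b_ub=np.array([-12.0, 2.3]),
                    bounds=[(0, 8)] * len(foods), method="highs")
      print(dict(zip(foods, np.round(res.x, 2))), "total ESM:", round(res.fun, 2))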

  19. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  20. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1992-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  1. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
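
    A minimal usage sketch following the library's documented pattern of search-space definition and serial minimization; the objective function here is a stand-in for a real cross-validation loss.

      from hyperopt import fmin, tpe, hp, Trials

      # A search space mixing a discrete model choice with continuous and
      # quantized hyperparameters, in the style described in the paper.
      space = hp.choice("clf", [
          {"type": "svm", "C": hp.loguniform("C", -5, 5)},
          {"type": "knn", "k": hp.quniform("k", 1, 30, 1)},
      ])

      def objective(params):
          # Stand-in for a cross-validation loss: a real objective would train
          # the chosen model and return its validation error.
          return 0.01 * params["C"] if params["type"] == "svm" else 0.02 * params["k"]

      trials = Trials()
      best = fmin(fn=objective, space=space, algo=tpe.suggest,
                  max_evals=50, trials=trials)
      print(best)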

  2. Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS

    NASA Astrophysics Data System (ADS)

    Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.

    2013-02-01

    Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with dynamic nuclear polarization hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by facilitating anatomic shaped region-of-interest (ROI) single metabolite signals available for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of iteratively fast providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.

  3. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance on classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
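
    A compact scikit-learn sketch of the two-stage pipeline: SVM-RFE feature selection followed by (C, γ) tuning. A plain grid search and the Iris dataset stand in for the Taguchi orthogonal array and the Dermatology/Zoo data used in the study.

      from sklearn.datasets import load_iris
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)

      # Stage 1: SVM-RFE ranks features with a linear SVM; keep the top two.
      rfe = RFE(SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
      X_sel = X[:, rfe.support_]

      # Stage 2: tune C and gamma on the reduced feature set (plain grid
      # search here; the study uses a Taguchi orthogonal array instead).
      grid = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                          cv=5).fit(X_sel, y)
      print("kept features:", rfe.support_, "best params:", grid.best_params_)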

  4. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance on classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306

  5. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amarasinghe, Saman

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first-class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.

  6. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
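
    A minimal sketch of the constriction-coefficient PSO variant the comparison favors (Clerc's χ ≈ 0.7298 with c1 = c2 = 2.05), with a generic test function replacing the localization residual and, for brevity, a global-best topology rather than the ring neighbourhood the study found best.

      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):                       # stand-in for the localization residual
          return np.sum(x**2, axis=-1)

      n, dim, iters = 30, 2, 200
      chi, c1, c2 = 0.7298, 2.05, 2.05     # Clerc's constriction coefficients

      pos = rng.uniform(-10, 10, (n, dim))
      vel = np.zeros((n, dim))
      pbest, pbest_f = pos.copy(), sphere(pos)
      gbest = pbest[np.argmin(pbest_f)].copy()

      for _ in range(iters):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
          pos += vel
          f = sphere(pos)
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = pos[improved], f[improved]
          gbest = pbest[np.argmin(pbest_f)].copy()

      print("best position:", gbest, "objective:", sphere(gbest))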

  7. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  8. Portfolio optimization for seed selection in diverse weather scenarios.

    PubMed

    Marko, Oskar; Brdar, Sanja; Panić, Marko; Šašić, Isidora; Despotović, Danica; Knežević, Milivoje; Crnojević, Vladimir

    2017-01-01

    The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017.
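
    A hedged sketch of the portfolio step: predicted yields across weather scenarios give expected returns and a covariance matrix, and weights over varieties trade yield against variance. The data and risk-aversion weight below are synthetic stand-ins, not the paper's model.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      # Hypothetical predicted yields: 15 weather scenarios x 6 varieties.
      yields = rng.normal(50, 5, (15, 6)) + rng.normal(0, 3, (1, 6))
      mu, sigma = yields.mean(axis=0), np.cov(yields, rowvar=False)
      lam = 0.5                                  # risk-aversion weight

      def neg_objective(w):                      # maximize mu.w - lam * w.Sigma.w
          return -(mu @ w - lam * w @ sigma @ w)

      n = len(mu)
      res = minimize(neg_objective, np.full(n, 1 / n),
                     bounds=[(0, 1)] * n,
                     constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
      print("variety weights:", np.round(res.x, 3))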

  9. Portfolio optimization for seed selection in diverse weather scenarios

    PubMed Central

    Brdar, Sanja; Panić, Marko; Šašić, Isidora; Despotović, Danica; Knežević, Milivoje; Crnojević, Vladimir

    2017-01-01

    The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017. PMID:28863173

  10. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired by and tested on the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme-catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589

  11. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method for optimizing the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. METs defined with OptiMET were on average more efficient than random METs composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  12. Space-planning and structural solutions of low-rise buildings: Optimal selection methods

    NASA Astrophysics Data System (ADS)

    Gusakova, Natalya; Minaev, Nikolay; Filushina, Kristina; Dobrynina, Olga; Gusakov, Alexander

    2017-11-01

    The present study is devoted to elaborating a methodology for the appropriate selection of space-planning and structural solutions for low-rise buildings. The objective of the study is to work out a system of criteria influencing the selection of the space-planning and structural solutions most suitable for low-rise buildings and structures. Applying the defined criteria in practice aims to enhance the efficiency of capital investments and energy and resource saving, and to create comfortable conditions for the population, considering the climatic zoning of the construction site. The project's developments can be applied when implementing investment and construction projects for low-rise housing in different types of territories based on local building materials. A system of criteria influencing the optimal selection of space-planning and structural solutions for low-rise buildings has been developed. A methodological basis has also been elaborated to assess the optimal selection of space-planning and structural solutions for low-rise buildings satisfying the requirements of energy efficiency, comfort, safety, and economic efficiency. The elaborated methodology makes it possible to intensify the development of low-rise construction in different types of territories, taking into account the climatic zoning of the construction site. Stimulation of low-rise construction processes should be based on a system of scientifically justified approaches; this allows enhancing the energy efficiency, comfort, safety, and economic effectiveness of low-rise buildings.

  13. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    PubMed

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for support vector machines (SVM), involving selection of the kernel and margin parameter values, is usually time-consuming and greatly impacts both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM (CSSVM) kernel parameter a and margin parameter C, to improve the training efficiency of the SVM model. Then an experiment classifying AVIRIS imagery of the Indian Pines site in the USA was performed to test the novel CSSVM, alongside a traditional SVM classifier with the general grid-searching cross-validation method (GSSVM) for comparison. Evaluation indexes, including SVM model training time, classification overall accuracy (OA), and Kappa index, of both CSSVM and GSSVM were analyzed quantitatively. It is demonstrated that the OA of CSSVM on the test samples and the whole image are 85.1% and 81.58%, with differences from GSSVM within 0.08%; the Kappa indexes reach 0.8213 and 0.7728, with differences from GSSVM within 0.001; and the model training time of CSSVM is between 1/10 and 1/6 of that of GSSVM. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.

  14. Optimal Reference Gene Selection for Expression Studies in Human Reticulocytes.

    PubMed

    Aggarwal, Anu; Jamwal, Manu; Viswanathan, Ganesh K; Sharma, Prashant; Sachdeva, ManUpdesh S; Bansal, Deepak; Malhotra, Pankaj; Das, Reena

    2018-05-01

    Reference genes are indispensable for normalizing mRNA levels across samples in real-time quantitative PCR. Their expression levels vary under different experimental conditions and because of several inherent characteristics. Appropriate reference gene selection is thus critical for gene-expression studies. This study aimed at selecting optimal reference genes for gene-expression analysis of reticulocytes and at validating them in hereditary spherocytosis (HS) and β-thalassemia intermedia (βTI) patients. Seven reference genes (PGK1, MPP1, HPRT1, ACTB, GAPDH, RN18S1, and SDHA) were selected on the basis of published reports. Real-time quantitative PCR was performed on reticulocytes from 20 healthy volunteers, 15 HS patients, and 10 βTI patients. Threshold cycle values were compared using the fold-change method and RefFinder software. The stable reference genes recommended by RefFinder were validated against SLC4A1 and flow cytometric eosin-5'-maleimide binding assay values in HS patients, and against HBG2 and high performance liquid chromatography-derived percentage of hemoglobin F in βTI. Comprehensive ranking predicted MPP1 and GAPDH as optimal reference genes for reticulocytes that were not affected in HS and βTI. This was further confirmed on validation with eosin-5'-maleimide results and percentage of hemoglobin F in HS and βTI patients, respectively. Hence, MPP1 and GAPDH are good reference genes for reticulocyte expression studies compared with ACTB and RN18S1, the two most commonly used reference genes. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  15. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter
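
    A toy linear sketch of the reduced-order tuner idea from the abstract: with more health parameters than sensors, the estimate is x̂ = Vq̂, and the theoretical mean squared error can be evaluated for candidate orthonormal bases V. Random restarts stand in for the paper's iterative search routine, and the matrices below are arbitrary illustrative values, not an engine model.

      import numpy as np

      rng = np.random.default_rng(3)
      p, m = 8, 4                       # health parameters (p) > sensors (m)
      H = rng.normal(size=(m, p))       # toy sensor influence matrix
      P = np.eye(p)                     # prior covariance of health parameters
      R = 0.1 * np.eye(m)               # measurement noise covariance

      def theoretical_mse(V):
          # Linear MMSE estimate of q in y = (H V) q + v, then x_hat = V q_hat.
          A = H @ V
          Pq = V.T @ P @ V
          K = Pq @ A.T @ np.linalg.inv(A @ Pq @ A.T + R)
          L = V @ K                     # overall gain: x_hat = L y
          E = np.eye(p) - L @ H
          return np.trace(E @ P @ E.T + L @ R @ L.T)

      # Random-restart search over orthonormal tuner bases (stand-in for the
      # paper's multi-variable iterative search routine).
      best_mse, best_V = np.inf, None
      for _ in range(2000):
          V, _ = np.linalg.qr(rng.normal(size=(p, m)))
          e = theoretical_mse(V)
          if e < best_mse:
              best_mse, best_V = e, V
      print("best theoretical MSE:", round(best_mse, 4))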

  16. A reliable computational workflow for the selection of optimal screening libraries.

    PubMed

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. Successes of such screening campaigns critically depend on the quality of the screened library. Many libraries are currently available from different vendors, yet the selection of the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented and a new blood-brain barrier permeation model was developed and validated (85% and 74% success rates for the training set and test set, respectively). Diversity and similarity descriptors which demonstrated the best performance in terms of their ability to select either diverse or focused sets of compounds from three databases (Drug Bank, CMC and CHEMBL) were identified and used for diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library, providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios. The current workflow was implemented using the Pipeline Pilot software yet due to the usage of generic
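
    The ADME/T step names Lipinski's and Veber's rule-based filters; a sketch of such a filter using RDKit is shown below. The published workflow was implemented in Pipeline Pilot, so this is an illustration of the filtering step, not the authors' code, and the toy library is hypothetical.

      from rdkit import Chem
      from rdkit.Chem import Descriptors

      def passes_lipinski_veber(smiles):
          # Rule-of-five plus Veber filters for one candidate compound.
          mol = Chem.MolFromSmiles(smiles)
          if mol is None:
              return False
          lipinski = (Descriptors.MolWt(mol) <= 500
                      and Descriptors.MolLogP(mol) <= 5
                      and Descriptors.NumHDonors(mol) <= 5
                      and Descriptors.NumHAcceptors(mol) <= 10)
          veber = (Descriptors.NumRotatableBonds(mol) <= 10
                   and Descriptors.TPSA(mol) <= 140)
          return lipinski and veber

      library = ["CCO", "CC(=O)Oc1ccccc1C(=O)O"]      # toy screening library
      print([s for s in library if passes_lipinski_veber(s)])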

  17. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    Common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, use of a large number of channels makes common spatial pattern prone to overfitting and the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve the classification accuracy. In this paper, a novel method named the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly in the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as the combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial pattern with all channels.
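
    A schematic sketch of the objective function described above (classification error rate plus the relative number of channels), with a synthetic stand-in for the CSP-plus-classifier error estimate and plain random binary search in place of the backtracking search optimization algorithm; all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      n_channels = 32
      relevant = rng.random(n_channels) < 0.25       # toy ground truth

      def error_rate(code):
          # Stand-in for CSP + classifier cross-validation error: improves
          # when relevant channels are kept and irrelevant ones dropped.
          hits = np.sum(code.astype(bool) & relevant)
          extras = np.sum(code.astype(bool) & ~relevant)
          return 0.5 - 0.04 * hits + 0.01 * extras

      def fitness(code, lam=0.2):
          # The abstract's objective: error rate plus relative channel count.
          if code.sum() == 0:
              return 2.0                             # empty sets are infeasible
          return error_rate(code) + lam * code.sum() / code.size

      # Random binary search as a stand-in for the evolutionary loop.
      best = min((rng.integers(0, 2, n_channels) for _ in range(3000)),
                 key=fitness)
      print("channels kept:", int(best.sum()), "fitness:", round(fitness(best), 3))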

  18. Chiral stationary phase optimized selectivity liquid chromatography: A strategy for the separation of chiral isomers.

    PubMed

    Hegade, Ravindra Suryakant; De Beer, Maarten; Lynen, Frederic

    2017-09-15

    Chiral Stationary-Phase Optimized Selectivity Liquid Chromatography (SOSLC) is proposed as a tool to optimally separate mixtures of enantiomers on a set of commercially available coupled chiral columns. This approach allows for the prediction of the separation profiles on any possible combination of the chiral stationary phases based on a limited number of preliminary analyses, followed by automated selection of the optimal column combination. Both the isocratic and gradient SOSLC approaches were implemented to predict the retention times of a mixture of 4 chiral pairs on all possible combinations of the 5 commercial chiral columns. Predictions in isocratic and gradient mode were performed with a commercially available algorithm and with an in-house developed Microsoft Visual Basic algorithm, respectively. Optimal predictions in the isocratic mode required the coupling of 4 columns, whereby relative deviations between the predicted and experimental retention times ranged between 2 and 7%. Gradient predictions led to the coupling of 3 chiral columns allowing baseline separation of all solutes, whereby differences between predictions and experiments ranged between 0 and 12%. The methodology is a novel tool for optimizing the separation of mixtures of optical isomers. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  20. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  1. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

    In this paper, different approaches are considered for calculating the cosine factor that is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined by considering instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important; it is therefore more reliable to select one of the recommended time-averaged methods to optimize the field layout. Optimum annual weighted efficiencies for the heliostat fields (small, medium, and large) containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
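
    A minimal sketch of the quantity being averaged: for an ideal heliostat the mirror normal bisects the sun and receiver directions, so the cosine factor is the dot product of the sun vector with that normal. The geometry and sun positions below are simplified assumptions, not the Campo implementation.

      import numpy as np

      def cosine_factor(sun_dir, helio_to_receiver):
          # Cosine efficiency of a heliostat whose normal bisects the
          # incident (sun) and reflected (receiver) directions.
          s = sun_dir / np.linalg.norm(sun_dir)
          t = helio_to_receiver / np.linalg.norm(helio_to_receiver)
          n = (s + t) / np.linalg.norm(s + t)   # ideal mirror normal
          return float(s @ n)

      t = np.array([0.0, -50.0, 80.0])          # heliostat-to-receiver vector (m)
      # Simplified sun unit vectors over a day (elevation sweep only).
      elev = np.radians(np.linspace(15, 75, 9))
      suns = np.stack([np.zeros_like(elev), -np.cos(elev), np.sin(elev)], axis=1)

      inst = cosine_factor(suns[4], t)                       # mid-day design point
      avg = np.mean([cosine_factor(s, t) for s in suns])     # time-averaged
      print(f"instantaneous: {inst:.3f}  time-averaged: {avg:.3f}")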

  2. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model

  3. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression used by the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly for solving the medical diagnosis problem and the credit card problem.

  4. [Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin

    2008-09-01

    In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths. Furthermore, the minimum and maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. Computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and use this optimal wavelength selection method in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimization algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.

  5. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for detection of land and water features due to the information contained in reflectance signatures, such as the bi-directional reflectance distribution function (BRDF). The reflectance signature directly shows the relative absorption and backscattering features of targets. These features can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. Processing of hyperspectral data can be an important tool in real-time detection applications, and optimal band selection is thus important in order to select the essential bands using the absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of the contrast (using only one band out of all possible combinations available within a hyperspectral image) is generalized in order to consider all possible combinations of wavelength-dependent contrasts in hyperspectral images. The inflection (defined here as an approximation of the second derivative) is also used to enhance the variations in the reflectance and contrast spectra so as to assist in optimal band selection. The results of the selection in terms of target detection (false alarms and missed detections) are also compared with a previous feature-detection method, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel. The sensor is mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore
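
    A sketch of the band-pair search implied above: a normalized-difference contrast is computed for every pair of bands, and the pair maximizing the separation between a target (here, a hypothetical weathered-oil-like feature) and the background is kept. The spectra below are synthetic.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(5)
      wavelengths = np.linspace(400, 900, 60)            # nm, 60 bands
      background = 0.3 + 0.1 * np.sin(wavelengths / 80) + 0.01 * rng.normal(size=60)
      target = background + 0.15 * np.exp(-((wavelengths - 650) / 30) ** 2)

      def contrast(spec, i, j):
          # Generalized two-band contrast (normalized difference).
          return (spec[i] - spec[j]) / (spec[i] + spec[j])

      # Pick the band pair maximizing target/background contrast separation.
      best = max(combinations(range(60), 2),
                 key=lambda ij: abs(contrast(target, *ij) - contrast(background, *ij)))
      print("optimal bands (nm):", round(float(wavelengths[best[0]]), 1),
            round(float(wavelengths[best[1]]), 1))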

  6. Optimal marker-assisted selection to increase the effective size of small populations.

    PubMed

    Wang, J

    2001-02-01

    An approach to the optimal utilization of marker and pedigree information in minimizing the rates of inbreeding and genetic drift at the average locus of the genome (not just the marked loci) in a small diploid population is proposed, and its efficiency is investigated by stochastic simulations. The approach is based on estimating the expected pedigree of each chromosome by using marker and individual pedigree information and minimizing the average coancestry of selected chromosomes by quadratic integer programming. It is shown that the approach is much more effective and much less computationally demanding in implementation than previous ones. For pigs with 10 offspring per mother genotyped for two markers (each with four alleles at equal initial frequency) per chromosome of 100 cM, the approach can increase the average effective size for the whole genome by approximately 40 and 55% if mating ratios (the number of females mated with a male) are 3 and 12, respectively, compared with the corresponding values obtained by optimizing between-family selection using pedigree information only. The efficiency of the marker-assisted selection method increases with increasing amount of marker information (number of markers per chromosome, heterozygosity per marker) and family size, but decreases with increasing genome size. For less prolific species, the approach is still effective if the mating ratio is large so that a high marker-assisted selection pressure on the rarer sex can be maintained.
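
    The core optimization, minimizing the average coancestry of the selected set, can be sketched as a small quadratic integer program. For a handful of candidates it can even be solved exactly by enumeration; the coancestry matrix below is random and purely illustrative:

        import itertools
        import numpy as np

        rng = np.random.default_rng(2)

        # hypothetical marker-estimated coancestry matrix among 12 candidate chromosomes
        n, k = 12, 4
        A = rng.uniform(0.0, 0.5, (n, n))
        A = (A + A.T) / 2
        np.fill_diagonal(A, 0.5)

        # quadratic integer programme: minimise x'Ax with sum(x) = k, x binary,
        # solved exactly here by enumeration (real problems need a QIP solver)
        best_set = min(itertools.combinations(range(n), k),
                       key=lambda s: A[np.ix_(s, s)].sum())
        print("selected set:", best_set,
              "average coancestry:", A[np.ix_(best_set, best_set)].sum() / k ** 2)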

  7. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select.

    PubMed

    Bix, Laura; Seo, Do Chan; Ladoni, Moslem; Brunk, Eric; Becker, Mark W

    2016-01-01

    Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. Our objective was to measure the effect of graphic elements (boxing information, grouping information, symbol use, and color-coding), optimize a label, and compare it with labels typical of commercial medical devices. Participants viewed 54 trials on a computer screen. Trials comprised two labels that were identical with regard to graphics but differed in one piece of information (e.g., one indicated latex, the other did not). Participants were instructed to select the label matching a given criterion (e.g., latex-containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences and through a targeted e-mail to AST members. Symbol presence, color coding, and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, the probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit in the probability of correct choice (P < 0.0001; LSM 97.3%; UCL, LCL: 98.4%, 95.5%), as compared to the two labels based on commercial designs (92.0%; 94.7%, 87.9% and 89.8%; 93.0%, 85.3%), and in time to selection. Our study provides data regarding design factors, namely color coding, symbol use, and grouping of critical information, that can be used to significantly enhance the performance of medical device labels.

  8. An enhancement of binary particle swarm optimization for gene selection in classifying cancer classes

    PubMed Central

    2013-01-01

    Background: Gene expression data can be of great help in developing accurate cancer diagnosis and classification platforms. Recently, many researchers have analyzed gene expression data using diverse computational intelligence methods to select a small subset of informative genes from the data for cancer classification. Many computational methods face difficulties in selecting small subsets due to the small number of samples compared to the huge number of genes (high dimension), irrelevant genes, and noisy genes. Methods: We propose an enhanced binary particle swarm optimization to perform the selection of small subsets of informative genes, which is significant for cancer classification. Particle speed, a new rule, and a modified sigmoid function are introduced in the proposed method to increase the probability that the bits in a particle's position are zero. The method was empirically applied to a suite of ten well-known benchmark gene expression data sets. Results: The performance of the proposed method proved superior to previous related works, including the conventional version of binary particle swarm optimization (BPSO), in terms of classification accuracy and the number of selected genes. The proposed method also requires lower computational time compared to BPSO. PMID:23617960
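
    A minimal sketch of the idea follows. The surrogate fitness, the specific sigmoid modification, and all sizes are assumptions for illustration; the paper's actual rule differs in detail, but the mechanism shown (scaling down the probability of 1-bits so particles drift toward small gene subsets) is the same in spirit:

        import numpy as np

        rng = np.random.default_rng(3)
        n_particles, n_genes, iters = 20, 100, 50
        relevant = rng.choice(n_genes, 5, replace=False)   # hypothetical informative genes

        def fitness(mask):
            # toy surrogate for classification accuracy: reward covering the
            # informative genes, penalise large subsets
            return 10 * np.isin(relevant, np.flatnonzero(mask)).sum() - mask.sum()

        def squeezed_sigmoid(v, t, tmax):
            # assumed variant of the paper's modified sigmoid: shrink the
            # probability of 1-bits over time to favour small subsets
            return (1.0 / (1.0 + np.exp(-v))) * (1.0 - 0.5 * t / tmax)

        X = (rng.random((n_particles, n_genes)) < 0.5).astype(int)
        V = rng.normal(0.0, 1.0, (n_particles, n_genes))
        pbest, pfit = X.copy(), np.array([fitness(x) for x in X])
        gbest = pbest[pfit.argmax()].copy()

        for t in range(iters):
            V = (0.7 * V + 1.5 * rng.random(X.shape) * (pbest - X)
                         + 1.5 * rng.random(X.shape) * (gbest - X))
            X = (rng.random(X.shape) < squeezed_sigmoid(V, t, iters)).astype(int)
            fit = np.array([fitness(x) for x in X])
            up = fit > pfit
            pbest[up], pfit[up] = X[up], fit[up]
            gbest = pbest[pfit.argmax()].copy()

        print("genes kept:", gbest.sum(),
              "informative recovered:", np.isin(relevant, np.flatnonzero(gbest)).sum())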

  9. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  10. A Miniaturized Spectrometer for Optimized Selection of Subsurface Samples for Future MSR Missions

    NASA Astrophysics Data System (ADS)

    De Sanctis, M. C.; Altieri, F.; De Angelis, S.; Ferrari, M.; Frigeri, A.; Biondi, D.; Novi, S.; Antonacci, F.; Gabrieli, R.; Paolinetti, R.; Villa, F.; Ammannito, A.; Mugnuolo, R.; Pirrotta, S.

    2018-04-01

    We present the concept of a miniaturized spectrometer based on the ExoMars2020/Ma_MISS experiment. Coupled with a drill tool, it will allow an assessment of subsurface composition and optimize the selection of martian samples with a high astrobiological potential.

  11. Age-related differences in goals: testing predictions from selection, optimization, and compensation theory and socioemotional selectivity theory.

    PubMed

    Penningroth, Suzanna L; Scott, Walter D

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two theories. Older adults and two groups of younger adults (college students and non-students) listed their current goals, which were then coded by independent raters. Observed age group differences in goals generally supported both theories. Specifically, when compared to younger adults, older adults reported more goals focused on maintenance/loss prevention, the present, emotional focus, generativity, and social selection, and fewer goals focused on knowledge acquisition and the future. However, contrary to prediction, older adults also showed less goal focusing than younger adults, reporting goals from a broader set of life domains (e.g., health, property/possessions, friendship).

  12. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression used by the fruit flies to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called improved chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096
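
    The sketch below illustrates the two ingredients named in the abstract, chaotic (logistic-map) initialization of the swarm location and two generative mechanisms at the osphresis phase, on a toy objective standing in for SVM cross-validation error. Step sizes, counts, and the objective are assumptions, not the paper's settings:

        import numpy as np

        rng = np.random.default_rng(4)

        def objective(z):
            # toy stand-in for SVM cross-validation error over (C, gamma)
            return np.sum(z ** 2)

        dim, n_flies, iters = 2, 30, 100

        # chaotic (logistic-map) initialisation of the swarm location
        c = 0.7
        for _ in range(50):
            c = 4.0 * c * (1.0 - c)
        loc = -5.0 + 10.0 * (c + 0.1 * rng.random(dim))

        best_val, best_loc = np.inf, loc.copy()
        for _ in range(iters):
            # osphresis phase: two generative mechanisms, global and local search
            wide = loc + rng.uniform(-1, 1, (n_flies // 2, dim)) * 5.0
            near = loc + rng.uniform(-1, 1, (n_flies - n_flies // 2, dim)) * 0.5
            flies = np.vstack([wide, near])
            vals = np.array([objective(f) for f in flies])
            i = vals.argmin()
            if vals[i] < best_val:            # vision phase: move the swarm
                best_val, best_loc = vals[i], flies[i].copy()
                loc = best_loc
        print("best objective:", round(float(best_val), 6), "at", best_loc.round(3))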

  13. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Minsun, E-mail: mk688@uw.edu; Stewart, Robert D.; Phillips, Mark H.

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T_d), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D_mean ≤ 45 Gy), lungs (D_mean ≤ 20 Gy), cord (D_max ≤ 45 Gy), esophagus (D_max ≤ 63 Gy), and unspecified tissues (D_05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D_95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans against conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T_d (3–100 days), tumor lag-time (T_k = 0–10 days), and the size of tumors on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D_95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the T_d and T_k used. Tumor EUD was up to 17% larger than the conventional prescription. For fast

  14. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

    Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In cancer type classification research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good selection method for genes relevant for sample classification is needed to improve the predictive accuracy, and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose to combine tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and support vector machine with one-versus-rest serve as evaluators of the TS and BPSO. The proposed method is applied to 11 classification problems taken from the literature. Experimental results show that our method simplifies features effectively and either obtains higher classification accuracy or uses fewer features compared to other feature selection methods.

  15. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to determine each subregion's weight, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  16. Optimality and stability of symmetric evolutionary games with applications in genetic selection.

    PubMed

    Huang, Yuanyuan; Hao, Yiping; Wang, Min; Zhou, Wen; Wu, Zhijun

    2015-06-01

    Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model, for example, the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first- and second-order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We also derive more efficient computational methods for the evaluation of the conditions than conventional approaches. We demonstrate how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games, including some in the study of genetic disorders.
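
    The link between symmetric games and optimization can be demonstrated with discrete replicator dynamics, which for a symmetric fitness matrix monotonically increase the mean fitness x'Ax over the strategy simplex. The 3x3 matrix below is a made-up toy example; the first-order condition printed at the end (equal payoffs on the support of x) is one of the optimality conditions the paper discusses:

        import numpy as np

        # toy symmetric fitness matrix for a 3-allele selection game
        A = np.array([[1.0, 0.6, 0.2],
                      [0.6, 0.8, 0.5],
                      [0.2, 0.5, 0.9]])

        x = np.full(3, 1.0 / 3.0)                # start at the uniform strategy
        for _ in range(500):
            f = A @ x
            x = x * f / (x @ f)                  # discrete replicator map
        print("candidate optimal strategy:", x.round(4))
        print("mean fitness x'Ax:", round(float(x @ A @ x), 4))
        # first-order condition: payoffs (Ax)_i equal on the support of x
        print("per-strategy payoffs:", (A @ x).round(4))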

  17. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to determine each subregion's weight, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  18. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites.

    PubMed

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-02-04

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection.
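
    A compact sketch of EI-based down-selection, assuming a random matrix in place of the FEA-derived sensitivities: the effective independence of each candidate resistance measurement is its leverage (the diagonal of the projection matrix, computed here from the SVD), and the least independent row is discarded until the measurement budget is met:

        import numpy as np

        rng = np.random.default_rng(5)

        # hypothetical sensitivity matrix: rows = candidate electrode-pair
        # resistance measurements, columns = damage parameters they respond to
        n_meas, n_params = 40, 6
        S = rng.normal(size=(n_meas, n_params))

        keep = list(range(n_meas))
        while len(keep) > 12:                     # target measurement budget
            U, s, Vt = np.linalg.svd(S[keep], full_matrices=False)
            ei = np.sum(U ** 2, axis=1)           # effective independence (leverage)
            keep.pop(int(np.argmin(ei)))          # drop the least independent row
        print("retained measurement indices:", keep)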

  19. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH
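
    A minimal sketch of the underlying optimization follows: FISTA applied to a least-squares dose objective plus a group sparsity penalty, with one group per candidate beam so that whole beams are switched off. The toy dose matrix, penalty weight, and the projection onto nonnegative fluences composed with the group prox are illustrative simplifications of the clinical formulation:

        import numpy as np

        rng = np.random.default_rng(6)

        # toy dose problem: 8 candidate beams x 10 beamlets each, 120 voxels
        n_beams, n_per, n_vox = 8, 10, 120
        A = np.abs(rng.normal(size=(n_vox, n_beams * n_per)))
        d = np.abs(rng.normal(size=n_vox))              # prescribed dose
        groups = np.arange(n_beams * n_per) // n_per    # one group per beam
        lam = 2.0
        L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient

        def prox_group(z, step):
            # group soft-thresholding: shrinks whole beams toward zero
            out = np.zeros_like(z)
            for g in range(n_beams):
                zg = z[groups == g]
                nrm = np.linalg.norm(zg)
                if nrm > step * lam:
                    out[groups == g] = (1 - step * lam / nrm) * zg
            return out

        x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
        for _ in range(300):                            # FISTA iterations
            grad = A.T @ (A @ y - d)
            x_new = np.maximum(prox_group(y - grad / L, 1.0 / L), 0.0)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + (t - 1) / t_new * (x_new - x)
            t, x = t_new, x_new

        active = [g for g in range(n_beams) if np.linalg.norm(x[groups == g]) > 1e-8]
        print("beams selected:", active)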

  20. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    PubMed

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without a statistical background to identify suitable decision trees confidently and efficiently.

  1. Near-optimal experimental design for model selection in systems biology.

    PubMed

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M

    2013-10-15

    Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
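
    The polynomial-evaluation guarantee the abstract refers to is characteristic of greedy maximization of a submodular design criterion. The sketch below uses a D-optimality (log-determinant of Fisher information) criterion over hypothetical readout/time-point sensitivities; it is an illustration of the general idea, not the NearOED implementation:

        import numpy as np

        rng = np.random.default_rng(7)

        # candidate experiments: each row is the sensitivity of one
        # (readout, time point) combination to the model parameters
        n_candidates, n_params = 30, 4
        F = rng.normal(size=(n_candidates, n_params))

        def logdet_info(sel):
            # log-determinant of the Fisher information of a selection (D-optimality)
            M = 1e-6 * np.eye(n_params) + sum(np.outer(F[i], F[i]) for i in sel)
            return np.linalg.slogdet(M)[1]

        selected = []
        for _ in range(6):                        # experimental budget
            gains = [(logdet_info(selected + [i]), i)
                     for i in range(n_candidates) if i not in selected]
            selected.append(max(gains)[1])        # greedy: largest marginal gain
        print("near-optimal design:", sorted(selected))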

  2. Imaging multicellular specimens with real-time optimized tiling light-sheet selective plane illumination microscopy

    PubMed Central

    Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang

    2016-01-01

    Despite the progress made in selective plane illumination microscopy, high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to respond to the challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying left-right symmetry breaking behaviour of C. elegans embryos. PMID:27004937

  3. Discovery and Optimization of a Novel Series of Highly Selective JAK1 Kinase Inhibitors.

    PubMed

    Grimster, Neil P; Anderson, Erica; Alimzhanov, Marat; Bebernitz, Geraldine; Bell, Kirsten; Chuaqui, Claudio; Deegan, Tracy; Ferguson, Andrew D; Gero, Thomas; Harsch, Andreas; Huszar, Dennis; Kawatkar, Aarti; Kettle, Jason Grant; Lyne, Paul D; Read, Jon A; Rivard Costa, Caroline; Ruston, Linette; Schroeder, Patricia; Shi, Jie; Su, Qibin; Throner, Scott; Toader, Dorin; Vasbinder, Melissa Marie; Woessner, Richard; Wang, Haixia; Wu, Allan; Ye, Minwei; Zheng, Weijia; Zinda, Michael

    2018-06-01

    Herein, we report the discovery and characterization of a novel series of pyrimidine-based JAK1 inhibitors. Optimization of these ATP-competitive compounds was guided by X-ray crystallography and a structure-based drug design approach, focusing on selectivity, potency, and pharmaceutical properties. The best compound, 24, displayed remarkable JAK1 selectivity (~1000-fold vs JAK2, JAK3, and TYK2), as well as a good kinase selectivity profile. Moreover, a dose-dependent reduction in pSTAT3, a downstream marker of JAK1 inhibition, was observed when 24 was examined in vivo.

  4. Control-Display Investigation of Complex Trajectory Flight Using the Microwave Landing System. Analysis Phase.

    DTIC Science & Technology

    1979-12-01

    MLS-1, Direct ILS Replacement Tuner. MLS-2, Selectable Azimuth and Elevation Tuner/Selector. [Unrecoverable OCR fragments of cockpit panel annotations omitted.] ... due to the age of the aircraft, the present autopilot is of an early vintage and is not recommended for use below 1,000 ft. unless the controls

  5. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial

    PubMed Central

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-01-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and an overview of some of its techniques that could be helpful for design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and the experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes. PMID:28763039

  6. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    PubMed

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and an overview of some of its techniques that could be helpful for design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and the experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

  7. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  8. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  9. Optimizing selective cutting strategies for maximum carbon stocks and yield of Moso bamboo forest using BIOME-BGC model.

    PubMed

    Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing

    2017-04-15

    The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was improved to suit managed Moso bamboo forests, adapted to include the age structure, specific ecological processes, and management measures of Moso bamboo forest. A field selective cutting experiment was conducted in nine plots with three cutting intensities (high, moderate, and low) during 2010-2013, and the biomass of these plots was measured for model validation. Four selective cutting scenarios were then simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages, and intensities. The improved model matched the observed aboveground carbon density and yield of the different plots, with relative errors ranging from 9.83% to 15.74%. The results of the different selective cutting scenarios suggested that the optimal selective cutting measure is to cut 30% of age-6 culms, 80% of age-7 culms, and all culms of age 8 and above, in winter, every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests.

  10. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
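
    The patent's idea, benchmark a set of implementations across input dimensions and then generate dispatch code that picks the winner at call time, can be sketched in a few lines. The two implementations and the size grid below are placeholders:

        import bisect
        import time

        def impl_loop(xs):
            return sum(x * x for x in xs)

        def impl_map(xs):
            return sum(map(lambda x: x * x, xs))

        IMPLS = [impl_loop, impl_map]

        def benchmark(sizes=(10, 1_000, 100_000), repeats=5):
            # collect performance data for every implementation across input sizes
            table = {}
            for n in sizes:
                data = list(range(n))
                best, best_fn = float("inf"), None
                for fn in IMPLS:
                    t0 = time.perf_counter()
                    for _ in range(repeats):
                        fn(data)
                    dt = time.perf_counter() - t0
                    if dt < best:
                        best, best_fn = dt, fn
                table[n] = best_fn
            return sorted(table), table

        def make_selector():
            # generate selection code: dispatch to the fastest implementation
            # for the nearest benchmarked input dimension
            sizes, table = benchmark()
            def selector(xs):
                i = min(bisect.bisect(sizes, len(xs)), len(sizes) - 1)
                return table[sizes[i]](xs)
            return selector

        sum_squares = make_selector()
        print(sum_squares(list(range(5000))))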

  11. A new approach to optimal selection of services in health care organizations.

    PubMed

    Adolphson, D L; Baird, M L; Lawrence, K D

    1991-01-01

    A new reimbursement policy adopted by Medicare in 1983 caused financial difficulties for many hospitals and health care organizations. Several organizations responded to these difficulties by developing systems to carefully measure their costs of providing services. The purpose of such systems was to provide relevant information about the profitability of hospital services. This paper presents a new method of making hospital service selection decisions: it is based on an optimization model that avoids arbitrary cost allocations as a basis for computing the costs of offering a given service. The new method provides more reliable information about which services are profitable or unprofitable, and it provides an accurate measure of the degree to which a service is profitable or unprofitable. The new method also provides useful information about the sensitivity of the optimal decision to changes in costs and revenues. Specialized algorithms for the optimization model lead to very efficient implementation of the method, even for the largest health care organizations.

  12. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites

    PubMed Central

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-01-01

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection. PMID:28772485

  13. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from portfolios containing large numbers of securities. The past records of each security alone do not guarantee its future return. Because many uncertain factors directly or indirectly influence the stock market, and some newer stock markets do not have enough historical data, experts' expectation and experience must be combined with past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a fuzzy random variable set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-mean semi-absolute deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of investors on the rates of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a linear programming problem (LPP), it can be solved much faster than quadratic programming problems. Ant colony optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.
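
    Because the λ-MSAD model reduces to a linear program, it can also be solved directly with an LP solver; the sketch below does exactly that (the paper instead applies ACO). The scenario returns, the trade-off weight, and the single aggregated λ are illustrative assumptions:

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(8)
        T, n = 60, 5                                    # scenarios x securities
        R = 0.01 + 0.05 * rng.standard_normal((T, n))   # hypothetical return draws
        mu = R.mean(axis=0)
        lam = 0.6                                       # pessimistic-optimistic weight

        # variables z = [x_1..x_n, w_1..w_T]; maximise lam*mu'x - (1-lam)*mean(w),
        # where w_t bounds the downside semi-absolute deviation in scenario t
        c = np.concatenate([-lam * mu, (1 - lam) / T * np.ones(T)])
        A_ub = np.hstack([mu[None, :] - R, -np.eye(T)])     # (mu - r_t)'x - w_t <= 0
        b_ub = np.zeros(T)
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, T))])   # fully invested
        b_eq = np.array([1.0])
        bounds = [(0, 1)] * n + [(0, None)] * T

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print("optimal portfolio weights:", res.x[:n].round(3))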

  14. A fast inverse treatment planning strategy facilitating optimized catheter selection in image-guided high-dose-rate interstitial gynecologic brachytherapy.

    PubMed

    Guthier, Christian V; Damato, Antonio L; Hesser, Juergen W; Viswanathan, Akila N; Cormack, Robert A

    2017-12-01

    Interstitial high-dose rate (HDR) brachytherapy is an important therapeutic strategy for the treatment of locally advanced gynecologic (GYN) cancers. The outcome of this therapy is determined by the quality of the dose distribution achieved. This paper focuses on a novel yet simple heuristic for catheter selection in GYN HDR brachytherapy and its comparison against state-of-the-art optimization strategies. The proposed technique is intended to act as a decision-supporting tool for selecting a favorable needle configuration. The presented heuristic for catheter optimization is based on a shrinkage-type algorithm (SACO). It is compared against state-of-the-art planning in a retrospective study of 20 patients who previously received image-guided interstitial HDR brachytherapy using a Syed Neblett template. From those plans, template orientation and position are estimated via a rigid registration of the template with the actual catheter trajectories. All potential straight trajectories intersecting the contoured clinical target volume (CTV) are considered for catheter optimization. Retrospectively generated plans and clinical plans are compared with respect to dosimetric performance and optimization time. All plans were generated with a single run of the optimizer lasting 0.6-97.4 s. Compared to manual optimization, SACO yields a statistically significant (P ≤ 0.05) improvement in target coverage while fulfilling all dosimetric constraints for organs at risk (OARs). Comparing inverse planning strategies, dosimetric evaluation for SACO and "hybrid inverse planning and optimization" (HIPO), the gold standard, shows no statistically significant difference (P > 0.05). However, SACO provides the potential to reduce the number of catheters used without compromising plan quality. The proposed heuristic for needle selection provides fast catheter selection, with optimization times suited for intraoperative treatment planning. Compared to manual optimization, the

  15. A Compensatory Approach to Optimal Selection with Mastery Scores. Research Report 94-2.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Vos, Hans J.

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious example is the use of individualized instruction in…

  16. Selective advantage of implementing optimal contributions selection and timescales for the convergence of long-term genetic contributions.

    PubMed

    Howard, David M; Pong-Wong, Ricardo; Knap, Pieter W; Kremer, Valentin D; Woolliams, John A

    2018-05-10

    Optimal contributions selection (OCS) provides animal breeders with a framework for maximising genetic gain for a predefined rate of inbreeding. Simulation studies have indicated that the source of the selective advantage of OCS is derived from breeding decisions being more closely aligned with estimates of the Mendelian sampling terms of selection candidates, rather than with estimated breeding values (EBV). This study represents the first attempt to assess the source of the selective advantage provided by OCS using a commercial pig population, by testing three hypotheses: (1) OCS places more emphasis on estimated Mendelian sampling terms than on EBV for determining which animals were selected as parents, (2) OCS places more emphasis on estimated Mendelian sampling terms than on EBV for determining which of those parents were selected to make a long-term genetic contribution (r), and (3) OCS places more emphasis on estimated Mendelian sampling terms than on EBV for determining the magnitude of r. The population studied also provided an opportunity to investigate the convergence of r over time. Selection intensity limited the number of males available for analysis, but females provided some evidence that the selective advantage derived from applying an OCS algorithm resulted from greater weighting being placed on estimated Mendelian sampling terms during the process of decision-making. Male r were found to converge initially at a faster rate than female r, with approximately 90% convergence achieved within seven generations across both sexes. This study of commercial data provides some support to results from theoretical and simulation studies that the source of selective advantage from OCS comes from the Mendelian sampling terms. The implication that genomic selection (GS) improves estimation of Mendelian sampling terms should allow for even greater genetic gains for a predefined rate of inbreeding, once the synergistic benefits of combining OCS and GS are realised.
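
    The OCS optimization itself, maximizing gain c'EBV subject to a constraint on group coancestry c'Ac/2 and contributions summing to one, can be sketched with a generic constrained solver. The EBVs, the relationship matrix, and the coancestry bound below are arbitrary illustrative values:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)
        n = 10
        ebv = rng.normal(0.0, 1.0, n)               # estimated breeding values
        A = 0.1 * rng.uniform(size=(n, n))          # toy additive relationship matrix
        A = (A + A.T) / 2
        np.fill_diagonal(A, 1.0)
        dF = 0.30                                   # bound on group coancestry c'Ac/2

        res = minimize(
            lambda c: -c @ ebv,                     # maximise genetic gain
            np.full(n, 1.0 / n),
            constraints=[{"type": "eq", "fun": lambda c: c.sum() - 1.0},
                         {"type": "ineq", "fun": lambda c: dF - c @ A @ c / 2.0}],
            bounds=[(0, 1)] * n, method="SLSQP")
        print("optimal contributions:", res.x.round(3))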

  17. An Ant Colony Optimization Based Feature Selection for Web Page Classification

    PubMed Central

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods. PMID:25136678

  18. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods.
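
    A minimal ACO-style feature selection loop, under strongly simplifying assumptions: pheromone is kept per feature (no heuristic visibility term), subset quality is a toy surrogate for classifier accuracy, and only the iteration-best ant deposits pheromone:

        import numpy as np

        rng = np.random.default_rng(10)
        n_feats, n_ants, iters = 20, 15, 30
        informative = {2, 5, 11}                  # hypothetical useful features

        def quality(subset):
            # toy surrogate for classifier accuracy on the selected features
            return len(informative & subset) - 0.05 * len(subset)

        tau = np.ones(n_feats)                    # pheromone per feature
        g_q, g_s = -np.inf, None
        for _ in range(iters):
            best_q, best_s = -np.inf, None
            for _ant in range(n_ants):
                p = tau / tau.sum()
                k = int(rng.integers(2, 8))
                subset = set(rng.choice(n_feats, size=k, replace=False, p=p).tolist())
                q = quality(subset)
                if q > best_q:
                    best_q, best_s = q, subset
            tau *= 0.9                            # evaporation
            for f in best_s:                      # iteration-best ant deposits pheromone
                tau[f] += max(best_q, 0.1)
            if best_q > g_q:
                g_q, g_s = best_q, best_s
        print("selected features:", sorted(g_s))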

  19. Analysis Methodology for Optimal Selection of Ground Station Site in Space Missions

    NASA Astrophysics Data System (ADS)

    Nieves-Chinchilla, J.; Farjas, M.; Martínez, R.

    2013-12-01

    Optimization of ground station sites is especially important in complex missions that include several small satellites (clusters or constellations), such as the QB50 project, where one ground station would be able to track several space vehicles, even simultaneously. In this regard, the design of the communication system has to carefully take into account the ground station site and the relevant signal phenomena, depending on the frequency band. These aspects become even more relevant to establishing a trusted communication link when the ground segment sits in an urban area and/or low orbits are selected for the space segment. In addition, updated cartography with high-resolution data of the location and its surroundings helps to develop recommendations for the design of the site for space vehicle tracking and hence to improve effectiveness. The objectives of this analysis methodology are: completion of cartographic information, modelling of the obstacles that hinder communication between the ground and space segments, and representation, in the generated 3D scene, of the degree of impairment in the signal/noise caused by the phenomena that interfere with communication. The integration of new geographic data capture technologies, such as 3D laser scanning, allows increased optimization of the antenna elevation mask, in its AOS and LOS azimuths along the visible horizon, maximizing visibility time with space vehicles. Furthermore, from the captured three-dimensional point cloud, specific information is selected and, using 3D modeling techniques, the 3D scene of the antenna location site and surroundings is generated. The resulting 3D model evidences nearby obstacles related to the cartographic conditions, such as mountain formations and buildings, and any additional obstacles that interfere with the operational quality of the antenna (other antennas and electronic devices that emit or receive in the same bandwidth

  20. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    PubMed

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
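
    A sketch of the scheme, assuming scikit-learn is available: each particle encodes log-scaled SVM parameters plus one gate per feature, and fitness is 3-fold cross-validated accuracy. The dataset, swarm settings, and the 0.5 gating threshold are illustrative choices, not the paper's configuration:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(11)
        X, y = make_classification(n_samples=200, n_features=12, n_informative=4,
                                   random_state=0)
        dim = 2 + X.shape[1]          # [log10 C, log10 gamma] + one gate per feature

        def fitness(p):
            mask = p[2:] > 0.5        # features whose gate exceeds the threshold
            if not mask.any():
                return 0.0
            clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        lo, hi = [-2, -4] + [0] * 12, [2, 0] + [1] * 12
        pos = rng.uniform(lo, hi, (15, dim))
        vel = np.zeros_like(pos)
        pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
        g = pbest[pfit.argmax()].copy()
        for _ in range(20):
            vel = (0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos)
                            + 1.5 * rng.random(pos.shape) * (g - pos))
            pos = pos + vel
            fit = np.array([fitness(p) for p in pos])
            up = fit > pfit
            pbest[up], pfit[up] = pos[up], fit[up]
            g = pbest[pfit.argmax()].copy()
        print("CV accuracy:", round(float(pfit.max()), 3),
              "features kept:", int((g[2:] > 0.5).sum()))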

  1. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for an unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested based on comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection.
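
    The fault-tolerance mechanism, monitoring the innovation process and resetting covariances when a quantitative criterion is exceeded, can be illustrated with a scalar Kalman filter tracking a parameter that shifts abruptly. The 3-sigma test and the reset value are assumed for illustration:

        import numpy as np

        rng = np.random.default_rng(12)

        # scalar parameter tracked by a Kalman filter; a fault shifts it at k = 60
        n_steps, q, r = 120, 1e-5, 0.04
        theta = np.where(np.arange(n_steps) < 60, 1.0, 1.8)    # true (faulty) parameter
        z = theta + np.sqrt(r) * rng.standard_normal(n_steps)  # noisy measurements

        x, P = 0.0, 1.0
        estimates = []
        for k in range(n_steps):
            P += q                               # predict (random-walk parameter model)
            innov = z[k] - x                     # innovation process
            S = P + r
            if innov ** 2 > 9 * S:               # quantitative criterion: 3-sigma test
                P = 1.0                          # reset covariance so the filter re-adapts
                S = P + r
            K = P / S
            x += K * innov                       # update
            P *= (1 - K)
            estimates.append(x)
        print("estimate before/after fault:",
              round(estimates[55], 3), round(estimates[-1], 3))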

  2. Memory control beliefs and everyday forgetfulness in adulthood: the effects of selection, optimization, and compensation strategies.

    PubMed

    Scheibner, Gunnar Benjamin; Leathem, Janet

    2012-01-01

    Controlling for age, gender, education, and self-rated health, the present study used regression analyses to examine the relationships between memory control beliefs and self-reported forgetfulness in the context of the meta-theory of Selective Optimization with Compensation (SOC). Findings from this online survey (N = 409) indicate that, among adult New Zealanders, a higher sense of memory control accounts for a 22.7% reduction in self-reported forgetfulness. Similarly, optimization was found to account for a 5% reduction in forgetfulness while the strategies of selection and compensation were not related to self-reports of forgetfulness. Optimization partially mediated the beneficial effects that some memory beliefs (e.g., believing that memory decline is inevitable and believing in the potential for memory improvement) have on forgetfulness. It was concluded that memory control beliefs are important predictors of self-reported forgetfulness while the support for the SOC model in the context of memory controllability and everyday forgetfulness is limited.

  3. Variationally optimal selection of slow coordinates and reaction coordinates in macromolecular systems

    NASA Astrophysics Data System (ADS)

    Noe, Frank

    To efficiently simulate and generate understanding from simulations of complex macromolecular systems, the concept of slow collective coordinates or reaction coordinates is of fundamental importance. Here we will introduce variational approaches to approximate the slow coordinates and the reaction coordinates between selected end-states, given MD simulations of the macromolecular system and a (possibly large) basis set of candidate coordinates. We will then discuss how to select physically intuitive order parameters that are good surrogates of this variationally optimal result. These results can be used to construct Markov state models or other models of the stationary and kinetic properties, and to parametrize low-dimensional or coarse-grained models of the dynamics. Deutsche Forschungsgemeinschaft, European Research Council.
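
    The variational approach can be illustrated with its simplest linear instance (often called TICA): estimate instantaneous and time-lagged covariances of the candidate basis and solve the generalized eigenvalue problem C(τ)v = λC(0)v; the top eigenvectors approximate the slow coordinates. The synthetic trajectory below hides one slow AR(1) process among noisy observables:

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(13)

        # synthetic trajectory: one slow AR(1) process mixed into three observables
        n = 20000
        slow = np.zeros(n)
        for t in range(1, n):
            slow[t] = 0.999 * slow[t - 1] + 0.05 * rng.standard_normal()
        fast = rng.standard_normal((n, 2))
        Y = np.column_stack([slow + 0.1 * fast[:, 0], fast[:, 0], fast[:, 1]])

        tau = 100
        Y0 = Y - Y.mean(axis=0)
        C0 = Y0.T @ Y0 / n                        # instantaneous covariance
        Ct = Y0[:-tau].T @ Y0[tau:] / (n - tau)   # time-lagged covariance
        Ct = (Ct + Ct.T) / 2                      # symmetrise the estimate

        # variational principle: slow coordinates solve C(tau) v = lambda C(0) v
        evals, evecs = eigh(Ct, C0)
        order = np.argsort(evals)[::-1]
        lams = np.clip(evals[order], 1e-6, 0.9999)
        print("implied timescales:", (-tau / np.log(lams)).round(1))
        print("slowest-coordinate weights:", evecs[:, order[0]].round(3))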

  4. Making the Optimal Decision in Selecting Protective Clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, J. Mark

    2008-01-15

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered. This article discusses these factors as well as surveys of plants regarding their level of usage of single use protective clothing and should help individuals making decisions about protective clothing as it applies to their application. Individuals considering using SUPC should not jump to conclusions. The survey conducted clearly indicates that plants have different drivers. An evaluation should be performed to understand the facility's true drivers for selecting clothing. It is recommended that an interdisciplinary team be formed including representatives from budgets and cost, safety, radwaste, health physics, and key user groups to perform the analysis. The right questions need to be asked and answered by the company providing the clothing to formulate a proper perspective and conclusion. The conclusions and recommendations need to be shared with senior management so that the drivers, expected results, and associated costs are understood and endorsed. In the end, the individual making the recommendation should ask himself/herself: 'Is my decision emotional, or logical and economical?' 'Have I reached the optimal decision for my plant?'

  5. An in vivo library-versus-library selection of optimized protein-protein interactions.

    PubMed

    Pelletier, J N; Arndt, K M; Plückthun, A; Michnick, S W

    1999-07-01

    We describe a rapid and efficient in vivo library-versus-library screening strategy for identifying optimally interacting pairs of heterodimerizing polypeptides. Two leucine zipper libraries, semi-randomized at the positions adjacent to the hydrophobic core, were genetically fused to either one of two designed fragments of the enzyme murine dihydrofolate reductase (mDHFR), and cotransformed into Escherichia coli. Interaction between the library polypeptides reconstituted enzymatic activity of mDHFR, allowing bacterial growth. Analysis of the resulting colonies revealed important biases in the zipper sequences relative to the original libraries, which are consistent with selection for stable, heterodimerizing pairs. Using more weakly associating mDHFR fragments, we increased the stringency of selection. We enriched the best-performing leucine zipper pairs by multiple passaging of the pooled, selected colonies in liquid culture, as the best pairs allowed for better bacterial propagation. This competitive growth allowed small differences among the pairs to be amplified, and different sequence positions were enriched at different rates. We applied these selection processes to a library-versus-library sample of 2.0 x 10(6) combinations and selected a novel leucine zipper pair that may be appropriate for use in further in vivo heterodimerization strategies.

  6. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    PubMed

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO and apply the proposed algorithm to the object-based hybrid multivariate alteration detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to the other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiments reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.
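
    A minimal sketch of the approach follows, assuming a plain binary PSO (the genetic operators of GPSO are omitted) and an illustrative mean-to-variance ratio as the fitness on random stand-in data; the paper's exact RMV definition may differ.

      # Binary PSO for feature selection with a ratio-of-mean-to-variance
      # fitness computed over the selected feature columns (illustrative).
      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 30))       # 200 objects, 30 candidate features

      def rmv_fitness(mask):
          mask = mask.astype(bool)
          if not mask.any():
              return -np.inf               # empty selections are invalid
          sel = X[:, mask]
          return abs(sel.mean()) / (sel.var() + 1e-9)

      n_particles, n_iter, n_feat = 20, 50, X.shape[1]
      pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
      vel = rng.normal(scale=0.1, size=(n_particles, n_feat))
      pbest = pos.copy()
      pbest_fit = np.array([rmv_fitness(p) for p in pos])
      gbest = pbest[np.argmax(pbest_fit)].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, n_feat))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          # sigmoid of velocity gives the probability of selecting a feature
          pos = (rng.random((n_particles, n_feat))
                 < 1 / (1 + np.exp(-vel))).astype(float)
          fit = np.array([rmv_fitness(p) for p in pos])
          improved = fit > pbest_fit
          pbest[improved] = pos[improved]
          pbest_fit[improved] = fit[improved]
          gbest = pbest[np.argmax(pbest_fit)].copy()

      print("selected features:", np.flatnonzero(gbest))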

  7. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic based on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for selecting individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for optimizing multidimensional numerical functions. The optimized functions, Griewank, Rastrigin, and Rosenbrock, are frequently used as benchmarks because of their characteristics.
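
    For reference, the three benchmarks in their commonly used forms are given below (assumed standard definitions; the exIWO algorithm itself is not reproduced). Each has a global minimum of 0 at a known point, which makes them convenient for comparing selection strategies in population-based metaheuristics.

      import numpy as np

      def rastrigin(x):
          x = np.asarray(x, dtype=float)
          return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

      def griewank(x):
          x = np.asarray(x, dtype=float)
          i = np.arange(1, x.size + 1)
          return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

      def rosenbrock(x):
          x = np.asarray(x, dtype=float)
          return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

      # All three evaluate to 0.0 at their global optima:
      print(rastrigin(np.zeros(10)), griewank(np.zeros(10)),
            rosenbrock(np.ones(10)))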

  8. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs excessive communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  9. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs excessive communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
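
    A toy version of the joint formulation in the two records above can be written with the PuLP modeling library under simplifying assumptions (made-up routes and per-switch polling costs; routing cost is ignored): choose one route per flow and a set of polled switches so that every chosen route passes at least one polled switch, at minimum polling cost.

      # Simplified joint routing / polling-switch-selection ILP (toy data).
      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

      flows = {"f1": [("s1", "s2"), ("s1", "s3")],   # candidate routes per flow
               "f2": [("s2", "s4"), ("s3", "s4")]}
      poll_cost = {"s1": 3, "s2": 1, "s3": 2, "s4": 1}

      prob = LpProblem("flow_routing_and_polling", LpMinimize)
      x = {(f, i): LpVariable(f"x_{f}_{i}", cat=LpBinary)          # route choice
           for f, routes in flows.items() for i in range(len(routes))}
      z = {s: LpVariable(f"z_{s}", cat=LpBinary) for s in poll_cost}  # polled?

      prob += lpSum(poll_cost[s] * z[s] for s in poll_cost)        # objective

      for f, routes in flows.items():
          prob += lpSum(x[f, i] for i in range(len(routes))) == 1  # one route
          for i, route in enumerate(routes):
              # a chosen route must contain at least one polled switch
              prob += x[f, i] <= lpSum(z[s] for s in route)

      prob.solve()
      print("polled switches:", [s for s in z if z[s].value() == 1])
      print("total polling cost:", value(prob.objective))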

  10. An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship

    NASA Astrophysics Data System (ADS)

    Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered for reducing transport cost. The cost of transporting CPO from one area to a CPO factory located at the port of destination may affect CPO prices and demand. In order to maintain the availability of CPO, a strategy is required to minimize transport cost. In this study, the strategy is to select the kind of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the kind of CPO ship by minimizing transport costs. Because ship selection involves random parameters, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.

  11. Selection of appropriate training and validation set chemicals for modelling dermal permeability by U-optimal design.

    PubMed

    Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E

    2013-01-01

    Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way, and solutes in the training set were relatively homogeneous. More recently, statistical methods such as D-optimal design or space-filling design have been applied, but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].

  12. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    ERIC Educational Resources Information Center

    Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.

    2016-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…

  13. Efficient association study design via power-optimized tag SNP selection

    PubMed Central

    Han, Buhm; Kang, Hyun Min; Seo, Myeong Seong; Zaitlen, Noah; Eskin, Eleazar

    2008-01-01

    Discovering statistical correlation between causal genetic variation and clinical traits through association studies is an important method for identifying the genetic basis of human diseases. Since fully resequencing a cohort is prohibitively costly, genetic association studies take advantage of local correlation structure (or linkage disequilibrium) between single nucleotide polymorphisms (SNPs) by selecting a subset of SNPs to be genotyped (tag SNPs). While many current association studies are performed using commercially available high-throughput genotyping products that define a set of tag SNPs, choosing tag SNPs remains an important problem for both custom follow-up studies as well as designing the high-throughput genotyping products themselves. The most widely used tag SNP selection method optimizes over the correlation between SNPs (r2). However, tag SNPs chosen based on an r2 criterion do not necessarily maximize the statistical power of an association study. We propose a study design framework that chooses SNPs to maximize power and efficiently measures the power through empirical simulation. Empirical results based on the HapMap data show that our method gains considerable power over a widely used r2-based method, or equivalently reduces the number of tag SNPs required to attain the desired power of a study. Our power-optimized 100k whole genome tag set provides equivalent power to the Affymetrix 500k chip for the CEU population. For the design of custom follow-up studies, our method provides up to twice the power increase using the same number of tag SNPs as r2-based methods. Our method is publicly available via web server at http://design.cs.ucla.edu. PMID:18702637

  14. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, the proposed model selects the optimal supplier and calculates the optimal product volume to purchase from that supplier, so that the inventory level stays as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.
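
    A small illustrative version of such a backward-recursion model is sketched below. All data are assumptions, and the quadratic penalty on deviation from the reference level is one possible choice of tracking cost, not necessarily the paper's.

      # Stochastic dynamic program: at each period choose a supplier and an
      # order quantity; demand is random; the stage cost is purchasing cost
      # plus a penalty on deviation of the next inventory level from REF.
      import itertools
      import numpy as np

      T, I_MAX, REF = 4, 10, 5                 # horizon, max stock, reference
      suppliers = {"A": 2.0, "B": 2.5}         # unit price per supplier
      demand_pmf = {1: 0.3, 2: 0.5, 3: 0.2}    # random demand distribution
      orders = range(0, 6)

      V = {i: 0.0 for i in range(I_MAX + 1)}   # terminal value function
      policy = {}
      for t in reversed(range(T)):
          V_new = {}
          for i in V:
              best = (np.inf, None)
              for s, q in itertools.product(suppliers, orders):
                  exp_cost = 0.0
                  for d, p in demand_pmf.items():
                      nxt = min(max(i + q - d, 0), I_MAX)
                      stage = suppliers[s] * q + (nxt - REF) ** 2
                      exp_cost += p * (stage + V[nxt])
                  if exp_cost < best[0]:
                      best = (exp_cost, (s, q))
              V_new[i] = best[0]
              policy[t, i] = best[1]
          V = V_new

      print("period 0, stock 3 -> choose (supplier, order):", policy[0, 3])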

  15. Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.

    PubMed

    Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing

    2007-01-01

    This paper describes an innovative systematic approach, namely hierarchy grey relational analysis for optimal selection of wastewater treatment alternatives, based on the application of analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied for complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A(2)/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
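
    The GRA scoring step can be sketched as follows. The data are illustrative, not the case study's; the criteria weights (e.g., from AHP) and the distinguishing coefficient rho = 0.5 are assumed inputs.

      # Grey relational analysis: rank alternatives against an ideal
      # reference sequence using the grey relational coefficient.
      import numpy as np

      # rows: alternatives; cols: criteria normalized to [0, 1], oriented so
      # that larger is better (cost criteria flipped beforehand)
      X = np.array([[0.8, 0.6, 0.9],
                    [0.7, 0.9, 0.6],
                    [0.9, 0.7, 0.8]])
      weights = np.array([0.5, 0.3, 0.2])      # e.g., from AHP
      rho = 0.5                                # distinguishing coefficient

      ref = X.max(axis=0)                      # ideal reference sequence
      delta = np.abs(X - ref)                  # deviation from the reference
      coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
      grade = coef @ weights                   # grey relational grade
      print("ranking (best first):", np.argsort(grade)[::-1])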

  16. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method and show that, compared to a state-of-the-art automatic selection method, it can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  17. Down-selection and optimization of thermal-sprayed coatings for aluminum mould tool protection and upgrade

    NASA Astrophysics Data System (ADS)

    Gibbons, Gregory John; Hansell, Robert George

    2006-09-01

    This article details the down-selection procedure for thermally sprayed coatings for aluminum injection mould tooling. A down-selection metric was used to rank a wide range of coatings. A range of high-velocity oxyfuel (HVOF) and atmospheric plasma spray (APS) systems was used to identify the optimal coating-process-system combinations. Three coatings were identified as suitable for further study: two CrC-NiCr materials and one Fe-Ni-Cr alloy. No APS-deposited coatings were suitable for the intended application due to poor substrate adhesion (SA) and very high surface roughness (SR). The properties of coatings deposited with the DJ2700 were inferior to those deposited using other HVOF systems, and thus a Taguchi L18 five-parameter, three-level optimization was used to optimize the SA of CRC-1 and FE-1. Significant mean increases in bond strength were achieved (147±30% for FE-1 [58±4 MPa] and 12±1% for CRC-1 [67±5 MPa]). An analysis of variance (ANOVA) indicated that the coating bond strengths depended primarily on powder flow rate and propane gas flow rate, and secondarily on spray distance. The optimal deposition parameters identified were (CRC-1/FE-1): O2 264/264 standard liters per minute (SLPM); C3H8 62/73 SLPM; air 332/311 SLPM; feed rate 30/28 g/min; and spray distance 150/206 mm.

  18. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points at which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
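
    The center-selection step can be sketched as follows, under a simplified reading of the abstract: each evaluated point gets two objectives, its function value and its negated minimum distance to the other evaluated points, and a naive non-dominated filter then yields the candidate centers. The data and the stand-in objective are made up.

      # Two-objective non-dominated filtering of previously evaluated points.
      import numpy as np

      rng = np.random.default_rng(3)
      pts = rng.random((30, 2))                     # evaluated points
      fvals = np.sum((pts - 0.5) ** 2, axis=1)      # stand-in expensive values

      # minimum distance of each point to the other evaluated points
      d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
      np.fill_diagonal(d, np.inf)
      min_dist = d.min(axis=1)

      # both objectives to be minimized: low f-value, high isolation
      obj = np.column_stack([fvals, -min_dist])
      front = [i for i in range(len(pts))
               if not any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                          for j in range(len(pts)))]
      print("non-dominated center candidates:", front)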

  19. The effect of cavity tuning on oxygen beam currents of an A-ECR type 14 GHz electron cyclotron resonance ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarvainen, O., E-mail: olli.tarvainen@jyu.fi; Orpana, J.; Kronholm, R.

    2016-09-15

    The efficiency of the microwave-plasma coupling plays a significant role in the production of highly charged ion beams with electron cyclotron resonance ion sources (ECRISs). The coupling properties are affected by the mechanical design of the ion source plasma chamber and microwave launching system, as well as damping of the microwave electric field by the plasma. Several experiments attempting to optimize the microwave-plasma coupling characteristics by fine-tuning the frequency of the injected microwaves have been conducted with varying degrees of success. The inherent difficulty in interpreting the frequency tuning results is that the effects of the microwave coupling system and the cavity behavior of the plasma chamber cannot be separated. A preferable approach to study the effect of the cavity properties of the plasma chamber on extracted beam currents is to adjust the cavity dimensions. The results of such cavity tuning experiments conducted with the JYFL 14 GHz ECRIS are reported here. The cavity properties were adjusted by inserting a conducting tuner rod axially into the plasma chamber. The extracted beam currents of oxygen charge states O3+–O7+ were recorded at various tuner positions and frequencies in the range of 14.00–14.15 GHz. It was observed that the tuner position affects the beam currents of high charge state ions by up to several tens of percent. In particular, it was found that at some tuner-position/frequency combinations the plasma exhibited "mode-hopping" between two operating regimes. The results improve the understanding of the role of plasma chamber cavity properties in ECRIS performance.

  20. Optimal SVM parameter selection for non-separable and unbalanced datasets.

    PubMed

    Jiang, Peng; Missoum, Samy; Chen, Zhao

    2014-10-01

    This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
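
    The three metrics are readily compared on a deliberately unbalanced toy problem. The sketch below uses scikit-learn and synthetic data, not the femur model from the study; on unbalanced data, plain accuracy can look good for a classifier biased toward the majority class, while balanced accuracy and AUC expose the problem.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                                   roc_auc_score)

      # unbalanced (90/10), noisy, hence non-separable toy data
      X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                                 flip_y=0.1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0,
                                                stratify=y)

      clf = SVC(C=1.0, gamma="scale", probability=True).fit(X_tr, y_tr)
      y_hat = clf.predict(X_te)
      y_score = clf.predict_proba(X_te)[:, 1]

      print("accuracy:         ", accuracy_score(y_te, y_hat))
      print("balanced accuracy:", balanced_accuracy_score(y_te, y_hat))
      print("AUC:              ", roc_auc_score(y_te, y_score))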

  1. Drug efficiency: a new concept to guide lead optimization programs towards the selection of better clinical candidates.

    PubMed

    Braggio, Simone; Montanari, Dino; Rossi, Tino; Ratti, Emiliangelo

    2010-07-01

    As a result of their wide acceptance and conceptual simplicity, drug-like concepts are having a major influence on the drug discovery process, particularly in the selection of the 'optimal' absorption, distribution, metabolism, excretion and toxicity and physicochemical parameter space. While they have undisputable value when assessing the potential of lead series or evaluating the inherent risk of a portfolio of drug candidates, they are much less useful for weighing up compounds when selecting the best potential clinical candidate. We introduce the concept of drug efficiency as a new tool both to guide drug discovery program teams during the lead optimization phase and to better assess the developability potential of a drug candidate.

  2. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction.

    PubMed

    Stassi, D; Dutta, S; Ma, H; Soderman, A; Pazzani, D; Gros, E; Okerlund, D; Schmidt, T G

    2016-01-01

    Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five

  3. Dynamic optimization approach for integrated supplier selection and tracking control of single product inventory system with product discount

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Heru Tjahjana, R.

    2017-01-01

    In this paper, we propose a mathematical model in dynamic/multi-stage optimization form to solve an integrated supplier selection and tracking control problem for a single-product inventory system with product discounts. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve the proposed optimization problem, determining the optimal supplier and the optimal product volume to purchase from that supplier for each time period, so that the inventory level tracks a reference trajectory given by the decision maker with minimal total cost. We give a numerical experiment to evaluate the proposed model. From the results, the optimal supplier was determined for each time period, and the inventory level followed the given reference well.

  4. Selection and evaluation of optimal two-dimensional CAIPIRINHA kernels applied to time-resolved three-dimensional CE-MRA.

    PubMed

    Weavers, Paul T; Borisch, Eric A; Riederer, Stephen J

    2015-06-01

    To develop and validate a method for choosing the optimal two-dimensional CAIPIRINHA kernel for subtraction contrast-enhanced MR angiography (CE-MRA) and estimate the degree of image quality improvement versus that of some reference acceleration parameter set at R ≥ 8. A metric based on patient-specific coil calibration information was defined for evaluating optimality of CAIPIRINHA kernels as applied to subtraction CE-MRA. Evaluation in retrospective studies using archived coil calibration data from abdomen, calf, foot, and hand CE-MRA exams was accomplished with an evaluation metric comparing the geometry factor (g-factor) histograms. Prospective calf, foot, and hand CE-MRA studies were evaluated with vessel signal-to-noise ratio (SNR). Retrospective studies show g-factor improvement for the selected CAIPIRINHA kernels was significant in the feet, moderate in the abdomen, and modest in the calves and hands. Prospective CE-MRA studies using optimal CAIPIRINHA show reduced noise amplification with identical acquisition time in studies of the feet, with minor improvements in the hands and calves. A method for selection of the optimal CAIPIRINHA kernel for high (R ≥ 8) acceleration CE-MRA exams given a specific patient and receiver array was demonstrated. CAIPIRINHA optimization appears valuable in accelerated CE-MRA of the feet and to a lesser extent in the abdomen. © 2014 Wiley Periodicals, Inc.

  5. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  6. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements for models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon in each of eleven years. In an alternative run, errors at the last year (at the

  7. Switches in Genomic GC Content Drive Shifts of Optimal Codons under Sustained Selection on Synonymous Sites

    PubMed Central

    Sun, Yu; Tamarit, Daniel

    2017-01-01

    The major codon preference model suggests that codons read by tRNAs in high concentrations are preferentially utilized in highly expressed genes. However, the identity of the optimal codons differs between species although the forces driving such changes are poorly understood. We suggest that these questions can be tackled by placing codon usage studies in a phylogenetic framework and that bacterial genomes with extreme nucleotide composition biases provide informative model systems. Switches in the background substitution biases from GC to AT have occurred in Gardnerella vaginalis (GC = 32%), and from AT to GC in Lactobacillus delbrueckii (GC = 62%) and Lactobacillus fermentum (GC = 63%). We show that despite the large effects on codon usage patterns by these switches, all three species evolve under selection on synonymous sites. In G. vaginalis, the dramatic codon frequency changes coincide with shifts of optimal codons. In contrast, the optimal codons have not shifted in the two Lactobacillus genomes despite an increased fraction of GC-ending codons. We suggest that all three species are in different phases of an on-going shift of optimal codons, and attribute the difference to a stronger background substitution bias and/or longer time since the switch in G. vaginalis. We show that comparative and correlative methods for optimal codon identification yield conflicting results for genomes in flux and discuss possible reasons for the mispredictions. We conclude that switches in the direction of the background substitution biases can drive major shifts in codon preference patterns even under sustained selection on synonymous codon sites. PMID:27540085

  8. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data

    NASA Astrophysics Data System (ADS)

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-01

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose a set of variables, from a large pool of available predictors, that is relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced; among them, those based on swarm intelligence optimization have received particular attention over the last few decades since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported using different experimental datasets, including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.

  9. An efficient swarm intelligence approach to feature selection based on invasive weed optimization: Application to multivariate calibration and classification using spectroscopic data.

    PubMed

    Sheykhizadeh, Saheleh; Naseri, Abdolhossein

    2018-04-05

    Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose a set of variables, from a large pool of available predictors, that is relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced; among them, those based on swarm intelligence optimization have received particular attention over the last few decades since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported using different experimental datasets, including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Selective arylsulfonamide inhibitors of ADAM-17: hit optimization and activity in ovarian cancer cell models.

    PubMed

    Nuti, Elisa; Casalini, Francesca; Santamaria, Salvatore; Fabbi, Marina; Carbotti, Grazia; Ferrini, Silvano; Marinelli, Luciana; La Pietra, Valeria; Novellino, Ettore; Camodeca, Caterina; Orlandini, Elisabetta; Nencetti, Susanna; Rossello, Armando

    2013-10-24

    Activated leukocyte cell adhesion molecule (ALCAM) is expressed at the surface of epithelial ovarian cancer (EOC) cells and is released in a soluble form (sALCAM) by ADAM-17-mediated shedding. This process is relevant to EOC cell motility and invasiveness, which is reduced by inhibitors of ADAM-17. In addition, ADAM-17 plays a key role in EGFR signaling and thus may represent a useful target in anticancer therapy. Herein we report our hit optimization effort to identify potent and selective ADAM-17 inhibitors, starting with previously identified inhibitor 1. A new series of secondary sulfonamido-based hydroxamates was designed and synthesized. The biological activity of the newly synthesized compounds was tested in vitro on isolated enzymes and human EOC cell lines. The optimization process led to compound 21, which showed an IC50 of 1.9 nM on ADAM-17 with greatly increased selectivity. This compound maintained good inhibitory properties on sALCAM shedding in several in vitro assays.

  11. Optimization of the excitation light sheet in selective plane illumination microscopy

    PubMed Central

    Gao, Liang

    2015-01-01

    Selective plane illumination microscopy (SPIM) allows rapid 3D live fluorescence imaging on biological specimens with high 3D spatial resolution, good optical sectioning capability and minimal photobleaching and phototoxic effect. SPIM gains its advantage by confining the excitation light near the detection focal plane, and its performance is determined by the ability to create a thin, large and uniform excitation light sheet. Several methods have been developed to create such an excitation light sheet for SPIM. However, each method has its own strengths and weaknesses, and tradeoffs must be made among different aspects in SPIM imaging. In this work, we present a strategy to select the excitation light sheet among the latest SPIM techniques, and to optimize its geometry based on spatial resolution, field of view, optical sectioning capability, and the sample to be imaged. Besides the light sheets discussed in this work, the proposed strategy is also applicable to estimate the SPIM performance using other excitation light sheets. PMID:25798312

  12. Selection of optimal welding condition for GTA pulse welding in root-pass of V-groove butt joint

    NASA Astrophysics Data System (ADS)

    Yun, Seok-Chul; Kim, Jae-Woong

    2010-12-01

    In the manufacture of high-quality welds or pipelines, a full-penetration weld has to be made along the weld joint. Therefore, root-pass welding is very important, and its conditions have to be selected carefully. In this study, an experimental method for the selection of optimal welding conditions is proposed for gas tungsten arc (GTA) pulse welding in the root pass, which is done along the V-grooved butt-weld joint. This method uses response surface analysis, in which the width and height of the back bead are chosen as quality variables of the weld. The overall desirability function, which combines the desirability functions for the two quality variables, is used as the objective function to obtain the optimal welding conditions. In our experiments, the target values of back bead width and height were 4 mm and zero, respectively, for a V-grooved butt-weld joint of a 7-mm-thick steel plate. The optimal welding conditions produced a back bead profile (width and height) of 4.012 mm and 0.02 mm. A series of welding tests revealed that a uniform, full-penetration weld bead can be obtained by adopting the optimal welding conditions determined according to the proposed method.
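
    A hedged sketch of the overall-desirability step follows. Simple linear target-is-best desirabilities and a geometric mean are assumed; the paper's exact desirability shapes, tolerances, and response-surface models are not reproduced.

      # Overall desirability from two individual desirabilities: back-bead
      # width (target 4 mm) and height (target 0). Tolerances are made up.
      import numpy as np

      def desirability_target(y, target, tol):
          """Linear target-is-best: 1 at the target, 0 beyond +/- tol."""
          return np.clip(1 - np.abs(y - target) / tol, 0.0, 1.0)

      def overall_desirability(width_mm, height_mm):
          d_w = desirability_target(width_mm, target=4.0, tol=1.0)
          d_h = desirability_target(height_mm, target=0.0, tol=0.5)
          return np.sqrt(d_w * d_h)        # geometric mean of the two terms

      # e.g., the reported optimum (4.012 mm, 0.02 mm) scores near 1:
      print(overall_desirability(4.012, 0.02))   # ~0.97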

  13. Nonparametric density estimation and optimal bandwidth selection for protein unfolding and unbinding data

    NASA Astrophysics Data System (ADS)

    Bura, E.; Zhmurov, A.; Barsegov, V.

    2009-01-01

    Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes allow quantification of the forced unfolding and unbinding transitions, and mapping the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of the protein unfolding and unbinding requires the use of a bandwidth. The choice of this quantity is typically subjective as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the "optimal bandwidth" for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal skewed, long-tailed distributed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
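
    Two common "optimal bandwidth" selectors can serve as a concrete illustration: Silverman's rule of thumb and a cross-validated bandwidth chosen with scikit-learn. The sketch below is applied to skewed toy data rather than real force-spectroscopy measurements.

      import numpy as np
      from sklearn.neighbors import KernelDensity
      from sklearn.model_selection import GridSearchCV

      rng = np.random.default_rng(4)
      # skewed, long-tailed stand-in for unfolding forces
      forces = rng.lognormal(mean=3.0, sigma=0.4, size=300)

      # Silverman's rule of thumb for a Gaussian kernel
      h_silverman = 1.06 * forces.std(ddof=1) * len(forces) ** (-1 / 5)

      # cross-validated bandwidth (maximizes held-out log-likelihood)
      grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                          {"bandwidth": np.linspace(0.5 * h_silverman,
                                                    2.0 * h_silverman, 20)},
                          cv=5)
      grid.fit(forces[:, None])
      print("Silverman bandwidth:", h_silverman)
      print("CV bandwidth:       ", grid.best_params_["bandwidth"])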

  14. Navigating the auditory scene: an expert role for the hippocampus.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.

  15. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  16. Parallel medicinal chemistry approaches to selective HDAC1/HDAC2 inhibitor (SHI-1:2) optimization.

    PubMed

    Kattar, Solomon D; Surdi, Laura M; Zabierek, Anna; Methot, Joey L; Middleton, Richard E; Hughes, Bethany; Szewczak, Alexander A; Dahlberg, William K; Kral, Astrid M; Ozerova, Nicole; Fleming, Judith C; Wang, Hongmei; Secrist, Paul; Harsch, Andreas; Hamill, Julie E; Cruz, Jonathan C; Kenific, Candia M; Chenard, Melissa; Miller, Thomas A; Berk, Scott C; Tempest, Paul

    2009-02-15

    The successful application of both solid and solution phase library synthesis, combined with tight integration into the medicinal chemistry effort, resulted in the efficient optimization of a novel structural series of selective HDAC1/HDAC2 inhibitors by the MRL-Boston Parallel Medicinal Chemistry group. An initial lead from a small parallel library was found to be potent and selective in biochemical assays. Advanced compounds were the culmination of iterative library design and possess excellent biochemical and cellular potency, as well as acceptable PK and efficacy in animal models.

  17. Optimization of Cu-Zn Massive Sulphide Flotation by Selective Reagents

    NASA Astrophysics Data System (ADS)

    Soltani, F.; Koleini, S. M. J.; Abdollahy, M.

    2014-10-01

    Selective flotation of base metal sulphide minerals can be achieved by using selective reagents. Sequential flotation of chalcopyrite-sphalerite from Taknar (Iran) massive sulphide ore with 3.5 % Zn and 1.26 % Cu was studied. D-optimal design of response surface methodology was used. Four mixed collector types (Aer238 + SIPX, Aero3477 + SIPX, TC1000 + SIPX and X231 + SIPX), two depressant systems (CuCN-ZnSO4 and dextrin-ZnSO4), pH and ZnSO4 dosage were considered as operational factors in the first stage of flotation. Different conditions of pH, CuSO4 dosage and SIPX dosage were studied for sphalerite flotation from first stage tailings. Aero238 + SIPX induced better selectivity for chalcopyrite against pyrite and sphalerite. Dextrin-ZnSO4 was as effective as CuCN-ZnSO4 in sphalerite-pyrite depression. Under optimum conditions, Cu recovery, Zn recovery and pyrite content in Cu concentrate were 88.99, 33.49 and 1.34 % by using Aero238 + SIPX as mixed collector, CuCN-ZnSO4 as depressant system, at ZnSO4 dosage of 200 g/t and pH 10.54. When CuCN was used at the first stage, CuSO4 consumption increased and Zn recovery decreased during the second stage. Maximum Zn recovery was 72.19 % by using 343.66 g/t of CuSO4, 22.22 g/t of SIPX and pH 9.99 at the second stage.

  18. State-selective optimization of local excited electronic states in extended systems

    NASA Astrophysics Data System (ADS)

    Kovyrshin, Arseny; Neugebauer, Johannes

    2010-11-01

    Standard implementations of time-dependent density-functional theory (TDDFT) for the calculation of excitation energies give access to a number of the lowest-lying electronic excitations of a molecule under study. For extended systems, this can become cumbersome if a particular excited state is sought after, because many electronic transitions may be present. This often means that even for systems of moderate size, a multitude of excited states needs to be calculated to cover a certain energy range. Here, we present an algorithm for the selective determination of predefined excited electronic states in an extended system. A guess transition density in terms of orbital transitions has to be provided for the excitation that shall be optimized. The approach employs root-homing techniques together with iterative subspace diagonalization methods to optimize the electronic transition. We illustrate the advantages of this method for solvated molecules, core excitations of metal complexes, and adsorbates at cluster surfaces. In particular, we study the local π→π* excitation of a pyridine molecule adsorbed at a silver cluster. It is shown that the method works very efficiently even for high-lying excited states. We demonstrate that the assumption of a single, well-defined local excitation is, in general, not justified for extended systems, which can lead to root-switching during optimization. In those cases, the method can give important information about the spectral distribution of the orbital transition employed as a guess.

  19. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    NASA Astrophysics Data System (ADS)

    Kul'ka, Jozef; Mantič, Martin; Kopas, Melichar; Faltinová, Eva; Kachman, Daniel

    2017-02-01

    The article presents a heuristic optimization approach to selecting a suitable transport connection within a city public transport network. The methodology was applied to part of the public transport system in Košice, the second largest city in the Slovak Republic, whose public transport network forms a complex system consisting of three transport modes: bus, tram, and trolley-bus. The solution focused on examining the individual transport services and their interconnection at relevant interchange points.

  20. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm is proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (the hybrid genetic-random forests algorithm, hybrid particle swarm-random forests algorithm and hybrid fish swarm-random forests algorithm) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced by this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores surpass those of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
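
    The OOB-error objective is easy to reproduce with scikit-learn. In the sketch below, a plain grid search stands in for the paper's genetic, particle swarm, and fish swarm searches, on made-up data.

      # Tune random-forest hyper-parameters by minimizing the out-of-bag
      # error, which needs no separate validation set.
      from itertools import product
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      X, y = make_classification(n_samples=600, n_features=20,
                                 weights=[0.8, 0.2], random_state=0)

      best = (1.0, None)
      for n_trees, max_feat in product([100, 300], [2, 4, 8]):
          rf = RandomForestClassifier(n_estimators=n_trees,
                                      max_features=max_feat,
                                      oob_score=True, bootstrap=True,
                                      random_state=0).fit(X, y)
          oob_error = 1.0 - rf.oob_score_   # the minimized objective
          if oob_error < best[0]:
              best = (oob_error, (n_trees, max_feat))

      print("min OOB error:", best[0],
            "with (n_estimators, max_features):", best[1])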

  1. Imidazopyridine CB2 agonists: optimization of CB2/CB1 selectivity and implications for in vivo analgesic efficacy.

    PubMed

    Trotter, B Wesley; Nanda, Kausik K; Burgey, Christopher S; Potteiger, Craig M; Deng, James Z; Green, Ahren I; Hartnett, John C; Kett, Nathan R; Wu, Zhicai; Henze, Darrell A; Della Penna, Kimberly; Desai, Reshma; Leitl, Michael D; Lemaire, Wei; White, Rebecca B; Yeh, Suzie; Urban, Mark O; Kane, Stefanie A; Hartman, George D; Bilodeau, Mark T

    2011-04-15

    A new series of imidazopyridine CB2 agonists is described. Structural optimization improved CB2/CB1 selectivity in this series and conferred physical properties that facilitated high in vivo exposure, both centrally and peripherally. Administration of a highly selective CB2 agonist in a rat model of analgesia was ineffective despite substantial CNS exposure, while administration of a moderately selective CB2/CB1 agonist exhibited significant analgesic effects. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Optimization of fermentation parameters to study the behavior of selected lactic cultures on soy solid state fermentation.

    PubMed

    Rodríguez de Olmos, A; Bru, E; Garro, M S

    2015-03-02

    The use of solid-state fermentation (SSF) has been spurred by the demand for natural and healthy products. Lactic acid bacteria and bifidobacteria play a leading role in the production of novel functional foods, and their behavior is practically unknown in these systems. Soy is an excellent substrate for the production of functional foods due to its low cost and nutritional value. The aim of this work was to optimize different parameters involved in solid state fermentation (SSF) using selected lactic cultures to improve soybean substrate as a possible strategy for the elaboration of new soy foods with enhanced functional and nutritional properties. Soy flour and selected lactic cultures were used under different conditions to optimize the soy SSF. The measured responses were bacterial growth, free amino acids and β-glucosidase activity, which were analyzed by applying response surface methodology. Based on the proposed statistical model, different fermentation conditions were set up by varying the moisture content (50-80%) of the soy substrate and the incubation temperature (31-43°C). The effect of inoculum amount was also investigated. These studies demonstrated the ability of the selected strains (Lactobacillus paracasei subsp. paracasei and Bifidobacterium longum) to grow with strain-dependent behavior on the SSF system. β-Glucosidase activity was evident in both strains, and L. paracasei subsp. paracasei was able to increase the free amino acids at the end of fermentation under the assayed conditions. The statistical model allowed the optimization of fermentation parameters in soy SSF by the selected lactic strains. In addition, the possibility of working with lower initial bacterial amounts while obtaining results with significant technological impact was demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Influence of monitoring data selection for optimization of a steady state multimedia model on the magnitude and nature of the model prediction bias.

    PubMed

    Kim, Hee Seok; Lee, Dong Soo

    2017-11-01

    SimpleBox is an important multimedia model used to estimate the predicted environmental concentration for screening-level exposure assessment. The main objectives were (i) to quantitatively assess how the magnitude and nature of the prediction bias of SimpleBox vary with the selection of the observed concentration data set used for optimization and (ii) to present the prediction performance of the optimized SimpleBox. The optimization was conducted using a total of 9604 observed multimedia data for 42 chemicals in four groups (i.e., polychlorinated dibenzo-p-dioxins/furans (PCDDs/Fs), polybrominated diphenyl ethers (PBDEs), phthalates, and polycyclic aromatic hydrocarbons (PAHs)). The model performance was assessed based on the magnitude and skewness of the prediction bias. Monitoring data selection, in terms of the number of data points and the kinds of chemicals, plays a significant role in optimization of the model. The coverage of the physicochemical properties was found to be very important for reducing the prediction bias. This suggests that observed data should be selected such that the range of physicochemical properties (such as vapor pressure, octanol-water partition coefficient, octanol-air partition coefficient, and Henry's law constant) of the selected chemical groups is as wide as possible. With optimization, about 55%, 90%, and 98% of the total number of observed concentration ratios were predicted within factors of three, 10, and 30, respectively, with negligible skewness. Copyright © 2017 Elsevier Ltd. All rights reserved.
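
    The factor-of-N agreement statistic quoted above is straightforward to compute; the short sketch below does so for a handful of placeholder predicted/observed pairs (the numbers are illustrative, not SimpleBox output).

```python
import numpy as np

predicted = np.array([1.2, 0.4, 9.0, 0.05, 2.0])   # hypothetical model output
observed = np.array([1.0, 1.1, 3.5, 0.06, 1.8])    # hypothetical measurements

ratio = predicted / observed
for factor in (3, 10, 30):
    # fraction of predicted/observed ratios within [1/factor, factor]
    within = np.mean((ratio >= 1.0 / factor) & (ratio <= factor))
    print("within a factor of %2d: %.0f%%" % (factor, 100 * within))
```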

  4. Design-Optimization and Material Selection for a Proximal Radius Fracture-Fixation Implant

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Xie, X.; Arakere, G.; Grujicic, A.; Wagner, D. W.; Vallejo, A.

    2010-11-01

    The problem of the optimal size, shape, and placement of a proximal radius-fracture fixation-plate is addressed computationally using a combined finite-element/design-optimization procedure. To expand the set of physiological loading conditions experienced by the implant during normal everyday activities of the patient, beyond those typically covered by pre-clinical implant-evaluation testing procedures, the case of a wheelchair push exertion is considered. Toward that end, a musculoskeletal multi-body inverse-dynamics analysis of a human propelling a wheelchair is carried out. The results obtained are used as input to a finite-element structural analysis for evaluation of the maximum stress and fatigue life of the parametrically defined implant design. While optimizing the design of the radius-fracture fixation-plate, realistic functional requirements pertaining to the attainment of the required device safety factor and longevity/lifecycle were considered. It is argued that the type of analyses employed in the present work should be: (a) used to complement the standard experimental pre-clinical implant-evaluation tests (tests which normally include a limited number of daily-living physiological loading conditions and which rely on single pass/fail outcomes/decisions with respect to a set of lower-bound implant-performance criteria) and (b) integrated early into the implant design and material/manufacturing-route selection process.

  5. Selection of optimal multispectral imaging system parameters for small joint arthritis detection

    NASA Astrophysics Data System (ADS)

    Dolenec, Rok; Laistler, Elmar; Stergar, Jost; Milanic, Matija

    2018-02-01

    Early detection and treatment of arthritis are essential for a successful outcome of the treatment, but this has proven to be very challenging with existing diagnostic methods. Novel methods based on optical imaging of the affected joints are becoming an attractive alternative. A non-contact multispectral imaging (MSI) system for imaging the small joints of human hands and feet is being developed. In this work, a numerical simulation of the MSI system is presented. The purpose of the simulation is to determine the optimal design parameters. Inflamed and unaffected human joint models were constructed with realistic geometry and tissue distributions, based on an MRI scan of a human finger with a spatial resolution of 0.2 mm. The light transport simulation is based on a weighted-photon 3D Monte Carlo method utilizing CUDA GPU acceleration. A uniform illumination of the finger within the 400-1100 nm spectral range was simulated, and the photons exiting the joint were recorded using different acceptance angles. From the obtained reflectance and transmittance images, the spectral and spatial features most indicative of inflammation were identified, and the optimal acceptance angle and spectral bands were determined. This study demonstrates that proper selection of MSI system parameters critically affects the ability of an MSI system to discriminate between unaffected and inflamed joints. The presented system design optimization approach could be applied to other pathologies.
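
    To make the weighted-photon idea concrete, here is a heavily simplified, hedged sketch: photons random-walk through a homogeneous 1D slab, carry a weight that is attenuated at each scattering event, and are tallied as diffuse reflectance or transmittance. The optical constants and geometry are illustrative assumptions; the paper's simulation is fully 3D, anatomically realistic, and CUDA-accelerated.

```python
import numpy as np

rng = np.random.default_rng(7)
mu_a, mu_s, thickness = 0.1, 10.0, 1.0   # absorption, scattering [1/mm]; slab [mm]
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
N = 5000                                 # number of photon packets

refl = trans = 0.0
for _ in range(N):
    z, direction, w = 0.0, 1.0, 1.0      # depth, +/-1 direction, photon weight
    while True:
        z += direction * rng.exponential(1.0 / mu_t)   # free path to next event
        if z < 0.0:
            refl += w                    # escaped back through the surface
            break
        if z > thickness:
            trans += w                   # escaped through the far side
            break
        w *= albedo                      # deposit part of the weight (absorption)
        direction = rng.choice([-1.0, 1.0])  # isotropic scattering (1D)
        if w < 1e-3:                     # crude termination (no Russian roulette)
            break

print("diffuse reflectance R = %.3f, transmittance T = %.3f" % (refl / N, trans / N))
```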

  6. An analysis of an optimal selection process for characteristics and technical performance of baseball pitchers.

    PubMed

    Lin, Wen-Bin; Tung, I-Wu; Chen, Mei-Jung; Chen, Mei-Yen

    2011-08-01

    Selection of a qualified pitcher has previously relied on qualitative indices; here, both quantitative and qualitative indices, including pitching statistics, defense, mental skills, experience, and managers' recognition, were collected, and an analytic hierarchy process was used to rank baseball pitchers. The participants were 8 experts who ranked the characteristics and statistics of 15 baseball pitchers who comprised the first round of potential representatives for the Chinese Taipei National Baseball team. The results indicated a selection rate that was 91% consistent with the official national team roster, as the 11 pitchers with the highest scores who were recommended as optimal choices to be official members of the Chinese Taipei National Baseball team actually participated in the 2009 Baseball World Cup. An analytic hierarchy can aid in the selection of qualified pitchers, depending on situational and practical needs; the method could be extended to other sports and team-selection situations.
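
    The core computation of the analytic hierarchy process is easy to sketch: criterion weights are derived from a pairwise-comparison matrix via its principal eigenvector. The 3x3 matrix below (for example, pitching statistics vs. mental skills vs. experience) is an illustrative assumption, not the study's data.

```python
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority weights
print("criterion weights:", np.round(weights, 3))

# Consistency index; dividing by the tabulated random index gives the usual
# consistency ratio, commonly accepted when below 0.1.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
print("consistency index: %.3f" % CI)
```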

  7. A multi-objective optimization approach for the selection of working fluids of geothermal facilities: Economic, environmental and social aspects.

    PubMed

    Martínez-Gomez, Juan; Peña-Lamas, Javier; Martín, Mariano; Ponce-Ortega, José María

    2017-12-01

    The selection of the working fluid for Organic Rankine Cycles has traditionally been addressed with systematic heuristic methods, which perform a characterization and prior selection considering mainly one objective, thus avoiding a selection that simultaneously considers the objectives related to sustainability and safety. The objective of this work is to propose a methodology for the optimal selection of the working fluid for Organic Rankine Cycles. The model is presented as a multi-objective approach, which simultaneously considers economic, environmental and safety aspects. The economic objective function considers the profit obtained by selling the energy produced. Safety was evaluated in terms of the individual risk for each of the components of the Organic Rankine Cycle and was formulated as a function of the operating conditions and hazardous properties of each working fluid. The environmental function is based on carbon dioxide emissions, considering carbon dioxide mitigation, emissions due to the use of cooling water, as well as emissions due to material release. The methodology was applied to the case of geothermal facilities to select the optimal working fluid, although it can be extended to waste heat recovery. The results show that hydrocarbons represent better solutions; thus, among a list of 24 working fluids, toluene is selected as the best fluid. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Selected Bibliography on Optimizing Techniques in Statistics

    DTIC Science & Technology

    1981-08-01

    Problems in business, industry and government are formulated as optimization problems. Topics in optimization constitute an essential area of study in... numerical, (iii) mathematical programming, and (iv) variational. We provide pertinent references with statistical applications in the above areas in Part I... TMS Advanced Studies in Management Sciences, North-Holland Publishing Company, Amsterdam. (To appear.) Spang, H. A. (1962). A review of minimization

  9. Optimal selection of biochars for remediating metals ...

    EPA Pesticide Factsheets

    Approximately 500,000 abandoned mines across the U.S. pose a considerable, pervasive risk to human health and the environment due to possible exposure to the residuals of heavy metal extraction. Historically, a variety of chemical and biological methods have been used to reduce the bioavailability of the metals at mine sites. Biochar, with its potential to complex and immobilize heavy metals, is an emerging alternative for reducing bioavailability. Furthermore, biochar has been reported to improve soil conditions for plant growth and can be used for promoting the establishment of a soil-stabilizing native plant community to reduce offsite movement of metal-laden waste materials. Because biochar properties depend upon feedstock selection, pyrolysis production conditions, and the activation procedures used, biochars can be designed to meet specific remediation needs. As a result, biochar with specific properties can be produced to correspond to specific soil remediation situations. However, techniques are needed to optimally match biochar characteristics with metal-contaminated soils to effectively reduce metal bioavailability. Here we present experimental results used to develop a generalized method for evaluating the ability of biochar to reduce metals in mine spoil soil from an abandoned Cu and Zn mine. Thirty-eight biochars were produced from approximately 20 different feedstocks via slow pyrolysis or gasification, and were allowed to react with a f

  10. Optimization of individualized graft composition: CD3/CD19 depletion combined with CD34 selection for haploidentical transplantation.

    PubMed

    Huenecke, Sabine; Bremm, Melanie; Cappel, Claudia; Esser, Ruth; Quaiser, Andrea; Bonig, Halvard; Jarisch, Andrea; Soerensen, Jan; Klingebiel, Thomas; Bader, Peter; Koehl, Ulrike

    2016-09-01

    Excessive T-cell depletion (TCD) is a prerequisite for graft manufacturing in haploidentical stem cell (SC) transplantation by using either CD34 selection or direct TCD such as CD3/CD19 depletion. To optimize graft composition we compared 1) direct or indirect TCD only, 2) a combination of CD3/CD19-depleted with CD34-selected grafts, or 3) TCD twice for depletion improvement based on our 10-year experience with 320 separations in graft manufacturing and quality control. SC recovery was significantly higher (85%, n = 187 vs. 73%, n = 115; p < 0.0001), but TCD was inferior (median log depletion, -3.6 vs. -5.2) for CD3/CD19 depletion compared to CD34 selection, respectively. For end products with less than -2.5 log TCD, a second depletion step led to a successful improvement in TCD. Thawing of grafts showed a high viability and recovery of SCs, but low NK-cell yield. To optimize individualized graft engineering, a calculator was developed to estimate the results of the final graft based on the content of CD34+ and CD3+ cells in the leukapheresis product. Finally, calculated splitting of the starting product followed by CD3/19 depletion together with CD34+ graft manipulation may enable the composition of optimized grafts with high CD34+-cell and minimal T-cell content. © 2016 AABB.

  11. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    PubMed Central

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has previously been demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624

  12. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence.

    PubMed

    Li, Sui-Xian

    2018-05-07

    Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected filter set as the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis for the first filter, a generally uniform distribution of the filters' peaks, and substantial overlaps of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
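
    A hedged sketch of the underlying vector analysis follows: a filter set is grown greedily so that the stacked transmittance matrix stays well conditioned, seeded (as in the traditional MLI approach) with the maximum-norm filter. The random transmittance curves are placeholders for real commercial filter data, and the greedy loop is an illustration rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
filters = rng.random((40, 31))        # 40 candidate filters x 31 wavelengths

# seed with the maximum L2-norm filter, as in the traditional MLI approach
selected = [int(np.argmax(np.linalg.norm(filters, axis=1)))]
while len(selected) < 6:
    best_j, best_cond = None, np.inf
    for j in range(len(filters)):
        if j in selected:
            continue
        # condition number of the candidate set (columns = filters)
        cond = np.linalg.cond(filters[selected + [j]].T)
        if cond < best_cond:
            best_j, best_cond = j, cond
    selected.append(best_j)

print("selected filter indices:", selected, "(condition number %.1f)" % best_cond)
```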

  13. Making the optimal decision in selecting protective clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, J. Mark

    2007-07-01

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered, including - Protection; - Personnel Contamination; - Cost; - Radwaste; - Comfort; - Convenience; - Logistics/Rad Material Considerations; - Reject Rate of Laundered Clothing; - Durability; - Security; - Personnel Safety including Heat Stress; - Disposition of Gloves and Booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials or switching to Single Use Protective Clothing (SUPC) from traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch. In other cases, after switching to SUPC for a period of time, some plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelating factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  14. Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.

    2018-01-01

    This article describes the adoption of the Taguchi method in the selective laser melting process for a sector of a combustion chamber, using numerical and physical experiments to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, and two factors compensating for the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factor variation at three levels (L9). As quality criteria, the distortion values for 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, grey relational analysis was used to solve the optimization problem for multiple quality parameters. As a result, according to the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.

  15. Development of a multiobjective optimization tool for the selection and placement of best management practices for nonpoint source pollution control

    NASA Astrophysics Data System (ADS)

    Maringanti, Chetan; Chaubey, Indrajeet; Popp, Jennie

    2009-06-01

    Best management practices (BMPs) are effective in reducing the transport of agricultural nonpoint source pollutants to receiving water bodies. However, selection of BMPs for placement in a watershed requires optimization of the available resources to obtain the maximum possible pollution reduction. In this study, an optimization methodology is developed to select and place BMPs in a watershed to provide solutions that are both economically and ecologically effective. This novel approach develops and utilizes a BMP tool, a database that stores the pollution reduction and cost information of the different BMPs under consideration. The BMP tool replaces the dynamic linkage of the distributed-parameter watershed model during optimization and therefore reduces the computation time considerably. The total pollutant load from the watershed and the net cost increase from the baseline were the two objective functions minimized during the optimization process. The optimization model, consisting of a multiobjective genetic algorithm (NSGA-II) in combination with a watershed simulation tool (the Soil and Water Assessment Tool (SWAT)), was developed and tested for nonpoint source pollution control in the L'Anguille River watershed located in eastern Arkansas. The optimized solutions provided a trade-off between the two objective functions for sediment, phosphorus, and nitrogen reduction. The results indicated that buffer strips were very effective in controlling the nonpoint source pollutants leaving the croplands. The optimized BMP plans resulted in potential reductions of 33%, 32%, and 13% in sediment, phosphorus, and nitrogen loads, respectively, from the watershed.

  16. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    PubMed

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. Firstly, textural patterns of breast tissue are analyzed using several multi-scale textural descriptors based on wavelets and the gray level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the selected features and the classifier hyper-parameters. Furthermore, the obtained results indicate the promising performance of the proposed textural features, and more specifically those based on the co-occurrence matrix of the wavelet image representation. Copyright © 2015 Elsevier Ltd. All rights reserved.
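
    The following is a minimal sketch of PSO-based model selection in the spirit described above, assuming scikit-learn: particles search the (log C, log gamma) plane of an SVM, with cross-validated accuracy as fitness. The dataset, swarm size, and PSO coefficients are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
rng = np.random.default_rng(1)

def fitness(p):                        # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

pos = rng.uniform(-3, 3, size=(10, 2))          # 10 particles
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_f)].copy()

for _ in range(15):
    r1, r2 = rng.random((2, 10, 2))
    # inertia + cognitive + social velocity update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmax(pbest_f)].copy()

print("best (log10 C, log10 gamma):", np.round(gbest, 2),
      "CV accuracy %.3f" % pbest_f.max())
```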

  17. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780

  18. Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.

    PubMed

    Krepkovich, Eileen T; Perreault, Eric J

    2008-01-01

    This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.

  19. Demonstration optimization analyses of pumping from selected Arapahoe aquifer municipal wells in the west-central Denver Basin, Colorado, 2010–2109

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.

    2012-01-01

    Declining water levels caused by withdrawals of water from wells in the west-central part of the Denver Basin bedrock-aquifer system have raised concerns with respect to the ability of the aquifer system to sustain production. The Arapahoe aquifer in particular is heavily used in this area. Two optimization analyses were conducted to demonstrate approaches that could be used to evaluate possible future pumping scenarios intended to prolong the productivity of the aquifer and to delay excessive loss of saturated thickness. These analyses were designed as demonstrations only, and were not intended as a comprehensive optimization study. Optimization analyses were based on a groundwater-flow model of the Denver Basin developed as part of a recently published U.S. Geological Survey groundwater-availability study. For each analysis an optimization problem was set up to maximize total withdrawal rate, subject to withdrawal-rate and hydraulic-head constraints, for 119 selected municipal water-supply wells located in 96 model cells. The optimization analyses were based on 50- and 100-year simulations of groundwater withdrawals. The optimized total withdrawal rate for all selected wells for a 50-year simulation time was about 58.8 cubic feet per second. For an analysis in which the simulation time and head-constraint time were extended to 100 years, the optimized total withdrawal rate for all selected wells was about 53.0 cubic feet per second, demonstrating that a reduction in withdrawal rate of about 10 percent may extend the time before the hydraulic-head constraints are violated by 50 years, provided that pumping rates are optimally distributed. Analysis of simulation results showed that initially, the pumping produces water primarily by release of water from storage in the Arapahoe aquifer. However, because confining layers between the Denver and Arapahoe aquifers are thin, in less than 5 years, most of the water removed by managed-flows pumping likely would be supplied

  20. Optimal selection of markers for validation or replication from genome-wide association studies.

    PubMed

    Greenwood, Celia M T; Rangrej, Jagadish; Sun, Lei

    2007-07-01

    With reductions in genotyping costs and the fast pace of improvements in genotyping technology, it is not uncommon for the individuals in a single study to undergo genotyping using several different platforms, where each platform may contain different numbers of markers selected via different criteria. For example, a set of cases and controls may be genotyped at markers in a small set of carefully selected candidate genes, and shortly thereafter, the same cases and controls may be used for a genome-wide single nucleotide polymorphism (SNP) association study. After such initial investigations, often, a subset of "interesting" markers is selected for validation or replication. Specifically, by validation, we refer to the investigation of associations between the selected subset of markers and the disease in independent data. However, it is not obvious how to choose the best set of markers for this validation. There may be a prior expectation that some sets of genotyping data are more likely to contain real associations. For example, it may be more likely for markers in plausible candidate genes to show disease associations than markers in a genome-wide scan. Hence, it would be desirable to select proportionally more markers from the candidate gene set. When a fixed number of markers are selected for validation, we propose an approach for identifying an optimal marker-selection configuration by basing the approach on minimizing the stratified false discovery rate. We illustrate this approach using a case-control study of colorectal cancer from Ontario, Canada, and we show that this approach leads to substantial reductions in the estimated false discovery rates in the Ontario dataset for the selected markers, as well as reductions in the expected false discovery rates for the proposed validation dataset. Copyright 2007 Wiley-Liss, Inc.
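
    A hedged sketch of the underlying idea: when a fixed total number of markers is carried forward, the split across strata (e.g., candidate genes vs. genome-wide scan) can be chosen to minimize the estimated false discoveries among the selected markers. The BH-style plug-in estimate and the simulated p-values below are illustrative assumptions, not the authors' exact stratified false discovery rate procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
# simulated p-values: a few true signals mixed into uniform nulls
strata = {
    "candidate": np.sort(np.concatenate([rng.uniform(0, 1e-3, 5),
                                         rng.uniform(0, 1, 95)])),
    "genomewide": np.sort(np.concatenate([rng.uniform(0, 1e-4, 3),
                                          rng.uniform(0, 1, 9997)])),
}

def expected_false(pvals, k):
    # BH-style plug-in estimate of false discoveries among the top-k markers
    return len(pvals) * pvals[k - 1] if k > 0 else 0.0

TOTAL = 50
best = min(
    ((k, TOTAL - k,
      expected_false(strata["candidate"], k)
      + expected_false(strata["genomewide"], TOTAL - k))
     for k in range(0, min(TOTAL, len(strata["candidate"])) + 1)),
    key=lambda t: t[2],
)
print("take %d candidate-gene and %d genome-wide markers "
      "(~%.1f expected false discoveries)" % best)
```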

  1. Markov Chain Model-Based Optimal Cluster Heads Selection for Wireless Sensor Networks

    PubMed Central

    Ahmed, Gulnaz; Zou, Jianhua; Zhao, Xi; Sadiq Fareed, Mian Muhammad

    2017-01-01

    A longer network lifetime for Wireless Sensor Networks (WSNs) is a goal directly related to energy consumption. This energy consumption issue becomes more challenging when the energy load is not properly distributed in the sensing area. A hierarchical clustering architecture is the best choice for these kinds of issues. In this paper, we introduce a novel clustering protocol called Markov chain model-based optimal cluster heads (MOCHs) selection for WSNs. In our proposed model, we introduce a simple strategy for selecting the optimal number of cluster heads to overcome the problem of uneven energy distribution in the network. The attractiveness of our model is that the base station (BS) controls the number of cluster heads while the cluster heads control the cluster members in each cluster in such a restricted manner that a uniform and even load is ensured in each cluster. We perform an extensive range of simulations using five quality measures, namely: the lifetime of the network, the stable and unstable regions in the lifetime of the network, the throughput of the network, the number of cluster heads in the network, and the transmission time of the network. We compare MOCHs against Sleep-awake Energy Efficient Distributed (SEED) clustering, Artificial Bee Colony (ABC), Zone Based Routing (ZBR), and Centralized Energy Efficient Clustering (CEEC) using the above-discussed quality metrics and found that the lifetime of the proposed model is almost 1095, 2630, 3599, and 2045 rounds (time steps) greater than SEED, ABC, ZBR, and CEEC, respectively. The obtained results demonstrate that MOCHs is better than SEED, ABC, ZBR, and CEEC in terms of energy efficiency and network throughput. PMID:28241492

  2. Tapping insertional torque allows prediction for better pedicle screw fixation and optimal screw size selection.

    PubMed

    Helgeson, Melvin D; Kang, Daniel G; Lehman, Ronald A; Dmitriev, Anton E; Luhmann, Scott J

    2013-08-01

    There is currently no reliable technique for intraoperative assessment of pedicle screw fixation strength and optimal screw size. Several studies have evaluated pedicle screw insertional torque (IT) and its direct correlation with pullout strength. However, there is limited clinical application with pedicle screw IT as it must be measured during screw placement and rarely causes the spine surgeon to change screw size. To date, no study has evaluated tapping IT, which precedes screw insertion, and its ability to predict pedicle screw pullout strength. The objective of this study was to investigate tapping IT and its ability to predict pedicle screw pullout strength and optimal screw size. In vitro human cadaveric biomechanical analysis. Twenty fresh-frozen human cadaveric thoracic vertebral levels were prepared and dual-energy radiographic absorptiometry scanned for bone mineral density (BMD). All specimens were osteoporotic with a mean BMD of 0.60 ± 0.07 g/cm(2). Five specimens (n=10) were used to perform a pilot study, as there were no previously established values for optimal tapping IT. Each pedicle during the pilot study was measured using a digital caliper as well as computed tomography measurements, and the optimal screw size was determined to be equal to or the first size smaller than the pedicle diameter. The optimal tap size was then selected as the tap diameter 1 mm smaller than the optimal screw size. During optimal tap size insertion, all peak tapping IT values were found to be between 2 in-lbs and 3 in-lbs. Therefore, the threshold tapping IT value for optimal pedicle screw and tap size was determined to be 2.5 in-lbs, and a comparison tapping IT value of 1.5 in-lbs was selected. Next, 15 test specimens (n=30) were measured with digital calipers, probed, tapped, and instrumented using a paired comparison between the two threshold tapping IT values (Group 1: 1.5 in-lbs; Group 2: 2.5 in-lbs), randomly assigned to the left or right pedicle on each

  3. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    PubMed

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and predictions were then made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (the last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to GoF-optimal was the strategy using a prediction base of the last 5 years. The GoF-optimal approach can be used as a selection criterion for finding an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
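
    A simplified sketch of the base-selection idea (not the authors' exact GoF-optimal algorithm): for each candidate base length, fit a log-linear trend to the most recent years, score its goodness of fit, and use the best-fitting base for the 5-year-ahead prediction. The synthetic mortality counts and the mean-squared-residual score are illustrative assumptions.

```python
import numpy as np

years = np.arange(1980, 2008)
counts = (np.exp(0.02 * (years - 1980)) * 1000
          + np.random.default_rng(3).normal(0, 30, years.size))

def gof(k):
    """Mean squared residual of a log-linear fit to the last k years."""
    y, c = years[-k:], np.log(counts[-k:])
    slope, intercept = np.polyfit(y, c, 1)
    return np.mean((c - (slope * y + intercept)) ** 2), (slope, intercept)

scores = {k: gof(k) for k in range(5, 21)}     # candidate base lengths
k_opt = min(scores, key=lambda k: scores[k][0])
slope, intercept = scores[k_opt][1]
pred_2012 = np.exp(slope * 2012 + intercept)   # 5 years past the last data year
print("optimal base: last %d years; predicted 2012 count: %.0f" % (k_opt, pred_2012))
```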

  4. Selection and optimization of mooring cables on floating platform for special purposes

    NASA Astrophysics Data System (ADS)

    Ma, Guang-ying; Yao, Yun-long; Zhao, Chen-yao

    2017-08-01

    This paper studies a new type of assembled marine floating platform for special purposes, focusing on the selection and optimization of its mooring cables. Using the ANSYS AQWA software, a hydrodynamic model of the platform was established to calculate the time-history response of the platform motion in complex water environments involving wind, waves, currents and mooring. On this basis, the motion response and cable tension were calculated for different cable mooring states under the designed environmental load. Finally, the best mooring scheme meeting the cable strength requirements was proposed, which can effectively lower the motion amplitude of the platform.

  5. Room temperature quarter wave resonator re-buncher development for a high power heavy-ion linear accelerator

    NASA Astrophysics Data System (ADS)

    Kim, Hye-Jin; Choi, B. H.; Han, Jaeeun; Hyun, Myung Ook; Park, Bum-Sik; Choi, Ohryoung; Lee, Doyoon; Son, Kitaek

    2018-03-01

    In the medium energy beam transport (MEBT) line system of the RAON, which consists of several quadrupole magnets, three normal-conducting re-bunchers, and several diagnostic devices, a quarter wave resonator type re-buncher was chosen to minimize longitudinal emittance growth and to fit the longitudinal phase ellipse into the longitudinal acceptance of the low energy linac. The re-buncher has a resonant frequency of 81.25 MHz, a geometrical beta (βg) of 0.049, and a physical length of 24 cm. Based on numerical calculations of the electromagnetic field using CST-MWS and a mechanical analysis of the heat distribution and deformation, the internal structure of the re-buncher was optimized to increase the effective voltage and to reduce power losses in the wall. The multipacting criterion was estimated and confirmed experimentally. The position and specification of the cooling channels are designed to remove a heat load of up to 15 kW with a reasonable margin of 25%. A coaxial, loop-type RF power coupler is positioned in the high-magnetic-field region, and two slug tuners are installed perpendicular to the beam axis. The frequency sensitivity as a function of the tuner depth and cooling water temperature was measured, and the frequency shift is in all cases within the provided tuner range. A high-power test at 10 kW in continuous-wave operation was performed, and the reflected power was observed to be less than 450 W.

  6. Optimal heavy tail estimation - Part 1: Order selection

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
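
    The order-selection problem is easy to see with the classical Hill estimator, a standard tool for this task (the paper's data-adaptive selector is more sophisticated): the estimated exponent depends strongly on the order k, i.e., the number of largest values used. The data below are simulated Pareto draws, not the Elbe runoff series.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha_true = 1.5
x = np.sort(rng.pareto(alpha_true, 5000) + 1.0)   # classical Pareto(alpha) sample

def hill(x_sorted, k):
    """Hill estimate of alpha from the k largest order statistics."""
    tail = x_sorted[-k:]
    return 1.0 / np.mean(np.log(tail / x_sorted[-k - 1]))

for k in (50, 200, 1000):
    print("k = %4d -> alpha_hat = %.2f" % (k, hill(x, k)))
```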

  7. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared for molecular descriptor selection in the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix with 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
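
    A compact sketch of GA-driven descriptor selection as described above, assuming scikit-learn: individuals are bit-masks over descriptors, and fitness is the RMSEP of a PLS model on a held-out split. The population size, generations, operators, and random data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.random((83, 423))                   # 83 peptides x 423 descriptors
y = X[:, :9] @ rng.random(9) + rng.normal(0, 0.1, 83)  # 9 truly relevant
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=5)

def rmsep(mask):
    if mask.sum() < 2:
        return np.inf
    pls = PLSRegression(n_components=2).fit(Xtr[:, mask], ytr)
    pred = pls.predict(Xte[:, mask]).ravel()
    return float(np.sqrt(np.mean((pred - yte) ** 2)))

pop = rng.random((30, 423)) < 0.05          # sparse initial bit-masks
for gen in range(20):
    fit = np.array([rmsep(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]     # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 423)
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        flip = rng.random(423) < 0.002                  # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents] + children)

best = pop[np.argmin([rmsep(m) for m in pop])]
print("best subset: %d descriptors, RMSEP %.3f" % (best.sum(), rmsep(best)))
```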

  8. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    PubMed

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons using one specific indicator (a parameter describing safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used the efficiencies (composite indexes) obtained by different models based on DEA and TOPSIS to present a PROMETHEE-RS model for selecting the optimal method for a composite index. The method for selecting the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
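
    For reference, the TOPSIS scoring used as one of the candidate composite indexes above can be sketched in a few lines: alternatives (e.g., police departments) are ranked by their relative closeness to ideal and anti-ideal points. The decision matrix, weights, and benefit/cost labels below are illustrative placeholders.

```python
import numpy as np

D = np.array([[12.0, 0.8, 30.0],     # rows: units; columns: safety indicators
              [10.0, 0.6, 45.0],
              [15.0, 0.9, 20.0],
              [ 9.0, 0.7, 50.0]])
w = np.array([0.5, 0.3, 0.2])        # indicator weights
benefit = np.array([False, True, False])   # True = higher is better

R = D / np.linalg.norm(D, axis=0)    # vector normalization
V = R * w                            # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)   # relative closeness to the ideal
print("TOPSIS ranking (best first):", np.argsort(-closeness))
```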

  9. Impact of cultivar selection and process optimization on ethanol yield from different varieties of sugarcane

    PubMed Central

    2014-01-01

    Background The development of 'energycane' varieties of sugarcane is underway, targeting the use of both sugar juice and bagasse for ethanol production. The current study evaluated a selection of such 'energycane' cultivars for the combined ethanol yields from juice and bagasse, through optimization of the dilute acid pretreatment of bagasse for sugar yields. Method A central composite design under response surface methodology was used to investigate the effects of dilute acid pretreatment parameters followed by enzymatic hydrolysis on the combined sugar yield of bagasse samples. The pressed slurry generated from the optimum pretreatment conditions (maximum combined sugar yield) was used as the substrate during batch and fed-batch simultaneous saccharification and fermentation (SSF) processes at different solid loadings and enzyme dosages, aiming to reach an ethanol concentration of at least 40 g/L. Results Significant variations were observed in the sugar yields (xylose, glucose and combined sugar yield) from pretreatment-hydrolysis of bagasse from different cultivars of sugarcane. Up to a 33% difference in combined sugar yield between the best performing varieties and industrial bagasse was observed at optimal pretreatment-hydrolysis conditions. Significant improvement in overall ethanol yield after SSF of the pretreated bagasse was also observed for the best performing varieties (84.5 to 85.6%) compared to industrial bagasse (74.5%). The ethanol concentration showed an inverse correlation with lignin content and the ratio of xylose to arabinose, but a positive correlation with glucose yield from pretreatment-hydrolysis. The overall assessment of the cultivars showed greater improvement in the final ethanol concentration (26.9 to 33.9%) and combined ethanol yields per hectare (83 to 94%) for the best performing varieties with respect to industrial sugarcane. Conclusions These results suggest that the selection of sugarcane variety to optimize ethanol

  10. Insights into the Experiences of Older Workers and Change: Through the Lens of Selection, Optimization, and Compensation

    ERIC Educational Resources Information Center

    Unson, Christine; Richardson, Margaret

    2013-01-01

    Purpose: The study examined the barriers faced, the goals selected, and the optimization and compensation strategies of older workers in relation to career change. Method: Thirty open-ended interviews, 12 in the United States and 18 in New Zealand, were conducted, recorded, transcribed verbatim, and analyzed for themes. Results: Barriers to…

  11. High-Efficiency Nonfullerene Polymer Solar Cell Enabling by Integration of Film-Morphology Optimization, Donor Selection, and Interfacial Engineering.

    PubMed

    Zhang, Xin; Li, Weiping; Yao, Jiannian; Zhan, Chuanlang

    2016-06-22

    Carrier mobility is a vital factor determining the electrical performance of organic solar cells. In this paper we report a high-efficiency nonfullerene organic solar cell (NF-OSC) with a power conversion efficiency of 6.94 ± 0.27%, obtained by optimizing hole and electron transport through judicious selection of the polymer donor and engineering of the film morphology and cathode interlayer: (1) a combination of solvent annealing and solvent vapor annealing optimizes the film morphology and hence both hole and electron mobilities, leading to a trade-off between fill factor and short-circuit current density (Jsc); (2) judicious selection of the polymer donor affords higher hole and electron mobilities, giving a higher Jsc; and (3) engineering the cathode interlayer affords a higher electron mobility, which leads to a significant increase in electrical current generation and ultimately in the power conversion efficiency (PCE).

  12. Selection for optimal crew performance - Relative impact of selection and training

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.

    1987-01-01

    An empirical study supporting Helmreich's (1986) theoretical work on the distinct manner in which training and selection impact crew coordination is presented. Training is capable of changing attitudes, while selection screens for stable personality characteristics. Training appears least effective for leadership, an area strongly influenced by personality. Selection is least effective for influencing attitudes about personal vulnerability to stress, which appear to be trained in resource management programs. Because personality correlates with attitudes before and after training, it is felt that selection may be necessary even with a leadership-oriented training curriculum.

  13. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    PubMed

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has assessed the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate as compared to standard PSO, as it produces a low-variance model.

  14. Feature Selection and Classifier Parameters Estimation for EEG Signals Peak Detection Using Particle Swarm Optimization

    PubMed Central

    Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has assessed the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate as compared to standard PSO, as it produces a low-variance model. PMID:25243236

  15. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
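
    A hedged sketch of the scale-selection step described above, assuming SciPy: the image is filtered with a scale-normalized Laplacian of Gaussian (LoG) at several scales, and the scale with the strongest response is taken to match the dominant spot size. The synthetic image of uniform-size blobs and the top-percentile response statistic are illustrative assumptions, not the ATLAS criteria.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(6)
img = rng.normal(0, 0.05, (128, 128))               # noisy background
yy, xx = np.mgrid[:128, :128]
for cy, cx in rng.integers(10, 118, (20, 2)):        # ~20 spots, sigma ~ 2 px
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))

scales = np.arange(1.0, 6.0, 0.5)
# negate so bright spots give positive responses; s**2 normalizes across scales
responses = [-(s ** 2) * gaussian_laplace(img, s) for s in scales]
strength = [np.percentile(r, 99.9) for r in responses]
s_opt = scales[int(np.argmax(strength))]
print("auto-selected scale: sigma = %.1f px" % s_opt)
```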

  16. Burnout and job performance: the moderating role of selection, optimization, and compensation strategies.

    PubMed

    Demerouti, Evangelia; Bakker, Arnold B; Leiter, Michael

    2014-01-01

    The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  17. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    PubMed

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.

  18. Human Computer Interface Design Criteria. Volume 1. User Interface Requirements

    DTIC Science & Technology

    2010-03-19

    Television tuners, including tuner cards for use in computers, shall be equipped with secondary audio program playback circuitry. (c) All training...

  19. Structural optimization and structure-functional selectivity relationship studies of G protein-biased EP2 receptor agonists.

    PubMed

    Ogawa, Seiji; Watanabe, Toshihide; Moriyuki, Kazumi; Goto, Yoshikazu; Yamane, Shinsaku; Watanabe, Akio; Tsuboi, Kazuma; Kinoshita, Atsushi; Okada, Takuya; Takeda, Hiroyuki; Tani, Kousuke; Maruyama, Toru

    2016-05-15

    The modification of the novel G protein-biased EP2 agonist 1 has been investigated to improve its G protein activity and develop a better understanding of its structure-functional selectivity relationship (SFSR). The optimization of the substituents on the phenyl ring of 1, followed by the inversion of the hydroxyl group on the cyclopentane moiety led to compound 9, which showed a 100-fold increase in its G protein activity compared with 1 without any increase in β-arrestin recruitment. Furthermore, SFSR studies revealed that the combination of meta and para substituents on the phenyl moiety was crucial to the functional selectivity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rada, E.C., E-mail: Elena.Rada@ing.unitn.it; Ragazzi, M.; Fedrizzi, P.

    Highlights: ► As an appropriate solution for MSW management in developed and transient countries. ► As an option to increase the efficiency of MSW selective collection. ► As an opportunity to integrate MSW management needs and services inventories. ► As a tool to develop Urban Mining actions. - Abstract: Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where a landfill based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically analyzed referring to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy. The obtained efficiency is very high: 80% of waste is source separated for recycling purposes. In the second reference case, the local administration is going to be faced with the optimization of waste collection through Web-GIS oriented technologies for the first time. The starting scenario is far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed referring to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web oriented tools for MSW management, but this opportunity is not yet well exploited in the sector.

  1. Modeling surgical tool selection patterns as a "traveling salesman problem" for optimizing a modular surgical tool system.

    PubMed

    Nelson, Carl A; Miller, David J; Oleynikov, Dmitry

    2008-01-01

    As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
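
    The graph view makes the idea easy to try: below is a brute-force maximum-adjacency tour with a hypothetical tool-change frequency matrix standing in for recorded tool-use sequences. Exhaustive search only works for a handful of tools; the TSP formulation lets standard solvers take over as the tool set grows.

      from itertools import permutations

      # Hypothetical symmetric tool-change frequency matrix: FREQ[i][j] counts how
      # often tool j directly followed tool i across recorded procedures.
      FREQ = [[0, 9, 2, 4],
              [9, 0, 6, 1],
              [2, 6, 0, 7],
              [4, 1, 7, 0]]

      def adjacency_score(order):
          # Score a circular arrangement: frequently swapped tool pairs placed
          # next to each other make their changes cheap on a rotary carousel.
          n = len(order)
          return sum(FREQ[order[i]][order[(i + 1) % n]] for i in range(n))

      best = max(permutations(range(len(FREQ))), key=adjacency_score)
      print("tool queue:", best, "adjacency score:", adjacency_score(best))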

  2. Using natural selection and optimization for smarter vegetation models - challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Franklin, Oskar; Han, Wang; Dieckmann, Ulf; Cramer, Wolfgang; Brännström, Åke; Pietsch, Stephan; Rovenskaya, Elena; Prentice, Iain Colin

    2017-04-01

    Dynamic global vegetation models (DGVMs) are now indispensable for understanding the biosphere and for estimating the capacity of ecosystems to provide services. The models are continuously developed to include an increasing number of processes and to utilize the growing amounts of observed data becoming available. However, while the versatility of the models is increasing as new processes and variables are added, their accuracy suffers from the accumulation of uncertainty, especially in the absence of overarching principles controlling their concerted behaviour. We have initiated a collaborative working group to address this problem based on a 'missing law': adaptation and optimization principles rooted in natural selection. Even though this 'missing law' constrains relationships between traits, and therefore can vastly reduce the number of uncertain parameters in ecosystem models, it has rarely been applied to DGVMs. Our recent research has shown that optimization- and trait-based models of gross primary production can be both much simpler and more accurate than current models based on fixed functional types, and that observed plant carbon allocations and distributions of plant functional traits are predictable with eco-evolutionary models. While there are also many other examples of the usefulness of these and other theoretical principles, it is not always straightforward to make them operational in predictive models. In particular, on longer time scales, the representation of functional diversity and the dynamical interactions among individuals and species presents a formidable challenge. Here we present recent ideas on the use of adaptation and optimization principles in vegetation models, including examples of promising developments, but also limitations of the principles and some key challenges.

  3. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases without human intervention.
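
    A minimal sketch of the step (i)-(iii) loop, with a hypothetical plan_quality stub replacing the plan optimization solver and the DVH-based evaluation function; all constants are illustrative.

      import random

      N_OBJ = 3   # number of weighted sub-scores in the global objective (toy value)

      def plan_quality(weights):
          # Stand-in for step (ii): run the plan optimization solver for this
          # weight vector and evaluate the DVH key points; lower is better.
          target = [0.5, 0.3, 0.2]          # pretend these weights yield the best plan
          return sum((w - t) ** 2 for w, t in zip(weights, target))

      def pso(n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
          pos = [[random.random() for _ in range(N_OBJ)] for _ in range(n_particles)]
          vel = [[0.0] * N_OBJ for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          gbest = min(pos, key=plan_quality)[:]
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(N_OBJ):    # step (iii): update weighting factors
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                      pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
                  if plan_quality(pos[i]) < plan_quality(pbest[i]):
                      pbest[i] = pos[i][:]
                      if plan_quality(pos[i]) < plan_quality(gbest):
                          gbest = pos[i][:]
          return gbest

      print("weights:", [round(v, 3) for v in pso()])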

  4. Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Improving radiologists’ performance in classification between malignant and benign breast lesions is important to increase cancer detection sensitivity and reduce false-positive recalls. For this purpose, developing computer-aided diagnosis (CAD) schemes has been attracting research interest in recent years. In this study, we investigated a new feature selection method for the task of breast mass classification. Methods: We initially computed 181 image features based on mass shape, spiculation, contrast, presence of fat or calcifications, texture, isodensity, and other morphological features. From this large image feature pool, we used a sequential forward floating selection (SFFS)-based feature selection method to select relevant features, and analyzed their performance using a support vector machine (SVM) model trained for the classification task. On a database of 600 benign and 600 malignant mass regions of interest (ROIs), we performed the study using a ten-fold cross-validation method. Feature selection and optimization of the SVM parameters were conducted on the training subsets only. Results: The area under the receiver operating characteristic curve (AUC) = 0.805±0.012 was obtained for the classification task. The results also showed that the most frequently selected features by the SFFS-based algorithm in 10-fold iterations were those related to mass shape, isodensity and presence of fat, which are consistent with the image features frequently used by radiologists in the clinical environment for mass classification. The study also indicated that accurately computing mass spiculation features from the projection mammograms was difficult, and failed to perform well for the mass classification task due to tissue overlap within the benign mass regions. Conclusions: This comprehensive feature analysis study provided new and valuable information for optimizing computerized mass classification schemes.
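
    A sketch of the selection-plus-classification pipeline on synthetic stand-in data. Note that scikit-learn's SequentialFeatureSelector performs plain forward selection without the floating backtracking step, so it is only an approximation of SFFS.

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Synthetic stand-in for the 181-feature mass-ROI dataset.
      X, y = make_classification(n_samples=400, n_features=50, n_informative=8,
                                 random_state=0)

      svm = SVC(kernel="rbf", C=1.0)
      sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                      direction="forward", cv=5)
      sfs.fit(X, y)   # the paper restricts selection to training folds only

      selected = sfs.get_support(indices=True)
      acc = cross_val_score(svm, X[:, selected], y, cv=10).mean()
      print("selected:", selected, "10-fold accuracy: %.3f" % acc)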

  5. Selecting optimal feast-to-famine ratio for a new polyhydroxyalkanoate (PHA) production system fed by valerate-dominant sludge hydrolysate.

    PubMed

    Hao, Jiuxiao; Wang, Hui; Wang, Xiujin

    2018-04-01

    The feast-to-famine ratio (F/F) represents the extent of selective pressure during polyhydroxyalkanoate (PHA) culture selection. This study evaluated the effects of F/F on a new PHA production system by an enriched culture with valerate-dominant sludge hydrolysate and selected the optimal F/F. After the original F/F 1/3 was modified to 1/1, 1/2, 1/4, and 1/5, F/F did not affect their lengths of feast phase, but affected their biomass growth behaviors during the famine phase and PHA-producing abilities. The optimal F/F was 1/2, and compared with 1/3, it increased the maximal PHA content and the fraction of 3-hydroxyvalerate (3HV) and 3-hydroxy-2-methylvalerate (3H2MV) monomers, with higher productivity and better polymer properties. Although F/F 1/2 impaired the advantage of the dominant genus Delftia, it improved the PHA production rate while decreased biomass growth rate, meanwhile enhancing the utilization and conversion of valerate. These findings indicate that in contrast to previous studies using acetate-dominant substrate for PHA production, the new system fed by valerate-dominant substrate can adopt a higher F/F.

  6. Selecting optimal structure of burners for tubular cylindrical furnaces by the mathematical experiment planning method

    NASA Astrophysics Data System (ADS)

    Katin, Viktor; Kosygin, Vladimir; Akhtiamov, Midkhat

    2017-10-01

    This paper substantiates the method of mathematical planning for experimental research in the process of selecting the most efficient types of burning devices for tubular refinery furnaces of vertical-cylindrical design. This paper provides detailed consideration of an experimental plan of a 4×4 Latin square type when studying the impact of three factors with four levels of variance. On the basis of the experimental research we have developed practical recommendations on the employment of optimal burners for two-step fuel combustion.

  7. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    NASA Astrophysics Data System (ADS)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
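
    A rough illustration of selecting an embedding dimension by minimizing the negative log-predictive likelihood, using a k-nearest-neighbor predictor and a Gaussian predictive density as a crude stand-in for the paper's nonparametric estimator; the AR(2) toy series should favor dimension 2.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(0)

      # Toy stochastic series: a noisy AR(2) process, so one-step prediction
      # should be best served by an embedding dimension of 2.
      n = 2000
      x = np.zeros(n)
      for t in range(2, n):
          x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal(0.0, 0.5)

      def neg_log_pred_likelihood(series, p, k=20):
          # Delay vectors of length p; score held-out one-step predictions with
          # a Gaussian predictive density scaled by the training residuals.
          X = np.array([series[t - p:t] for t in range(p, len(series))])
          y = series[p:]
          half = len(y) // 2
          model = KNeighborsRegressor(n_neighbors=k).fit(X[:half], y[:half])
          sigma = (y[:half] - model.predict(X[:half])).std() + 1e-12
          z = (y[half:] - model.predict(X[half:])) / sigma
          return np.mean(0.5 * z**2 + np.log(sigma) + 0.5 * np.log(2.0 * np.pi))

      scores = {p: neg_log_pred_likelihood(x, p) for p in range(1, 7)}
      print("selected dimension:", min(scores, key=scores.get))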

  8. Commentary: Why Pharmaceutical Scientists in Early Drug Discovery Are Critical for Influencing the Design and Selection of Optimal Drug Candidates.

    PubMed

    Landis, Margaret S; Bhattachar, Shobha; Yazdanian, Mehran; Morrison, John

    2018-01-01

    This commentary reflects the collective view of pharmaceutical scientists from four different organizations with extensive experience in the field of drug discovery support. Herein, an engaging discussion is presented on the current and future approaches for the selection of the most optimal and developable drug candidates. Over the past two decades, developability assessment programs have been implemented with the intention of improving physicochemical and metabolic properties. However, the complexity of both new drug targets and non-traditional drug candidates provides continuing challenges for developing formulations for optimal drug delivery. The need for more enabled technologies to deliver drug candidates has necessitated an even more active role for pharmaceutical scientists to influence many key molecular parameters during compound optimization and selection. This enhanced role begins at the early in vitro screening stages, where key learnings regarding the interplay of molecular structure and pharmaceutical property relationships can be derived. Performance of the drug candidates in formulations intended to support key in vivo studies provides important information on chemotype-formulation compatibility relationships. Structure modifications to support the selection of the solid form are also important to consider, and predictive in silico models are being rapidly developed in this area. Ultimately, the role of pharmaceutical scientists in drug discovery now extends beyond rapid solubility screening, early form assessment, and data delivery. This multidisciplinary role has evolved to include the practice of proactively taking part in the molecular design to better align solid form and formulation requirements to enhance developability potential.

  9. Selected chemical composition changes in microwave-convective dried parsley leaves affected by ultrasound and steaming pre-treatments - An optimization approach.

    PubMed

    Dadan, Magdalena; Rybak, Katarzyna; Wiktor, Artur; Nowacka, Malgorzata; Zubernik, Joanna; Witrowa-Rajchert, Dorota

    2018-01-15

    Parsley leaves contain a high amount of bioactive components (especially lutein); therefore, it is crucial to select the most appropriate pre-treatment and drying conditions in order to obtain high-quality dried leaves, which was the aim of this study. The optimization was done using response surface methodology (RSM) for the following factors: microwave power (100, 200, 300 W), air temperature (20, 30, 40 °C), and pre-treatment variant (ultrasound, steaming, and dipping as a control). Total phenolic content (TPC), antioxidant activity, and chlorophyll and lutein contents (using UPLC-PDA) were determined in the dried leaves. The analysed responses were dependent on the applied drying parameters and the pre-treatment type. The possibility of applying ultrasound and steam treatment was proven and the optimal processing conditions were selected. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
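
    A sketch of the deterministic equivalent only, with sampled return scenarios standing in for the defuzzified fuzzy random returns and hypothetical tolerance levels: maximize skewness subject to a return floor and a semivariance cap.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      R = rng.normal(0.01, 0.05, size=(250, 4))   # toy return scenarios, 4 assets
      R[:, 0] += 0.002                            # give asset 0 extra drift

      def stats(w):
          r = R @ w
          mu = r.mean()
          semivar = np.mean(np.minimum(r - mu, 0.0) ** 2)   # downside risk
          skew = np.mean((r - mu) ** 3)
          return mu, semivar, skew

      R_MIN, V_MAX = 0.008, 0.002                 # toy tolerance levels

      res = minimize(
          lambda w: -stats(w)[2],                 # maximize skewness
          x0=np.full(4, 0.25),
          bounds=[(0.0, 1.0)] * 4,
          constraints=[
              {"type": "eq",   "fun": lambda w: w.sum() - 1.0},
              {"type": "ineq", "fun": lambda w: stats(w)[0] - R_MIN},  # return floor
              {"type": "ineq", "fun": lambda w: V_MAX - stats(w)[1]},  # risk cap
          ],
          method="SLSQP",
      )
      print("weights:", res.x.round(3))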

  11. An artificial system for selecting the optimal surgical team.

    PubMed

    Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco

    2015-01-01

    We introduce an intelligent system to optimize a team composition based on the team's historical outcomes and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of the team outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model allows the probability of all possible team compositions to be determined even if there is no recorded procedure for some of them. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved in procedures with unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.
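
    Under an independence assumption, the inclusion-exclusion expansion of "at least one member is implicated" collapses to 1 - prod(1 - p_i); the toy enumeration below uses hypothetical per-person probabilities, not the paper's estimation procedure.

      from itertools import product

      # Hypothetical per-person probabilities of contributing to an unfavorable
      # outcome, as would be estimated from the historical procedure record.
      SURGEONS = {"S1": 0.04, "S2": 0.07}
      ANESTH   = {"A1": 0.03, "A2": 0.05}
      NURSES   = {"N1": 0.02, "N2": 0.06}

      def team_risk(probs):
          # P(at least one member implicated) = 1 - prod(1 - p_i) under
          # independence; the inclusion-exclusion terms telescope to this form.
          q = 1.0
          for p in probs:
              q *= 1.0 - p
          return 1.0 - q

      teams = [
          ((s, a, n), team_risk((ps, pa, pn)))
          for (s, ps), (a, pa), (n, pn) in product(SURGEONS.items(),
                                                   ANESTH.items(), NURSES.items())
      ]
      best = min(teams, key=lambda t: t[1])
      print("optimal team:", best[0], "risk: %.4f" % best[1])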

  12. Applications of Optimal Building Energy System Selection and Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal

    2011-04-01

    Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µGrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. And the third is both a battery selection problem and a rolling operating schedule problem for a large County Jail. Together these examples show that optimization of building µGrid design and operation can be effectively achieved using SaaS.

  13. A Conceptual Framework for Procurement Decision Making Model to Optimize Supplier Selection: The Case of Malaysian Construction Industry

    NASA Astrophysics Data System (ADS)

    Chuan, Ngam Min; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Yong, Lee Choon; Ghazali, Azrul; Ezanee Rusli, Mohd; Itam, Zarina Binti; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper examines the current state of the procurement system in Malaysia, specifically in the construction industry, in the aspect of supplier selection. It proposes a comprehensive study of the supplier selection metrics for infrastructure building, weighs the importance of each assigned metric, and finds the relationships between the metrics among initiators, decision makers, buyers, and users. With a metrics hierarchy of criteria importance, a supplier selection process can be defined, repeated, and audited with fewer complications or difficulties. This will help the field of procurement to improve, as this research is able to develop and redefine policies and procedures that have been set in supplier selection. Developing this systematic process will enable optimization of supplier selection and thus increase the value for every stakeholder, as the process of selection is greatly simplified. A newly redefined policy and procedure will not only increase the company’s effectiveness and profit, but also enable the company to reach greater heights in the advancement of procurement in Malaysia.

  14. X-ray backscatter imaging for radiography by selective detection and snapshot: Evolution, development, and optimization

    NASA Astrophysics Data System (ADS)

    Shedlock, Daniel

    Compton backscatter imaging (CBI) is a single-sided imaging technique that uses the penetrating power of radiation and unique interaction properties of radiation with matter to image subsurface features. CBI has a variety of applications that include non-destructive interrogation, medical imaging, security and military applications. Radiography by selective detection (RSD), lateral migration radiography (LMR) and shadow aperture backscatter radiography (SABR) are different CBI techniques that are being optimized and developed. Radiography by selective detection (RSD) is a pencil beam Compton backscatter imaging technique that falls between highly collimated and uncollimated techniques. Radiography by selective detection uses a combination of single- and multiple-scatter photons from a projected area below a collimation plane to generate an image. As a result, the image has a combination of first- and multiple-scatter components. RSD techniques offer greater subsurface resolution than uncollimated techniques, at speeds at least an order of magnitude faster than highly collimated techniques. RSD scanning systems have evolved from a prototype into near market-ready scanning devices for use in a variety of single-sided imaging applications. The design has changed to incorporate state-of-the-art detectors and electronics optimized for backscatter imaging with an emphasis on versatility, efficiency and speed. The RSD system has become more stable, about 4 times faster, and 60% lighter while maintaining or improving image quality and contrast over the past 3 years. A new snapshot backscatter radiography (SBR) CBI technique, shadow aperture backscatter radiography (SABR), has been developed from concept and proof-of-principle to a functional laboratory prototype. SABR radiography uses digital detection media and shaded aperture configurations to generate near-surface Compton backscatter images without scanning, similar to how transmission radiographs are taken.

  15. A novel lentiviral scFv display library for rapid optimization and selection of high affinity antibodies.

    PubMed

    Qudsia, Sehar; Merugu, Siva B; Mangukiya, Hitesh B; Hema, Negi; Wu, Zhenghua; Li, Dawei

    2018-04-30

    Antibody display libraries have become a popular technique for screening monoclonal antibodies for therapeutic purposes. An important aspect of display technology is to generate an optimization library by changing antibody affinity to antigen through mutagenesis and screening for the high-affinity antibody. In this study, we report a novel lentivirus-display-based optimization library in which the Agtuzumab scFv is displayed on the cell membrane of HEK-293T cells. To generate an optimization library, hotspot mutagenesis was performed to achieve a diverse antibody library. Based on sequence analysis of randomly selected clones, the library size was estimated to be approximately 1.6 × 10^6. A lentivirus display vector was used to display the scFv antibody on the cell surface, and flow cytometry was performed to check the antibody affinity to antigen. Membrane-bound scFv antibodies were then converted to secreted antibody through cre/loxP recombination. One of the mutant clones, M8, showed higher affinity to antigen in flow cytometry analysis. Further characterization of cellular and secreted scFv through western blot showed that antibody affinity was increased threefold after mutagenesis. This study shows the successful construction of a novel antibody library and suggests that hotspot mutagenesis could prove to be a useful and rapid optimization tool to generate similar libraries with various degrees of antigen affinity. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Leakage characterization of top select transistor for program disturbance optimization in 3D NAND flash

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Jin, Lei; Jiang, Dandan; Zou, Xingqi; Zhao, Zhiguo; Gao, Jing; Zeng, Ming; Zhou, Wenbin; Tang, Zhaoyun; Huo, Zongliang

    2018-03-01

    In order to optimize program disturbance characteristics effectively, a characterization approach that measures top select transistor (TSG) leakage from bit-line is proposed to quantify TSG leakage under program inhibit condition in 3D NAND flash memory. Based on this approach, the effect of Vth modulation of two-cell TSG on leakage is evaluated. By checking the dependence of leakage and corresponding program disturbance on upper and lower TSG Vth, this approach is validated. The optimal Vth pattern with high upper TSG Vth and low lower TSG Vth has been suggested for low leakage current and high boosted channel potential. It is found that upper TSG plays dominant role in preventing drain induced barrier lowering (DIBL) leakage from boosted channel to bit-line, while lower TSG assists to further suppress TSG leakage by providing smooth potential drop from dummy WL to edge of TSG, consequently suppressing trap assisted band-to-band tunneling current (BTBT) between dummy WL and TSG.

  17. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

    PubMed Central

    2017-01-01

    In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks (DNNs) using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations were coded as a set of real-number m-dimensional vectors as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written characters and biological activity prediction datasets to show that the DNN classifiers trained by the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifier, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
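
    A compact sketch of the idea, with a two-dimensional configuration vector (log learning rate, hidden-layer width), scikit-learn's MLPClassifier trained for a few epochs as the evaluation step, and made-up PSO constants; this is not the paper's exact encoding or schedule.

      import warnings
      import numpy as np
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      warnings.filterwarnings("ignore")   # few-epoch fits trigger convergence warnings

      X, y = load_digits(return_X_y=True)
      Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

      def evaluate(cfg, epochs=15):
          # Short training run used as the PSO fitness, echoing the paper's step
          # of training each candidate with only a few epochs.
          log_lr, hidden = cfg
          clf = MLPClassifier(hidden_layer_sizes=(int(hidden),),
                              learning_rate_init=10.0 ** log_lr,
                              max_iter=epochs, random_state=0)
          return clf.fit(Xtr, ytr).score(Xva, yva)

      LO, HI = np.array([-4.0, 10.0]), np.array([-1.0, 100.0])   # search box
      rng = np.random.default_rng(0)
      pos = [LO + rng.random(2) * (HI - LO) for _ in range(6)]
      vel = [np.zeros(2) for _ in pos]
      pbest, pscore = [p.copy() for p in pos], [evaluate(p) for p in pos]
      g = int(np.argmax(pscore))

      for _ in range(5):                  # small budget: 6 MLP fits per sweep
          for i in range(len(pos)):
              r1, r2 = rng.random(2), rng.random(2)
              vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                        + 1.5 * r2 * (pbest[g] - pos[i]))
              pos[i] = np.clip(pos[i] + vel[i], LO, HI)
              s = evaluate(pos[i])
              if s > pscore[i]:
                  pbest[i], pscore[i] = pos[i].copy(), s
          g = int(np.argmax(pscore))

      print("best (log10 lr, hidden):", pbest[g].round(2), "val acc: %.3f" % pscore[g])
      # The paper's final step retrains with many more epochs from gbest/pbest.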

  18. An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Prasanna, Viktor K.

    Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility’s expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error, which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection; however, their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real-world data obtained from the University of Southern California’s SmartGrid and show that our algorithm achieves near-exact curtailment values with errors in the range of 10^-7 to 10^-5, which are within the range of numerical errors. We compare our results against the state-of-the-art heuristic being deployed in practice in the USC SmartGrid. We show that for the same set of available customer-strategy pairs, our algorithm performs 10^3 to 10^7 times better in terms of the curtailment errors incurred.
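
    A minimal ILP of this flavor, written with PuLP and hypothetical per-(customer, strategy) curtailment values; the absolute error is linearized with the usual pair of inequalities.

      from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

      # Hypothetical curtailment (kW) achievable per (customer, strategy) pair.
      CURT = {("c1", "hvac"): 40, ("c1", "light"): 15,
              ("c2", "hvac"): 55, ("c2", "light"): 20,
              ("c3", "hvac"): 30, ("c3", "light"): 25}
      TARGET = 90

      prob = LpProblem("dr_selection", LpMinimize)
      x = {k: LpVariable("x_%s_%s" % k, cat=LpBinary) for k in CURT}
      err = LpVariable("abs_error", lowBound=0)

      achieved = lpSum(CURT[k] * x[k] for k in CURT)
      prob += err                                    # objective: |achieved - target|
      prob += achieved - TARGET <= err
      prob += TARGET - achieved <= err
      for c in {k[0] for k in CURT}:                 # at most one strategy per customer
          prob += lpSum(x[k] for k in CURT if k[0] == c) <= 1

      prob.solve()
      print("picked:", [k for k in CURT if x[k].value() == 1], "error:", value(err))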

  19. A mathematical framework for the selection of an optimal set of peptides for epitope-based vaccines.

    PubMed

    Toussaint, Nora C; Dönnes, Pierre; Kohlbacher, Oliver

    2008-12-01

    Epitope-based vaccines (EVs) have a wide range of applications: from therapeutic to prophylactic approaches, from infectious diseases to cancer. The development of an EV is based on the knowledge of target-specific antigens from which immunogenic peptides, so-called epitopes, are derived. Such epitopes form the key components of the EV. Due to regulatory, economic, and practical concerns the number of epitopes that can be included in an EV is limited. Furthermore, as the major histocompatibility complex (MHC) binding these epitopes is highly polymorphic, every patient possesses a set of MHC class I and class II molecules of differing specificities. A peptide combination effective for one person can thus be completely ineffective for another. This renders the optimal selection of these epitopes an important and interesting optimization problem. In this work we present a mathematical framework based on integer linear programming (ILP) that allows the formulation of various flavors of the vaccine design problem and the efficient identification of optimal sets of epitopes. Out of a user-defined set of predicted or experimentally determined epitopes, the framework selects the set with the maximum likelihood of eliciting a broad and potent immune response. Our ILP approach allows an elegant and flexible formulation of numerous variants of the EV design problem. In order to demonstrate this, we show how common immunological requirements for a good EV (e.g., coverage of epitopes from each antigen, coverage of all MHC alleles in a set, or avoidance of epitopes with high mutation rates) can be translated into constraints or modifications of the objective function within the ILP framework. An implementation of the algorithm outperforms a simple greedy strategy as well as a previously suggested evolutionary algorithm and has runtimes on the order of seconds for typical problem sizes.
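
    A toy version of the framework in PuLP, with made-up immunogenicity scores and the two coverage constraints mentioned in the abstract (every antigen and every MHC allele represented); the real framework supports many more constraint flavors.

      from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

      # Hypothetical epitopes: (immunogenicity score, antigen, MHC alleles bound).
      EPITOPES = {
          "e1": (0.9, "ag1", {"A*02:01"}),
          "e2": (0.7, "ag1", {"B*07:02"}),
          "e3": (0.8, "ag2", {"A*02:01", "B*07:02"}),
          "e4": (0.4, "ag2", {"A*01:01"}),
          "e5": (0.6, "ag3", {"A*01:01"}),
      }
      K = 3                                         # regulatory/practical size limit
      ANTIGENS = {v[1] for v in EPITOPES.values()}
      ALLELES = set().union(*(v[2] for v in EPITOPES.values()))

      prob = LpProblem("epitope_selection", LpMaximize)
      x = {e: LpVariable("x_" + e, cat=LpBinary) for e in EPITOPES}

      prob += lpSum(EPITOPES[e][0] * x[e] for e in EPITOPES)   # total immunogenicity
      prob += lpSum(x.values()) <= K
      for ag in ANTIGENS:                                      # cover every antigen
          prob += lpSum(x[e] for e in EPITOPES if EPITOPES[e][1] == ag) >= 1
      for al in ALLELES:                                       # cover every allele
          prob += lpSum(x[e] for e in EPITOPES if al in EPITOPES[e][2]) >= 1

      prob.solve()
      print("selected epitopes:", [e for e in EPITOPES if x[e].value() == 1])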

  20. Effect of collision energy optimization on the measurement of peptides by selected reaction monitoring (SRM) mass spectrometry.

    PubMed

    Maclean, Brendan; Tomazela, Daniela M; Abbatiello, Susan E; Zhang, Shucha; Whiteaker, Jeffrey R; Paulovich, Amanda G; Carr, Steven A; Maccoss, Michael J

    2010-12-15

    Proteomics experiments based on Selected Reaction Monitoring (SRM, also referred to as Multiple Reaction Monitoring or MRM) are being used to target large numbers of protein candidates in complex mixtures. At present, instrument parameters are often optimized for each peptide, a time and resource intensive process. Large SRM experiments are greatly facilitated by having the ability to predict MS instrument parameters that work well with the broad diversity of peptides they target. For this reason, we investigated the impact of using simple linear equations to predict the collision energy (CE) on peptide signal intensity and compared it with the empirical optimization of the CE for each peptide and transition individually. Using optimized linear equations, the difference between predicted and empirically derived CE values was found to be an average gain of only 7.8% of total peak area. We also found that existing commonly used linear equations fall short of their potential, and should be recalculated for each charge state and when introducing new instrument platforms. We provide a fully automated pipeline for calculating these equations and individually optimizing CE of each transition on SRM instruments from Agilent, Applied Biosystems, Thermo-Scientific and Waters in the open source Skyline software tool ( http://proteome.gs.washington.edu/software/skyline ).
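
    The linear CE model is simple enough to restate: for each precursor charge state (and each instrument platform), fit CE = slope × (m/z) + intercept to empirically optimized points. The numbers below are made up for illustration.

      import numpy as np

      # Hypothetical (precursor m/z, empirically optimal CE) pairs for 2+ peptides;
      # a real calibration would use many optimized transitions per charge state.
      mz = np.array([450.2, 520.8, 610.3, 700.5, 810.9, 905.4])
      ce = np.array([16.1, 18.0, 20.7, 23.2, 26.3, 29.0])

      slope, intercept = np.polyfit(mz, ce, 1)   # CE = slope * (m/z) + intercept
      print("CE(2+) = %.4f * (m/z) + %.2f" % (slope, intercept))

      # Predict a collision energy for a new 2+ precursor; per the paper, refit
      # for each charge state and when moving to a new instrument platform.
      print("predicted CE at m/z 660:", round(slope * 660 + intercept, 1))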

  1. SU-G-BRC-13: Model Based Classification for Optimal Position Selection for Left-Sided Breast Radiotherapy: Free Breathing, DIBH, Or Prone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, H; Liu, T; Xu, X

    Purpose: There are clinical decision challenges in selecting optimal treatment positions for left-sided breast cancer patients: supine free breathing (FB), supine Deep Inspiration Breath Hold (DIBH), and prone free breathing (prone). Physicians often make the decision based on experience and trials, which might not always result in optimal OAR doses. We herein propose a mathematical model to predict the lowest OAR doses among these three positions, providing a quantitative tool for the corresponding clinical decision. Methods: Patients were scanned in FB, DIBH, and prone positions under an IRB-approved protocol. Tangential beam plans were generated for each position, and OAR doses were calculated. The position with the least OAR doses is defined as the optimal position. The following features were extracted from each scan to build the model: heart, ipsilateral lung, and breast volume; in-field heart and ipsilateral lung volume; distance between heart and target; laterality of heart; and dose to heart and ipsilateral lung. Principal Components Analysis (PCA) was applied to remove the co-linearity of the input data and also to lower the data dimensionality. Feature selection, another method to reduce dimensionality, was applied as a comparison. A Support Vector Machine (SVM) was then used for classification. Thirty-seven patient datasets were acquired; up to now, five patient plans were available. K-fold cross validation was used to validate the accuracy of the classifier model with the small training size. Results: The classification results and K-fold cross validation demonstrated that the model is capable of predicting the optimal position for patients. The accuracy of K-fold cross validation reached 80%. Compared to PCA, feature selection allows causal features of dose to be determined, which provides more clinical insight. Conclusion: The proposed classification system appeared to be feasible. We are generating plans for the rest of the 37 patient images.
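
    A sketch of the classification pipeline on synthetic stand-in features (the real model uses heart, lung, and breast geometry and dose features from the three scans): PCA compresses and decorrelates the inputs before the SVM, and k-fold cross validation handles the small cohort.

      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # Synthetic stand-in for the geometric/dosimetric features with three
      # position labels (FB, DIBH, prone).
      X, y = make_classification(n_samples=37, n_features=9, n_informative=5,
                                 n_classes=3, n_clusters_per_class=1,
                                 random_state=0)

      clf = make_pipeline(PCA(n_components=4), SVC(kernel="rbf", C=1.0))
      acc = cross_val_score(clf, X, y, cv=5)   # k-fold CV for the small cohort
      print("mean CV accuracy: %.2f" % acc.mean())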

  2. Five Antiretroviral Drug Class-Resistant HIV-1 in a Treatment-Naïve Patient Successfully Suppressed with Optimized Antiretroviral Drug Selection.

    PubMed

    Volpe, Joseph M; Ward, Douglas J; Napolitano, Laura; Phung, Pham; Toma, Jonathan; Solberg, Owen; Petropoulos, Christos J; Walworth, Charles M

    2015-01-01

    Transmitted HIV-1 exhibiting reduced susceptibility to protease and reverse transcriptase inhibitors is well documented but limited for integrase inhibitors and enfuvirtide. We describe here a case of transmitted five-drug-class resistance in an antiretroviral (ARV)-naïve patient who was successfully treated based on the optimized selection of an active ARV drug regimen. The value of baseline resistance testing in determining an optimal ARV treatment regimen is highlighted in this case report. © The Author(s) 2015.

  3. A multi-objective model for closed-loop supply chain optimization and efficient supplier selection in a competitive environment considering quantity discount policy

    NASA Astrophysics Data System (ADS)

    Jahangoshai Rezaee, Mustafa; Yousefi, Samuel; Hayati, Jamileh

    2017-06-01

    Supplier selection and allocation of optimal order quantities are two of the most important processes in closed-loop supply chain (CLSC) and reverse logistics (RL) management. Providing high-quality raw material is a basic requirement for a manufacturer to produce popular products and achieve greater market share. On the other hand, given the competitive environment, suppliers have to offer customers incentives such as discounts and enhance the quality of their products in competition with other manufacturers. Therefore, in this study, a model is presented for CLSC optimization, efficient supplier selection, and order allocation considering a quantity discount policy. It is modeled using multi-objective programming based on integrated simultaneous data envelopment analysis and the Nash bargaining game. Objective functions maximizing profit and efficiency and minimizing the defect rate and delivery delay rate are taken into account. Besides supplier selection, the suggested model selects refurbishing sites and determines the number of products and parts in each network sector. The suggested model is solved using the global criteria method. Furthermore, based on related studies, a numerical example is examined to validate it.
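
    The global criteria method scalarizes a multi-objective model in two steps: first optimize each objective alone to obtain its ideal value, then minimize the summed squared relative deviations from those ideals. A two-objective toy version, with made-up stand-in objectives:

      from scipy.optimize import minimize

      # Toy stand-in objectives over one decision variable q (e.g., an order
      # quantity): f1 is cost-like, f2 is delay-like; both are to be minimized.
      def f1(q): return (q[0] - 4.0) ** 2 + 1.0
      def f2(q): return (q[0] - 7.0) ** 2 + 2.0

      # Step 1: individual optima give the ideal (utopia) values f_i*.
      f1_star = minimize(f1, x0=[0.0]).fun
      f2_star = minimize(f2, x0=[0.0]).fun

      # Step 2: minimize the global criterion, the summed squared relative
      # deviation of every objective from its ideal value.
      def global_criterion(q):
          return (((f1(q) - f1_star) / f1_star) ** 2
                  + ((f2(q) - f2_star) / f2_star) ** 2)

      res = minimize(global_criterion, x0=[0.0])
      print("compromise solution q = %.2f" % res.x[0])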

  4. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network, and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to those of the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions valid only for the data used, and that are too complex to support inferences about the underlying process.

  5. Mechanical Properties of Optimized Diamond Lattice Structure for Bone Scaffolds Fabricated via Selective Laser Melting.

    PubMed

    Liu, Fei; Zhang, David Z; Zhang, Peng; Zhao, Miao; Jafar, Salman

    2018-03-03

    Developments in selective laser melting (SLM) have enabled the fabrication of periodic cellular lattice structures characterized by suitable properties matching the bone tissue well and by fluid permeability from interconnected structures. These multifunctional performances are significantly affected by cell topology and the constitutive properties of the applied materials. In this respect, a diamond unit cell was designed in particular volume fractions corresponding to the host bone tissue and optimized with a smooth surface at the nodes, leading to fewer stress concentrations. Thirty-three porous titanium samples with different volume fractions, from 1.28 to 18.6%, were manufactured using SLM. All of them were tested under compressive load to determine the deformation and failure mechanisms, accompanied by an in-situ approach using digital image correlation (DIC) to reveal the stress-strain evolution. The results showed that lattice structures manufactured by SLM exhibited properties comparable to those of trabecular bone, avoiding the effects of stress shielding and increasing the longevity of implants. The curvature of the optimized surface can play a role in regulating the relationship between density and mechanical properties. Owing to the release of stress concentration from the optimized surface, the failure mechanism of porous titanium changed from a layer-by-layer (or cell-row-by-cell-row) bottom-up collapse to a diagonal (45°) shear band, resulting in a significant enhancement of the structural strength.

  6. Mechanical Properties of Optimized Diamond Lattice Structure for Bone Scaffolds Fabricated via Selective Laser Melting

    PubMed Central

    Zhang, David Z.; Zhang, Peng; Zhao, Miao; Jafar, Salman

    2018-01-01

    Developments in selective laser melting (SLM) have enabled the fabrication of periodic cellular lattice structures characterized by suitable properties matching the bone tissue well and by fluid permeability from interconnected structures. These multifunctional performances are significantly affected by cell topology and the constitutive properties of the applied materials. In this respect, a diamond unit cell was designed in particular volume fractions corresponding to the host bone tissue and optimized with a smooth surface at the nodes, leading to fewer stress concentrations. Thirty-three porous titanium samples with different volume fractions, from 1.28 to 18.6%, were manufactured using SLM. All of them were tested under compressive load to determine the deformation and failure mechanisms, accompanied by an in-situ approach using digital image correlation (DIC) to reveal the stress-strain evolution. The results showed that lattice structures manufactured by SLM exhibited properties comparable to those of trabecular bone, avoiding the effects of stress shielding and increasing the longevity of implants. The curvature of the optimized surface can play a role in regulating the relationship between density and mechanical properties. Owing to the release of stress concentration from the optimized surface, the failure mechanism of porous titanium changed from a layer-by-layer (or cell-row-by-cell-row) bottom-up collapse to a diagonal (45°) shear band, resulting in a significant enhancement of the structural strength. PMID:29510492

  7. Polyhedral Interpolation for Optimal Reaction Control System Jet Selection

    NASA Technical Reports Server (NTRS)

    Gefert, Leon P.; Wright, Theodore

    2014-01-01

    An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.

  8. Self-Selection, Optimal Income Taxation, and Redistribution

    ERIC Educational Resources Information Center

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  9. Gene selection using hybrid binary black hole algorithm and modified binary particle swarm optimization.

    PubMed

    Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin

    2018-04-14

    In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic model combining the Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO (4-2)) that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate the exploration and exploitation of the BPSO (4-2) algorithm to further improve the performance. This model has been combined with the Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor, and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarrays. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2), and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy, and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets. Copyright © 2018 Elsevier Inc. All rights reserved.
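
    A heavily simplified sketch of binary-PSO-style gene selection with a black-hole-inspired re-initialization step; the sigmoid transfer function, size penalty, and absorption rule here are illustrative choices, not the paper's BPSO (4-2)-BBHA operators.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=120, n_features=30, n_informative=6,
                                 random_state=0)   # synthetic microarray stand-in

      def fitness(mask):
          if mask.sum() == 0:
              return 0.0
          knn = KNeighborsClassifier(n_neighbors=3)
          acc = cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()
          return acc - 0.01 * mask.sum() / mask.size   # penalize large gene sets

      P, D = 12, X.shape[1]
      pos = rng.integers(0, 2, size=(P, D)).astype(float)
      vel = np.zeros((P, D))
      pbest = pos.copy()
      pfit = np.array([fitness(p) for p in pos])
      g = pfit.argmax()

      for _ in range(20):
          r1, r2 = rng.random((P, D)), rng.random((P, D))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (pbest[g] - pos)
          pos = (rng.random((P, D)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
          fits = np.array([fitness(p) for p in pos])
          improved = fits > pfit
          pbest[improved], pfit[improved] = pos[improved], fits[improved]
          g = pfit.argmax()
          # Black-hole step: particles that collapse onto the best solution are
          # "absorbed" and re-initialized to keep the swarm exploring.
          for i in range(P):
              if i != g and np.array_equal(pos[i], pbest[g]):
                  pos[i] = rng.integers(0, 2, size=D).astype(float)

      print("genes:", np.flatnonzero(pbest[g]), "fitness: %.3f" % pfit[g])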

  10. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping, if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
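
    A standard LQR computation for a one-mass oscillator, with Q chosen as the total-energy weighting the abstract mentions; whether the resulting gains map onto passive spring/damper elements is exactly the paper's concern. All numbers are illustrative.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Single-mass oscillator (m x'' + c x' + k x = u) in state-space form,
      # states [position, velocity].
      m, c, k = 1.0, 0.2, 4.0
      A = np.array([[0.0, 1.0], [-k / m, -c / m]])
      B = np.array([[0.0], [1.0 / m]])

      # Weighting matrices: Q penalizes total energy (strain plus kinetic),
      # one of the physically motivated choices discussed in the paper.
      Q = np.array([[k, 0.0], [0.0, m]])
      R = np.array([[0.1]])

      P = solve_continuous_are(A, B, Q, R)   # Algebraic Riccati Equation
      K = np.linalg.inv(R) @ B.T @ P         # optimal state-feedback gain
      print("gain K =", K.round(3))
      # K's entries act like an added spring and damper; the design is passive
      # only when those effective elements are physically realizable.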

  11. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  12. Helping the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection through the organization of a pilot health care provider research system.

    PubMed

    Tang, Liyang

    2013-04-04

    The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system; it could efficiently collect, from various experts, the data needed to determine the optimal solution to this problem. The purpose of this study was therefore to apply the optimal implementation methodology to help the decision maker effectively promote various experts' views into various optimal solutions to this problem, with the support of this pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009-2010 national expert survey (n = 3,914), conducted through the organization of a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the dataset from this survey. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government's regulation-oriented health care provider approach was the optimal solution from the pharmacists', the hospital administrators', and the health administration officials' points of view; the public-private partnership (PPP) approach was the optimal solution from the nurses', the medical insurance agency officials', and the health care researchers' points of view.

  13. Adaptive Optimal Control Using Frequency Selective Information of the System Uncertainty With Application to Unmanned Aircraft.

    PubMed

    Maity, Arnab; Hocht, Leonhard; Heise, Christian; Holzapfel, Florian

    2018-01-01

    A new efficient adaptive optimal control approach is presented in this paper based on the indirect model reference adaptive control (MRAC) architecture for improvement of adaptation and tracking performance of the uncertain system. The system accounts here for both matched and unmatched unknown uncertainties that can act as plant as well as input effectiveness failures or damages. For adaptation of the unknown parameters of these uncertainties, the frequency selective learning approach is used. Its idea is to compute a filtered expression of the system uncertainty using multiple filters based on online instantaneous information, which is used for augmentation of the update law. It is capable of adjusting a sudden change in system dynamics without depending on high adaptation gains and can satisfy exponential parameter error convergence under certain conditions in the presence of structured matched and unmatched uncertainties as well. Additionally, the controller of the MRAC system is designed using a new optimal control method. This method is a new linear quadratic regulator-based optimal control formulation for both output regulation and command tracking problems. It provides a closed-form control solution. The proposed overall approach is applied in a control of lateral dynamics of an unmanned aircraft problem to show its effectiveness.

  14. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies.

    PubMed

    Rada, E C; Ragazzi, M; Fedrizzi, P

    2013-04-01

    Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where a landfill based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically analyzed referring to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy. The obtained efficiency is very high: 80% of waste is source separated for recycling purposes. In the second reference case, the local administration is going to be faced with the optimization of waste collection through Web-GIS oriented technologies for the first time. The starting scenario is far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed referring to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web oriented tools for MSW management, but this opportunity is not yet well exploited in the sector. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Optimal location selection for the installation of urban green roofs considering honeybee habitats along with socio-economic and environmental effects.

    PubMed

    Gwak, Jae Ha; Lee, Bo Kyeong; Lee, Won Kyung; Sohn, So Young

    2017-03-15

    This study proposes a new framework for the selection of optimal locations for green roofs to achieve a sustainable urban ecosystem. The proposed framework selects building sites that can maximize the benefits of green roofs, based not only on the socio-economic and environmental benefits to urban residents, but also on the provision of urban foraging sites for honeybees. The framework comprises three steps. First, building candidates for green roofs are selected considering the building type. Second, the selected building candidates are ranked in terms of their expected socio-economic and environmental effects. The benefits of green roofs are improved energy efficiency and air quality, reduction of urban flood risk and infrastructure improvement costs, reuse of storm water, and creation of space for education and leisure. Furthermore, the estimated cost of installing green roofs is also considered. We employ spatial data to determine the expected effects of green roofs on each building unit, because the benefits and costs may vary depending on the location of the building. This is due to the heterogeneous spatial conditions. In the third step, the final building sites are proposed by solving the maximal covering location problem (MCLP) to determine the optimal locations for green roofs as urban honeybee foraging sites. As an illustrative example, we apply the proposed framework in Seoul, Korea. This new framework is expected to contribute to sustainable urban ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.
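
    The third step is the classical maximal covering location problem; a toy instance in PuLP with hypothetical rooftop sites, district forage-demand weights, and coverage sets derived from a foraging radius.

      from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

      # Hypothetical inputs: candidate rooftop sites, per-district bee-forage
      # demand weights, and the districts each site can cover.
      SITES = ["r1", "r2", "r3", "r4"]
      DEMAND = {"d1": 30, "d2": 50, "d3": 20, "d4": 40}
      COVERS = {"r1": {"d1", "d2"}, "r2": {"d2", "d3"},
                "r3": {"d3", "d4"}, "r4": {"d1", "d4"}}
      P = 2                                    # number of green roofs to build

      prob = LpProblem("green_roof_mclp", LpMaximize)
      x = {s: LpVariable("x_" + s, cat=LpBinary) for s in SITES}    # build site?
      z = {d: LpVariable("z_" + d, cat=LpBinary) for d in DEMAND}   # covered?

      prob += lpSum(DEMAND[d] * z[d] for d in DEMAND)   # maximize covered demand
      prob += lpSum(x.values()) == P
      for d in DEMAND:   # a district counts only if some chosen site covers it
          prob += z[d] <= lpSum(x[s] for s in SITES if d in COVERS[s])

      prob.solve()
      print("build:", [s for s in SITES if x[s].value() == 1])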

  16. (Too) optimistic about optimism: the belief that optimism improves performance.

    PubMed

    Tenney, Elizabeth R; Logg, Jennifer M; Moore, Don A

    2015-03-01

    A series of experiments investigated why people value optimism and whether they are right to do so. In Experiments 1A and 1B, participants prescribed more optimism for someone implementing decisions than for someone deliberating, indicating that people prescribe optimism selectively, when it can affect performance. Furthermore, participants believed optimism improved outcomes when a person's actions had considerable, rather than little, influence over the outcome (Experiment 2). Experiments 3 and 4 tested the accuracy of this belief; optimism improved persistence, but it did not improve performance as much as participants expected. Experiments 5A and 5B found that participants overestimated the relationship between optimism and performance even when their focus was not on optimism exclusively. In summary, people prescribe optimism when they believe it has the opportunity to improve the chance of success; unfortunately, people may be overly optimistic about just how much optimism can do. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  17. The optimal hormonal replacement modality selection for multiple organ procurement from brain-dead organ donors

    PubMed Central

    Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC

    2015-01-01

    The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment needed to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB), adapted from Dunnett's multiple comparisons with control (MCC), has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890

  18. Maximizing power generation from dark fermentation effluents in microbial fuel cell by selective enrichment of exoelectrogens and optimization of anodic operational parameters.

    PubMed

    Varanasi, Jhansi L; Sinha, Pallavi; Das, Debabrata

    2017-05-01

    To selectively enrich an electrogenic mixed consortium capable of utilizing dark fermentative effluents as substrates in microbial fuel cells and to further enhance the power outputs by optimization of influential anodic operational parameters. A maximum power density of 1.4 W/m3 was obtained by an enriched mixed electrogenic consortium in microbial fuel cells using acetate as substrate. This was further increased to 5.43 W/m3 by optimization of influential anodic parameters. By utilizing dark fermentative effluents as substrates, the maximum power densities ranged from 5.2 to 6.2 W/m3, with an average COD removal efficiency of 75% and a coulombic efficiency of 10.6%. A simple strategy is provided for selective enrichment of electrogenic bacteria that can be used in microbial fuel cells for generating power from various dark fermentative effluents.

  19. Increased genetic gains in sheep, beef and dairy breeding programs from using female reproductive technologies combined with optimal contribution selection and genomic breeding values.

    PubMed

    Granleese, Tom; Clark, Samuel A; Swan, Andrew A; van der Werf, Julius H J

    2015-09-14

    Female reproductive technologies such as multiple ovulation and embryo transfer (MOET) and juvenile in vitro embryo production and embryo transfer (JIVET) can boost rates of genetic gain, but they can also increase rates of inbreeding. Inbreeding can be managed using the principles of optimal contribution selection (OCS), which maximizes genetic gain while placing a penalty on the rate of inbreeding. We evaluated the potential benefits and synergies that exist between genomic selection (GS) and reproductive technologies under OCS for sheep and cattle breeding programs. Various breeding program scenarios were simulated stochastically, including: (1) a sheep breeding program for the selection of a single trait that could be measured either early or late in life; (2) a beef breeding program with an early or late trait; and (3) a dairy breeding program with a sex-limited trait. OCS was applied using a range of penalties (severe to no penalty) on co-ancestry of selection candidates, with the possibility of using MOET and/or JIVET for females. Each breeding program was simulated with and without genomic selection. All breeding programs could be penalized to result in an inbreeding rate of 1% per generation. The addition of MOET to artificial insemination or natural breeding (AI/N), without the use of GS, yielded an extra 25 to 60% genetic gain. The further addition of JIVET did not yield extra genetic gain. When GS was used, MOET and MOET + JIVET programs increased rates of genetic gain by 38 to 76% and 51 to 81% compared to AI/N, respectively. Large increases in genetic gain were found across species when female reproductive technologies combined with genomic selection were applied and inbreeding was managed, especially for breeding programs that focus on the selection of traits measured late in life or that are sex-limited. Optimal contribution selection was
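
    At its core, optimal contribution selection trades genetic gain against co-ancestry. A minimal numerical sketch, assuming made-up breeding values, a toy relationship matrix, and an arbitrary penalty weight in place of the explicit inbreeding-rate constraint:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n = 20
      ebv = rng.normal(size=n)            # hypothetical estimated breeding values
      A = np.eye(n) + 0.1                 # toy additive relationship matrix
      lam = 5.0                           # penalty weight on mean co-ancestry

      # Maximize genetic gain c'ebv minus a co-ancestry penalty lam * c'Ac,
      # with non-negative contributions c that sum to one.
      obj = lambda c: -(c @ ebv) + lam * (c @ A @ c)
      cons = ({"type": "eq", "fun": lambda c: c.sum() - 1.0},)
      res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
                     constraints=cons)
      print("contributions:", np.round(res.x, 3))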

  20. Genome-wide characterization and selection of expressed sequence tag simple sequence repeat primers for optimized marker distribution and reliability in peach

    USDA-ARS?s Scientific Manuscript database

    Expressed sequence tag (EST) simple sequence repeats (SSRs) in Prunus were mined, and flanking primers designed and used for genome-wide characterization and selection of primers to optimize marker distribution and reliability. A total of 12,618 contigs were assembled from 84,727 ESTs, along with 34...

  1. Optimization of passively mode-locked Nd:GdVO4 laser with the selectable pulse duration 15-70 ps

    NASA Astrophysics Data System (ADS)

    Frank, Milan; Jelínek, Michal; Vyhlídal, David; Kubeček, Václav

    2016-12-01

    In this paper the optimization of a continuously diode-pumped Nd:GdVO4 laser oscillator in bounce geometry, passively mode-locked using a semiconductor saturable absorber mirror, is presented. In previous work, a Nd:GdVO4 laser system generating 30-ps pulses with an average output power of 6.9 W at a repetition rate of 200 MHz and a wavelength of 1063 nm was reported. Here we demonstrate an up to three-fold increase in peak power due to optimized mode-matching in the laser resonator. Depending on the oscillator configuration, we obtained stable continuous mode-locked operation with pulse durations selectable from 15 ps to 70 ps, an average output power of 7 W and a repetition rate of 150 MHz.

  2. Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2002-01-01

    The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
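
    For reference, a bare-bones global-best PSO on a toy continuous function; the inertia and acceleration coefficients are common textbook defaults, not the settings used in the paper:

      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))        # positions
          v = np.zeros_like(x)                              # velocities
          pbest = x.copy()
          pbest_f = np.apply_along_axis(f, 1, x)
          g = pbest[pbest_f.argmin()].copy()                # global best
          w, c1, c2 = 0.7, 1.5, 1.5     # inertia, cognitive and social weights
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              fx = np.apply_along_axis(f, 1, x)
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest[pbest_f.argmin()].copy()
          return g, pbest_f.min()

      print(pso(lambda z: float((z ** 2).sum()), dim=5))    # sphere test function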

  3. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    PubMed

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating this natural phenomenon to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
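
    A sketch of the asymptotic OCBA allocation rule for picking the design with the smallest mean fitness; the means, standard deviations, and budget are made-up inputs, and the integration with PSO's personal/global-best updates is omitted:

      import numpy as np

      def ocba_alloc(means, stds, budget):
          """Asymptotic OCBA budget split for finding the smallest-mean design."""
          means, stds = np.asarray(means, float), np.asarray(stds, float)
          b = means.argmin()                      # current best design
          nb = np.arange(means.size) != b
          # Non-best designs: allocation proportional to (std / gap-to-best)^2.
          ratios = np.ones_like(means)
          ratios[nb] = (stds[nb] / (means[nb] - means[b])) ** 2
          # Best design gets n_b = s_b * sqrt(sum_i (n_i / s_i)^2).
          ratios[b] = stds[b] * np.sqrt(((ratios[nb] / stds[nb]) ** 2).sum())
          return np.round(budget * ratios / ratios.sum()).astype(int)

      # Three designs (e.g. particles) with noisy fitness estimates:
      print(ocba_alloc(means=[1.0, 1.2, 2.0], stds=[0.5, 0.6, 0.4], budget=100))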

  4. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization

    PubMed Central

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Chen, Chun-Hung

    2017-01-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating this natural phenomenon to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort. PMID:29170617

  5. Nonparametric variational optimization of reaction coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk

    State of the art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form, thus avoiding this source of error.

  6. Spatially aggregated multiclass pattern classification in functional MRI using optimally selected functional brain areas.

    PubMed

    Zheng, Weili; Ackley, Elena S; Martínez-Ramón, Manel; Posse, Stefan

    2013-02-01

    In previous works, boosting aggregation of classifier outputs from discrete brain areas has been demonstrated to reduce dimensionality and improve the robustness and accuracy of functional magnetic resonance imaging (fMRI) classification. However, dimensionality reduction and classification of mixed activation patterns of multiple classes remain challenging. In the present study, the goals were (a) to reduce dimensionality by combining feature reduction at the voxel level and backward elimination of optimally aggregated classifiers at the region level, (b) to compare region selection for spatially aggregated classification using boosting and partial least squares regression methods and (c) to resolve mixed activation patterns using probabilistic prediction of individual tasks. Brain activation maps from interleaved visual, motor, auditory and cognitive tasks were segmented into 144 functional regions. Feature selection reduced the number of feature voxels by more than 50%, leaving 95 regions. The two aggregation approaches further reduced the number of regions to 30, resulting in more than 75% reduction of classification time and misclassification rates of less than 3%. Boosting and partial least squares (PLS) were compared to select the most discriminative and the most task correlated regions, respectively. Successful task prediction in mixed activation patterns was feasible within the first block of task activation in real-time fMRI experiments. This methodology is suitable for sparsifying activation patterns in real-time fMRI and for neurofeedback from distributed networks of brain activation. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.

  8. [Optimization of diagnosis indicator selection and inspection plan by 3.0T MRI in breast cancer].

    PubMed

    Jiang, Zhongbiao; Wang, Yunhua; He, Zhong; Zhang, Lejun; Zheng, Kai

    2013-08-01

    To optimize the 3.0T MRI diagnostic indicators for breast cancer and to select the best MRI scanning protocol. Totally 45 patients with breast cancer were enrolled, and another 35 patients with benign breast tumors served as the control group. All patients underwent 3.0T MRI, including T1-weighted imaging (T1WI), fat-suppressed T2-weighted imaging (T2WI), diffusion weighted imaging (DWI), 1H magnetic resonance spectroscopy (1H-MRS) and dynamic contrast enhanced (DCE) sequences. With the operative pathology results as the gold standard for the diagnosis of breast disease, the benign or malignant pathological result served as the dependent variable, and the MRI diagnostic indicators were taken as independent variables. All MRI examination indicators were entered into a logistic regression analysis, the logistic model was established, and the MRI diagnostic indicators were optimized to further improve the MRI scanning of breast cancer. The logistic regression analysis retained the edge feature of the tumor, the time-signal intensity curve (TIC) type, and the apparent diffusion coefficient (ADC) value at b = 500 s/mm2. The regression equation was Logit(P) = -21.936 + 20.478X6 + 3.267X7 + 21.488X3. Valuable indicators in the diagnosis of breast cancer are the edge feature of the tumor, the TIC type and the ADC value at b = 500 s/mm2. Combining conventional MRI, DWI and dynamic contrast-enhanced MRI is the better examination program, with MRS as a complementary technique when diagnosis is difficult.
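
    A minimal sketch of fitting such a diagnostic logistic model with scikit-learn; the three predictors are synthetic stand-ins for the retained indicators, so the fitted coefficients will not reproduce the reported equation:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 80
      X = np.column_stack([
          rng.integers(0, 2, n),        # edge feature (0 = smooth, 1 = irregular)
          rng.integers(1, 4, n),        # TIC type (I, II or III)
          rng.normal(1.2, 0.3, n),      # ADC at b = 500 s/mm2 (x10^-3 mm^2/s)
      ])
      y = rng.integers(0, 2, n)         # 0 = benign, 1 = malignant (synthetic)

      model = LogisticRegression().fit(X, y)
      print("coefficients:", model.coef_.round(3), "intercept:", model.intercept_)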

  9. Optimization of the aerosolization properties of an inhalation dry powder based on selection of excipients.

    PubMed

    Minne, Antoine; Boireau, Hélène; Horta, Maria Joao; Vanbever, Rita

    2008-11-01

    The aim of this study was to investigate the influence of formulation excipients on physical characteristics of inhalation dry powders prepared by spray-drying. The excipients used were a series of amino acids (glycine, alanine, leucine, isoleucine), trehalose and dipalmitoylphosphatidylcholine (DPPC). The particle diameter and the powder density were assessed by laser diffraction and tap density measurements, respectively. The aerosol behaviour of the powders was studied in a Multi-Stage Liquid Impinger. The nature and the relative proportion of the excipients affected the aerosol performance of the powders, mainly by altering powder tap density and degree of particle aggregation. The alanine/trehalose/DPPC (30/10/60 w/w/w) formulation showed optimal aerodynamic behaviour with a mass median aerodynamic diameter of 4.7 µm, an emitted dose of 94% and a fine particle fraction of 54% at an airflow rate of 100 L/min using a Spinhaler inhaler device. The powder had a tap density of 0.10 g/cm3. The particles were spherical with a granular surface and had a 4 µm volume median diameter. In conclusion, optimization of the aerosolization properties of inhalation dry powders could be achieved by appropriately selecting the composition of the particles.

  10. The Use of Variable Q1 Isolation Windows Improves Selectivity in LC-SWATH-MS Acquisition.

    PubMed

    Zhang, Ying; Bilbao, Aivett; Bruderer, Tobias; Luban, Jeremy; Strambio-De-Castillia, Caterina; Lisacek, Frédérique; Hopfgartner, Gérard; Varesio, Emmanuel

    2015-10-02

    As tryptic peptides and metabolites are not equally distributed along the mass range, the probability of cross fragment ion interference is higher in certain windows when fixed Q1 SWATH windows are applied. We evaluated the benefits of utilizing variable Q1 SWATH windows with regard to selectivity improvement. Variable windows based on equalizing the distribution of either the precursor ion population (PIP) or the total ion current (TIC) within each window were generated by an in-house software tool, swathTUNER. These two variable Q1 SWATH window strategies outperformed, with respect to quantification and identification, the basic approach using a fixed window width (FIX) for proteomic profiling of human monocyte-derived dendritic cells (MDDCs). Thus, 13.8 and 8.4% additional peptide precursors, which resulted in 13.1 and 10.0% more proteins, were confidently identified by SWATH using the strategies PIP and TIC, respectively, in the MDDC proteomic sample. On the basis of the spectral library purity score, some improvement afforded by variable Q1 windows was also observed, albeit to a lesser extent, in the metabolomic profiling of human urine. We show that the novel concept of "scheduled SWATH" proposed here, which incorporates (i) variable isolation windows and (ii) precursor retention time segmentation, further improves both peptide and metabolite identifications.
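
    The PIP-style windowing can be illustrated in a few lines: cut the precursor m/z axis at quantiles so that each Q1 window holds roughly the same precursor population (synthetic m/z values; this is only the windowing idea, not the swathTUNER implementation):

      import numpy as np

      rng = np.random.default_rng(3)
      # Synthetic precursor m/z values; in practice these come from a survey run.
      mz = np.clip(rng.normal(600, 150, 20000), 400, 1200)

      n_windows = 32
      # PIP-style windows: cut at m/z quantiles so each Q1 window holds
      # roughly the same number of precursors.
      edges = np.quantile(mz, np.linspace(0, 1, n_windows + 1))
      widths = np.diff(edges)
      print(f"narrowest {widths.min():.1f} m/z, widest {widths.max():.1f} m/z")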

  11. Garnet Ring Measurements for the Fermilab Booster 2nd Harmonic Cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuharik, J.; Dey, J.; Duel, K.

    A perpendicularly biased tuneable 2nd harmonic cavity is being constructed for use in the Fermilab Booster. The cavity's tuner uses National Magnetics AL800 garnet as the tuning medium. For quality control, the magnetic properties of the material and the uniformity of those properties within the tuner must be assessed. We describe two tests which are performed on the rings and on their corresponding witness samples.

  12. A Geographic Analysis of Optimal Signage Location Selection in Scenic Area

    NASA Astrophysics Data System (ADS)

    Ruan, Ling; Long, Ying; Zhang, Ling; Wu, Xiao Ling

    2016-06-01

    As an important part of scenic area infrastructure services, the signage guiding system plays an indispensable role in showing the way and improving the quality of the tourism experience. This paper proposes a geographic-analysis-based method for optimal signage location selection and direction content design in a scenic area. The objective is to find the best arrangement of a limited number of guide boards in a tourism area so that visitors can reach any scenic spot from any entrance. There are four steps. First, the spatial distribution of the junctions of scenic roads, passageways and scenic spots is analyzed. Second, for each road intersection, the number of shortest paths between entrances and scenic spots passing through it is counted. Third, combining this count with the grades of the scenic roads and scenic spots, the importance of each road intersection is estimated quantitatively. Finally, according to the importance of all road intersections, the most suitable locations for signage guide boards are proposed. The method is applied to the Ming Tomb scenic area in China and the result is compared with the existing signage layout.
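
    The intersection-counting step can be sketched with networkx on an invented toy graph: count how often each junction lies on a shortest entrance-to-spot path:

      import itertools
      import networkx as nx

      # Toy scenic-area road network: entrances E*, junctions J*, spots S*.
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("E1", "J1", 1), ("E2", "J2", 1), ("J1", "J2", 1), ("J1", "J3", 2),
          ("J2", "J3", 1), ("J3", "S1", 1), ("J3", "S2", 2), ("J2", "S3", 1),
      ])

      # Count how often each junction lies on a shortest entrance-to-spot path.
      counts = dict.fromkeys(["J1", "J2", "J3"], 0)
      for e, s in itertools.product(["E1", "E2"], ["S1", "S2", "S3"]):
          for node in nx.shortest_path(G, e, s, weight="weight"):
              if node in counts:
                  counts[node] += 1
      print(counts)   # high-count junctions are strong signage candidates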

  13. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the kernel eigenvectors by importance in terms of entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it strongly affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
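
    A compact sketch of the KECA ingredient that OKECA builds on: eigendecompose an RBF kernel matrix and rank eigenpairs by their Renyi-entropy contribution lam_i * (1'e_i)^2 rather than by variance. Data and kernel width are arbitrary, and OKECA's extra ICA rotation and gradient-ascent search are omitted:

      import numpy as np
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(4)
      X = rng.normal(size=(100, 3))                 # arbitrary data
      sigma = 1.0                                   # assumed kernel length-scale
      K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))

      lam, E = np.linalg.eigh(K)                    # eigendecomposition
      lam = np.clip(lam, 0.0, None)                 # guard tiny negative values
      entropy = lam * (E.sum(axis=0) ** 2)          # per-eigenpair entropy terms
      order = np.argsort(entropy)[::-1]             # sort by entropy, not variance

      k = 2
      Phi = E[:, order[:k]] * np.sqrt(lam[order[:k]])   # KECA projection
      print(Phi.shape)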

  14. Process Optimization and Microstructure Characterization of Ti6Al4V Manufactured by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Li, Junfeng; Wei, Zhengying

    2017-11-01

    The process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters: laser power, scan speed and hatch distance. The volume energy density (VED) was defined to account for the combined effect of these parameters on the relative density. The results showed that the relative density varied with VED, and the optimal process window is 55-60 J/mm3. Furthermore, a Taguchi comparison of laser power, scan speed and hatch distance showed that scan speed had the greatest effect on the relative density. Cross-sections of specimens built at different scan speeds exhibited similar microstructures, consisting of needle-like martensite distributed in the β matrix; however, higher scan speeds produced a finer microstructure, while lower scan speeds led to coarsening.
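
    The VED referred to above is the standard ratio of laser power to the volume swept per unit time, VED = P / (v * h * t). A quick check with illustrative parameters (the layer thickness is an assumption, since the abstract does not report it):

      # Volume energy density, VED = P / (v * h * t); parameter values are
      # illustrative, and the layer thickness t is an assumption (the abstract
      # does not report it).
      P = 280.0     # laser power, W
      v = 1200.0    # scan speed, mm/s
      h = 0.14      # hatch distance, mm
      t = 0.03      # layer thickness, mm

      ved = P / (v * h * t)             # J/mm^3
      print(f"VED = {ved:.1f} J/mm^3")  # reported optimal window: 55-60 J/mm^3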

  15. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two are gradient-based, concurrent methods using first- and second-order derivatives, respectively, and the third belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  16. Responding to home maintenance challenge scenarios: the role of selection, optimization, and compensation in aging-in-place.

    PubMed

    Kelly, Andrew John; Fausset, Cara Bailey; Rogers, Wendy; Fisk, Arthur D

    2014-12-01

    This study examined potential issues faced by older adults in managing their homes and their proposed solutions for overcoming hypothetical difficulties. Forty-four diverse, independently living older adults (66-85) participated in structured group interviews in which they discussed potential solutions to manage difficulties presented in four scenarios: perceptual, mobility, physical, and cognitive difficulties. The proposed solutions were classified using the Selection, Optimization, and Compensation (SOC) model. Participants indicated they would continue performing most tasks and reported a range of strategies to manage home maintenance challenges. Most participants reported that they would manage home maintenance challenges using compensation; the most frequently mentioned compensation strategy was using tools and technologies. There were also differences across the scenarios: Optimization was discussed most frequently with perceptual and cognitive difficulty scenarios. These results provide insights into supporting older adults' potential needs for aging-in-place and provide evidence of the value of the SOC model in applied research. © The Author(s) 2012.

  17. Self-Regulation among Youth in Four Western Cultures: Is There an Adolescence-Specific Structure of the Selection-Optimization-Compensation (SOC) Model?

    ERIC Educational Resources Information Center

    Gestsdottir, Steinunn; Geldhof, G. John; Paus, Tomáš; Freund, Alexandra M.; Adalbjarnardottir, Sigrun; Lerner, Jacqueline V.; Lerner, Richard M.

    2015-01-01

    We address how to conceptualize and measure intentional self-regulation (ISR) among adolescents from four cultures by assessing whether ISR (conceptualized by the SOC model of Selection, Optimization, and Compensation) is represented by three factors (as with adult samples) or as one "adolescence-specific" factor. A total of 4,057 14-…

  18. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.
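
    As a minimal numerical counterpart, solving the algebraic Riccati equation for a spring-mass-damper with a total-energy weighting (one of the special cases mentioned) yields the optimal gain directly; all values are illustrative:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Spring-mass-damper in first-order form (illustrative values).
      m, k, c = 1.0, 4.0, 0.1
      A = np.array([[0.0, 1.0], [-k / m, -c / m]])
      B = np.array([[0.0], [1.0 / m]])

      # Total-energy weighting, one of the special cases noted in the paper:
      # Q penalizes potential + kinetic energy, R the control effort.
      Q = np.diag([k, m])
      R = np.array([[1.0]])

      P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain
      print("LQR gain K =", K.round(3))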

  19. Helping the decision maker effectively promote various experts’ views into various optimal solutions to China’s institutional problem of health care provider selection through the organization of a pilot health care provider research system

    PubMed Central

    2013-01-01

    Background The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized within China's health care system; because it could efficiently collect data on this problem from various experts, the purpose of this study was to apply an optimal implementation methodology to help the decision maker effectively translate the various experts' views into optimal solutions, with the support of this pilot system. Methods After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted for the first time in China through the pilot health care provider research system, and the analytic network process (ANP) implementation methodology was adopted to analyze the survey dataset. Results The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was optimal from the point of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public private partnership (PPP) approach was optimal from the point of view of nurses, officials in medical insurance agencies, and health care researchers

  20. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    NASA Astrophysics Data System (ADS)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has shifted from the traditional single plant to multi-site supply chains in which multiple plants serve customer demand. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and of the proposed approach is discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
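
    The AHP step reduces to extracting the principal eigenvector of a pairwise comparison matrix; a small sketch with invented judgments for three objectives, including the consistency-index check:

      import numpy as np

      # Pairwise 1-9 comparisons of the three objectives (cost, quality,
      # satisfaction); the judgments are invented for illustration.
      M = np.array([[1.0, 3.0, 5.0],
                    [1 / 3, 1.0, 2.0],
                    [1 / 5, 1 / 2, 1.0]])

      vals, vecs = np.linalg.eig(M)
      i = vals.real.argmax()
      w = vecs[:, i].real
      w = w / w.sum()                        # principal eigenvector -> weights
      ci = (vals.real.max() - 3) / (3 - 1)   # consistency index
      print("weights:", w.round(3), f"CI = {ci:.3f}")

      # A Pareto solution's overall score is then the weighted sum of its
      # normalized objective values; the highest-scoring solution is selected.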

  1. Maximizing the reliability of genomic selection by optimizing the calibration set of reference individuals: comparison of methods in two diverse groups of maize inbreds (Zea mays L.).

    PubMed

    Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L

    2012-10-01

    Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNP array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.

  2. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  3. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  4. Reserve selection with land market feedbacks.

    PubMed

    Butsic, Van; Lewis, David J; Radeloff, Volker C

    2013-01-15

    How best to site reserves is a leading question for conservation biologists. Recently, reserve selection has emphasized efficient conservation: maximizing conservation goals given the reality of limited conservation budgets. This work indicates that land markets can potentially undermine the conservation benefits of reserves by increasing property values and development probabilities near reserves. Here we propose a reserve selection methodology which optimizes conservation given both a budget constraint and land market feedbacks by using a combination of econometric models along with stochastic dynamic programming. We show that amenity-based feedbacks can be accounted for in optimal reserve selection by choosing property price and land development models which exogenously estimate the effects of reserve establishment. In our empirical example, we use previously estimated models of land development and property prices to select parcels to maximize coarse woody debris along 16 lakes in Vilas County, WI, USA. Using each lake as an independent experiment, we find that including land market feedbacks in the reserve selection algorithm has only small effects on conservation efficacy. Likewise, we find that in our setting heuristic (minloss and maxgain) algorithms perform nearly as well as the optimal selection strategy. We emphasize that land market feedbacks can be included in optimal reserve selection; the extent to which this improves reserve placement will likely vary across landscapes. Copyright © 2012 Elsevier Ltd. All rights reserved.
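
    The maxgain-style heuristic can be sketched as a budgeted greedy selection on benefit/cost ratios; parcels are synthetic and the econometric land-market feedback models are omitted:

      import numpy as np

      rng = np.random.default_rng(5)
      benefit = rng.uniform(1, 10, 30)       # conservation value per parcel
      price = rng.uniform(1, 5, 30)          # acquisition cost per parcel
      budget = 20.0

      # Greedy "maxgain"-style rule: repeatedly buy the parcel with the best
      # benefit/cost ratio that still fits the budget.
      remaining, chosen = budget, []
      for j in np.argsort(-benefit / price):
          if price[j] <= remaining:
              chosen.append(j)
              remaining -= price[j]
      print(f"{len(chosen)} parcels, total benefit {benefit[chosen].sum():.1f}")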

  5. Optimal feature selection using a modified differential evolution algorithm and its effectiveness for prediction of heart disease.

    PubMed

    Vivekanandan, T; Sriman Narayana Iyengar, N Ch

    2017-11-01

    Enormous data growth in multiple domains has posed a great challenge for data processing and analysis techniques. In particular, traditional record keeping has been replaced by electronic records in the healthcare system, and it is vital to develop a model that is able to handle this huge amount of e-healthcare data efficiently. In this paper, the challenging tasks of selecting critical features from the enormous set of available features and diagnosing heart disease are carried out. Feature selection is one of the most widely used pre-processing steps in classification problems. A modified differential evolution (DE) algorithm is used to perform feature selection for cardiovascular disease and optimization of the selected features. Of the 10 available strategies for the traditional DE algorithm, the seventh strategy, DE/rand/2/exp, is considered for comparative study. The performance analysis of the developed modified DE strategy is given in this paper. With the selected critical features, prediction of heart disease is carried out using fuzzy AHP and a feed-forward neural network. Various performance measures of integrating the modified differential evolution algorithm with fuzzy AHP and a feed-forward neural network in the prediction of heart disease are evaluated in this paper. The accuracy of the proposed hybrid model is 83%, which is higher than that of some other existing models. In addition, the prediction time of the proposed hybrid model is also evaluated and has shown promising results. Copyright © 2017 Elsevier Ltd. All rights reserved.
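
    For orientation, a plain DE sketch from the rand/2 family on a toy objective; binomial crossover is used here for brevity, whereas DE/rand/2/exp, the strategy studied in the paper, uses exponential crossover, and the paper modifies the algorithm further:

      import numpy as np

      def de_rand2_bin(f, bounds, pop=30, F=0.5, CR=0.9, iters=300, seed=6):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, float).T
          d = lo.size
          X = rng.uniform(lo, hi, (pop, d))
          fit = np.apply_along_axis(f, 1, X)
          for _ in range(iters):
              for i in range(pop):
                  r = rng.choice([j for j in range(pop) if j != i], 5,
                                 replace=False)
                  # DE/rand/2 mutation with two difference vectors.
                  v = X[r[0]] + F * (X[r[1]] - X[r[2]]) + F * (X[r[3]] - X[r[4]])
                  mask = rng.random(d) < CR
                  mask[rng.integers(d)] = True   # keep at least one mutant gene
                  u = np.clip(np.where(mask, v, X[i]), lo, hi)
                  if (fu := f(u)) < fit[i]:      # greedy selection
                      X[i], fit[i] = u, fu
          return X[fit.argmin()], fit.min()

      print(de_rand2_bin(lambda z: float((z ** 2).sum()), [(-5, 5)] * 4))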

  6. Multicriteria Selection of Optimal Location of TCSC in a Competitive Energy Market

    NASA Astrophysics Data System (ADS)

    Alomoush, Muwaffaq I.

    2010-05-01

    The paper investigates selection of the best location for a thyristor-controlled series compensator (TCSC) in a transmission system from many candidate locations in a competitive energy market, such that the TCSC has a net beneficial impact on congestion management outcomes, transmission utilization, transmission losses, voltage stability, the degree of fulfillment of spot market contracts, and system security. The problem is treated as a multicriteria decision-making process in which the candidate locations of the TCSC are the alternatives and the conflicting objectives are the outcomes of the dispatch process, which may have different importance weights. The paper proposes performance indices that the dispatch decision-making entity can use to measure the market dispatch outcomes of each alternative. Based on agreed-upon preferences, the measures presented may help the decision maker compare and rank dispatch scenarios to ultimately decide which location is optimal. To solve the multicriteria decision, we use the preference ranking organization method for enrichment evaluations (PROMETHEE), a multicriteria decision support method that can handle complex conflicting-objective decision-making processes.
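
    A minimal PROMETHEE II sketch with the "usual" 0/1 preference function, using invented criterion scores and weights for three candidate TCSC locations, ranked by net outranking flow:

      import numpy as np

      # Rows: candidate TCSC locations; columns: criteria (scores and weights
      # are invented for illustration).
      scores = np.array([[0.8, 0.6, 0.7],
                         [0.5, 0.9, 0.6],
                         [0.7, 0.7, 0.9]])
      weights = np.array([0.5, 0.3, 0.2])

      n = scores.shape[0]
      phi_plus, phi_minus = np.zeros(n), np.zeros(n)
      for a in range(n):
          for b in range(n):
              if a == b:
                  continue
              # "Usual" preference function: 1 if a beats b on a criterion.
              pref = (weights * (scores[a] > scores[b])).sum()
              phi_plus[a] += pref / (n - 1)
              phi_minus[b] += pref / (n - 1)

      phi = phi_plus - phi_minus               # net outranking flow
      print("ranking (best first):", np.argsort(-phi))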

  7. Optimal flow for brown trout: Habitat - prey optimization.

    PubMed

    Fornaroli, Riccardo; Cabrini, Riccardo; Sartori, Laura; Marazzi, Francesca; Canobbio, Sergio; Mezzanotte, Valeria

    2016-10-01

    The correct definition of ecosystem needs is essential in order to guide policy and management strategies to optimize the increasing use of freshwater by human activities. Commonly, the assessment of the optimal or minimum flow rates needed to preserve ecosystem functionality has been done by habitat-based models that define a relationship between in-stream flow and habitat availability for various species of fish. We propose a new approach for the identification of optimal flows using the limiting factor approach and the evaluation of basic ecological relationships, considering the appropriate spatial scale for different organisms. We developed density-environment relationships for three different life stages of brown trout that show the limiting effects of hydromorphological variables at habitat scale. In our analyses, we found that the factors limiting the densities of trout were water velocity, substrate characteristics and refugia availability. For all the life stages, the selected models considered simultaneously two variables and implied that higher velocities provided a less suitable habitat, regardless of other physical characteristics and with different patterns. We used these relationships within habitat based models in order to select a range of flows that preserve most of the physical habitat for all the life stages. We also estimated the effect of varying discharge flows on macroinvertebrate biomass and used the obtained results to identify an optimal flow maximizing habitat and prey availability. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Surface stability and the selection rules of substrate orientation for optimal growth of epitaxial II-VI semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Wan-Jian; Department of Physics & Astronomy, and Wright Center for Photovoltaics Innovation and Commercialization, The University of Toledo, Toledo, Ohio 43606; Yang, Ji-Hui

    2015-10-05

    The surface structures of ionic zinc-blende CdTe (001), (110), (111), and (211) surfaces are systematically studied by first-principles density functional calculations. Based on the surface structures and surface energies, we identify the detrimental twinning appearing in molecular beam epitaxy (MBE) growth of II-VI compounds as (111) lamellar twin boundaries. To avoid the appearance of twinning in MBE growth, we propose the following selection rules for choosing optimal substrate orientations: (1) the surface should be nonpolar, so that there are no large surface reconstructions that could act as nucleation centers and promote the formation of twins; (2) the surface structure should have low symmetry, so that there are no multiple equivalent directions for growth. These straightforward rules, consistent with experimental observations, provide guidelines for selecting proper substrates for high-quality MBE growth of II-VI compounds.

  9. Selecting optimal second-generation antihistamines for allergic rhinitis and urticaria in Asia.

    PubMed

    Recto, Marysia Tiongco; Gabriel, Ma Teresita; Kulthanan, Kanokvalai; Tantilipikorn, Pongsakorn; Aw, Derrick Chen-Wee; Lee, Tak Hong; Chwen, Ch'ng Chin; Mutusamy, Somasundran; Hao, Nguyen Trong; Quang, Vo Thanh; Canonica, Giorgio Walter

    2017-01-01

    Allergic diseases are on the rise in many parts of the world, including the Asia-Pacific (APAC) region. Second-generation antihistamines are the first-line treatment option in the management of allergic rhinitis and urticaria. International guidelines describe the management of these conditions; however, clinicians perceive an additional need to tailor treatment according to patient profiles. This study serves as a consensus of experts from several countries in APAC (Hong Kong, Malaysia, the Philippines, Singapore, Thailand, Vietnam), which aims to describe the unmet needs, practical considerations, challenges, and key decision factors when determining optimal second-generation antihistamines for patients with allergic rhinitis and/or urticaria. Specialists from allergology, dermatology, and otorhinolaryngology were surveyed on practical considerations and key decision points when treating patients with allergic rhinitis and/or urticaria. Clinicians felt the need for additional diagnostic tools for these diseases and for a single drug combining all the preferred features of an antihistamine. Challenges in treatment include lack of clinician and patient awareness and compliance, financial constraints, and treatment of special patient populations such as those with concomitant disease. Selection of optimal second-generation antihistamines depends on many factors, particularly drug safety and efficacy, impact on psychomotor abilities, and sedation. Country-specific considerations include drug availability and cost-effectiveness. Survey results reveal bilastine as a preferred choice due to its high efficacy and safety, suitability for special patient populations, and lack of sedative effects. Compliance with the international guidelines is present among allergists, dermatologists and otorhinolaryngologists; however, it is lower among general practitioners (GPs). To increase awareness, allergy education programs targeted at GPs and patients may be beneficial. Updates to

  10. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  11. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interference may significantly degrade the performance of ICA-based brain-computer interface (BCI) systems. In this study, we proposed a new algorithmic framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate the artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality and classification performance of ICA-based MIBCI. The study revealed that rational use of the ICA method may be crucial to building a practical ICA-based MIBCI system.

  12. Hop Optimization and Relay Node Selection in Multi-hop Wireless Ad-Hoc Networks

    NASA Astrophysics Data System (ADS)

    Li, Xiaohua(Edward)

    In this paper we propose an efficient approach to determining the optimal hops for multi-hop wireless ad-hoc networks. Based on the assumption that nodes use successive interference cancellation (SIC) and maximal ratio combining (MRC) to deal with mutual interference and to utilize all the received signal energy, we show that the signal-to-interference-plus-noise ratio (SINR) of a node is determined only by the nodes before it, not the nodes after it, along a packet forwarding path. Based on this observation, we propose an iterative procedure to select the relay nodes and to calculate the path SINR as well as the capacity of an arbitrary multi-hop packet forwarding path. The complexity of the algorithm is extremely low and scales well with network size, so the algorithm is applicable in arbitrarily large networks. Simulations demonstrate its desirable performance. The algorithm can be helpful in analyzing the performance of multi-hop wireless networks.
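
    A toy front-to-back computation of per-hop SINR and capacity along a line network, under the idealized assumption that SIC removes residual interference and MRC combines every earlier copy of the packet; positions, powers, and the path-loss exponent are invented:

      import math

      # Node positions along a forwarding path (illustrative line network).
      pos = [0.0, 1.0, 2.2, 3.1, 4.5]
      P_tx, noise, alpha = 1.0, 1e-3, 3.0   # tx power, noise power, path loss

      def rx_power(i, j):
          return P_tx * abs(pos[j] - pos[i]) ** (-alpha)

      # Front-to-back pass: node k's SINR depends only on nodes before it;
      # SIC is assumed to cancel interference and MRC to combine every
      # earlier copy of the packet, so prior receptions add up.
      for k in range(1, len(pos)):
          sinr = sum(rx_power(i, k) for i in range(k)) / noise
          print(f"hop {k}: SINR = {sinr:8.1f}, "
                f"capacity = {math.log2(1 + sinr):.2f} bit/s/Hz")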

  13. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore, we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases; larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  14. The expanded invasive weed optimization metaheuristic for solving continuous and discrete optimization problems.

    PubMed

    Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam

    2014-01-01

    This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO), distinguished by a hybrid search-space exploration strategy proposed by the authors. The algorithm is evaluated on three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as an instance of the traveling salesman problem. The results achieved are compared with analogous outcomes produced by other optimization methods reported in the literature.

  15. System Architecture of Explorer Class Spaceborne Telescopes: A look at Optimization of Cost, Testability, Risk and Operational Duty Cycle from the Perspective of Primary Mirror Material Selection

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Westerhoff, Thomas

    2015-01-01

    Management of cost and risk has become the key enabling element for compelling science to be done within Explorer or M-Class missions. We trace how primary mirror material selection may be co-optimized with orbit selection, and then trace the cost and risk implications of selecting a low-diffusivity, low-thermal-expansion material for low and medium Earth orbits versus high-diffusivity, high-thermal-expansion materials for the same orbits. We discuss ZERODUR®, a material that has been in space for over 30 years and is now available as highly lightweighted open-back mirrors, and the attributes of these mirrors in spaceborne optical telescope assemblies. Lightweight ZERODUR® solutions are practical for mirrors from < 0.3 m to > 4 m in diameter. An example of a 1.2 m lightweight ZERODUR® mirror is discussed.

  16. Automatic selection of optimal Savitzky-Golay filter parameters for Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-01-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The ability of cWIA to establish a mechanistic link between coronary haemodynamic measurements and the underlying pathophysiology has been widely demonstrated, and the prognostic value of a cWIA-derived metric has recently been proved. However, the clinical application of cWIA has been hindered by its strong operator dependence, mainly ascribable to the sensitivity of the cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter typically used to smooth the acquired traces, mainly because the S-G filter cannot deal with the different timescale features present in the measured waveforms. Therefore, we propose to apply an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the proposed algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, with physiological noise added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results, with ≤ 10% error for all metrics at all noise levels tested. The newly proposed method therefore makes cWIA fully automatic and operator-independent, opening the possibility of multi-centre trials.

  17. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses with many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast numbers of controls and multiple constraints. With this study we aim to demonstrate that numerical optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third belonging to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Given the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Multiobjective optimization of combinatorial libraries.

    PubMed

    Agrafiotis, D K

    2002-01-01

    Combinatorial chemistry and high-throughput screening have caused a fundamental shift in the way chemists contemplate experiments. Designing a combinatorial library is a controversial art that involves a heterogeneous mix of chemistry, mathematics, economics, experience, and intuition. Although there seems to be little agreement as to what constitutes an ideal library, one thing is certain: the quality of a design is seldom defined by a single property or measure. In most real-world applications, a good experiment requires the simultaneous optimization of several, often conflicting, design objectives, some of which may be vague and uncertain. In this paper, we discuss a class of algorithms for subset selection rooted in the principles of multiobjective optimization. Our approach is to employ an objective function that encodes all of the desired selection criteria, and then to use a simulated annealing or evolutionary approach to identify the optimal (or a nearly optimal) subset from among the vast number of possibilities. Many design criteria can be accommodated, including diversity, similarity to known actives, predicted activity and/or selectivity determined by quantitative structure-activity relationship (QSAR) models or receptor binding models, enforcement of certain property distributions, reagent cost and availability, and many others. The method is robust, convergent, and extensible; offers the user full control over the relative significance of the various objectives in the final design; and permits the simultaneous selection of compounds from multiple libraries in full- or sparse-array format.
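
    A self-contained sketch of the annealing-based subset selection: the objective below combines just two criteria (internal diversity and similarity to known actives) under fixed weights, standing in for the richer multiobjective function described above; the descriptor data, weights, and cooling schedule are invented for illustration.

        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(2)
        pool = rng.standard_normal((200, 5))       # candidate compounds as descriptor vectors
        actives = rng.standard_normal((5, 5))      # hypothetical known actives
        k = 20                                     # library size

        def score(idx):
            sub = pool[idx]
            diversity = pdist(sub).mean()          # mean pairwise distance within the design
            similarity = -np.linalg.norm(sub[:, None, :] - actives[None], axis=2).min(axis=1).mean()
            return 1.0 * diversity + 1.0 * similarity   # one function encodes all criteria

        idx = list(rng.choice(len(pool), k, replace=False))
        best, T = score(idx), 1.0
        for _ in range(5000):
            T *= 0.999                             # geometric cooling
            trial, new = idx.copy(), rng.integers(len(pool))
            if new in trial:
                continue
            trial[rng.integers(k)] = new           # swap one compound in/out
            s = score(trial)
            if s > best or rng.random() < np.exp((s - best) / T):   # Metropolis acceptance
                idx, best = trial, s
        print("final design score:", round(best, 3))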

  19. Optimal timing in biological processes

    USGS Publications Warehouse

    Williams, B.K.; Nichols, J.D.

    1984-01-01

    A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean; nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
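
    The headline property, strong selectivity early that relaxes over time, falls out of a short dynamic program. In the sketch below returns are observable and normally distributed with a mean that declines through the season (an invented schedule); the optimal rule is to act when the observed return exceeds the value of waiting, and that threshold decreases monotonically.

        import numpy as np

        T = 10                                     # length of the time frame
        mu = np.linspace(1.0, 0.4, T)              # time-specific mean returns (assumption)
        sigma = np.full(T, 0.3)

        # V[t] = expected return from behaving optimally over times t..T-1. The organism
        # observes r at time t and acts iff r >= V[t+1], so V[t] = E[max(r_t, V[t+1])].
        V = np.zeros(T + 1)
        draws = np.random.default_rng(3).standard_normal(100_000)
        for t in reversed(range(T)):
            r = mu[t] + sigma[t] * draws           # Monte Carlo over the time-t distribution
            V[t] = np.maximum(r, V[t + 1]).mean()

        thresholds = V[1:]                         # act when the observed return exceeds this
        print(np.round(thresholds, 3))             # decreasing: selectivity relaxes over time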

  20. Asset Allocation and Optimal Contract for Delegated Portfolio Management

    NASA Astrophysics Data System (ADS)

    Liu, Jingjun; Liang, Jianfeng

    This article studies the portfolio selection and contracting problems between an individual investor and a professional portfolio manager in a discrete-time principal-agent framework. Portfolio selection and optimal contracts are obtained in closed form. The optimal contract is composed of a fixed fee, the cost, and a fraction of the excess expected return. The optimal portfolio is similar to that of the classical two-fund separation theorem.

  1. A Comparative Study of Optimization Algorithms for Engineering Synthesis.

    DTIC Science & Technology

    1983-03-01

    The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first...

  2. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features that improves the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the smallest, most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
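
    The two-stage shape of the approach (a cheap univariate filter followed by a wrapper search over the survivors) is easy to reproduce; in the sketch below the CLA/ACO wrapper is replaced by a plain greedy forward search and the data are synthetic, so both are stated assumptions rather than the authors' method.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                                   random_state=4)             # stand-in for microarray data

        # Stage 1: Fisher-criterion filter shrinks the search space.
        m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
        v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
        fisher = (m0 - m1) ** 2 / (v0 + v1 + 1e-12)
        keep = np.argsort(fisher)[-50:]                        # top 50 genes survive

        # Stage 2: wrapper search over the reduced set (greedy stand-in for CLA+ACO).
        clf, selected, best_acc = KNeighborsClassifier(3), [], 0.0
        for _ in range(10):
            scores = {g: cross_val_score(clf, X[:, selected + [g]], y, cv=5).mean()
                      for g in keep if g not in selected}
            g_best, acc = max(scores.items(), key=lambda kv: kv[1])
            if acc <= best_acc:
                break                                          # stop at the smallest effective subset
            selected.append(g_best); best_acc = acc
        print(len(selected), "genes selected, CV accuracy:", round(best_acc, 3))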

  3. Conformational exchange of aromatic side chains characterized by L-optimized TROSY-selected ¹³C CPMG relaxation dispersion.

    PubMed

    Weininger, Ulrich; Respondek, Michal; Akke, Mikael

    2012-09-01

    Protein dynamics on the millisecond time scale commonly reflect conformational transitions between distinct functional states. NMR relaxation dispersion experiments have provided important insights into biologically relevant dynamics with site-specific resolution, primarily targeting the protein backbone and methyl-bearing side chains. Aromatic side chains represent attractive probes of protein dynamics because they are over-represented in protein binding interfaces, play critical roles in enzyme catalysis, and form an important part of the core. Here we introduce a method to characterize millisecond conformational exchange of aromatic side chains in selectively ¹³C-labeled proteins by means of longitudinal- and transverse-relaxation optimized CPMG relaxation dispersion. By monitoring ¹³C relaxation in a spin-state selective manner, significant sensitivity enhancement can be achieved in terms of both signal intensity and the relative exchange contribution to transverse relaxation. Further signal enhancement results from optimizing the longitudinal relaxation recovery of the covalently attached ¹H spins. We validated the L-TROSY-CPMG experiment by measuring fast folding-unfolding kinetics of the small protein CspB under native conditions. The determined unfolding rate agrees perfectly with previous results from stopped-flow kinetics. The CPMG-derived chemical shift differences between the folded and unfolded states are in excellent agreement with those obtained by urea-dependent chemical shift analysis. The present method enables characterization of conformational exchange involving aromatic side chains and should serve as a valuable complement to methods developed for other types of protein side chains.

  4. Accelerating IMRT optimization by voxel sampling

    NASA Astrophysics Data System (ADS)

    Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.

    2007-12-01

    This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is calculated only for some randomly selected voxels. Those voxels are then used to compute estimates of the objective and gradient, which are used in a randomized version of a steepest descent algorithm. By selecting different voxels at each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we achieved approximately an order-of-magnitude speedup on our test case compared to steepest descent.
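
    A toy version of the sampling idea on a quadratic dose objective: each iteration estimates the gradient from a random subset of voxels instead of the whole patient volume. The dose-influence matrix, prescription, step size, and fixed sampling rate are invented for illustration; the per-structure rate adaptation and delta-bar-delta variant are not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)
        n_vox, n_beam = 5000, 200
        D = rng.random((n_vox, n_beam)) / n_beam        # toy dose-influence matrix
        p = np.ones(n_vox)                              # prescribed dose per voxel
        x = np.zeros(n_beam)                            # beamlet intensities

        lr, sample = 2.0, 500                           # step size and voxels per iteration
        for _ in range(400):
            rows = rng.choice(n_vox, sample, replace=False)   # sample voxels, not all of them
            resid = D[rows] @ x - p[rows]
            grad = D[rows].T @ resid / sample           # stochastic gradient estimate (constant absorbed in lr)
            x = np.maximum(x - lr * grad, 0.0)          # intensities must stay non-negative
        print("mean squared dose error:", np.mean((D @ x - p) ** 2))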

  5. Selective bond breaking mediated by state specific vibrational excitation in model HOD molecule through optimized femtosecond IR pulse: a simulated annealing based approach.

    PubMed

    Shandilya, Bhavesh K; Sen, Shrabani; Sahoo, Tapas; Talukder, Srijeeta; Chaudhury, Pinaki; Adhikari, Satrajit

    2013-07-21

    The selective control of O-H/O-D bond dissociation in a reduced-dimensionality model of the HOD molecule has been explored through IR+UV femtosecond pulses. The IR pulse is optimized using a simulated annealing stochastic approach to maximize the population of a desired low-quanta vibrational state. Since the vibrational wavefunctions of the ground electronic state are preferentially localized along either the O-H or the O-D mode, the femtosecond UV pulse is used only to transfer the vibrationally excited molecule to the repulsive upper surface to cleave the specific bond, O-H or O-D. While transferring from the ground electronic state to the repulsive one, optimization of the UV pulse is not required except in specific cases. The results obtained are analyzed with respect to the time-integrated flux along with contours of the time evolution of the probability density on the excited potential energy surface. After preferential excitation from the |0, 0> vibrational level of the ground electronic state (|m, n> denotes the state having m and n quanta of excitation in the O-H and O-D modes, respectively) to a specific low-quanta vibrational state (|1, 0>, |0, 1>, |2, 0>, or |0, 2>) using the optimized IR pulse, the dissociation of the O-D or O-H bond through the excited potential energy surface by the UV laser pulse is quite high: 88% (O-H; |1, 0>), 58% (O-D; |0, 1>), 85% (O-H; |2, 0>), or 59% (O-D; |0, 2>). Such selectivity of bond breaking by the UV pulse (optimized if required) together with the optimized IR pulse is encouraging compared to normal pulses.

  6. Investigations of quantum heuristics for optimization

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui

    We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numerical investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.

  7. Selecting the selector: Comparison of update rules for discrete global optimization

    DOE PAGES

    Theiler, James; Zimmer, Beate G.

    2017-05-24

    In this paper, we compare some well-known Bayesian global optimization methods in four distinct regimes, corresponding to high and low levels of measurement noise and to high and low levels of “quenched noise” (which term we use to describe the roughness of the function we are trying to optimize). We isolate the two stages of this optimization in terms of a “regressor,” which fits a model to the data measured so far, and a “selector,” which identifies the next point to be measured. Finally, the focus of this paper is to investigate the choice of selector when the regressor is well matched to the data.
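
    The regressor/selector split maps directly onto standard Bayesian-optimization code. The sketch below fixes the regressor (a scikit-learn Gaussian process) and swaps the selector between an upper-confidence-bound rule and expected improvement on an invented noisy 1-D function; the test function, noise level, and kernel are assumptions for illustration.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(6)
        f = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)   # rough ("quenched noise") landscape
        noise = 0.1                                          # measurement noise level
        grid = np.linspace(0, 2, 400)[:, None]

        def run(selector, n_iter=25):
            X = list(rng.uniform(0, 2, 3)[:, None])
            Y = [f(x[0]) + noise * rng.standard_normal() for x in X]
            for _ in range(n_iter):
                gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(noise ** 2))  # the regressor
                gp.fit(np.array(X), Y)
                mu, sd = gp.predict(grid, return_std=True)
                if selector == "ucb":                        # selector 1: optimism bonus
                    acq = mu + 2.0 * sd
                else:                                        # selector 2: expected improvement
                    z = (mu - max(Y)) / np.maximum(sd, 1e-9)
                    acq = (mu - max(Y)) * norm.cdf(z) + sd * norm.pdf(z)
                x = grid[np.argmax(acq)]
                X.append(x); Y.append(f(x[0]) + noise * rng.standard_normal())
            return max(Y)

        for sel in ("ucb", "ei"):
            print(sel, "best observed value:", round(run(sel), 3))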

  8. Managing daily happiness: The relationship between selection, optimization, and compensation strategies and well-being in adulthood.

    PubMed

    Teshale, Salom M; Lachman, Margie E

    2016-11-01

    Past work on selective optimization and compensation (SOC) has focused on between-persons differences and its relationship with global well-being. However, less work examines within-person SOC variation. This study examined whether variation over 7 days in everyday SOC was associated with happiness in a sample of 145 adults ages 22-94. Age differences in this relationship, the moderating effects of health, and lagged effects were also examined. On days in which middle-age and older adults and individuals with lower health used more SOC, they also reported greater happiness. Lagged effects indicated lower happiness led to greater subsequent SOC usage. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarity of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence, the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate that the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition, we show several examples where localized feature selection produces better results than a global feature selection method.

  10. Selection and optimization of hits from a high-throughput phenotypic screen against Trypanosoma cruzi.

    PubMed

    Keenan, Martine; Alexander, Paul W; Chaplin, Jason H; Abbott, Michael J; Diao, Hugo; Wang, Zhisen; Best, Wayne M; Perez, Catherine J; Cornwall, Scott M J; Keatley, Sarah K; Thompson, R C Andrew; Charman, Susan A; White, Karen L; Ryan, Eileen; Chen, Gong; Ioset, Jean-Robert; von Geldern, Thomas W; Chatelain, Eric

    2013-10-01

    Inhibitors of Trypanosoma cruzi with novel mechanisms of action are urgently required to diversify the current clinical and preclinical pipelines. Increasing the number and diversity of hits available for assessment at the beginning of the discovery process will help to achieve this aim. We report the evaluation of multiple hits generated from a high-throughput screen to identify inhibitors of T. cruzi and from these studies the discovery of two novel series currently in lead optimization. Lead compounds from these series potently and selectively inhibit growth of T. cruzi in vitro and the most advanced compound is orally active in a subchronic mouse model of T. cruzi infection. High-throughput screening of novel compound collections has an important role to play in diversifying the trypanosomatid drug discovery portfolio. A new T. cruzi inhibitor series with good drug-like properties and promising in vivo efficacy has been identified through this process.

  11. Optimizing drilling performance using a selected drilling fluid

    DOEpatents

    Judzis, Arnis [Salt Lake City, UT; Black, Alan D [Coral Springs, FL; Green, Sidney J [Salt Lake City, UT; Robertson, Homer A [West Jordan, UT; Bland, Ronald G [Houston, TX; Curry, David Alexander [The Woodlands, TX; Ledgerwood, III, Leroy W.

    2011-04-19

    To improve drilling performance, a drilling fluid is selected based on one or more criteria and to have at least one target characteristic. Drilling equipment is used to drill a wellbore, and the selected drilling fluid is provided into the wellbore during drilling with the drilling equipment. The at least one target characteristic of the drilling fluid includes an ability of the drilling fluid to penetrate into formation cuttings during drilling to weaken the formation cuttings.

  12. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.

  13. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
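
    At its core, SBAO is a loop of fit-the-surrogate, optimize-the-surrogate, query-the-expensive-model. A bare-bones sketch under stated assumptions (a cheap analytic stand-in for the high-fidelity simulation and a cubic polynomial surrogate):

        import numpy as np

        def expensive(x):                        # stand-in for a high-fidelity simulation
            return (x - 0.7) ** 2 + 0.1 * np.sin(12 * x)

        X = list(np.linspace(0, 1, 4))           # design of experiments: initial samples
        Y = [expensive(x) for x in X]

        for _ in range(8):
            coef = np.polyfit(X, Y, deg=3)       # construct the surrogate from current data
            grid = np.linspace(0, 1, 1001)
            x_new = grid[np.argmin(np.polyval(coef, grid))]   # optimize the cheap surrogate
            X.append(x_new)
            Y.append(expensive(x_new))           # one new expensive evaluation per cycle
        i = int(np.argmin(Y))
        print("best design:", round(X[i], 3), "objective:", round(Y[i], 4))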

  14. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function; nonetheless, industry still uses traditional techniques to obtain those values, largely for lack of knowledge of optimization techniques. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while converging quickly.
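
    A compact PSO of the kind used in the optimization stage, minimizing an invented machining-cost surface over cutting speed and feed rate; the bounds, cost model, and swarm coefficients are illustrative assumptions, and the ELM modelling stage is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(8)
        lo, hi = np.array([50.0, 0.05]), np.array([300.0, 0.5])   # speed (m/min), feed (mm/rev)

        def cost(p):                       # hypothetical cost: roughness term + time term
            v, f = p[..., 0], p[..., 1]
            return 100 * f ** 1.5 / np.log(v) + 2000.0 / (v * f)

        n = 30
        pos = rng.uniform(lo, hi, (n, 2))
        vel = np.zeros((n, 2))
        pbest, pbest_val = pos.copy(), cost(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(200):
            r1, r2 = rng.random((n, 2)), rng.random((n, 2))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)   # keep particles inside machining limits
            val = cost(pos)
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmin(pbest_val)]
        print("optimal [speed, feed]:", np.round(gbest, 3), "cost:", round(float(cost(gbest)), 2))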

  15. Qualification and cryogenic performance of cryomodule components at CEBAF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckman, J.; Macha, K.; Fischer, J.

    1996-12-31

    At CEBAF an electron beam is accelerated by superconducting resonant niobium cavities which are operated submerged in superfluid helium. The accelerator has 42 1/4 cryomodules, each containing eight cavities. The qualification and design of components for the cryomodules underwent stringent testing and evaluation for acceptance. Indium wire seals are used at the cavity/helium vessel interface to make a superfluid-helium leak-tight seal. Each cavity is equipped with a mechanical tuner assembly designed to stretch and compress the cavities, and two rotary feedthroughs are used to operate each mechanical tuner assembly. Ceramic feedthroughs not designed for superfluid service were qualified for tuner and cryogenic instrumentation. To ensure the long-term integrity of the machine, special attention is required for material specifications and machine processes. The following shares the qualification methods, design, and performance of the cryogenic cryomodule components.

  16. What is the Optimal Strategy for Adaptive Servo-Ventilation Therapy?

    PubMed

    Imamura, Teruhiko; Kinugawa, Koichiro

    2018-05-23

    Clinical advantages of adaptive servo-ventilation (ASV) therapy have been reported in selected heart failure patients with or without sleep-disordered breathing, whereas multicenter randomized controlled trials could not demonstrate such advantages. Considering this discrepancy, optimal patient selection and device setting may be key to successful ASV therapy. Hemodynamic and echocardiographic parameters indicating pulmonary congestion, such as elevated pulmonary capillary wedge pressure, have been reported as predictors of a good response to ASV therapy. Recently, parameters indicating right ventricular dysfunction have also been reported as good predictors. Optimal device settings, with appropriate pressure applied over an appropriate period, may also be key. A large-scale prospective trial with optimal patient selection and optimal device settings is warranted.

  17. Enhancement in multiple lignolytic enzymes production for optimized lignin degradation and selectivity in fungal pretreatment of sweet sorghum bagasse.

    PubMed

    Mishra, Vartika; Jana, Asim K; Jana, Mithu Maiti; Gupta, Antriksh

    2017-07-01

    The objective of this work was to study the increase in production of multiple lignolytic enzymes through the use of supplements in combination during pretreatment of sweet sorghum bagasse (SSB) by Coriolus versicolor, such that the enzymes act synergistically to maximize lignin degradation and selectivity. Enzyme activities were enhanced by metallic salt and phenolic compound supplements in SSF. Supplementation with syringic acid increased the activities of LiP, AAO and laccase; gallic acid increased MnP; and CuSO4 increased laccase and PPO, each improving lignin degradation and selectivity individually relative to the control. A combination of supplements optimized by RSM increased the production of laccase, LiP, MnP, PPO and AAO by 17.2-, 45.5-, 3.5-, 2.4- and 3.6-fold, respectively, the synergistic action leading to the highest lignin degradation (2.3-fold) and selectivity (7.1-fold). Enzymatic hydrolysis of the pretreated SSB yielded ~2.43 times more fermentable sugar. This technique could be widely applied for pretreatment and enzyme production. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    NASA Astrophysics Data System (ADS)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.
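
    A much-simplified discrete analogue of the energy-time trade-off (not the stochastic DO level-set machinery above): Dijkstra on a grid with a steady eastward current, where each candidate relative speed yields a minimum-energy path, and higher speeds buy shorter transit at a cubic energy premium. The flow field, grid, and cost model are invented for illustration.

        import heapq
        import numpy as np

        n = 40
        u = 0.3 * np.ones((n, n))                   # toy steady eastward current

        def min_energy(v_rel):
            """Dijkstra for the minimum-energy grid path at fixed relative speed v_rel.
            Per unit-length step: energy = power * time ~ v_rel**3 / ground_speed."""
            dist = {(0, 0): 0.0}
            pq = [(0.0, (0, 0))]
            while pq:
                d, (i, j) = heapq.heappop(pq)
                if (i, j) == (n - 1, n - 1):
                    return d
                if d > dist.get((i, j), np.inf):
                    continue
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < n:
                        ground = v_rel + u[i, j] * dj   # current helps eastward moves (dj=+1)
                        if ground <= 0.05:
                            continue                    # cannot make headway against the flow
                        c = d + v_rel ** 3 / ground
                        if c < dist.get((a, b), np.inf):
                            dist[(a, b)] = c
                            heapq.heappush(pq, (c, (a, b)))

        for v in (0.5, 1.0, 1.5):                       # faster arrival, higher energy
            print(f"relative speed {v}: path energy {min_energy(v):.2f}")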

  19. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions, yet most existing feature selection algorithms do not allow interactive inputs from users during the optimization process. This study addresses this question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
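
    The constraint idea, user-chosen features pinned into the final subset, can be mimicked with any wrapper search. The sketch below seeds a greedy forward selection with two fixed features; the synthetic data, classifier, and greedy rule are stand-ins for the paper's constrained-programming formulation.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                                   random_state=9)
        user_fixed = [3, 17]                 # features the user insists on (the constraint)

        clf = LogisticRegression(max_iter=1000)
        selected = list(user_fixed)          # constrained features are always retained
        score = cross_val_score(clf, X[:, selected], y, cv=5).mean()

        improved = True
        while improved and len(selected) < 10:
            improved = False
            for j in range(X.shape[1]):
                if j in selected:
                    continue
                s = cross_val_score(clf, X[:, selected + [j]], y, cv=5).mean()
                if s > score + 1e-4:         # add a feature only if it helps
                    score, best_j, improved = s, j, True
            if improved:
                selected.append(best_j)
        print("subset:", selected, "CV accuracy:", round(score, 3))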

  20. FSMRank: feature selection algorithm for learning to rank.

    PubMed

    Lai, Han-Jiang; Pan, Yan; Tang, Yong; Yu, Rong

    2013-06-01

    In recent years, there has been growing interest in learning to rank. The introduction of feature selection into different learning problems has proven effective, and these facts motivate us to investigate the problem of feature selection for learning to rank. We propose a joint convex optimization formulation which minimizes ranking errors while simultaneously conducting feature selection. This optimization formulation provides a flexible framework in which we can easily incorporate various importance measures and similarity measures of the features. To solve this optimization problem, we use Nesterov's approach to derive an accelerated gradient algorithm with a fast convergence rate O(1/T^2). We further develop a generalization bound for the proposed optimization problem using Rademacher complexities. Extensive experimental evaluations are conducted on the public LETOR benchmark datasets. The results demonstrate that the proposed method shows: 1) significant ranking performance gains compared to several feature selection baselines for ranking, and 2) very competitive performance compared to several state-of-the-art learning-to-rank algorithms.
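
    A minimal FISTA-style accelerated proximal gradient on an l1-regularized least-squares stand-in for the joint ranking/feature-selection objective; it illustrates the Nesterov momentum step behind the O(1/T^2) rate, but the loss, data, and regularization weight are invented rather than taken from the paper.

        import numpy as np

        rng = np.random.default_rng(10)
        A = rng.standard_normal((200, 50))     # toy query-document feature differences
        w_true = np.zeros(50); w_true[:5] = 1.0
        b = A @ w_true + 0.01 * rng.standard_normal(200)
        lam = 0.1                              # sparsity weight (assumption)

        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)   # l1 prox

        w = np.zeros(50); z = w.copy(); tk = 1.0
        for _ in range(300):
            grad = A.T @ (A @ z - b)
            w_new = soft(z - grad / L, lam / L)            # proximal gradient step
            t_new = (1 + np.sqrt(1 + 4 * tk ** 2)) / 2
            z = w_new + ((tk - 1) / t_new) * (w_new - w)   # Nesterov momentum extrapolation
            w, tk = w_new, t_new
        print("selected (nonzero) features:", np.flatnonzero(np.abs(w) > 1e-3))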

  1. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is a critical design process and should be guided by global telecom performance metrics and mission-specific constraints. To aid orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, and 3) global maximum of local minimum gap time. Optimal solutions are found for each of the metrics. Common and distinguishing features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.

  2. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  3. Importance of double-pole CFS-PML for broad-band seismic wave simulation and optimal parameters selection

    NASA Astrophysics Data System (ADS)

    Feng, Haike; Zhang, Wei; Zhang, Jie; Chen, Xiaofei

    2017-05-01

    The perfectly matched layer (PML) is an efficient absorbing technique for numerical wave simulation. The complex frequency-shifted PML (CFS-PML) introduces two additional parameters in the stretching function to make the absorption frequency-dependent. This helps to suppress converted evanescent waves from near-grazing incident waves, but does not efficiently absorb low-frequency waves below the cut-off frequency. To absorb both the evanescent waves and the low-frequency waves, the double-pole CFS-PML, having two poles in the coordinate stretching function, was developed in computational electromagnetism. Several studies have investigated the performance of the double-pole CFS-PML for seismic wave simulations in the case of a narrowband seismic wavelet and did not find a significant difference compared to the CFS-PML. Another difficulty in applying the double-pole CFS-PML to real problems is that a practical strategy for setting optimal parameter values has not been established. In this work, we study the performance of the double-pole CFS-PML for broad-band seismic wave simulation. We find that when the maximum-to-minimum frequency ratio is larger than 16, the CFS-PML will either fail to suppress the converted evanescent waves for grazing incident waves or produce visible low-frequency reflections, depending on the value of α. In contrast, the double-pole CFS-PML can simultaneously suppress the converted evanescent waves and avoid low-frequency reflections with proper parameter values. We analyse the different roles of the double-pole CFS-PML parameters and propose optimal selections of these parameters. Numerical tests show that the double-pole CFS-PML with the optimal parameters generates satisfactory results for broad-band seismic wave simulations.

  4. General shape optimization capability

    NASA Technical Reports Server (NTRS)

    Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson

    1991-01-01

    A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.

  5. Method of generating features optimal to a dataset and classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  6. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    Artificial immune systems are among the most recently introduced intelligence methods inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems characterized by multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is first proposed. The recombination operator generates promising candidate solutions that enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. Comparisons with the classic CSA, CSA with the recombination operator (RCSA), and CSA with the recombination and modified hypermutation operators (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
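
    A stripped-down CSA showing where the two proposed operators slot in: clones are built by coordinate-wise recombination of two elite parents, then hypermutated with a strength that grows as parent fitness worsens. The benchmark function, population sizes, and decay constants are illustrative assumptions, not the paper's exact operators.

        import numpy as np

        rng = np.random.default_rng(11)
        dim, pop_n, n_elite = 10, 40, 10
        f = lambda x: np.sum(x ** 2, axis=-1)          # sphere benchmark (minimize)

        pop = rng.uniform(-5, 5, (pop_n, dim))
        for _ in range(300):
            elite = pop[np.argsort(f(pop))[:n_elite]]  # clonal selection of best antibodies
            # Combinatorial recombination: each clone mixes coordinates of two parents.
            pa = elite[rng.integers(n_elite, size=pop_n)]
            pb = elite[rng.integers(n_elite, size=pop_n)]
            clones = np.where(rng.random((pop_n, dim)) < 0.5, pa, pb)
            # Modified hypermutation: worse parents get proportionally larger perturbations.
            fa = f(pa)
            strength = 0.5 * np.exp(-2.0 * (1 - fa / (fa.max() + 1e-12)))[:, None]
            clones += strength * rng.standard_normal((pop_n, dim))
            merged = np.vstack([elite, clones])        # elitist replacement
            pop = merged[np.argsort(f(merged))[:pop_n]]
        print("best fitness:", f(pop[0]))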

  7. Egg-laying substrate selection for optimal camouflage by quail.

    PubMed

    Lovell, P George; Ruxton, Graeme D; Langridge, Keri V; Spencer, Karen A

    2013-02-04

    Camouflage is conferred by background matching and disruption, which are both affected by microhabitat. However, microhabitat selection that enhances camouflage has only been demonstrated in species with discrete phenotypic morphs. For most animals, phenotypic variation is continuous; here we explore whether such individuals can select microhabitats to best exploit camouflage. We use substrate selection in a ground-nesting bird (Japanese quail, Coturnix japonica). For such species, threat from visual predators is high and egg appearance shows strong between-female variation. In quail, variation in appearance is particularly obvious in the amount of dark maculation on the light-colored shell. When given a choice, birds consistently selected laying substrates that made visual detection of their egg outline most challenging. However, the strategy for maximizing camouflage varied with the degree of egg maculation. Females laying heavily maculated eggs selected the substrate that more closely matched egg maculation color properties, leading to camouflage through disruptive coloration. For lightly maculated eggs, females chose a substrate that best matched their egg background coloration, suggesting background matching. Our results show that quail "know" their individual egg patterning and seek out a nest position that provides most effective camouflage for their individual phenotype. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. DHSpred: support-vector-machine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest.

    PubMed

    Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang

    2018-01-05

    DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html.
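
    The pipeline's shape (random-forest feature ranking feeding an SVM) is straightforward to reproduce with scikit-learn; the synthetic data and the top-50 cut-off below are assumptions for illustration, not DHSpred's actual 174-feature set.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=500, n_features=300, n_informative=20,
                                   random_state=12)    # stand-in for sequence-derived features

        rf = RandomForestClassifier(n_estimators=200, random_state=12).fit(X, y)
        top = np.argsort(rf.feature_importances_)[-50:]   # RF picks the candidate features

        svm = SVC(kernel="rbf", C=1.0)
        acc_all = cross_val_score(svm, X, y, cv=5).mean()
        acc_sel = cross_val_score(svm, X[:, top], y, cv=5).mean()
        print(f"SVM accuracy, all features: {acc_all:.3f}  selected: {acc_sel:.3f}")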

  9. DHSpred: support-vector-machine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest

    PubMed Central

    Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang

    2018-01-01

    DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html PMID:29416743

  10. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, yet few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and getting stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
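
    A concrete miniature of the goal, sampling times that minimize the variance of parameter estimates, for a two-parameter exponential-decay model: a D-optimal design that maximizes the determinant of the Fisher information over a candidate grid. Exhaustive enumeration replaces the quantum-inspired evolutionary search for clarity, and the model and constants are invented.

        import numpy as np
        from itertools import combinations

        # Model: y(t) = A * exp(-k t) observed with i.i.d. Gaussian noise; theta = (A, k).
        A, k, sigma = 2.0, 0.8, 0.1
        candidates = np.linspace(0.1, 5.0, 25)          # feasible sampling times

        def fisher_information(times):
            # Sensitivities dy/dA and dy/dk stacked row-wise; J = S^T S / sigma^2.
            t = np.asarray(times)
            S = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
            return S.T @ S / sigma ** 2

        # D-optimality: maximize det(J), i.e. shrink the parameter confidence ellipsoid.
        best = max(combinations(candidates, 4),
                   key=lambda ts: np.linalg.det(fisher_information(ts)))
        print("optimal sampling times:", np.round(best, 2))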

  11. Profiling Charge Complementarity and Selectivity for Binding at the Protein Surface

    PubMed Central

    Sulea, Traian; Purisima, Enrico O.

    2003-01-01

    A novel analysis and representation of the protein surface in terms of electrostatic binding complementarity and selectivity is presented. The charge optimization methodology is applied in a probe-based approach that simulates the binding process to the target protein. The molecular surface is color coded according to calculated optimal charge or according to charge selectivity, i.e., the binding cost of deviating from the optimal charge. The optimal charge profile depends on both the protein shape and charge distribution whereas the charge selectivity profile depends only on protein shape. High selectivity is concentrated in well-shaped concave pockets, whereas solvent-exposed convex regions are not charge selective. This suggests the synergy of charge and shape selectivity hot spots toward molecular selection and recognition, as well as the asymmetry of charge selectivity at the binding interface of biomolecular systems. The charge complementarity and selectivity profiles map relevant electrostatic properties in a readily interpretable way and encode information that is quite different from that visualized in the standard electrostatic potential map of unbound proteins. PMID:12719221

  12. Profiling charge complementarity and selectivity for binding at the protein surface.

    PubMed

    Sulea, Traian; Purisima, Enrico O

    2003-05-01

    A novel analysis and representation of the protein surface in terms of electrostatic binding complementarity and selectivity is presented. The charge optimization methodology is applied in a probe-based approach that simulates the binding process to the target protein. The molecular surface is color coded according to calculated optimal charge or according to charge selectivity, i.e., the binding cost of deviating from the optimal charge. The optimal charge profile depends on both the protein shape and charge distribution whereas the charge selectivity profile depends only on protein shape. High selectivity is concentrated in well-shaped concave pockets, whereas solvent-exposed convex regions are not charge selective. This suggests the synergy of charge and shape selectivity hot spots toward molecular selection and recognition, as well as the asymmetry of charge selectivity at the binding interface of biomolecular systems. The charge complementarity and selectivity profiles map relevant electrostatic properties in a readily interpretable way and encode information that is quite different from that visualized in the standard electrostatic potential map of unbound proteins.

  13. Particle Swarm Optimization Based Feature Enhancement and Feature Selection for Improved Emotion Recognition in Speech and Glottal Signals

    PubMed Central

    Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali

    2015-01-01

    In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features, and relative wavelet packet energy and entropy features were extracted from emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper-based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to gauge the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature. PMID:25799141

  14. Optimal control of raw timber production processes

    Treesearch

    Ivan Kolenka

    1978-01-01

    This paper demonstrates the possibility of optimal planning and control of timber harvesting activities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...

  15. 'Outbreak Gold Standard' selection to provide optimized threshold for infectious diseases early-alert based on China Infectious Disease Automated-alert and Response System.

    PubMed

    Wang, Rui-Ping; Jiang, Yong-Gen; Zhao, Gen-Ming; Guo, Xiao-Qin; Michael, Engelgau

    2017-12-01

    The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in, and has been integrated into, the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were initially determined using the "Mean+2SD" method, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method with that of 5 other algorithms, in order to select the optimal "Outbreak Gold Standard" (OGS) and corresponding thresholds for outbreak detection. Data for infectious diseases were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C (hand, foot and mouth disease (HFMD) and scarlet fever). Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A; all 6 algorithms were appropriate for TYPE B; and the C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as the OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
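
    One of the candidate algorithms, CUSUM, combined with a percentile-based threshold in the spirit of the Px optimization, can be sketched as follows; the simulated weekly counts, baseline window, allowance constant, and P85 cut-off are all illustrative assumptions rather than CIDARS operational settings.

        import numpy as np

        rng = np.random.default_rng(13)
        weeks = 5 * 52
        baseline = 20 + 5 * np.sin(2 * np.pi * np.arange(weeks) / 52)   # seasonal counts
        counts = rng.poisson(baseline)
        counts[200:205] += 40                            # injected outbreak

        # CUSUM: accumulate standardized exceedances above the recent baseline.
        window, allowance = 52, 0.5
        cusum = np.zeros(weeks)
        for t in range(window, weeks):
            hist = counts[t - window:t]
            z = (counts[t] - hist.mean()) / (hist.std() + 1e-9)
            cusum[t] = max(0.0, cusum[t - 1] + z - allowance)

        # Percentile-based threshold (the "Px" of the study): flag weeks above P85.
        h = np.percentile(cusum[window:], 85)
        alarms = np.flatnonzero(cusum > h)
        print("alarm weeks:", alarms[:10], "threshold:", round(h, 2))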

  16. Research and proposal on selective catalytic reduction reactor optimization for industrial boiler.

    PubMed

    Yang, Yiming; Li, Jian; He, Hong

    2017-08-24

    The advanced computational fluid dynamics (CFD) software STAR-CCM+ was used to simulate a denitrification (De-NOx) project for a boiler in this paper, and the simulation result was verified against a physical model. Two selective catalytic reduction (SCR) reactors were developed: reactor 1 was optimized, and reactor 2 was developed based on reactor 1. Various indicators, including the gas flow field, ammonia concentration distribution, temperature distribution, gas incident angle, and system pressure drop, were analyzed. The analysis indicated that reactor 2 performed outstandingly and could greatly simplify development. The ammonia injection grid (AIG), the core component of the reactor, was studied; three AIGs were developed and their performances were compared and analyzed. The results indicated that AIG 3 performed best. Technical indicators were proposed for the SCR reactor based on this study. The flow field distribution, gas incident angle, and temperature distribution depend to a great extent on the SCR reactor shape, and reactor 2 proposed in this paper performed outstandingly; the ammonia concentration distribution depends on the ammonia injection grid (AIG) shape, and AIG 3 could meet the technical indicator for ammonia concentration without mounting an ammonia mixer. The developments above, on both the reactor and the AIG, are of great application value and social benefit.

  17. Optimization of Swine Breeding Programs Using Genomic Selection with ZPLAN+

    PubMed Central

    Lopez, B. M.; Kang, H. S.; Kim, T. H.; Viterbo, V. S.; Kim, H. S.; Na, C. S.; Seo, K. S.

    2016-01-01

    The objective of this study was to evaluate the present conventional selection program of a swine nucleus farm and compare it with a new selection strategy employing genomic enhanced breeding value (GEBV) as the selection criteria. The ZPLAN+ software was employed to calculate and compare the genetic gain, total cost, return and profit of each selection strategy. The first strategy reflected the current conventional breeding program, which was a progeny test system (CS). The second strategy was a selection scheme based strictly on genomic information (GS1). The third scenario was the same as GS1, but the selection by GEBV was further supplemented by the performance test (GS2). The last scenario was a mixture of genomic information and progeny tests (GS3). The results showed that the accuracy of the selection index of young boars of GS1 was 26% higher than that of CS. On the other hand, both GS2 and GS3 gave 31% higher accuracy than CS for young boars. The annual monetary genetic gain of GS1, GS2 and GS3 was 10%, 12%, and 11% higher, respectively, than that of CS. As expected, the discounted costs of genomic selection strategies were higher than those of CS. The costs of GS1, GS2 and GS3 were 35%, 73%, and 89% higher than those of CS, respectively, assuming a genotyping cost of $120. As a result, the discounted profit per animal of GS1 and GS2 was 8% and 2% higher, respectively, than that of CS while GS3 was 6% lower. Comparison among genomic breeding scenarios revealed that GS1 was more profitable than GS2 and GS3. The genomic selection schemes, especially GS1 and GS2, were clearly superior to the conventional scheme in terms of monetary genetic gain and profit. PMID:26954222

  18. Optimal Contractor Selection in Construction Industry: The Fuzzy Way

    NASA Astrophysics Data System (ADS)

    Krishna Rao, M. V.; Kumar, V. S. S.; Rathish Kumar, P.

    2018-02-01

    A purely price-based approach to contractor selection has been identified as the root cause of many serious project delivery problems. Therefore, the capability of the contractor to execute the project should be evaluated using a multiple set of selection criteria, including reputation, past performance, performance potential, financial soundness, and other project-specific criteria. An industry-wide questionnaire survey was conducted with the objective of identifying the important criteria for adoption in the selection process. In this work, a fuzzy set based model was developed for contractor prequalification/evaluation, using effective criteria obtained from the perceptions of construction professionals and taking the subjective judgments of decision makers into consideration. A case study consisting of four alternatives (contractors in the present case), solicited from a public works department of Pondicherry in India, is used to illustrate the effectiveness of the proposed approach. The final selection of the contractor is made based on the integrated score, or Overall Evaluation Score, of the decision alternative in both the prequalification and bid evaluation stages.
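
    A minimal sketch of the fuzzy scoring idea follows: triangular fuzzy ratings are defuzzified and combined with criteria weights into an integrated score. All criteria names, weights, and ratings here are invented for illustration, not taken from the paper.

    ```python
    # Fuzzy-scoring sketch for contractor prequalification (illustrative).
    def defuzzify(tri):
        """Centroid of a triangular fuzzy number (a, b, c)."""
        a, b, c = tri
        return (a + b + c) / 3.0

    criteria_weights = {"reputation": 0.20, "past_performance": 0.30,
                        "financial_soundness": 0.25, "capacity": 0.25}

    # Each contractor rated per criterion with a triangular fuzzy number on 0-10.
    ratings = {
        "contractor_A": {"reputation": (6, 7, 8), "past_performance": (7, 8, 9),
                         "financial_soundness": (5, 6, 7), "capacity": (6, 7, 8)},
        "contractor_B": {"reputation": (7, 8, 9), "past_performance": (5, 6, 7),
                         "financial_soundness": (6, 7, 8), "capacity": (5, 6, 7)},
    }

    def overall_score(contractor):
        """Integrated (Overall Evaluation) score: weighted defuzzified ratings."""
        return sum(w * defuzzify(ratings[contractor][c])
                   for c, w in criteria_weights.items())

    best = max(ratings, key=overall_score)
    ```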

  19. iVAX: An integrated toolkit for the selection and optimization of antigens and the design of epitope-driven vaccines.

    PubMed

    Moise, Leonard; Gutierrez, Andres; Kibria, Farzana; Martin, Rebecca; Tassone, Ryan; Liu, Rui; Terry, Frances; Martin, Bill; De Groot, Anne S

    2015-01-01

    Computational vaccine design, also known as computational vaccinology, encompasses epitope mapping, antigen selection and immunogen design using computational tools. The iVAX toolkit is an integrated set of tools that has been in development since 1998 by De Groot and Martin. It comprises a suite of immunoinformatics algorithms for triaging candidate antigens, selecting immunogenic and conserved T cell epitopes, eliminating regulatory T cell epitopes, and optimizing antigens for immunogenicity and protection against disease. iVAX has been applied to vaccine development programs for emerging infectious diseases, cancer antigens and biodefense targets. Several iVAX vaccine design projects have had success in pre-clinical studies in animal models and are progressing toward clinical studies. The toolkit now incorporates a range of immunoinformatics tools for infectious disease and cancer immunotherapy vaccine design. This article will provide a guide to the iVAX approach to computational vaccinology.

  20. Optimization of dipeptidic inhibitors of cathepsin L for improved Toxoplasma gondii selectivity and CNS permeability.

    PubMed

    Zwicker, Jeffery D; Diaz, Nicolas A; Guerra, Alfredo J; Kirchhoff, Paul D; Wen, Bo; Sun, Duxin; Carruthers, Vern B; Larsen, Scott D

    2018-06-01

    The neurotropic protozoan Toxoplasma gondii is the second leading cause of death due to foodborne illness in the US, and has been designated as one of five neglected parasitic infections by the Centers for Disease Control and Prevention. Currently, no treatment options exist for the chronic, dormant-phase Toxoplasma infection in the central nervous system (CNS). T. gondii cathepsin L (TgCPL) has recently been implicated as a novel viable target for the treatment of chronic toxoplasmosis. In this study, we report the first body of SAR work aimed at developing potent inhibitors of TgCPL with selectivity versus human cathepsin L. Starting from a known inhibitor of human cathepsin L, and guided by structure-based design, we were able to modulate the selectivity for Toxoplasma versus human CPL by nearly 50-fold while modifying physicochemical properties to be more favorable for metabolic stability and CNS penetrance. The overall potency of our inhibitors towards TgCPL was improved from 2 μM to as low as 110 nM, and we successfully demonstrated that an optimized analog 18b is capable of crossing the BBB (0.5 brain/plasma). This work is an important first step toward development of a CNS-penetrant probe to validate TgCPL as a feasible target for the treatment of chronic toxoplasmosis. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Optimization of Immobilized Gallium (III) Ion Affinity Chromatography for Selective Binding and Recovery of Phosphopeptides from Protein Digests

    PubMed Central

    Aryal, Uma K.; Olson, Douglas J.H.; Ross, Andrew R.S.

    2008-01-01

    Although widely used in proteomics research for the selective enrichment of phosphopeptides from protein digests, immobilized metal-ion affinity chromatography (IMAC) often suffers from low specificity and differential recovery of peptides carrying different numbers of phosphate groups. By systematically evaluating and optimizing different loading, washing, and elution conditions, we have developed an efficient and highly selective procedure for the enrichment of phosphopeptides using a commercially available gallium(III)-IMAC column (PhosphoProfile, Sigma). Phosphopeptide enrichment using the reagents supplied with the column is incomplete and biased toward the recovery and/or detection of smaller, singly phosphorylated peptides. In contrast, elution with base (0.4 M ammonium hydroxide) gives efficient and balanced recovery of both singly and multiply phosphorylated peptides, while loading peptides in a strongly acidic solution (1% trifluoroacetic acid) further increases selectivity toward phosphopeptides, with minimal carryover of nonphosphorylated peptides. 2,5-Dihydroxybenzoic acid, a matrix commonly used when analyzing phosphopeptides by matrix-assisted laser desorption/ionization mass spectrometry, was also evaluated as an additive in loading and eluting solvents. Elution with 50% acetonitrile containing 20 mg/mL dihydroxybenzoic acid and 1% phosphoric acid gave results similar to those obtained using ammonium hydroxide as the eluent, although the latter showed the highest specificity for phosphorylated peptides. PMID:19183793

  2. Systematic Sensor Selection Strategy (S4) User Guide

    NASA Technical Reports Server (NTRS)

    Sowers, T. Shane

    2012-01-01

    This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight, and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
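
    A greedy sketch of the sensor-suite down-selection idea is shown below; the merit function, fault map, sensor names, and costs are toy assumptions, not the S4 implementation, which performs a more general combinatorial search.

    ```python
    # Greedy sensor-suite selection against a user-defined merit function
    # (illustrative sketch, not the S4 algorithm).
    def greedy_sensor_suite(candidates, merit, budget):
        """Add the sensor with the best merit gain per unit cost until the
        budget is exhausted. `merit(suite)` scores diagnostic performance;
        each candidate is a (name, cost) pair."""
        suite, spent = [], 0.0
        remaining = list(candidates)
        while remaining:
            gains = [(merit(suite + [s]) - merit(suite)) / c
                     for s, c in remaining]
            best = max(range(len(remaining)), key=gains.__getitem__)
            name, cost = remaining[best]
            if spent + cost > budget or gains[best] <= 0:
                break
            suite.append(name)
            spent += cost
            remaining.pop(best)
        return suite

    # Toy merit: diagnostic coverage = number of distinct faults detected.
    fault_map = {"T48": {"hpt", "lpt"}, "Ps3": {"hpc"}, "Wf": {"hpc", "burner"}}

    def merit(suite):
        covered = set()
        for s in suite:
            covered |= fault_map[s]
        return len(covered)

    print(greedy_sensor_suite([("T48", 1.0), ("Ps3", 0.5), ("Wf", 0.8)],
                              merit, budget=2.0))
    ```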

  3. Optimization in Ecology

    ERIC Educational Resources Information Center

    Cody, Martin L.

    1974-01-01

    Discusses the optimality of natural selection, ways of testing for optimum solutions to problems of time- or energy-allocation in nature, optimum patterns in spatial distribution and diet breadth, and how best to travel over a feeding area so that food intake is maximized. (JR)

  4. Development of a novel class of B-RafV600E-selective inhibitors through virtual screening and hierarchical hit optimization

    PubMed Central

    Kong, Xiangqian; Qin, Jie; Li, Zeng; Vultur, Adina; Tong, Linjiang; Feng, Enguang; Rajan, Geena; Liu, Shien; Lu, Junyan; Liang, Zhongjie; Zheng, Mingyue; Zhu, Weiliang; Jiang, Hualiang; Herlyn, Meenhard; Liu, Hong; Marmorstein, Ronen; Luo, Cheng

    2012-01-01

    Oncogenic mutations in critical nodes of cellular signaling pathways have been associated with tumorigenesis and progression. The B-Raf protein kinase, a key hub in the canonical MAPK signaling cascade, is mutated in a broad range of human cancers and especially in malignant melanoma. The most prevalent B-RafV600E mutant exhibits elevated kinase activity and results in constitutive activation of the MAPK pathway, thus making it a promising drug target for cancer therapy. Herein, we describe the development of novel B-RafV600E selective inhibitors via multi-step virtual screening and hierarchical hit optimization. Nine hit compounds with low micromolar IC50 values were identified as B-RafV600E inhibitors through virtual screening. Subsequent scaffold-based analogue searching and medicinal chemistry efforts significantly improved both the inhibitor potency and oncogene selectivity. In particular, compounds 22f and 22q possess nanomolar IC50 values with selectivity for B-RafV600E in vitro and exclusive cytotoxicity against B-RafV600E harboring cancer cells. PMID:22875039

  5. Development of a novel class of B-Raf(V600E)-selective inhibitors through virtual screening and hierarchical hit optimization.

    PubMed

    Kong, Xiangqian; Qin, Jie; Li, Zeng; Vultur, Adina; Tong, Linjiang; Feng, Enguang; Rajan, Geena; Liu, Shien; Lu, Junyan; Liang, Zhongjie; Zheng, Mingyue; Zhu, Weiliang; Jiang, Hualiang; Herlyn, Meenhard; Liu, Hong; Marmorstein, Ronen; Luo, Cheng

    2012-09-28

    Oncogenic mutations in critical nodes of cellular signaling pathways have been associated with tumorigenesis and progression. The B-Raf protein kinase, a key hub in the canonical MAPK signaling cascade, is mutated in a broad range of human cancers and especially in malignant melanoma. The most prevalent B-Raf(V600E) mutant exhibits elevated kinase activity and results in constitutive activation of the MAPK pathway, thus making it a promising drug target for cancer therapy. Herein, we describe the development of novel B-Raf(V600E) selective inhibitors via multi-step virtual screening and hierarchical hit optimization. Nine hit compounds with low micromolar IC(50) values were identified as B-Raf(V600E) inhibitors through virtual screening. Subsequent scaffold-based analogue searching and medicinal chemistry efforts significantly improved both the inhibitor potency and oncogene selectivity. In particular, compounds 22f and 22q possess nanomolar IC(50) values with selectivity for B-Raf(V600E) in vitro and exclusive cytotoxicity against B-Raf(V600E) harboring cancer cells.

  6. Relay selection in energy harvesting cooperative networks with rateless codes

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei

    2018-04-01

    This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allows the destination to recover the original information once the collected code bits marginally surpass the entropy of the original information. In order to improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected by formulating an optimization problem that maximizes the achievable information rate of the system. Simulation results demonstrate that our proposed relay selection scheme outperforms other strategies.
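
    A minimal sketch of rate-maximizing relay selection under a two-hop decode-and-forward assumption follows; the channel gains and harvested-power model are illustrative, not the paper's system model.

    ```python
    # Rate-maximizing relay selection sketch: each relay's end-to-end rate is
    # limited by its weaker hop (decode-and-forward assumption).
    import numpy as np

    def select_relay(g_sr, g_rd, harvested_power, noise=1.0):
        """g_sr[i], g_rd[i]: source->relay_i and relay_i->destination gains."""
        rates = [min(np.log2(1 + g1 / noise),          # source -> relay hop
                     np.log2(1 + p * g2 / noise))      # relay -> destination hop
                 for g1, g2, p in zip(g_sr, g_rd, harvested_power)]
        best = int(np.argmax(rates))
        return best, rates[best]

    rng = np.random.default_rng(7)
    idx, rate = select_relay(rng.exponential(1, 5), rng.exponential(1, 5),
                             harvested_power=0.3 * rng.exponential(1, 5))
    ```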

  7. Optimization of biological sulfide removal in a CSTR bioreactor.

    PubMed

    Roosta, Aliakbar; Jahanmiri, Abdolhossein; Mowla, Dariush; Niazi, Ali; Sotoodeh, Hamidreza

    2012-08-01

    In this study, biological sulfide removal from natural gas in a continuous bioreactor is investigated to estimate the optimal operational parameters. According to the reactions involved, sulfide can be converted to elemental sulfur, sulfate, thiosulfate, and polysulfide, of which elemental sulfur is the desired product. A mathematical model is developed and used to investigate the effect of various parameters on elemental sulfur selectivity. The simulation results show that elemental sulfur selectivity is a function of dissolved oxygen, sulfide load, pH, and bacterial concentration. Optimal parameter values are calculated for maximum elemental sulfur selectivity by using a genetic algorithm as an adaptive heuristic search. Under the optimal conditions, 87.76% of the sulfide loaded to the bioreactor is converted to elemental sulfur.
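
    The genetic-algorithm search can be sketched as below; the parameter bounds and the surrogate selectivity function are invented stand-ins for the paper's bioreactor model.

    ```python
    # Genetic-algorithm sketch maximizing a black-box selectivity model over
    # four operating parameters (all numbers illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    BOUNDS = np.array([[0.1, 5.0],     # dissolved oxygen
                       [0.5, 10.0],    # sulfide load
                       [6.0, 10.0],    # pH
                       [0.1, 2.0]])    # bacterial concentration

    def selectivity(x):                # toy unimodal surrogate, not the paper's model
        target = np.array([1.2, 4.0, 8.5, 0.9])
        return np.exp(-np.sum(((x - target) / (BOUNDS[:, 1] - BOUNDS[:, 0])) ** 2))

    def ga(pop_size=40, gens=60, mut=0.1):
        pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], (pop_size, 4))
        for _ in range(gens):
            fit = np.array([selectivity(x) for x in pop])
            # fitness-proportional parent selection
            parents = pop[rng.choice(pop_size, pop_size, p=fit / fit.sum())]
            cut = rng.integers(1, 4, pop_size)           # one-point crossover
            children = np.where(np.arange(4) < cut[:, None],
                                parents, parents[::-1])
            noise = rng.normal(0, mut, children.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0])
            pop = np.clip(children + noise, BOUNDS[:, 0], BOUNDS[:, 1])
        return pop[np.argmax([selectivity(x) for x in pop])]
    ```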

  8. Optimized positioning of autonomous surgical lamps

    NASA Astrophysics Data System (ADS)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions of surgical lamps throughout the whole surgical procedure, where we assume that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distractions of the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real-time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such kinds of problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data are available for scientific use. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
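
    A minimal sketch of the Pareto-front step follows: enumerate the non-dominated solutions for two conflicting objectives and pick one by preference weights. The objective values and weights are illustrative assumptions, not the paper's data.

    ```python
    # Pareto-front extraction and preference-weighted selection (illustrative).
    import numpy as np

    def pareto_front(points):
        """Return indices of non-dominated points (both objectives minimized)."""
        idx = []
        for i, p in enumerate(points):
            dominated = any(np.all(q <= p) and np.any(q < p) for q in points)
            if not dominated:
                idx.append(i)
        return idx

    rng = np.random.default_rng(3)
    objs = rng.random((50, 2))          # columns: [lighting deficit, lamp movement]
    front = pareto_front(objs)
    weights = np.array([0.7, 0.3])      # surgeon preference preset (assumed)
    choice = min(front, key=lambda i: objs[i] @ weights)
    ```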

  9. Simultaneous multislice refocusing via time optimal control.

    PubMed

    Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf

    2018-02-09

    This work addresses the joint design of minimum-duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near-global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices (PINS) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum-duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging sequences or the echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.

  10. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system. PMID:27835638

  11. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    PubMed

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system.

  12. Removal of toxic metals from vanadium-contaminated soils using a washing method: Reagent selection and parameter optimization.

    PubMed

    Jiang, Jianguo; Yang, Meng; Gao, Yuchen; Wang, Jiaming; Li, Dean; Li, Tianran

    2017-08-01

    Vanadium (V) contamination in soils is a worldwide concern of increasing importance for human health and environmental conservation. The fractionation of a metal influences its mobility and biological toxicity. We analyzed the fractionations of V and several other metals using the BCR three-step sequential extraction procedure. Among methods for removing metal contamination, soil washing is an effective permanent treatment. We conducted experiments to select the proper reagents and to optimize extraction conditions. Citric acid, tartaric acid, oxalic acid, and Na2EDTA all exhibited high removal rates of the extractable state of V. With a liquid-to-solid ratio of 10, washing with 0.4 mol/L citric acid, 0.4 mol/L tartaric acid, 0.4 mol/L oxalic acid, and 0.12 mol/L Na2EDTA led to removal rates of 91%, 88%, 88%, and 61%, respectively. The effect of multiple washings on the removal rate was also explored. According to the changes observed in metal fractionations, the differences in removal rates among reagents are likely associated with their pKa values, pH in solution, and chemical structure. We concluded that treating with appropriate washing reagents under optimal conditions can greatly enhance the remediation of vanadium-contaminated soils. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Design and coverage of high throughput genotyping arrays optimized for individuals of East Asian, African American, and Latino race/ethnicity using imputation and a novel hybrid SNP selection algorithm.

    PubMed

    Hoffmann, Thomas J; Zhan, Yiping; Kvale, Mark N; Hesselson, Stephanie E; Gollub, Jeremy; Iribarren, Carlos; Lu, Yontao; Mei, Gangwu; Purdy, Matthew M; Quesenberry, Charles; Rowell, Sarah; Shapero, Michael H; Smethurst, David; Somkin, Carol P; Van den Eeden, Stephen K; Walter, Larry; Webster, Teresa; Whitmer, Rachel A; Finn, Andrea; Schaefer, Catherine; Kwok, Pui-Yan; Risch, Neil

    2011-12-01

    Four custom Axiom genotyping arrays were designed for a genome-wide association (GWA) study of 100,000 participants from the Kaiser Permanente Research Program on Genes, Environment and Health. The array optimized for individuals of European race/ethnicity was previously described. Here we detail the development of three additional microarrays optimized for individuals of East Asian, African American, and Latino race/ethnicity. For these arrays, we decreased redundancy of high-performing SNPs to increase SNP capacity. The East Asian array was designed using greedy pairwise SNP selection. However, removing SNPs from the target set based on imputation coverage is more efficient than pairwise tagging. Therefore, we developed a novel hybrid SNP selection method for the African American and Latino arrays utilizing rounds of greedy pairwise SNP selection, followed by removal from the target set of SNPs covered by imputation. The arrays provide excellent genome-wide coverage and are valuable additions for large-scale GWA studies. Copyright © 2011 Elsevier Inc. All rights reserved.
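
    A rough sketch of the hybrid idea follows, assuming a pairwise LD (r²) matrix and an imputation-coverage oracle; the thresholds, round counts, and the restriction of candidate tags to uncovered targets are simplifications for illustration, not the published algorithm.

    ```python
    # Hybrid SNP selection sketch: rounds of greedy pairwise tagging, followed
    # by removal of imputation-covered targets (illustrative simplification).
    import numpy as np

    def hybrid_select(r2, imputable, r2_tag=0.8, rounds=3, per_round=10):
        """r2: pairwise LD matrix; imputable(selected) -> set of covered targets."""
        n = r2.shape[0]
        uncovered = set(range(n))
        selected = []
        for _ in range(rounds):
            for _ in range(per_round):                 # greedy pairwise tagging
                if not uncovered:
                    return selected
                gain = lambda s: sum(r2[s, t] >= r2_tag for t in uncovered)
                best = max(uncovered, key=gain)
                selected.append(best)
                uncovered -= {t for t in uncovered if r2[best, t] >= r2_tag}
            uncovered -= imputable(selected)           # drop imputation-covered
        return selected

    rng = np.random.default_rng(0)
    r2 = rng.random((100, 100))
    r2 = (r2 + r2.T) / 2
    np.fill_diagonal(r2, 1.0)
    tags = hybrid_select(r2, imputable=lambda sel: set())  # trivial oracle
    ```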

  14. Contribution of Selective Conditions to Microbial Competition in Four Listeria Selective Enrichment Formulations

    PubMed Central

    Keys, Ashley L.; Hitchins, Anthony D.; Smiley, R. Derike

    2017-01-01

    Microbial competition during selective enrichment negatively affects Listeria monocytogenes populations and may hinder the subsequent detection or recovery of this organism. Competition assays among 10 selected strains of Listeria and Citrobacter braakii were performed in buffered Listeria enrichment broth, 3-(N-morpholino)propanesulfonic acid–buffered Listeria enrichment broth, University of Vermont medium–modified Listeria enrichment broth, and Fraser broth. The individual contributions of each selective agent in these media were also assessed, as well as the contribution of incubation temperature. Acriflavine hydrochloride and sodium nalidixate were ineffective at preventing the overgrowth of C. braakii; this resulted in substantially lower populations of Listeria than when the competitor was absent. At the higher levels, both of these selective agents were detrimental to Listeria populations. The highest enrichment populations of Listeria were observed when either NaCl or LiCl was present. In the absence of selective agents, the final populations of Listeria following competitive growth with C. braakii were not substantially affected by temperature; however, in the presence of selective agents, the Listeria populations were statistically higher at the higher incubation temperature. There are a limited number of selective agents available for use in Listeria-specific enrichment media, resulting in formulations that are only somewhat selective for this species. The optimization of current formulations may help researchers to improve Listeria recovery, particularly from products with a high microbial load. The understanding of the behavior and interactions between target and nontarget microorganisms in the presence of these available selective agents is a necessary step in the optimization of Listeria selective enrichment formulations. PMID:28221922

  15. Contribution of Selective Conditions to Microbial Competition in Four Listeria Selective Enrichment Formulations.

    PubMed

    Keys, Ashley L; Hitchins, Anthony D; Smiley, R Derike

    2016-11-01

    Microbial competition during selective enrichment negatively affects Listeria monocytogenes populations and may hinder the subsequent detection or recovery of this organism. Competition assays among 10 selected strains of Listeria and Citrobacter braakii were performed in buffered Listeria enrichment broth, 3-(N-morpholino)propanesulfonic acid-buffered Listeria enrichment broth, University of Vermont medium-modified Listeria enrichment broth, and Fraser broth. The individual contributions of each selective agent in these media were also assessed, as well as the contribution of incubation temperature. Acriflavine hydrochloride and sodium nalidixate were ineffective at preventing the overgrowth of C. braakii; this resulted in substantially lower populations of Listeria than when the competitor was absent. At the higher levels, both of these selective agents were detrimental to Listeria populations. The highest enrichment populations of Listeria were observed when either NaCl or LiCl was present. In the absence of selective agents, the final populations of Listeria following competitive growth with C. braakii were not substantially affected by temperature; however, in the presence of selective agents, the Listeria populations were statistically higher at the higher incubation temperature. There are a limited number of selective agents available for use in Listeria-specific enrichment media, resulting in formulations that are only somewhat selective for this species. The optimization of current formulations may help researchers to improve Listeria recovery, particularly from products with a high microbial load. The understanding of the behavior and interactions between target and nontarget microorganisms in the presence of these available selective agents is a necessary step in the optimization of Listeria selective enrichment formulations.

  16. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper, a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. The model is transformed into a deterministic multiple-objective nonlinear programming model by taking the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formulation with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formulation for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on the reference direction and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized, besides expectation and risk. The interactive approach is illustrated with a practical example.
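
    Schematically, the problem class and its weighted-sum scalarization can be written as below; the notation is illustrative rather than the paper's exact formulation.

    ```latex
    % Stochastic multiple-objective program (schematic; notation assumed):
    \begin{align*}
    \max_{x}\quad & \mathbb{E}\big[\, x^{\top} Q x + q^{\top} x \,\big] \\
    \max_{x}\quad & \mathbb{E}\big[\, c_i^{\top} x \,\big], \qquad i = 1, \dots, m \\
    \text{s.t.}\quad & A x \le b, \quad x \ge 0.
    \end{align*}
    % After taking expectations and reducing the linear objectives to a single
    % parametric objective d(\lambda)^{\top} x via the reference direction
    % approach, the weighted-sum problem becomes:
    \[
    \max_{x}\ \ w_1 \big( x^{\top} \bar{Q}\, x + \bar{q}^{\top} x \big)
             + w_2\, d(\lambda)^{\top} x
    \quad \text{s.t.}\ \ A x \le b,\ x \ge 0.
    \]
    ```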

  17. Postoptimality Analysis in the Selection of Technology Portfolios

    NASA Technical Reports Server (NTRS)

    Adumitroaie, Virgil; Shelton, Kacie; Elfes, Alberto; Weisbin, Charles R.

    2006-01-01

    This slide presentation reviews a process for post-optimality analysis of technology portfolio selection. The rationale for the analysis stems from the need for consistent, transparent, and auditable decision-making processes and tools. The methodology is used to ensure that project investments are selected through an optimization of net mission value. The main intent of the analysis is to gauge the degree of confidence in the optimal solution and to provide the decision maker with an array of viable selection alternatives which take into account input uncertainties and possibly satisfy non-technical constraints. A few examples of the analysis are reviewed. The goal of the post-optimality study is to enhance and improve the decision-making process by providing additional qualifications and alternatives to the optimal solution.

  18. Combined Optimal Control System for excavator electric drive

    NASA Astrophysics Data System (ADS)

    Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.

    2018-03-01

    The article presents a synthesis of combined optimal control algorithms for the AC drive of an excavator's rotation mechanism. The synthesis consists of regulating the external coordinates based on the theory of optimal systems and correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of optimal combined control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality criterion coefficients to find the intersection of the ranges of permissible values of the coordinates of the control object. The system can be tuned by choosing the optimality criterion coefficients, which allows one to select the required characteristics of the drive: the dynamic moment (M) and the transient process time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and, at the same time, reduce the transient time (tpp).

  19. Improvements of the Vis-NIRS Model in the Prediction of Soil Organic Matter Content Using Spectral Pretreatments, Sample Selection, and Wavelength Optimization

    NASA Astrophysics Data System (ADS)

    Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.

    2017-07-01

    A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied to minimize the irrelevant and useless information in the spectra and to increase the correlation of the spectra with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. The successive projections algorithm (SPA) and a genetic algorithm (GA) were then applied for wavelength optimization. Finally, the principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using the leave-one-out cross-validation technique. The results show that the combination of the Savitzky-Golay (SG) filter for smoothing and multiplicative scatter correction (MSC) can eliminate the effects of noise and baseline drift; the SPXY method is preferable to KS for sample selection; and both SPA and GA can significantly reduce the number of wavelength variables and favorably increase the accuracy, especially GA, which greatly improved the prediction accuracy of soil OMC, with Rcc, RMSEP, and RPD reaching 0.9316, 0.2142, and 2.3195, respectively.
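
    The PCR-with-leave-one-out step can be sketched as follows; the synthetic spectra and the tested component range are assumptions for illustration, not the paper's data or settings.

    ```python
    # Principal component regression (PCR) with leave-one-out CV to pick the
    # number of components (illustrative sketch on synthetic "spectra").
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(42)
    X = rng.normal(size=(130, 200))          # 130 samples x 200 wavelengths
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=130)

    scores = {}
    for n in range(1, 16):
        pcr = make_pipeline(PCA(n_components=n), LinearRegression())
        scores[n] = cross_val_score(pcr, X, y, cv=LeaveOneOut(),
                                    scoring="neg_mean_squared_error").mean()
    best_n = max(scores, key=scores.get)     # optimal number of components
    ```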

  20. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research area is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased, as the real observation conditions could differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of the baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition. PMID:27854324

  1. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research area is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased, as the real observation conditions could differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of the baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.

  2. Optimization-Based Selection of Influential Agents in a Rural Afghan Social Network

    DTIC Science & Technology

    2010-06-01

    This work models a rural Afghan leader social network and presents a nonlethal targeting model, a nonlinear programming (NLP) optimization formulation that identifies the k US agent assignment strategy producing the greatest influence in that network, in support of the NATO Coalition in Afghanistan. While Arab tribes tend to be more hierarchical, Pashtun tribes are...

  3. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to potential applications in telemedicine. In this work, we propose a novel scheme for signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, at a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
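
    For a feel of the distortion-rate computation, the sketch below compresses a signal by hard-thresholding the coefficients of a fixed classic wavelet via PyWavelets; the paper's actual contribution, optimizing the wavelet itself, is not reproduced here, and the test signal is synthetic.

    ```python
    # Distortion-rate sketch: keep only the largest wavelet coefficients and
    # measure the relative reconstruction error (fixed classic wavelet).
    import numpy as np
    import pywt

    def compress_distortion(signal, wavelet="db4", keep=0.5, level=5):
        """Zero all but the largest `keep` fraction of wavelet coefficients
        and return the relative reconstruction error (distortion)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        flat = np.concatenate([c.ravel() for c in coeffs])
        thresh = np.quantile(np.abs(flat), 1 - keep)
        coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
        rec = pywt.waverec(coeffs, wavelet)[:len(signal)]
        return np.linalg.norm(signal - rec) / np.linalg.norm(signal)

    t = np.linspace(0, 1, 2048)
    emg_like = np.sin(40 * t) * np.random.default_rng(0).normal(size=t.size)
    print(compress_distortion(emg_like, keep=0.5))
    ```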

  4. Optimization of Helium Vessel Design for ILC Cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fratangelo, Enrico

    2009-01-01

    certify the compliance of the Helium vessel and the cavity with the ASME code standard. After briefly recalling the main contents of the ASME Code (Sections II and VIII, Division II), the procedure used for finding all relevant stresses and comparing the obtained results with the maximum allowed values is explained. This part also includes the buckling verification of the cavity. In Chapter 5, the manufacturing process of the cavity end-caps, whose function is to link the Helium vessel with the cavity, is studied. The present configuration of the dies is described, and the manufacturing process is simulated in order to explain the origin of some defects found on real parts. Finally, a new design of the dies is proposed and the resulting deformed piece is compared with the design requirements. Chapter 6 describes a finite element analysis to assess the efficiency and the stiffness of the Helium vessel. Furthermore, the results of the optimization of the Helium vessel (in order to increase the value of the efficiency) are reported. The same stiffness analysis is used in Chapter 7 for the Blade-Tuner study. After a description of this tuner and of its function, the preliminary analyses done to confirm the results provided by the vendor are described, and then its limiting load conditions are found. Chapter 8 shows a study of the resistance of all the welds present between the cavity and the end-cap and between the end-caps and the He vessel for a smaller superconducting cavity operating at 3.9 GHz. Finally, Chapter 9 briefly describes some R&D activities in progress at INFN (Section of Pisa) and Fermilab that could produce significant cost reductions in the Helium vessel design. All the finite element analyses contained and described in this thesis made possible the certification of the whole superconducting cavity-Helium vessel assembly at Fermilab. Furthermore, they gave several useful indications to the Fermilab staff to improve the performance of the Helium

  5. Contingency Contractor Optimization Phase 3 Sustainment Software Design Document - Contingency Contractor Optimization Tool - Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durfee, Justin David; Frazier, Christopher Rawls; Bandlow, Alisa

    This document describes the final software design of the Contingency Contractor Optimization Tool - Prototype. Its purpose is to provide the overall architecture of the software and the logic behind this architecture. Documentation for the individual classes is provided in the application Javadoc. The Contingency Contractor Optimization project is intended to address Department of Defense mandates by delivering a centralized strategic planning tool that allows senior decision makers to quickly and accurately assess the impacts, risks, and mitigation strategies associated with utilizing contract support. The Contingency Contractor Optimization Tool - Prototype was developed in Phase 3 of the OSD ATL Contingency Contractor Optimization project to support strategic planning for contingency contractors. The planning tool uses a model to optimize the Total Force mix by minimizing the combined total costs for selected mission scenarios. The model optimizes the match of personnel types (military, DoD civilian, and contractors) and capabilities to meet mission requirements as effectively as possible, based on risk, cost, and other requirements.

  6. Personalizing colon cancer adjuvant therapy: selecting optimal treatments for individual patients.

    PubMed

    Dienstmann, Rodrigo; Salazar, Ramon; Tabernero, Josep

    2015-06-01

    For more than three decades, postoperative chemotherapy (initially fluoropyrimidines and more recently combinations with oxaliplatin) has reduced the risk of tumor recurrence and improved survival for patients with resected colon cancer. Although universally recommended for patients with stage III disease, there is no consensus about the survival benefit of postoperative chemotherapy in stage II colon cancer. The most recent adjuvant clinical trials have not shown any value for adding targeted agents, namely bevacizumab and cetuximab, to standard chemotherapies in stage III disease, despite improved outcomes in the metastatic setting. However, biomarker analyses of multiple studies strongly support the feasibility of refining risk stratification in colon cancer by factoring in molecular characteristics with pathologic tumor staging. In stage II disease, for example, microsatellite instability supports observation after surgery. Furthermore, the value of BRAF or KRAS mutations as additional risk factors in stage III disease is greater when microsatellite status and tumor location are taken into account. Validated predictive markers of adjuvant chemotherapy benefit for stage II or III colon cancer are lacking, but intensive research is ongoing. Recent advances in understanding the biologic hallmarks and drivers of early-stage disease as well as the micrometastatic environment are expected to translate into therapeutic strategies tailored to select patients. This review focuses on the pathologic, molecular, and gene expression characterizations of early-stage colon cancer; new insights into prognostication; and emerging predictive biomarkers that could ultimately help define the optimal adjuvant treatments for patients in routine clinical practice. © 2015 by American Society of Clinical Oncology.

  7. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system that takes into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii

  8. Design of an 81.25 MHz continuous-wave radio-frequency quadrupole accelerator for Low Energy Accelerator Facility

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Lu, Liang; Xu, Xianbo; Sun, Liepeng; Zhang, Zhouli; Dou, Weiping; Li, Chenxing; Shi, Longbo; He, Yuan; Zhao, Hongwei

    2017-03-01

    An 81.25 MHz continuous wave (CW) radio frequency quadrupole (RFQ) accelerator has been designed for the Low Energy Accelerator Facility (LEAF) at the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences (CAS). For the CW operating mode, the proposed RFQ design adopted the conventional four-vane structure. The main design goal is to provide high shunt impedance with low power losses. In the electromagnetic (EM) design, the π-mode stabilizing loops (PISLs) were optimized to produce good mode separation. The tuners were also designed and optimized to tune the frequency and the field flatness of the operating mode. The vane undercuts were optimized to provide a flat field along the RFQ cavity. Additionally, a full-length model with modulations was set up for the final EM simulations. Following the EM design, a thermal analysis of the structure was carried out. In this paper, the detailed EM design and thermal simulations of the LEAF-RFQ are presented and discussed. A structural error analysis was also performed.

  9. Digital logic optimization using selection operators

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor); Cameron, Eric G. (Inventor); Gambles, Jody W. (Inventor)

    2004-01-01

    According to the invention, a digital design method for manipulating a digital circuit netlist is disclosed. In one step, a first netlist is loaded. The first netlist is comprised of first basic cells that are comprised of first kernel cells. The first netlist is manipulated to create a second netlist. The second netlist is comprised of second basic cells that are comprised of second kernel cells. A percentage of the first and second kernel cells are selection circuits. There is less chip area consumed in the second basic cells than in the first basic cells. The second netlist is stored. In various embodiments, the percentage could be 2% or more, 5% or more, 10% or more, 20% or more, 30% or more, or 40% or more.

  10. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and

  11. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    PubMed

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
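
    A minimal sketch in the spirit of the averaged variable importance (AVI) method follows: importances are averaged over repeated RF fits and the top-ranked predictors are kept. The data, repeat count, and cut-off are illustrative assumptions, not the study's settings.

    ```python
    # Averaged variable importance (AVI) style feature selection with a
    # random forest (illustrative sketch on synthetic data).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               random_state=0)

    importances = np.zeros(X.shape[1])
    for seed in range(10):                       # average over repeated fits
        rf = RandomForestClassifier(n_estimators=200, random_state=seed)
        importances += rf.fit(X, y).feature_importances_
    importances /= 10

    top = np.argsort(importances)[::-1][:5]      # keep the 5 best predictors
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X[:, top], y, cv=5).mean()
    ```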

  12. Study for the selection of optimal site in northeastern, Mexico for wind power generation using genetic algorithms.

    NASA Astrophysics Data System (ADS)

    Gonzalez, T.; Ruvalcaba, A.; Oliver, L.

    2016-12-01

    Electricity generation from renewable resources has acquired a leading role. Mexico in particular has great interest in renewable natural resources for power generation, especially wind energy, and the country is therefore rapidly entering the development of wind power generation sites. The development of a wind site as an energy project does not have a standardized methodology; techniques for selecting the best place to install a wind turbine system vary from developer to developer. Developers generally consider three key factors: 1) the characteristics of the wind, 2) the potential distribution of electricity, and 3) transport access to the site. This paper presents a study with a different methodology, carried out in two stages. The first, at regional scale, uses "spatial" and "natural" criteria to select a region based on its cartographic features, such as the political and physiographic divisions, the location of protected natural areas, water bodies, and urban criteria, and on natural criteria such as the amount and direction of the wind, land type and use, vegetation, topography, and the biodiversity of the site. Applying these criteria yields a first optimal selection area. The second stage of the methodology includes criteria and variables at a detailed scale. The analysis of all the collected data will provide new parameters (decision variables) for the site. The overall analysis of the information, based on these criteria, indicates that the best location for the wind field would be southern Coahuila and the central part of Nuevo Leon. The wind power site will contribute to the economic growth of important cities, including Monterrey. Finally, a computational genetic-algorithm model will be used as a tool to determine the best site selection depending on the parameters considered.

  13. Vast Portfolio Selection with Gross-exposure Constraints*

    PubMed Central

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices perform similarly to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvements are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and the 600 stocks randomly selected from Russell 3000. PMID:23293404
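
    A minimal sketch of gross-exposure-constrained minimum-variance selection, assuming toy return data: the weight vector is split as w = u - v with u, v >= 0 so that the gross-exposure budget ||w||_1 <= c becomes a linear constraint. This is a generic reading of the constraint, not the authors' estimator.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        R = rng.normal(size=(250, 8)) * 0.01          # toy daily returns, 8 assets
        Sigma = np.cov(R, rowvar=False)

        def min_var_gross(Sigma, c):
            """Minimum-variance weights with sum(w) = 1 and ||w||_1 <= c."""
            n = Sigma.shape[0]
            def var(z):                                # z = [u, v], w = u - v
                w = z[:n] - z[n:]
                return w @ Sigma @ w
            cons = [{"type": "eq",  "fun": lambda z: z[:n].sum() - z[n:].sum() - 1.0},
                    {"type": "ineq", "fun": lambda z: c - z.sum()}]
            z0 = np.concatenate([np.full(n, 1.0 / n), np.zeros(n)])
            res = minimize(var, z0, bounds=[(0, None)] * 2 * n, constraints=cons)
            return res.x[:n] - res.x[n:]

        w = min_var_gross(Sigma, c=1.6)   # c = 1 recovers the no-short-sale portfolio
        print(np.round(w, 3), abs(w).sum())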

  14. Optimizing Surveillance Performance of Alpha-Fetoprotein by Selection of Proper Target Population in Chronic Hepatitis B

    PubMed Central

    Chung, Jung Wha; Kim, Beom Hee; Lee, Chung Seop; Kim, Gi Hyun; Sohn, Hyung Rae; Min, Bo Young; Song, Joon Chang; Park, Hyun Kyung; Jang, Eun Sun; Yoon, Hyuk; Kim, Jaihwan; Shin, Cheol Min; Park, Young Soo; Hwang, Jin-Hyeok; Jeong, Sook-Hyang; Kim, Nayoung; Lee, Dong Ho; Lee, Jaebong; Ahn, Soyeon

    2016-01-01

    Although alpha-fetoprotein (AFP) is the most widely used biomarker in hepatocellular carcinoma (HCC) surveillance, disease activity may also increase AFP levels in chronic hepatitis B (CHB). Since nucleos(t)ide analog (NA) therapy may reduce not only HBV viral loads and transaminase levels but also the falsely elevated AFP levels in CHB, we tried to determine whether exposure to NA therapy influences AFP performance and whether selective application can optimize the performance of AFP testing in CHB during HCC surveillance. A retrospective cohort of 6,453 CHB patients who received HCC surveillance was constructed from the electronic clinical data warehouse. Covariates of AFP elevation were determined from 53,137 AFP measurements, and covariate-specific receiver operating characteristics regression analysis revealed that albumin levels and exposure to NA therapy were independent determinants of AFP performance. C statistics were largest in patients with albumin levels ≥ 3.7 g/dL who were followed without NA therapy during the study period, whereas AFP performance was poorest when tested in patients on NA therapy during the study whose albumin levels were < 3.7 g/dL (difference in C statistics = 0.35, p < 0.0001). Contrary to expectation, CHB patients with current or recent exposure to NA therapy showed poorer performance of AFP during HCC surveillance. Combination of concomitant albumin levels and status of NA therapy can identify a subgroup of CHB patients who will show optimized AFP performance. PMID:27997559

  15. Optimal Target Stars in the Search for Life

    NASA Astrophysics Data System (ADS)

    Lingam, Manasvi; Loeb, Abraham

    2018-04-01

    The selection of optimal targets in the search for life represents a highly important strategic issue. In this Letter, we evaluate the benefits of searching for life around a potentially habitable planet orbiting a star of arbitrary mass relative to a similar planet around a Sun-like star. If recent physical arguments implying that the habitability of planets orbiting low-mass stars is selectively suppressed are correct, we find that planets around solar-type stars may represent the optimal targets.

  16. A New Combinatorial Optimization Approach for Integrated Feature Selection Using Different Datasets: A Prostate Cancer Transcriptomic Study

    PubMed Central

    Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background: The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics. Methods: We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone. Results: Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884

  17. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.

  18. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
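
    The binning selection idea is described only at a high level above; one plausible reading, sketched below in Python, grids the first objective of the current Pareto front (minimization assumed) into bins and draws at most one parent per occupied bin to preserve diversity. Function names and the bin count are illustrative assumptions.

        import numpy as np

        def pareto_front(F):
            """Indices of non-dominated rows of objective matrix F (minimization)."""
            return [i for i in range(len(F))
                    if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                               for j in range(len(F)))]

        def binned_parents(F, bins=5, rng=None):
            """Grid the first objective into bins and draw at most one
            non-dominated parent per occupied bin to preserve diversity."""
            if rng is None:
                rng = np.random.default_rng(0)
            front = pareto_front(F)
            edges = np.linspace(F[front, 0].min(), F[front, 0].max(), bins + 1)
            parents = []
            for b in range(bins):
                members = [i for i in front
                           if edges[b] <= F[i, 0] <= edges[b + 1]]
                if members:
                    parents.append(int(rng.choice(members)))
            return parents

        F = np.random.default_rng(5).random((40, 2))   # toy two-objective values
        print(binned_parents(F))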

  19. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  20. Selection of Reserves for Woodland Caribou Using an Optimization Approach

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation. PMID:22363702

  1. Wind selection and drift compensation optimize migratory pathways in a high-flying moth.

    PubMed

    Chapman, Jason W; Reynolds, Don R; Mouritsen, Henrik; Hill, Jane K; Riley, Joe R; Sivell, Duncan; Smith, Alan D; Woiwod, Ian P

    2008-04-08

    Numerous insect species undertake regular seasonal migrations in order to exploit temporary breeding habitats [1]. These migrations are often achieved by high-altitude windborne movement at night [2-6], facilitating rapid long-distance transport, but seemingly at the cost of frequent displacement in highly disadvantageous directions (the so-called "pied piper" phenomenon [7]). This has led to uncertainty about the mechanisms migrant insects use to control their migratory directions [8, 9]. Here we show that, far from being at the mercy of the wind, nocturnal moths have unexpectedly complex behavioral mechanisms that guide their migratory flight paths in seasonally-favorable directions. Using entomological radar, we demonstrate that free-flying individuals of the migratory noctuid moth Autographa gamma actively select fast, high-altitude airstreams moving in a direction that is highly beneficial for their autumn migration. They also exhibit common orientation close to the downwind direction, thus maximizing the rectilinear distance traveled. Most unexpectedly, we find that when winds are not closely aligned with the moth's preferred heading (toward the SSW), they compensate for cross-wind drift, thus increasing the probability of reaching their overwintering range. We conclude that nocturnally migrating moths use a compass and an inherited preferred direction to optimize their migratory track.

  2. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow
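
    A minimal sketch of the proportional training-data allocation described above (a 20,000-pixel budget with a 600-pixel floor and 8,000-pixel cap per class); the class counts and helper name are illustrative, and the rescaling step is an assumption about how the clipped budget is redistributed.

        import numpy as np

        def allocate_training_pixels(class_counts, total=20000, lo=600, hi=8000):
            """Allocate training pixels proportionally to class occurrence,
            clipped to [lo, hi] pixels per class."""
            counts = np.asarray(list(class_counts.values()), dtype=float)
            alloc = np.clip(total * counts / counts.sum(), lo, hi)
            # Rescale the unclipped classes so the total stays close to the budget.
            free = (alloc > lo) & (alloc < hi)
            if free.any():
                alloc[free] *= (total - alloc[~free].sum()) / alloc[free].sum()
                alloc = np.clip(alloc, lo, hi)
            return dict(zip(class_counts, alloc.astype(int)))

        # Hypothetical per-class pixel occurrences for one Landsat scene.
        print(allocate_training_pixels(
            {"forest": 5.2e6, "water": 6.1e5, "urban": 2.4e5, "barren": 3.0e4}))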

  3. Shape and Reinforcement Optimization of Underground Tunnels

    NASA Astrophysics Data System (ADS)

    Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang

    Design of the support system and selection of an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, selecting the shape and designing the support are based mainly on the designer's judgment and experience. Both of these problems can be viewed as material distribution problems where one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved to be useful in solving these kinds of problems in structural design. Recently the application of topology optimization techniques in reinforcement design around underground excavations has been studied by some researchers. In this paper a three-phase material model will be introduced, changing between normal rock, reinforced rock, and void. Using such a material model, both problems of shape and reinforcement design can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique has been extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. Validity and capability of the proposed approach have been investigated through some examples.

  4. Hash Bit Selection for Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Junfeng He; Shih-Fu Chang

    2017-11-01

    To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt bit reliability and bit complementarity as the selection criteria, which can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework can achieve superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant relative accuracy gains ranging from 10% to 50%.
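
    As a simplified stand-in for the paper's quadratic-programming formulation, the sketch below greedily picks bits with high reliability and low pairwise similarity to the bits already chosen; the trade-off weight and scoring rule are illustrative assumptions, not the normalized dominant-set solver.

        import numpy as np

        def select_bits(codes, reliability, k, lam=0.5):
            """Greedy bit selection: favor high reliability, penalize redundancy.
            codes: (n_samples, n_bits) binary matrix; reliability: per-bit score."""
            B = 2.0 * codes - 1.0                       # map {0,1} -> {-1,+1}
            corr = np.abs(B.T @ B) / len(B)             # pairwise bit similarity
            chosen = [int(np.argmax(reliability))]
            while len(chosen) < k:
                redun = corr[:, chosen].max(axis=1)     # closeness to chosen bits
                score = reliability - lam * redun
                score[chosen] = -np.inf                 # never re-pick a bit
                chosen.append(int(np.argmax(score)))
            return chosen

        rng = np.random.default_rng(1)
        codes = rng.integers(0, 2, size=(1000, 64))     # toy candidate bit pool
        rel = rng.random(64)                            # toy reliability scores
        print(select_bits(codes, rel, k=16))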

  5. Faraday anomalous dispersion optical tuners

    NASA Technical Reports Server (NTRS)

    Wanninger, P.; Valdez, E. C.; Shay, T. M.

    1992-01-01

    Common methods for frequency-stabilizing diode laser systems employ gratings, etalons, optical electric double feedback, atomic resonance, and a Faraday cell with a low magnetic field. Our method, the Faraday Anomalous Dispersion Optical Transmitter (FADOT) laser locking, is much simpler than other schemes. The FADOT uses commercial laser diodes with no antireflection coatings, an atomic Faraday cell with a single polarizer, and an output coupler to form a compound cavity. This method is vibration insensitive, thermal expansion effects are minimal, and the system has a frequency pull-in range of 443.2 GHz (9A). Our technique is based on the Faraday anomalous dispersion optical filter. This method has potential applications in optical communication, remote sensing, and pumping laser-excited optical filters. We present the first theoretical model for the FADOT and compare the calculations to our experimental results.

  6. Beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz radio frequency quadrupole accelerator

    NASA Astrophysics Data System (ADS)

    Gaur, Rahul; Kumar, Vinit

    2018-05-01

    We present the beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz H- radio frequency quadrupole (RFQ) accelerator for the proposed Indian Spallation Neutron Source project. We have followed a design approach, where the emittance growth and the losses are minimized by keeping the tune depression ratio larger than 0.5. The transverse cross-section of RFQ is designed at a frequency lower than the operating frequency, so that the tuners have their nominal position inside the RFQ cavity. This has resulted in an improvement of the tuning range, and the efficiency of tuners to correct the field errors in the RFQ. The vane-tip modulations have been modelled in CST-MWS code, and its effect on the field flatness and the resonant frequency has been studied. The deterioration in the field flatness due to vane-tip modulations is reduced to an acceptable level with the help of tuners. Details of the error study and the higher order mode study along with mode stabilization technique are also described in the paper.

  7. Epicardial left ventricular lead placement for cardiac resynchronization therapy: optimal pace site selection with pressure-volume loops.

    PubMed

    Dekker, A L A J; Phelps, B; Dijkman, B; van der Nagel, T; van der Veen, F H; Geskes, G G; Maessen, J G

    2004-06-01

    Patients in heart failure with left bundle branch block benefit from cardiac resynchronization therapy. Usually the left ventricular pacing lead is placed by coronary sinus catheterization; however, this procedure is not always successful, and patients may be referred for surgical epicardial lead placement. The objective of this study was to develop a method to guide epicardial lead placement in cardiac resynchronization therapy. Eleven patients in heart failure who were eligible for cardiac resynchronization therapy were referred for surgery because of failed coronary sinus left ventricular lead implantation. Minithoracotomy or thoracoscopy was performed, and a temporary epicardial electrode was used for biventricular pacing at various sites on the left ventricle. Pressure-volume loops with the conductance catheter were used to select the best site for each individual patient. Relative to the baseline situation, biventricular pacing with an optimal left ventricular lead position significantly increased stroke volume (+39%, P =.01), maximal left ventricular pressure derivative (+20%, P =.02), ejection fraction (+30%, P =.007), and stroke work (+66%, P =.006) and reduced end-systolic volume (-6%, P =.04). In contrast, biventricular pacing at a suboptimal site did not significantly change left ventricular function and even worsened it in some cases. To optimize cardiac resynchronization therapy with epicardial leads, mapping to determine the best pace site is a prerequisite. Pressure-volume loops offer real-time guidance for targeting epicardial lead placement during minimal invasive surgery.

  8. Methodology of shell structure reinforcement layout optimization

    NASA Astrophysics Data System (ADS)

    Szafrański, Tomasz; Małachowski, Jerzy; Damaziak, Krzysztof

    2018-01-01

    This paper presents an optimization process of a reinforced shell diffuser intended for a small wind turbine (rated power of 3 kW). The diffuser structure consists of multiple reinforcement and metal skin. This kind of structure is suitable for optimization in terms of selection of reinforcement density, stringers cross sections, sheet thickness, etc. The optimisation approach assumes the reduction of the amount of work to be done between the optimization process and the final product design. The proposed optimization methodology is based on application of a genetic algorithm to generate the optimal reinforcement layout. The obtained results are the basis for modifying the existing Small Wind Turbine (SWT) design.

  9. Optimal trajectories for hypersonic launch vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas

    1992-01-01

    In this paper, we derive a near-optimal guidance law for the ascent trajectory from Earth surface to Earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. The performance objective is a weighted sum of fuel mass and volume, with the weighting factor selected to give minimum gross take-off weight for a specific payload mass and volume.

  10. Structural and mechanical evaluations of a topology optimized titanium interbody fusion cage fabricated by selective laser melting process.

    PubMed

    Lin, Chia-Ying; Wirtz, Tobias; LaMarca, Frank; Hollister, Scott J

    2007-11-01

    A topology optimized lumbar interbody fusion cage was made of Ti-Al6-V4 alloy by the rapid prototyping process of selective laser melting (SLM) to reproduce designed microstructure features. Radiographic characterizations and the mechanical properties were investigated to determine how the structural characteristics of the fabricated cage were reproduced from design characteristics using micro-computed tomography scanning. The mechanical modulus of the designed cage was also measured to compare with tantalum, a widely used porous metal. The designed microstructures can be clearly seen in the micrographs of the micro-CT and scanning electron microscopy examinations, showing the SLM process can reproduce intricate microscopic features from the original designs. No imaging artifacts from micro-CT were found. The average compressive modulus of the tested cages was 2.97+/-0.90 GPa, which is comparable with the reported porous tantalum modulus of 3 GPa and falls between that of cortical bone (15 GPa) and trabecular bone (0.1-0.5 GPa). The new porous Ti-6Al-4V optimal-structure cage fabricated by the SLM process gave consistent mechanical properties without artifactual distortion in the imaging modalities, and thus it can be a promising alternative as a porous implant for spine fusion. Copyright (c) 2007 Wiley Periodicals, Inc.

  11. selectSNP – An R package for selecting SNPs optimal for genetic evaluation

    USDA-ARS's Scientific Manuscript database

    There has been a huge increase in the number of SNPs in the public repositories. This has made it a challenge to design low and medium density SNP panels, which requires careful selection of available SNPs considering many criteria, such as map position, allelic frequency, possible biological functi...

  12. Strategies for Fermentation Medium Optimization: An In-Depth Review

    PubMed Central

    Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.

    2017-01-01

    Optimization of the production medium is required to maximize the metabolite yield. This can be achieved by using a wide range of techniques, from classical “one-factor-at-a-time” to modern statistical and mathematical techniques, viz., artificial neural network (ANN), genetic algorithm (GA), etc. Every technique comes with its own advantages and disadvantages, and despite drawbacks some techniques are applied to obtain best results. Use of various optimization techniques in combination also provides the desirable results. In this article an attempt has been made to review the currently used media optimization techniques applied during the fermentation process of metabolite production. Comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been done, and a logical basis for the selection and design of fermentation media has been given in the present review. Overall, this review will provide the rationale for the selection of suitable optimization techniques for media design employed during the fermentation process of metabolite production. PMID:28111566

  13. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  14. Computational Optimization and Characterization of Molecularly Imprinted Polymers

    NASA Astrophysics Data System (ADS)

    Terracina, Jacob J.

    Molecularly imprinted polymers (MIPs) are a class of materials containing sites capable of selectively binding to the imprinted target molecule. Computational chemistry techniques were used to study the effect of different fabrication parameters (the monomer-to-target ratios, pre-polymerization solvent, temperature, and pH) on the formation of the MIP binding sites. Imprinted binding sites were built in silico for the purposes of better characterizing the receptor-ligand interactions. Chiefly, the sites were characterized with respect to their selectivities and the heterogeneity between sites. First, a series of two-step molecular mechanics (MM) and quantum mechanics (QM) computational optimizations of monomer-target systems was used to determine optimal monomer-to-target ratios for the MIPs. Imidazole- and xanthine-derived target molecules were studied. The investigation included both small-scale models (one-target) and larger scale models (five-targets). The optimal ratios differed between the small and larger scales. For the larger models containing multiple targets, binding-site surface area analysis was used to evaluate the heterogeneity of the sites. The more fully surrounded sites had greater binding energies. Molecular docking was then used to measure the selectivities of the QM-optimized binding sites by comparing the binding energies of the imprinted target to that of a structural analogue. Selectivity was also shown to improve as binding sites become more fully encased by the monomers. For internal sites, docking consistently showed selectivity favoring the molecules that had been imprinted via QM geometry optimizations. The computationally imprinted sites were shown to exhibit size-, shape-, and polarity-based selectivity. This represented a novel approach to investigate the selectivity and heterogeneity of imprinted polymer binding sites, by applying the rapid orientation screening of MM docking to the highly accurate QM-optimized geometries. Next

  15. L-O-S-T: Logging Optimization Selection Technique

    Treesearch

    Jerry L. Koger; Dennis B. Webster

    1984-01-01

    L-O-S-T is a FORTRAN computer program developed to systematically quantify, analyze, and improve user selected harvesting methods. Harvesting times and costs are computed for road construction, landing construction, system move between landings, skidding, and trucking. A linear programming formulation utilizing the relationships among marginal analysis, isoquants, and...

  16. Optimizing risk stratification in heart failure and the selection of candidates for heart transplantation.

    PubMed

    Pereira-da-Silva, Tiago; M Soares, Rui; Papoila, Ana Luísa; Pinto, Iola; Feliciano, Joana; Almeida-Morais, Luís; Abreu, Ana; Cruz Ferreira, Rui

    2018-02-01

    Selecting patients for heart transplantation is challenging. We aimed to identify the most important risk predictors in heart failure and an approach to optimize the selection of candidates for heart transplantation. Ambulatory patients followed in our center with symptomatic heart failure and left ventricular ejection fraction ≤40% prospectively underwent a comprehensive baseline assessment including clinical, laboratory, electrocardiographic, echocardiographic, and cardiopulmonary exercise testing parameters. All patients were followed for 60 months. The combined endpoint was cardiac death, urgent heart transplantation or need for mechanical circulatory support, up to 36 months. In the 263 enrolled patients (75% male, age 54±12 years), 54 events occurred. The independent predictors of adverse outcome were ventilatory efficiency (VE/VCO2) slope (HR 1.14, 95% CI 1.11-1.18), creatinine level (HR 2.23, 95% CI 1.14-4.36), and left ventricular ejection fraction (HR 0.96, 95% CI 0.93-0.99). VE/VCO2 slope was the most accurate risk predictor at any follow-up time analyzed (up to 60 months). The threshold of 39.0 yielded high specificity (97%), discriminated a worse or better prognosis than that reported for post-heart transplantation, and outperformed peak oxygen consumption thresholds of 10.0 or 12.0 ml/kg/min. For low-risk patients (VE/VCO2 slope <39.0), sodium and creatinine levels and variations in end-tidal carbon dioxide partial pressure on exercise identified those with excellent prognosis. VE/VCO2 slope was the most accurate parameter for risk stratification in patients with heart failure and reduced ejection fraction. Those with VE/VCO2 slope ≥39.0 may benefit from heart transplantation. Copyright © 2018 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.

  17. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    NASA Astrophysics Data System (ADS)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, leading to the fact that further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments are implemented to improve the understanding of the SLS limitations and advantages on an educational level. Experiments are based on two different workstations, used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material as process parameters are varied, analyzing a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces in dependence on various process parameters. Finally, quality attributes are tested, including warpage, dimensional accuracy, and tensile strength. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile test machine allows testing of the tensile strength and interpretation of stress-strain curves. The developed laboratory experiments are suitable to teach students the influence of processing parameters. In this context they will be able to optimize the input parameters depending on the component which has to be manufactured and to increase the overall quality of the final workpiece.

  18. Optimization of Culture Parameters for Maximum Polyhydroxybutyrate Production by Selected Bacterial Strains Isolated from Rhizospheric Soils.

    PubMed

    Lathwal, Priyanka; Nehra, Kiran; Singh, Manpreet; Jamdagni, Pragati; Rana, Jogender S

    2015-01-01

    The enormous applications of conventional non-biodegradable plastics have led towards their increased usage and accumulation in the environment. This has become one of the major causes of global environmental concern in the present century. Polyhydroxybutyrate (PHB), a biodegradable plastic is known to have properties similar to conventional plastics, thus exhibiting a potential for replacing conventional non-degradable plastics. In the present study, a total of 303 different bacterial isolates were obtained from soil samples collected from the rhizospheric area of three crops, viz., wheat, mustard and sugarcane. All the isolates were screened for PHB (Poly-3-hydroxy butyric acid) production using Sudan Black staining method, and 194 isolates were found to be PHB positive. Based upon the amount of PHB produced, the isolates were divided into three categories: high, medium and low producers. Representative isolates from each category were selected for biochemical characterization; and for optimization of various culture parameters (carbon source, nitrogen source, C/N ratio, different pH, temperature and incubation time periods) for maximizing PHB accumulation. The highest PHB yield was obtained when the culture medium was supplemented with glucose as the carbon source, ammonium sulphate at a concentration of 1.0 g/l as the nitrogen source, and by maintaining the C/N ratio of the medium as 20:1. The physical growth parameters which supported maximum PHB accumulation included a pH of 7.0, and an incubation temperature of 30 degrees C for a period of 48 h. A few isolates exhibited high PHB accumulation under optimized conditions, thus showing a potential for their industrial exploitation.

  19. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

    Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information, but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks, which, in turn, increases the number of attainable requirements. Such a non-linear optimization is a well-studied problem. However, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate. KEYS runs much faster than that; e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.

  20. Fast numerical design of spatial-selective rf pulses in MRI using Krotov and quasi-Newton based optimal control methods

    NASA Astrophysics Data System (ADS)

    Vinding, Mads S.; Maximov, Ivan I.; Tošner, Zdeněk; Nielsen, Niels Chr.

    2012-08-01

    The use of increasingly strong magnetic fields in magnetic resonance imaging (MRI) improves sensitivity, susceptibility contrast, and spatial or spectral resolution for functional and localized spectroscopic imaging applications. However, along with these benefits come the challenges of increasing static field (B0) and rf field (B1) inhomogeneities induced by radial field susceptibility differences and poorer dielectric properties of objects in the scanner. Increasing fields also impose the need for rf irradiation at higher frequencies which may lead to elevated patient energy absorption, eventually posing a safety risk. These reasons have motivated the use of multidimensional rf pulses and parallel rf transmission, and their combination with tailoring of rf pulses for fast and low-power rf performance. For the latter application, analytical and approximate solutions are well-established in linear regimes, however, with increasing nonlinearities and constraints on the rf pulses, numerical iterative methods become attractive. Among such procedures, optimal control methods have recently demonstrated great potential. Here, we present a Krotov-based optimal control approach which as compared to earlier approaches provides very fast, monotonic convergence even without educated initial guesses. This is essential for in vivo MRI applications. The method is compared to a second-order gradient ascent method relying on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, and a hybrid scheme Krotov-BFGS is also introduced in this study. These optimal control approaches are demonstrated by the design of a 2D spatial selective rf pulse exciting the letters "JCP" in a water phantom.

  1. A Novel Automatic Phase Selection Device: Design and Optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Li, Haitao; Li, Na; Zhang, Nan; Lv, Wei; Cui, Xiaojiang

    2018-01-01

    At present, autonomous inflow control device (AICD) completion is an effective way to slow down bottom-water coning and effectively extend the water-free production period. Based on a survey of AICDs both at home and abroad, this paper designs a new type of AICD; with the help of fluid numerical simulation software, its internal flow field is analysed and its structure is optimized. The simulation results show that the tool strongly restricts the flow of water while restricting the flow of oil much less.

  2. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), followed closely by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
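
    Since the survey singles out Differential Evolution, a minimal DE/rand/1/bin implementation on a box-constrained benchmark is sketched below; the population size, F, and CR values are conventional illustrative settings, not the survey's experimental configuration.

        import numpy as np

        def differential_evolution(f, bounds, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
            """Classic DE/rand/1/bin on a box-constrained objective (minimization)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            X = rng.uniform(lo, hi, size=(pop, len(lo)))
            fit = np.apply_along_axis(f, 1, X)
            for _ in range(gens):
                for i in range(pop):
                    # Mutate from three distinct other members, then crossover.
                    a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                           3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(len(lo)) < CR
                    cross[rng.integers(len(lo))] = True   # guarantee one mutant gene
                    trial = np.where(cross, mutant, X[i])
                    ft = f(trial)
                    if ft <= fit[i]:                      # greedy selection
                        X[i], fit[i] = trial, ft
            return X[np.argmin(fit)], fit.min()

        sphere = lambda x: float(np.sum(x ** 2))          # toy benchmark function
        best, val = differential_evolution(sphere, bounds=[(-5, 5)] * 10)
        print(val)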

  3. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and improve network survival time, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, the clustering of nodes and the selection of cluster heads are improved using the chicken swarm optimization algorithm, and the positions of chickens that fall into local optima are updated by Levy flight, which enhances population diversity and ensures the global search capability of the algorithm. By making balanced use of the network nodes, the new protocol avoids the early death of intensively used nodes and improves the survival time of the wireless sensor network. Simulation experiments show that the protocol outperforms the LEACH protocol in energy consumption, and also outperforms a clustering routing protocol based on the particle swarm optimization algorithm.
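
    For context, the sketch below shows the stochastic cluster-head election of the LEACH baseline that the proposed protocol improves upon, using the standard threshold T(n) = p / (1 - p (r mod 1/p)); the chicken swarm optimization and Levy-flight update that replace this rule are not reproduced here.

        import random

        def leach_threshold(p, r):
            """LEACH cluster-head election threshold T(n) for round r,
            where p is the desired fraction of cluster heads."""
            return p / (1.0 - p * (r % int(1.0 / p)))

        def elect_heads(node_ids, p=0.05, r=0, eligible=None):
            """Nodes become cluster heads when a uniform draw falls below T(n);
            `eligible` excludes nodes that already served in the current epoch."""
            eligible = eligible if eligible is not None else set(node_ids)
            t = leach_threshold(p, r)
            return [n for n in node_ids if n in eligible and random.random() < t]

        random.seed(0)
        print(elect_heads(range(100), p=0.05, r=3))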

  4. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, Digital Elevation Model (DEM) data for Fuyang were combined to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
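
    A minimal sketch of the correlation-based screening described above: candidate covariates are ranked by the absolute Pearson correlation with the target soil nutrient and the top k are kept as BAVs. The synthetic covariates and variable names are illustrative assumptions.

        import numpy as np

        def select_bavs(target, candidates, names, k=3):
            """Rank candidate covariates by |Pearson r| with the target;
            keep the top k as best auxiliary variables."""
            r = np.array([np.corrcoef(target, c)[0, 1] for c in candidates])
            order = np.argsort(-np.abs(r))
            return [(names[i], round(float(r[i]), 2)) for i in order[:k]]

        rng = np.random.default_rng(2)
        om = rng.normal(size=300)                          # target: organic matter
        cands = [om + rng.normal(scale=s, size=300) for s in (0.5, 1.0, 3.0)]
        cands.append(rng.normal(size=300))                 # an unrelated covariate
        print(select_bavs(om, cands, ["total_N", "avail_P", "elevation", "slope"]))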

  5. Optimization techniques using MODFLOW-GWM

    USGS Publications Warehouse

    Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.

    2015-01-01

    An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
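
    The linear core of such a groundwater-management formulation can be sketched as a response-matrix linear program: maximize total withdrawal subject to drawdown limits at constraint sites. The matrix values below are invented for illustration, and the non-linear unconfined behavior noted above would in practice require iterative relinearization, which this sketch omits.

        import numpy as np
        from scipy.optimize import linprog

        # R[i, j]: drawdown at constraint site i per unit pumping rate at well j
        # (hypothetical values for a three-well, three-site problem).
        R = np.array([[0.8, 0.3, 0.1],
                      [0.2, 0.9, 0.4],
                      [0.1, 0.2, 0.7]])
        d_max = np.array([2.0, 2.5, 1.5])   # allowable drawdowns (m)
        q_cap = 5.0                         # per-well capacity

        # Maximize sum(q), i.e., minimize -sum(q), with R @ q <= d_max.
        res = linprog(c=-np.ones(3), A_ub=R, b_ub=d_max, bounds=[(0, q_cap)] * 3)
        print(res.x, -res.fun)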

  6. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  7. Choosing non-redundant representative subsets of protein sequence data sets using submodular optimization.

    PubMed

    Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford

    2018-04-01

    Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
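
    A minimal sketch of greedy submodular maximization under the facility-location objective, a common choice for representative-subset problems of this kind; the embedding-based similarity is an illustrative assumption rather than the authors' sequence-similarity measure.

        import numpy as np

        def facility_location_greedy(sim, k):
            """Greedy maximization of F(S) = sum_i max_{j in S} sim[i, j],
            a monotone submodular objective with a (1 - 1/e) guarantee."""
            n = sim.shape[0]
            chosen, best_cover = [], np.zeros(n)
            for _ in range(k):
                # Marginal gain of adding each candidate column j.
                gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) \
                        - best_cover.sum()
                gains[chosen] = -np.inf
                j = int(np.argmax(gains))
                chosen.append(j)
                best_cover = np.maximum(best_cover, sim[:, j])
            return chosen

        rng = np.random.default_rng(3)
        emb = rng.normal(size=(200, 16))                    # toy sequence embeddings
        emb /= np.linalg.norm(emb, axis=1, keepdims=True)
        sim = np.clip(emb @ emb.T, 0, None)                 # nonnegative similarity
        print(facility_location_greedy(sim, k=10))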

  8. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework.

    PubMed

    Yang, Guoxiang; Best, Elly P H

    2015-09-15

    Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
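
    The kind of comparison described above can be illustrated with a simple power calculation across a range of plausible effect sizes, here for fixed-sample one-sided two-sample z-tests; the normal approximation and the candidate sample sizes are illustrative assumptions, and the adaptive designs themselves are not simulated.

        import numpy as np
        from scipy.stats import norm

        def power(n, delta, sigma=1.0, alpha=0.025):
            """Power of a one-sided two-sample z-test with n subjects per arm."""
            return float(norm.cdf(np.sqrt(n / 2.0) * delta / sigma
                                  - norm.ppf(1 - alpha)))

        # Compare fixed designs across plausible effect sizes; a robust-power
        # criterion would favor the design with the best worst-case power.
        deltas = np.linspace(0.2, 0.5, 7)
        for n in (100, 150, 200):
            print(n, [round(power(n, d), 2) for d in deltas])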

  10. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.

    High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.

  11. How unrealistic optimism is maintained in the face of reality.

    PubMed

    Sharot, Tali; Korn, Christoph W; Dolan, Raymond J

    2011-10-09

    Unrealistic optimism is a pervasive human trait that influences domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. We examined this question and found a marked asymmetry in belief updating. Participants updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when they called for a positive update, both in individuals who scored high and low on trait optimism. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for a negative update in the right inferior prefrontal gyrus. These findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.
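
    The updating asymmetry has a simple computational reading: a larger learning rate for desirable than for undesirable estimation errors. The sketch below is a toy illustration; the learning rates are arbitrary, not values fitted in the study.

    ```python
    # Toy asymmetric belief update for the estimated probability of an adverse
    # life event. "Good news" means the evidence suggests a lower risk than the
    # current belief; the asymmetric rates (0.8 vs 0.3) are illustrative only.
    def update_risk_belief(estimate, evidence, lr_good=0.8, lr_bad=0.3):
        error = evidence - estimate               # estimation error
        lr = lr_good if error < 0 else lr_bad     # weaker coding of bad news
        return estimate + lr * error

    print(update_risk_belief(0.40, 0.20))  # good news: 0.40 -> 0.24 (large update)
    print(update_risk_belief(0.40, 0.60))  # bad news:  0.40 -> 0.46 (small update)
    ```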

  12. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
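
    The fusion step can be summarized in a few lines of numpy. This is a hedged sketch of similarity-weighted voting in general, not the MUSE implementation; the Gaussian similarity kernel and its `beta` parameter are assumptions.

    ```python
    # Similarity-weighted label fusion: each warped atlas votes for its label at
    # every voxel, weighted by how well its warped intensities match the target.
    import numpy as np

    def fuse_labels(atlas_labels, atlas_images, target, n_labels, beta=1.0):
        """atlas_labels, atlas_images: arrays of shape (n_atlases, *vol_shape)."""
        votes = np.zeros((n_labels,) + target.shape)
        for labels, image in zip(atlas_labels, atlas_images):
            w = np.exp(-beta * (image - target) ** 2)   # local similarity weight
            for k in range(n_labels):
                votes[k] += w * (labels == k)
        return votes.argmax(axis=0)                     # consensus label per voxel

    # tiny synthetic demo
    rng = np.random.default_rng(0)
    target = rng.random((8, 8))
    imgs = target + 0.1 * rng.standard_normal((3, 8, 8))
    labs = (imgs > 0.5).astype(int)
    print(fuse_labels(labs, imgs, target, n_labels=2).shape)   # (8, 8)
    ```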

  13. A high power, pulsed, microwave amplifier for a synthetic aperture radar electrical model. Phase 1: Design

    NASA Astrophysics Data System (ADS)

    Atkinson, J. E.; Barker, G. G.; Feltham, S. J.; Gabrielson, S.; Lane, P. C.; Matthews, V. J.; Perring, D.; Randall, J. P.; Saunders, J. W.; Tuck, R. A.

    1982-05-01

    An electrical model klystron amplifier was designed. Its features include a gridded gun, a single stage depressed collector, a rare earth permanent magnet focusing system, an input loop, six rugged tuners and a coaxial line output section incorporating a coaxial-to-waveguide transducer and a pillbox window. At each stage of the design, the thermal and mechanical aspects were investigated and optimized within the framework of the RF specification. Extensive use was made of data from the preliminary design study and from RF measurements on the breadboard model. In an additional study, a comprehensive draft tube specification has been produced. Great emphasis has been laid on a second additional study on space-qualified materials and processes.

  14. An efficient one-step condensation and activation strategy to synthesize porous carbons with optimal micropore sizes for highly selective CO₂ adsorption.

    PubMed

    Wang, Jiacheng; Liu, Qian

    2014-04-21

    A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m(2) g(-1)), micropore volumes (up to 0.78 cm(3) g(-1)), and narrow micropore size distributions (0.7-1.1 nm). The CO₂ uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO₂ capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO₂ capture performance and can efficiently adsorb CO₂ molecules at 2.86 mmol g(-1) and 4.92 mmol g(-1) at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO₂/N₂ adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO₂/N₂ adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO₂ adsorption in practical applications.

  15. Metal–organic framework with optimally selective xenon adsorption and separation

    DOE PAGES

    Banerjee, Debasis; Simon, Cory M.; Plonka, Anna M.; ...

    2016-06-13

    Nuclear energy is considered among the most viable alternatives to our current fossil fuel based energy economy [1]. The mass-deployment of nuclear energy as an emissions-free source requires the reprocessing of used nuclear fuel to mitigate the waste [2]. One of the major concerns with reprocessing used nuclear fuel is the release of volatile radionuclides such as Xe and Kr. The most mature process for removing these radionuclides is energy- and capital-intensive cryogenic distillation. Alternatively, porous materials such as metal-organic frameworks (MOFs) have demonstrated the ability to selectively adsorb Xe and Kr at ambient conditions [3-8]. High-throughput computational screening of large databases of porous materials has identified a calcium-based nanoporous MOF, SBMOF-1, as the most selective for Xe over Kr [9,10]. Here, we affirm this prediction and report that SBMOF-1 exhibits by far the highest Xe adsorption capacity and a remarkable Xe/Kr selectivity under relevant nuclear reprocessing conditions. The exceptional selectivity of SBMOF-1 is attributed to its pore size tailored to Xe and its dense wall of atoms that constructs a binding site with a high affinity for Xe, as evident by single crystal X-ray diffraction and molecular simulation.

  16. Constraining neutron guide optimizations with phase-space considerations

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads; Lefmann, Kim

    2016-09-01

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space will restrict the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter is needed to describe the expected focusing abilities of the guide to be optimized, going from perfectly focusing to no correlation between position and velocity. The second parameter controls neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytical prediction when the guide is assumed to be dominated by multiple scattering events.

  17. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

    An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters. This study expands on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum value of the isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed gas JT system with load temperatures down to 110 K and supply temperatures above room temperature for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the percentage of the heat exchanger that operates in the two-phase range in order to begin the process of selecting a mixture for experimental investigation.
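
    The selection criterion lends itself to a compact max-min search. The sketch below assumes a placeholder property function; in practice ΔhT would come from a mixture property library, and the toy Gaussian terms here only stand in for real enthalpy data.

    ```python
    # Max-min mixture selection: choose the three-component composition whose
    # worst-case (minimum) isothermal enthalpy change over the temperature range
    # is largest. delta_h_T is a toy stand-in for a real property calculation.
    import numpy as np
    from itertools import product

    def delta_h_T(x, T):
        a, b, c = x        # toy composition-weighted response; replace with data
        return 1000.0 * (a * np.exp(-((T - 150) / 60) ** 2)
                         + b * np.exp(-((T - 220) / 60) ** 2)
                         + c * np.exp(-((T - 280) / 60) ** 2))

    def best_mixture(candidates, T_grid):
        return max(candidates, key=lambda x: min(delta_h_T(x, T) for T in T_grid))

    # coarse simplex grid of three-component compositions
    grid = [(a, b, round(1.0 - a - b, 2))
            for a, b in product(np.arange(0.1, 0.9, 0.1), repeat=2) if a + b < 1.0]
    print(best_mixture(grid, np.linspace(110, 300, 20)))
    ```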

  18. Selecting Items for Criterion-Referenced Tests.

    ERIC Educational Resources Information Center

    Mellenbergh, Gideon J.; van der Linden, Wim J.

    1982-01-01

    Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)

  19. G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies

    PubMed Central

    Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara

    2017-01-01

    In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study’s sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20–40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely approximate the power of sequencing the full sample by selecting for sequencing only a small subset of the individuals. PMID:27256766
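
    Subset selection by simulated annealing has a compact generic form. The sketch below is not G-STRATEGY itself (whose objective is the expected association power computed from pedigree and phenotype data); the `score` function here is a caller-supplied stand-in for that objective.

    ```python
    # Simulated annealing over fixed-size subsets: swap one member at a time,
    # always accept improvements, and accept worsenings with a temperature-
    # dependent probability so the search can escape local optima.
    import math, random

    def anneal_subset(candidates, k, score, steps=2000, T0=1.0):
        current = random.sample(candidates, k)
        s_cur = score(current)
        best, s_best = list(current), s_cur
        for i in range(steps):
            T = T0 * (1 - i / steps) + 1e-9            # linear cooling schedule
            trial = list(current)
            trial[random.randrange(k)] = random.choice(
                [c for c in candidates if c not in trial])
            s_new = score(trial)
            if s_new > s_cur or random.random() < math.exp((s_new - s_cur) / T):
                current, s_cur = trial, s_new
                if s_cur > s_best:
                    best, s_best = list(current), s_cur
        return best, s_best

    # toy demo: pick the 5 largest of 100 numbers via their sum
    print(sorted(anneal_subset(list(range(100)), 5, sum)[0]))
    ```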

  20. Portfolio Optimization of Nanomaterial Use in Clean Energy Technologies.

    PubMed

    Moore, Elizabeth A; Babbitt, Callie W; Gaustad, Gabrielle; Moore, Sean T

    2018-04-03

    While engineered nanomaterials (ENMs) are increasingly incorporated in diverse applications, risks of ENM adoption remain difficult to predict and mitigate proactively. Current decision-making tools do not adequately account for ENM uncertainties including varying functional forms, unique environmental behavior, economic costs, unknown supply and demand, and upstream emissions. The complexity of the ENM system necessitates a novel approach: in this study, the adaptation of an investment portfolio optimization model is demonstrated for optimization of ENM use in renewable energy technologies. Where a traditional investment portfolio optimization model maximizes return on investment through optimal selection of stock, ENM portfolio optimization maximizes the performance of energy technology systems by optimizing selective use of ENMs. Cumulative impacts of multiple ENM material portfolios are evaluated in two case studies: organic photovoltaic cells (OPVs) for renewable energy and lithium-ion batteries (LIBs) for electric vehicles. Results indicate ENM adoption is dependent on overall performance and variance of the material, resource use, environmental impact, and economic trade-offs. From a sustainability perspective, improved clean energy applications can help extend product lifespans, reduce fossil energy consumption, and substitute ENMs for scarce incumbent materials.
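
    The analogy to investment portfolio theory can be made concrete with a small quadratic program. All numbers below are illustrative placeholders, not data from the study: `perf` plays the role of expected return (technology performance per ENM) and `cov` the role of risk.

    ```python
    # Mean-variance "portfolio" of ENM use shares: maximize performance minus a
    # risk-aversion penalty on variance, with shares summing to one.
    import numpy as np
    from scipy.optimize import minimize

    perf = np.array([0.9, 0.7, 0.5])           # assumed performance scores
    cov = np.array([[0.10, 0.02, 0.01],
                    [0.02, 0.08, 0.03],
                    [0.01, 0.03, 0.05]])        # assumed uncertainty structure
    risk_aversion = 2.0

    def neg_utility(w):
        return -(perf @ w - risk_aversion * w @ cov @ w)

    res = minimize(neg_utility, x0=np.ones(3) / 3, bounds=[(0, 1)] * 3,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
    print(res.x)                                # optimal ENM allocation
    ```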

  1. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  2. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with these four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
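
    The ensemble-surrogate step translates directly into a few lines with off-the-shelf regressors. This is a hedged sketch using scikit-learn stand-ins (a Gaussian process for KRG and an SVR), equal ensemble weights, and synthetic data; the actual study trained on SEAR simulation runs and combined the two best of four surrogates.

    ```python
    # Build two surrogates of an expensive simulator and average them; the
    # ensemble then serves as a cheap constraint inside a cost-minimizing GA.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.random((40, 5))                          # design variables (synthetic)
    y = X @ np.array([0.3, 0.2, 0.1, 0.25, 0.15])    # stand-in removal rates

    krg = GaussianProcessRegressor().fit(X, y)
    svr = SVR().fit(X, y)

    def ensemble(x):
        x = np.atleast_2d(x)
        return 0.5 * krg.predict(x)[0] + 0.5 * svr.predict(x)[0]

    # an optimizer can now require ensemble(x) >= target removal rate
    print(ensemble(np.full(5, 0.5)))
    ```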

  3. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with these four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  4. Gender approaches to evolutionary multi-objective optimization using pre-selection of criteria

    NASA Astrophysics Data System (ADS)

    Kowalczuk, Zdzisław; Białaszewski, Tomasz

    2018-01-01

    A novel idea for performing evolutionary computations (ECs) to solve highly dimensional multi-objective optimization (MOO) problems is proposed. Following the general idea of evolution, it is proposed that information about gender is used to distinguish between various groups of objectives and identify the (aggregate) nature of optimality of individuals (solutions). This identification is drawn from the fitness of individuals and applied during parental crossover in the processes of evolutionary multi-objective optimization (EMOO). The article introduces the principles of the genetic-gender approach (GGA) and virtual gender approach (VGA), which are not just evolutionary techniques, but constitute a completely new rule (philosophy) for use in solving MOO tasks. The proposed approaches are validated against principal state-of-the-art EMOO algorithms on benchmark problems, in the light of recognized EC performance criteria. The research shows the superiority of the gender approach in terms of effectiveness, reliability, transparency, intelligibility and MOO problem simplification, resulting in the great usefulness and practicability of GGA and VGA. Moreover, an important feature of GGA and VGA is that they alleviate the 'curse' of dimensionality typical of many engineering designs.

  5. Optimization of motion control laws for tether crawler or elevator systems

    NASA Technical Reports Server (NTRS)

    Swenson, Frank R.; Von Tiesenhausen, Georg

    1988-01-01

    Based on the proposal of a motion control law by Lorenzini (1987), a method is developed for optimizing motion control laws for tether crawler or elevator systems in terms of the performance measures of travel time, the smoothness of acceleration and deceleration, and the maximum values of velocity and acceleration. The Lorenzini motion control law, based on powers of the hyperbolic tangent function, is modified by the addition of a constant-velocity section, and this modified function is then optimized by parameter selections to minimize the peak acceleration value for a selected travel time or to minimize travel time for the selected peak values of velocity and acceleration. It is shown that the addition of a constant-velocity segment permits further optimization of the motion control law performance.
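
    The shape of such a profile is easy to reproduce. The sketch below is an assumed tanh-based form, not Lorenzini's exact law: saturating tanh ramps give smooth acceleration and deceleration, and the plateau between them is the added constant-velocity segment whose peak values can then be tuned.

    ```python
    # tanh ramp-up / constant velocity / tanh ramp-down profile, with peak
    # velocity and acceleration read off a time grid for the optimization step.
    import numpy as np

    def velocity_profile(t, v_max, t_ramp, t_total, p=3.0):
        up = np.tanh(p * t / t_ramp)                  # smooth acceleration
        down = np.tanh(p * (t_total - t) / t_ramp)    # smooth deceleration
        return v_max * np.minimum(up, down)           # saturated middle = cruise

    t = np.linspace(0.0, 600.0, 6001)                 # seconds
    v = velocity_profile(t, v_max=10.0, t_ramp=60.0, t_total=600.0)
    a = np.gradient(v, t)
    print(v.max(), np.abs(a).max())                   # peak velocity/acceleration
    ```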

  6. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining.

    PubMed

    Salehi, Mojtaba; Bahreininejad, Ardeshir

    2011-08-01

    Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing, using an intelligent searching strategy. Then, in the detailed planning stage, using a genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation are obtained based on optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the sequence of the operations of the part and of the selection of the machine, cutting tool and TAD for each operation, using intelligent search and a genetic algorithm.

  7. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining

    PubMed Central

    Salehi, Mojtaba

    2010-01-01

    Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing, using an intelligent searching strategy. Then, in the detailed planning stage, using a genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation are obtained based on optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the sequence of the operations of the part and of the selection of the machine, cutting tool and TAD for each operation, using intelligent search and a genetic algorithm. PMID:21845020

  8. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive facilities of the oil and gas complex are extremely pressing. This problem is especially acute for facilities where a loss of DE accuracy will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error detection system according to a validation criterion is solved. Known methods for solving such problems have exponential complexity. Thus, with a view to reducing the time needed to solve the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining them [1].

  9. Deploying response surface methodology (RSM) and glowworm swarm optimization (GSO) in optimizing warpage on a mobile phone cover

    NASA Astrophysics Data System (ADS)

    Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.

    2017-09-01

    Plastic injection moulding is a popular manufacturing method, not only because it is reliable but also because it is efficient and cost saving. It is able to produce plastic parts with detailed features and complex geometry. However, defects arising in the injection moulding process degrade the quality and aesthetics of the moulded product. The most common defect occurring in the process is warpage. Inappropriate process parameter settings of the injection moulding machine are one reason for the occurrence of warpage. The aims of this study were to improve the quality of injection moulded parts by identifying the optimal parameters for minimizing warpage using Response Surface Methodology (RSM) and Glowworm Swarm Optimization (GSO). Subsequent to this, the most significant parameter was identified, and the recommended parameter setting was compared with the parameter settings optimized by RSM and GSO. In this research, a mobile phone case was selected as the case study. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as variables, whereas warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight 2012. In addition, RSM was performed using Design Expert 7.0, whereas GSO was implemented in MATLAB. The warpage in the y-direction recommended by RSM was reduced by 70%, and that recommended by GSO was decreased by 61%. The resulting warpages under the optimal parameter settings from RSM and GSO were validated by simulation in AMI 2012. RSM performed better than GSO in solving the warpage issue.

  10. Impact of Chaos Functions on Modern Swarm Optimizers.

    PubMed

    Emary, E; Zawbaa, Hossam M

    2016-01-01

    Exploration and exploitation are two essential components of any optimization algorithm. Too much exploration leads to oscillation, while too much exploitation causes premature convergence and may leave the optimizer stuck in local minima. Therefore, balancing the rates of exploration and exploitation over the optimization lifetime is a challenge. This study evaluates the impact of using chaos-based control of exploration/exploitation rates against the systematic native control. Three modern algorithms were used in the study, namely grey wolf optimizer (GWO), antlion optimizer (ALO) and moth-flame optimizer (MFO), in the domain of machine learning for feature selection. Results on a set of standard machine learning datasets, using a set of assessment indicators, show improved optimization performance when using variational repeated periods of declining exploration rates rather than systematically decreased exploration rates.
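
    The two schedules being compared are easy to state side by side. The sketch below uses the grey wolf optimizer's control parameter `a` (which conventionally decays linearly from 2 to 0) and a logistic map as the chaos source; the specific map and seed are assumptions.

    ```python
    # Linearly decreasing vs chaos-driven exploration parameter for GWO-style
    # optimizers: the chaotic variant perturbs the decaying envelope with a
    # logistic map at full chaos (r = 4).
    def linear_a(t, T):
        return 2.0 * (1.0 - t / T)

    def chaotic_a(T, x=0.7):
        for t in range(T):
            x = 4.0 * x * (1.0 - x)          # logistic map iterate
            yield 2.0 * x * (1.0 - t / T)    # chaotic value under decaying envelope

    T = 100
    print([round(linear_a(t, T), 3) for t in range(5)])
    print([round(a, 3) for a, _ in zip(chaotic_a(T), range(5))])
    ```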

  11. Development of Advanced Materials for Electro-Ceramic Application Final Report CRADA No. TC-1331-96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caplan, M.; Olstad, R.; McMillan, L.

    The goal of this project was to further develop and characterize the electrochemical methods originating in Russia for producing ultra high purity organometallic compounds utilized as precursors in the production of high quality electro-ceramic materials. Symetrix planned to use electro-ceramic materials with high dielectric constant for microelectronic memory circuit applications. General Atomics planned to use the barium titanate type ceramics with low loss tangent for producing a high power ferroelectric tuner used to match radio frequency power into their DIII-D fusion machine. Phase I of the project was scheduled to have a large number of organometallic (alkoxide) chemical samples produced using various methods. These would be analyzed by LLNL, Soliton and Symetrix independently to determine the level of chemical impurities, thus verifying each other's analysis. The goal was to demonstrate a cost-effective production method, which could be implemented in a large commercial facility to produce high purity organometallic compounds. In addition, various compositions of barium-strontium-titanate ceramics were to be produced and analyzed in order to develop an electroceramic capacitor material having the desired characteristics with respect to dielectric constant, loss tangent, temperature characteristics and non-linear behavior under applied voltage. Upon optimizing the barium titanate material, 50 capacitor preforms would be produced from this material, demonstrating the ability to produce, in quantity, the pills ultimately required for the ferroelectric tuner (approx. 2000-3000 ceramic pills).

  12. SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

    PubMed Central

    Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.

    2015-01-01

    We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to $\min_{b \in \mathbb{R}^p} \frac{1}{2}\lVert y - Xb \rVert_{\ell_2}^2 + \lambda_1 |b|_{(1)} + \lambda_2 |b|_{(2)} + \cdots + \lambda_p |b|_{(p)}$, where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and $|b|_{(1)} \ge |b|_{(2)} \ge \cdots \ge |b|_{(p)}$ are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values $\lambda_{\mathrm{BH}}(i) = z(1 - i \cdot q/(2p))$, where q ∈ (0, 1) and z(α) is the α-quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
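
    The λBH sequence is straightforward to compute; a short worked example follows (the values of p and q are chosen arbitrarily for illustration).

    ```python
    # BH critical values lambda_BH(i) = z(1 - i*q/(2p)) for SLOPE's penalty
    # sequence, with z the standard normal quantile function.
    from scipy.stats import norm

    p, q = 10, 0.1
    lambdas = [norm.ppf(1 - i * q / (2 * p)) for i in range(1, p + 1)]
    print([round(l, 3) for l in lambdas])   # decreasing, from ~2.576 down to ~1.645
    ```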

  13. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    PubMed

    Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task, as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find the minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling, for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression trees, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.

  14. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres—Focus on Feature Selection

    PubMed Central

    Zawbaa, Hossam M.; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task, as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find the minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling, for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression trees, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205

  15. Using Biomechanical Optimization To Interpret Dancers’ Pose Selection For A Partnered Spin

    DTIC Science & Technology

    2009-05-06

    optimized performance of a straight-arm backward longswing on the still rings in men's artistic gymnastics. Because gymnasts lose points for excessive swing at... an actual performance and used that as the basis for their search. Yeadon determined that with timing within 15 ms, gymnasts can minimize their excess... are moving in an optimal way. 2.5 Body Modeling 2.5.1 Building the Body In his study involving gymnasts on the rings, Yeadon developed a body model com

  16. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
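
    The alternative tuner-vector idea can be illustrated compactly. The sketch below is a toy stand-in, not the paper's routine: it forms q tuners as linear combinations of all health parameters by taking right singular vectors of an assumed sensitivity matrix, whereas the paper selects the combination matrix with an iterative search minimizing the theoretical mean-squared estimation error.

    ```python
    # Tuners as linear combinations: with 6 sensors and 10 health parameters the
    # estimation problem is underdetermined, so estimate z = V* h (6 tuners)
    # instead of h itself. V* here comes from an SVD for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.standard_normal((6, 10))       # assumed sensor sensitivity matrix

    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    V_star = Vt[:6]                        # 6 tuner directions in parameter space

    h = rng.standard_normal(10)            # true health parameter excursion
    z = V_star @ h                         # what the Kalman filter would estimate
    print(z.shape)                         # (6,)
    ```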

  17. Optimal configuration of microstructure in ferroelectric materials by stochastic optimization

    NASA Astrophysics Data System (ADS)

    Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.

    2010-07-01

    An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) whose crystallographic axes are imperfectly aligned. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of combinations of variables, known as the solution space, which dictates the texture of a ceramic is unlimited, and hence the choice of the optimal solution that maximizes piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains, characterized by its orientations at each iteration, is generated using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. An apparent enhancement of the piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of results with the published data in single crystals, we proceed to apply the methodology to polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3, is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°. The piezoelectric coefficient in such a ceramic is found to

  18. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    Luby Transform (LT) code is the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions are suggested by Luby. They work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then, the probability distribution over the selected degrees is optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore, the proposed algorithm is designed for the image transmission situation. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with LT codes with the robust soliton distribution, the proposed algorithm obviously improves the final quality of the recovered images with the same overhead.
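
    For reference, the two distributions suggested by Luby have closed forms; a sketch of the robust soliton distribution (the baseline such optimizations are compared against) follows. The constants c and δ are its conventional free parameters.

    ```python
    # Robust soliton degree distribution for k source symbols: ideal soliton
    # rho(d) plus the extra component tau(d), normalized to sum to one.
    import math

    def robust_soliton(k, c=0.1, delta=0.5):
        S = c * math.log(k / delta) * math.sqrt(k)
        rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        tau = [0.0] * (k + 1)
        pivot = int(round(k / S))
        for d in range(1, min(pivot, k + 1)):
            tau[d] = S / (k * d)
        if 1 <= pivot <= k:
            tau[pivot] = S * math.log(S / delta) / k
        Z = sum(rho) + sum(tau)
        return [(rho[d] + tau[d]) / Z for d in range(k + 1)]  # P(degree = d)

    mu = robust_soliton(1000)
    print(round(sum(mu), 6))        # 1.0: a valid degree distribution
    ```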

  19. Beam-energy-spread minimization using cell-timing optimization

    NASA Astrophysics Data System (ADS)

    Rose, C. R.; Ekdahl, C.; Schulze, M.

    2012-04-01

    Beam energy spread, and related beam motion, increase the difficulty in tuning for multipulse radiographic experiments at the dual-axis radiographic hydrodynamic test facility’s axis-II linear induction accelerator (LIA). In this article, we describe an optimization method to reduce the energy spread by adjusting the timing of the cell voltages (both unloaded and loaded), either advancing or retarding, such that the injector voltage and summed cell voltages in the LIA result in a flatter energy profile. We developed a nonlinear optimization routine which accepts as inputs the 74 cell-voltage, injector voltage, and beam current waveforms. It optimizes cell timing per user-selected groups of cells and outputs timing adjustments, one for each of the selected groups. To verify the theory, we acquired and present data for both unloaded and loaded cell-timing optimizations. For the unloaded cells, the preoptimization baseline energy spread was reduced by 34% and 31% for two shots as compared to baseline. For the loaded-cell case, the measured energy spread was reduced by 49% compared to baseline.

  20. Payment mechanism and GP self-selection: capitation versus fee for service.

    PubMed

    Allard, Marie; Jelovac, Izabela; Léger, Pierre-Thomas

    2014-06-01

    This paper analyzes the consequences of allowing gatekeeping general practitioners (GPs) to select their payment mechanism. We model GPs' behavior under the most common payment schemes (capitation and fee for service) and when GPs can select one among them. Our analysis considers GP heterogeneity in terms of both ability and concern for their patients' health. We show that when the costs of wasteful referrals to costly specialized care are relatively high, fee for service payments are optimal to maximize the expected patients' health net of treatment costs. Conversely, when the losses associated with failed referrals of severely ill patients are relatively high, we show that either GPs' self-selection of a payment form or capitation is optimal. Last, we extend our analysis to endogenous effort and to competition among GPs. In both cases, we show that self-selection is never optimal.

  1. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle that guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.

  2. Why 'Optimal' Payment for Healthcare Providers Can Never be Optimal Under Community Rating.

    PubMed

    Zweifel, Peter; Frech, H E

    2016-02-01

    This article extends the existing literature on optimal provider payment by accounting for consumer heterogeneity in preferences for health insurance and healthcare. This heterogeneity breaks down the separation of the relationship between providers and the health insurer and the relationship between consumers and the insurer. Both experimental and market evidence for a high degree of heterogeneity are presented. Given heterogeneity, a uniform policy fails to effectively control moral hazard, while incentives for risk selection created by community rating cannot be neutralized through risk adjustment. Consumer heterogeneity spills over into relationships with providers, such that a uniform contract with providers also cannot be optimal. The decisive condition for ensuring optimality of provider payment is to replace community rating (which violates the principle of marginal cost pricing) with risk rating of contributions combined with subsidization targeted at high risks with low incomes.

  3. Fuel management optimization using genetic algorithms and code independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1994-12-31

    Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation that chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.

  4. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  5. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving the quality of the data collected, with regard to how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Framing pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of equivalent quality to the traditional plan of record (POR) set, but in less time.

  6. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From

  7. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve the edges of the image in segmentation. The shortcomings of Otsu's method, based on gray-level histograms, are analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is approximated by discretizing the integral. An optimal threshold method maximizing the edge energy function is given. Several experimental results are also presented and compared with Otsu's method.
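
    A minimal version of this idea fits in a short function. This is a toy reading of edge-preserving threshold selection, not the authors' exact energy functional: score each candidate threshold by the mean gradient magnitude along the boundary of the resulting binary mask.

    ```python
    # Scan thresholds 1..254 and keep the one whose segmentation boundary sits on
    # the strongest intensity gradients (a discrete stand-in for the line
    # integral of edge energy).
    import numpy as np

    def edge_energy_threshold(img):
        gy, gx = np.gradient(img.astype(float))
        grad = np.hypot(gx, gy)
        best_t, best_e = None, -np.inf
        for t in range(1, 255):
            mask = img > t
            edge_h = mask[:, 1:] != mask[:, :-1]      # vertical boundary pixels
            edge_v = mask[1:, :] != mask[:-1, :]      # horizontal boundary pixels
            vals = np.concatenate([grad[:, 1:][edge_h], grad[1:, :][edge_v]])
            energy = vals.mean() if vals.size else -np.inf
            if energy > best_e:
                best_t, best_e = t, energy
        return best_t

    # synthetic demo: a noisy two-level step image
    rng = np.random.default_rng(0)
    img = np.where(np.arange(32) >= 16, 200, 40)[None, :].repeat(32, axis=0)
    img = (img + rng.integers(0, 20, img.shape)).astype(np.uint8)
    print(edge_energy_threshold(img))                 # lands between the two levels
    ```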

  8. Design Optimization of a Centrifugal Fan with Splitter Blades

    NASA Astrophysics Data System (ADS)

    Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong

    2015-05-01

    Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise, using three-dimensional Reynolds-averaged Navier-Stokes equations and a hybrid multi-objective evolutionary algorithm. Two design variables, defining the location of the splitter and the height ratio between the inlet and outlet of the impeller, were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of the design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
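
    The training-point step mentioned above is available off the shelf in SciPy. A small example (the variable bounds are assumed for illustration):

    ```python
    # Draw 20 Latin hypercube samples for the two design variables and scale
    # them from the unit square to assumed design ranges.
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=1)
    unit = sampler.random(n=20)                  # points in [0, 1)^2
    lo, hi = [0.1, 0.5], [0.9, 1.2]              # splitter location, height ratio
    points = qmc.scale(unit, lo, hi)
    print(points[:3])
    ```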

  9. ToTem: a tool for variant calling pipeline optimization.

    PubMed

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process and offers the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is freely available as a web application at https://totem.software.

  10. Multiobjective immune algorithm with nondominated neighbor-based selection.

    PubMed

    Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng

    2008-01-01

    Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only a minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using the nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis, based on three performance metrics including the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective and that NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study of NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well along the number of objectives.
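
    The crowding-distance values that drive NNIA's proportional cloning are the same quantity popularized by NSGA-II; a minimal sketch for a set of nondominated objective vectors:

      import numpy as np

      def crowding_distance(F):
          """F: (n, m) objective values of nondominated individuals."""
          n, m = F.shape
          d = np.zeros(n)
          for j in range(m):
              order = np.argsort(F[:, j])
              span = F[order[-1], j] - F[order[0], j] or 1.0
              d[order[0]] = d[order[-1]] = np.inf    # keep boundary points
              d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
          return d

      # individuals with larger d sit in less-crowded front regions and, in
      # NNIA, receive proportionally more clones before the heuristic search
      print(crowding_distance(np.array([[0.0, 1.0], [0.4, 0.5], [1.0, 0.0]])))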

  11. Strain Selection and Optimization of Mixed Culture Conditions for Lactobacillus pentosus K1-23 with Antibacterial Activity and Aureobasidium pullulans NRRL 58012 Producing Immune-Enhancing β-Glucan.

    PubMed

    Sekar, Ashokkumar; Kim, Myoungjin; Jeong, Hyeong Chul; Kim, Keun

    2018-05-28

    Lactobacillus pentosus K1-23 was selected from among 25 lactic acid bacterial strains owing to its high inhibitory activity against several pathogenic bacteria, including Escherichia coli, Salmonella typhimurium, S. gallinarum, Staphylococcus aureus, Pseudomonas aeruginosa, Clostridium perfringens, and Listeria monocytogenes. Additionally, among 13 strains of Aureobasidium spp., A. pullulans NRRL 58012 was shown to produce the highest amount of β-glucan (15.45 ± 0.07%) and was selected. Next, the optimal conditions for a solid-phase mixed culture with these two different microorganisms (one bacterium and one yeast) were determined. The optimal inoculum sizes for L. pentosus and A. pullulans were 1% and 5%, respectively. The appropriate inoculation time for L. pentosus K1-23 was 3 days after the inoculation of A. pullulans to initiate fermentation. The addition of 0.5% corn steep powder and 0.1% FeSO₄ to the basal medium resulted in the increased production of lactic acid bacterial cells and β-glucan. The following optimal conditions for solid-phase mixed culture were also statistically determined by using the response surface method: 37.84°C, pH 5.25, moisture content of 60.82%, and culture time of 6.08 days for L. pentosus; and 24.11°C, pH 5.65, moisture content of 60.08%, and culture time of 5.71 days for A. pullulans. Using the predicted optimal conditions, the experimental production values of L. pentosus cells and β-glucan were 3.15 ± 0.10 × 10⁸ CFU/g and 13.41 ± 0.04%, respectively. This mixed culture may function as a highly efficient antibiotic substitute based on the combined action of its anti-pathogenic bacterial and immune-enhancing activities.

  12. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2007-01-01

    Aerospace systems are developed similarly to other large-scale systems, through a series of reviews in which designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than through any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted, and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.

  13. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2008-01-01

    Aerospace systems are developed similarly to other large-scale systems, through a series of reviews in which designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than through any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted, and a sensor selection architecture is proposed that will provide a justifiable, defensible sensor suite to address system health assessment requirements.

  14. Optimization of microelectrophoresis to select highly negatively charged sperm.

    PubMed

    Simon, Luke; Murphy, Kristin; Aston, Kenneth I; Emery, Benjamin R; Hotaling, James M; Carrell, Douglas T

    2016-06-01

    The sperm membrane undergoes extensive surface remodeling as it matures in the epididymis. During this process, the sperm is encapsulated in an extensive glycocalyx layer, which provides the membrane with its characteristic negative electrostatic charge. In this study, we develop a method of microelectrophoresis and standardize the protocol to isolate sperm with a high negative membrane charge. Under an electric field, the percentage of positively charged sperm (PCS), negatively charged sperm (NCS), and neutrally charged sperm was determined for each ejaculate prior to and following density gradient centrifugation (DGC) and evaluated for sperm DNA damage and histone retention. Subsequently, PCS, NCS, and neutrally charged sperm were selected using an ICSI needle and directly analyzed for DNA damage. When raw semen was analyzed using microelectrophoresis, 94% of sperm were NCS. In contrast, DGC completely or partially stripped the negative membrane charge from sperm, resulting in PCS and neutrally charged sperm, and the charged sperm populations increased with increasing electrophoretic current. Following DGC, high sperm DNA damage and abnormal histone retention were inversely correlated with the percentage of NCS and directly correlated with the percentage of PCS. NCS exhibited significantly lower DNA damage when compared with control (P < 0.05) and PCS (P < 0.05). When the charged sperm population was corrected for neutrally charged sperm, sperm DNA damage was strongly associated with NCS at a lower electrophoretic current. The results suggest that selection of NCS at lower current may be an important biomarker to select healthy sperm for assisted reproductive treatment.

  15. Evolution-guided optimization of biosynthetic pathways.

    PubMed

    Raman, Srivatsan; Rogers, Jameson K; Taylor, Noah D; Church, George M

    2014-12-16

    Engineering biosynthetic pathways for chemical production requires extensive optimization of the host cellular metabolic machinery. Because it is challenging to specify a priori an optimal design, metabolic engineers often need to construct and evaluate a large number of variants of the pathway. We report a general strategy that combines targeted genome-wide mutagenesis to generate pathway variants with evolution to enrich for rare high producers. We convert the intracellular presence of the target chemical into a fitness advantage for the cell by using a sensor domain responsive to the chemical to control a reporter gene necessary for survival under selective conditions. Because artificial selection tends to amplify unproductive cheaters, we devised a negative selection scheme to eliminate cheaters while preserving library diversity. This scheme allows us to perform multiple rounds of evolution (addressing ∼10⁹ cells per round) with minimal carryover of cheaters after each round. Based on candidate genes identified by flux balance analysis, we used targeted genome-wide mutagenesis to vary the expression of pathway genes involved in the production of naringenin and glucaric acid. Through up to four rounds of evolution, we increased production of naringenin and glucaric acid by 36- and 22-fold, respectively. Naringenin production (61 mg/L) from glucose was more than double the previous highest titer reported. Whole-genome sequencing of evolved strains revealed additional untargeted mutations that likely benefit production, suggesting new routes for optimization.

  16. Analysis and optimization of hybrid electric vehicle thermal management systems

    NASA Astrophysics Data System (ADS)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

    In this study, the thermal management system of a hybrid electric vehicle is optimized using single- and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined, and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, ¢28 h⁻¹ and 77.3 mPts h⁻¹, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
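
    The LINMAP step that picks one design off the Pareto frontier is easy to sketch: scale each objective to [0, 1], then take the nondominated point closest to the ideal point. The candidate rows below are illustrative values back-computed from the percentages quoted in the abstract, not the paper's actual frontier.

      import numpy as np

      def linmap_select(F, minimize):
          """F: (n, m) objectives; minimize[j] is True if lower is better."""
          G = np.where(minimize, F, -F)            # flip maximized objectives
          G = (G - G.min(axis=0)) / (np.ptp(G, axis=0) + 1e-12)
          return int(np.argmin(np.linalg.norm(G, axis=1)))   # nearest to ideal

      # columns: exergy efficiency (max), cost rate (min), impact rate (min)
      F = np.array([[0.29, 28.0, 77.3],    # baseline
                    [0.33, 26.6, 88.1],    # exergoeconomic optimum (approx.)
                    [0.33, 35.6, 73.4]])   # exergoenvironmental optimum (approx.)
      print("selected design:", linmap_select(F, np.array([False, True, True])))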

  17. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
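
    As a reminder of how little machinery the three operators need, here is a bare-bones genetic algorithm on a toy two-variable continuous design problem; the quadratic fitness function and all rates are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)
      fitness = lambda pop: -np.sum((pop - 0.7) ** 2, axis=1)   # toy objective

      pop = rng.random((30, 2))
      for gen in range(100):
          f = fitness(pop)
          # reproduction: fitness-proportional (roulette-wheel) selection
          w = f - f.min() + 1e-9
          parents = pop[rng.choice(len(pop), size=len(pop), p=w / w.sum())]
          # one-point crossover between consecutive parent pairs
          kids = parents.copy()
          for i in range(0, len(kids) - 1, 2):
              cut = rng.integers(1, kids.shape[1])
              kids[i, cut:], kids[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
          # mutation: occasional small Gaussian perturbation
          mask = rng.random(kids.shape) < 0.05
          kids[mask] += rng.normal(0.0, 0.1, mask.sum())
          pop = np.clip(kids, 0.0, 1.0)
      print("best design:", pop[fitness(pop).argmax()])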

  18. Economic optimization of natural hazard protection - conceptual study of existing approaches

    NASA Astrophysics Data System (ADS)

    Spackova, Olga; Straub, Daniel

    2013-04-01

    Risk-based planning of protection measures against natural hazards has become common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e., expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as the Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ among countries and among hazard types (floods, earthquakes, etc.), and several inconsistencies can be found in the application of the indicators in practice. This is likely to lead to a suboptimal selection of protection strategies. This study provides a general formulation for the optimization of natural hazard protection measures from a socio-economic perspective, assuming that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made at the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in the case of an unlimited budget the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources among the regions arises. To ensure this, minimum values of BCR or MC can be required by the state, which must be achieved in each region. The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their
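
    The indicators themselves are simple discounted sums; a hedged sketch with hypothetical monetary values, where benefits are avoided expected damages:

      def npv(benefits, costs, rate=0.03):
          """Net present value of yearly benefit/cost streams."""
          return sum((b - c) / (1 + rate) ** t
                     for t, (b, c) in enumerate(zip(benefits, costs)))

      def bcr(benefits, costs, rate=0.03):
          """Ratio of discounted benefits to discounted costs."""
          pv_b = sum(b / (1 + rate) ** t for t, b in enumerate(benefits))
          pv_c = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
          return pv_b / pv_c

      horizon = 50
      benefits = [0.0] + [2.0] * horizon    # avoided damage per year, M EUR
      costs = [30.0] + [0.2] * horizon      # up-front investment, then upkeep
      print("NPV %.1f M EUR, BCR %.2f" % (npv(benefits, costs),
                                          bcr(benefits, costs)))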

  19. Relay Selection for Cooperative Relaying in Wireless Energy Harvesting Networks

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei; Li, Songsong; Jiang, Fengjiao; Cao, Lijie

    2018-01-01

    Energy harvesting from the surroundings is a promising solution for providing energy supply and extending the life of wireless sensor networks. Recently, energy harvesting has also been shown to be an attractive means of prolonging the operation of cooperative networks. In this paper, we propose a relay selection scheme to optimize amplify-and-forward (AF) cooperative transmission in wireless energy harvesting cooperative networks. The harvested energy and the channel conditions are both considered in selecting the optimal cooperative relay so as to minimize the outage probability of the system. Simulation results show that our proposed relay selection scheme achieves better outage performance than other strategies.
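
    A minimal sketch of such a rule: among relays whose harvested energy clears a threshold, pick the one with the best bottleneck channel. The min-max gain criterion and all numbers are illustrative stand-ins for the paper's outage-probability formulation.

      def select_relay(relays, e_min=1.0):
          """relays: dicts with harvested 'energy' and channel gains
          'g_sr' (source-relay) and 'g_rd' (relay-destination)."""
          feasible = [r for r in relays if r["energy"] >= e_min]
          return max(feasible, key=lambda r: min(r["g_sr"], r["g_rd"]),
                     default=None)

      relays = [dict(id=1, energy=1.2, g_sr=0.8, g_rd=0.3),
                dict(id=2, energy=2.0, g_sr=0.5, g_rd=0.6),
                dict(id=3, energy=0.4, g_sr=0.9, g_rd=0.9)]  # energy-infeasible
      print(select_relay(relays))   # relay 2: best worst-hop among feasible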

  20. Error Analysis and Selection of Optimal Excitation Parameters for the Sensing of CO2 and O2 from Space for ASCENDS Applications

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S.

    2012-01-01

    Simulation studies to optimize the sensing of CO2 and O2 from space are described. Uncertainties in line-by-line calculations that were unaccounted for in previous studies are identified, and multivariate methods are employed for measurement wavelength selection. The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has stringent accuracy requirements of 0.5% or better in XCO2 retrievals. NASA LaRC and its partners are investigating the use of the 1.57 µm band of CO2 and the 1.26-1.27 µm band of oxygen for XCO2 measurements. As part of these efforts, we are carrying out simulation studies using a lidar modeling framework being developed at NASA LaRC to predict the performance of our proposed ASCENDS mission implementation [1]. Our study is aimed at predicting the sources and magnitudes of errors anticipated in XCO2 retrievals, for further error minimization through the selection of optimum excitation parameters and the development of better retrieval methods.

  1. Urban Rain Gauge Siting Selection Based on Gis-Multicriteria Analysis

    NASA Astrophysics Data System (ADS)

    Fu, Yanli; Jing, Changfeng; Du, Mingyi

    2016-06-01

    With the increasingly rapid growth of urbanization and climate change, urban rainfall monitoring, as well as urban waterlogging, has received wide attention. Since conventional siting selection methods do not take geographic surroundings and spatio-temporal scale into consideration for urban rain gauge site selection, this paper primarily aims at finding appropriate siting selection rules and methods for rain gauges in urban areas. Additionally, to optimize gauge locations, a spatial decision support system (DSS) aided by a geographical information system (GIS) has been developed. In terms of a series of criteria, the rain gauge optimal site-search problem can be addressed by multicriteria decision analysis (MCDA). A series of spatial analytical techniques are required for MCDA to identify the prospective sites. On the GIS platform, spatial kernel density analysis can reflect the population density, and GIS buffer analysis is used to optimize the location with respect to the rain gauge signal transmission characteristics. Experimental results show that the rules and the proposed method are appropriate for rain gauge site selection in urban areas, which is significant for the siting of urban hydrological facilities and infrastructure, such as water gauges.

  2. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    NASA Astrophysics Data System (ADS)

    Quick, J.; Dykes, K.; Graf, P.; Zahle, F.

    2016-09-01

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  3. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery

    PubMed Central

    2011-01-01

    The central nervous system (CNS) is the major area that is affected by aging. Alzheimer’s disease (AD), Parkinson’s disease (PD), brain cancer, and stroke are the CNS diseases that will cost trillions of dollars for their treatment. Achievement of appropriate blood–brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many of the non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges and to aid in the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and the chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and the property modification strategy during the lead optimization process. A list of substructural units that may be useful for CNS drug design was also provided here. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å² (25–60 Å²), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740–970 Å³, (vi) solvent accessible surface area of 460–580 Å², and (vii) positive QikProp parameter CNS. The ranges within parentheses may be used during lead optimization. One violation to this proposed profile may be acceptable. The

  4. Codon optimization underpins generalist parasitism in fungi

    PubMed Central

    Badet, Thomas; Peyraud, Remi; Mbengue, Malick; Navaud, Olivier; Derbyshire, Mark; Oliver, Richard P; Barbacci, Adelin; Raffaele, Sylvain

    2017-01-01

    The range of hosts that parasites can infect is a key determinant of the emergence and spread of disease. Yet, the impact of host range variation on the evolution of parasite genomes remains unknown. Here, we show that codon optimization underlies genome adaptation in broad host range parasites. We found that the longer proteins encoded by broad host range fungi likely increase natural selection on codon optimization in these species. Accordingly, codon optimization correlates with host range across the fungal kingdom. At the species level, biased patterns of synonymous substitutions underpin increased codon optimization in a generalist but not a specialist fungal pathogen. Virulence genes were consistently enriched in highly codon-optimized genes of generalist but not specialist species. We conclude that codon optimization is related to the capacity of parasites to colonize multiple hosts. Our results link genome evolution and translational regulation to the long-term persistence of generalist parasitism. DOI: http://dx.doi.org/10.7554/eLife.22472.001 PMID:28157073

  5. The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot

    NASA Astrophysics Data System (ADS)

    Hwang, Donghyeok; Tahk, Min-Jea

    2018-04-01

    The performance characteristics of the autopilot must provide a fast response to intercept a maneuvering target and reasonable robustness for system stability under the effect of un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is handled via the time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression relating the weight parameters appearing in the quadratic performance index to design parameters such as open-loop crossover frequency, phase margin, damping factor, and time constant. Since not every selection of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to find all sets of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, where the time constant is set as the design objective and the open-loop crossover frequency and phase margin as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
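
    The mechanics behind obtaining "the same gains" from optimal control theory are standard LQR: given quadratic index weights Q and R, the optimal state-feedback gain follows from the continuous algebraic Riccati equation. The double integrator below is a hypothetical stand-in plant, not the missile autopilot model.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],
                    [0.0, 0.0]])            # toy double-integrator plant
      B = np.array([[0.0], [1.0]])
      Q = np.diag([10.0, 1.0])              # index weights: the knobs the paper
      R = np.array([[1.0]])                 # relates to time constant, margins

      P = solve_continuous_are(A, B, Q, R)  # A'P + PA - PB R^-1 B'P + Q = 0
      K = np.linalg.solve(R, B.T @ P)       # optimal feedback u = -K x
      print("optimal gain K =", K)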

  6. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model, after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
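
    The commonly quoted form of this log-posterior (up to an additive constant) is ln p(M|d) = N ln M + ln Γ(M/2) − M ln Γ(1/2) − ln Γ(N + M/2) + Σ_k ln Γ(n_k + 1/2) for M bins, N points and bin counts n_k; a short scan over M maximizes it. Treat the exact expression as an assumption to verify against the optBINS source.

      import numpy as np
      from scipy.special import gammaln

      def optbins(data, max_m=100):
          """Return the bin count M maximizing the relative log posterior."""
          N = len(data)
          best_m, best_lp = 1, -np.inf
          for m in range(1, max_m + 1):
              nk, _ = np.histogram(data, bins=m)
              lp = (N * np.log(m) + gammaln(m / 2) - m * gammaln(0.5)
                    - gammaln(N + m / 2) + gammaln(nk + 0.5).sum())
              if lp > best_lp:
                  best_m, best_lp = m, lp
          return best_m

      print(optbins(np.random.default_rng(0).normal(size=1000)))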

  7. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    USDA-ARS?s Scientific Manuscript database

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  8. An optimized design to reduce eddy current sensitivity in velocity-selective arterial spin labeling using symmetric BIR-8 pulses.

    PubMed

    Guo, Jia; Meakin, James A; Jezzard, Peter; Wong, Eric C

    2015-03-01

    Velocity-selective arterial spin labeling (VSASL) tags arterial blood on a velocity-selective (VS) basis and eliminates the tagging/imaging gap and associated transit delay sensitivity observed in other ASL tagging methods. However, the flow-weighting gradient pulses in VS tag preparation can generate eddy currents (ECs), which may erroneously tag the static tissue and create artificial perfusion signal, compromising the accuracy of perfusion quantification. A novel VS preparation design is presented using an eight-segment B1-insensitive rotation with symmetric radio frequency and gradient layouts (sym-BIR-8), combined with delays after gradient pulses, to optimally reduce ECs over a wide range of time constants while maintaining B0 and B1 insensitivity. Bloch simulation, phantom, and in vivo experiments were carried out to determine the robustness of the new and existing pulse designs to ECs and to B0 and B1 inhomogeneity. VSASL with reduced EC sensitivity across a wide range of EC time constants was achieved with the proposed sym-BIR-8 design, and the accuracy of cerebral blood flow measurement was improved. The sym-BIR-8 design performed the most robustly among the existing VS tagging designs, and should benefit studies using VS preparation with improved accuracy and reliability.

  9. A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks.

    PubMed

    Zhang, Yu; Zhang, Bing; Zhang, Shi

    2017-06-02

    Network lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constraints specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs, by formulating and solving an optimization problem in which the relay selection of each node acts as the optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes into account not only the energy consumption rate but also the energy difference among sensor nodes to improve network lifetime. Since the problem is Non-deterministic Polynomial-hard (NP-hard) and intractable, a heuristic solution is designed to solve the optimization rapidly. The simulation results indicate that the proposed relay selection scheme outperforms existing algorithms in network lifetime and that the heuristic solution has low time complexity with only a negligible performance gap from the optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion.

  10. Selection of sampling rate for digital control of aircrafts

    NASA Technical Reports Server (NTRS)

    Katz, P.; Powell, J. D.

    1974-01-01

    The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness of the response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.

  11. Optimisation of strain selection in evolutionary continuous culture

    NASA Astrophysics Data System (ADS)

    Bayen, T.; Mairet, F.

    2017-12-01

    In this work, we study a minimal-time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's Principle, we derive optimality conditions on the optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near-optimal strategy.

  12. Five-Junction Solar Cell Optimization Using Silvaco Atlas

    DTIC Science & Technology

    2017-09-01

    ... experimental sources [1], [4], [6]. Numerical Method: The method selected for solving the non-linear equations that make up the simulation can be ... and maximize efficiency. Optimization of solar cell efficiency is carried out via a nearly orthogonal balanced design of experiments methodology. Silvaco ATLAS is utilized to ...

  13. Improving Minimally Invasive Adrenalectomy: Selection of Optimal Approach and Comparison of Outcomes.

    PubMed

    Lairmore, Terry C; Folek, Jessica; Govednik, Cara M; Snyder, Samuel K

    2016-07-01

    Minimally invasive adrenalectomy is commonly performed by either a transperitoneal laparoscopic (TLA) or posterior retroperitoneoscopic (PRA) approach. Our group described the technique for robot-assisted PRA (RAPRA) in 2010. Few studies are available that directly compare outcomes between the available operative approaches. We reviewed our results for minimally invasive adrenalectomy using the three different approaches over a 10-year period. Between January 2005 and April 2015, 160 minimally invasive adrenalectomies were performed. Clinicopathologic data were prospectively collected and retrospectively analyzed. The primary endpoints evaluated were operative time, blood loss, length of stay (LOS), and morbidity. The study included 67 TLA, 76 PRA, and 17 RAPRA procedures. Tumor size for PRA/RAPRA patients was smaller than for patients undergoing TLA (2.38 vs 3.6 cm, p ≤ 0.0001). Procedure time was shorter for PRA versus TLA (133.3 vs 152.8 min, p = 0.0381), as was LOS (1.85 vs 2.82 days, p = 0.0145). Procedure time was longer for RAPRA versus TLA/PRA (177 vs 153/133 min, p = 0.008), but LOS was significantly decreased (1.53 vs 2.82/1.85 days, p = 0.004). Minimally invasive adrenalectomy is associated with excellent expected outcomes regardless of approach. In our series, the posterior approach is associated with decreased operative time and LOS. Robotic technology provides potential advantages for the surgeon at the expense of more complex setup requirements and costs. Further study is required to demonstrate a clear benefit of one surgical approach. Utilization of the entire spectrum of available operative techniques allows selection of the optimal approach based on individual patient factors.

  14. Selecting Pixels for Kepler Downlink

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Jenkins, Jon M.; Klaus, Todd C.; Cote, Miles T.; Quintana, Elisa V.; Hall, Jennifer R.; Ibrahim, Khadeejah; Chandrasekaran, Hema; Caldwell, Douglas A.; Van Cleve, Jeffrey E.

    2010-01-01

    The Kepler mission monitors >100,000 stellar targets using 42 CCDs of 2200 × 1024 pixels each. Bandwidth constraints prevent the downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, the background, and the signal-to-noise of each pixel, and are optimized to maximize the signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently capture selected pixels, and aperture assignment to a target. Diagnostic apertures, short-cadence targets and custom specified shapes are discussed.

  15. A probabilistic and multi-objective analysis of lexicase selection and ε-lexicase selection.

    PubMed

    Cava, William La; Helmuth, Thomas; Spector, Lee; Moore, Jason H

    2018-05-10

    Lexicase selection is a parent selection method that considers training cases individually, rather than in aggregate, when performing parent selection. Whereas previous work has demonstrated the ability of lexicase selection to solve difficult problems in program synthesis and symbolic regression, the central goal of this paper is to develop the theoretical underpinnings that explain its performance. To this end, we derive an analytical formula that gives the expected probabilities of selection under lexicase selection, given a population and its behavior. In addition, we expand upon the relation of lexicase selection to many-objective optimization methods to describe its behavior, which is to select individuals on the boundaries of Pareto fronts in high-dimensional space. We show analytically why lexicase selection performs more poorly for certain sizes of population and training cases, and why it has been observed to perform more poorly in continuous error spaces. To address this last concern, we propose new variants of ε-lexicase selection, a method that modifies the pass condition in lexicase selection to allow near-elite individuals to pass cases, thereby improving selection performance with continuous errors. We show that ε-lexicase outperforms several diversity-maintenance strategies on a number of real-world and synthetic regression problems.
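
    The core loop is compact enough to sketch. Below, a fixed eps relaxes the per-case pass condition; the paper's ε-lexicase variants instead derive eps from population statistics (e.g., the median absolute deviation), so the constant here is a simplification.

      import random

      def lexicase(errors, eps=0.0, rng=random):
          """errors[i][t]: error of individual i on training case t (lower
          is better). Returns the index of the selected parent."""
          pool = list(range(len(errors)))
          cases = list(range(len(errors[0])))
          rng.shuffle(cases)                       # random case ordering
          for t in cases:
              best = min(errors[i][t] for i in pool)
              pool = [i for i in pool if errors[i][t] <= best + eps]
              if len(pool) == 1:
                  break
          return rng.choice(pool)

      errors = [[0.0, 3.0, 1.0], [1.0, 0.0, 1.0], [0.1, 0.2, 0.9]]
      print("selected parent:", lexicase(errors, eps=0.1))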

  16. A scenario optimization model for dynamic reserve site selection

    Treesearch

    Stephanie A. Snyder; Robert G. Haight; Charles S. ReVelle

    2004-01-01

    Conservation planners are called upon to make choices and trade-offs about the preservation of natural areas for the protection of species in the face of development pressures. We addressed the problem of selecting sites for protection over time with the objective of maximizing species representation, with uncertainty about future site development, and with periodic...

  17. The Cord Blood Apgar: a novel scoring system to optimize selection of banked cord blood grafts for transplantation

    PubMed Central

    Page, Kristin M.; Zhang, Lijun; Mendizabal, Adam; Wease, Stephen; Carter, Shelly; Shoulars, Kevin; Gentry, Tracy; Balber, Andrew E.; Kurtzberg, Joanne

    2012-01-01

    BACKGROUND Engraftment failure and delays, likely due to diminished cord blood unit (CBU) potency, remain major barriers to the overall success of unrelated umbilical cord blood transplantation (UCBT). To address this problem, we developed and retrospectively validated a novel scoring system, the Cord Blood Apgar (CBA), which is predictive of engraftment after UCBT. STUDY DESIGN AND METHODS In a single-center retrospective study, utilizing a database of 435 consecutive single-cord myeloablative UCBTs performed between January 1, 2000, and December 31, 2008, precryopreservation and postthaw graft variables (total nucleated cell, CD34+, colony-forming units, mononuclear cell content, and volume) were initially correlated with neutrophil engraftment. Subsequently, based on the magnitude of hazard ratios (HRs) in univariate analysis, a weighted scoring system to predict CBU potency was developed using a randomly selected training data set and internally validated on the remaining data set. RESULTS The CBA assigns transplanted CBUs three scores: a precryopreservation score (PCS), a postthaw score (PTS), and a composite score (CS), which incorporates the PCS and PTS values. CBA-PCS scores, which could be used for initial unit selection, were predictive of neutrophil engraftment (CBA-PCS ≥ 7.75 vs. <7.75, HR 3.5; p < 0.0001). Likewise, CBA-PTS and CS scores were strongly predictive of Day 42 neutrophil engraftment (CBA-PTS ≥ 9.5 vs. <9.5, HR 3.16, p < 0.0001; CBA-CS ≥ 17.75 vs. <17.75, HR 4.01, p < 0.0001). CONCLUSION The CBA is strongly predictive of engraftment after UCBT and shows promise for optimizing screening of CBU donors for transplantation. In the future, a segment could be assayed for the PTS score, providing data to apply the CS for final CBU selection. PMID:21810098

  18. A parametric model and estimation techniques for the inharmonicity and tuning of the piano.

    PubMed

    Rigaud, François; David, Bertrand; Daudet, Laurent

    2013-05-01

    Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. It is proposed in this paper to jointly model the inharmonicity and tuning of pianos over the whole compass. While using a small number of parameters, these models are able to reflect both the specificities of instrument design and the tuner's practice. An estimation algorithm is derived that can run on either a set of isolated note recordings or on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting some tuners' choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers.
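
    Such whole-compass models build on the standard single-string relation f_n = n·f0·sqrt(1 + B·n²), where B is the inharmonicity coefficient; the parametric models in the paper describe how f0 and B vary across keys. A small sketch, with illustrative values for a mid-range string:

      import math

      def partial_freq(f0, n, B):
          """Frequency of the n-th partial of a stiff string with
          fundamental f0 and inharmonicity coefficient B."""
          return n * f0 * math.sqrt(1 + B * n * n)

      f0, B = 220.0, 4e-4           # hypothetical mid-range string values
      for n in (1, 2, 3, 4):
          print(n, round(partial_freq(f0, n, B), 2))
      # Octave stretching: the upper note's 1st partial is tuned to the lower
      # note's sharpened 2nd partial, so tuned octaves come out slightly wide.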

  19. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros

    Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes, composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes, is evaluated, utilizing ten artificially warped ILD follow-up volumes originating from ten clinical volumetric CT scans of ILD-affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine; and one nonrigid: third-order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, the evaluation methodology is based on the distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD-affected regions. Statistical analysis was performed in order to select near-optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near-optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors of 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in the case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in

  20. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all classes of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1%, with a kappa coefficient of 0.813. The new information extraction method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and has good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions, and thus used to seek the optimal observation scale of damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.