Science.gov

Sample records for optimal tuner selection

  1. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
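The selection loop described above can be sketched as a brute-force search over candidate tuner subsets, scoring each by the theoretical steady-state estimation error obtained from the discrete algebraic Riccati equation. This is an illustrative toy (random model matrices, invented noise levels, subset-style tuners), not the patented routine:

```python
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy underdetermined setup: more health parameters than sensors.
rng = np.random.default_rng(0)
n_health, n_sensors, n_tuners = 5, 3, 3
A = 0.9 * np.eye(n_health)                # slow health-parameter drift
C_full = rng.standard_normal((n_sensors, n_health))
Q = 1e-4 * np.eye(n_health)               # process noise covariance
R = 1e-2 * np.eye(n_sensors)              # sensor noise covariance

def steady_state_mse(subset):
    """Trace of the steady-state Kalman error covariance when only
    `subset` of the health parameters is estimated."""
    idx = list(subset)
    A_s = A[np.ix_(idx, idx)]
    C_s = C_full[:, idx]
    Q_s = Q[np.ix_(idx, idx)]
    # Filtering Riccati equation via duality: pass (A', C', Q, R).
    P = solve_discrete_are(A_s.T, C_s.T, Q_s, R)
    return np.trace(P)

best = min(itertools.combinations(range(n_health), n_tuners),
           key=steady_state_mse)
print("best tuner subset:", best)
```

The patented approach refines this idea, searching iteratively and minimizing the error in the specific parameters of interest rather than the whole estimated state.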

  2. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  3. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  4. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
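The under-determination is easy to see with a rank check: with fewer sensors than health parameters, the sensor-to-health sensitivity matrix cannot have full column rank, so no estimator can uniquely recover all health parameters from the measurements (dimensions here are illustrative):

```python
import numpy as np

# 3 sensors, 5 health parameters: y = H @ h is underdetermined,
# because rank(H) can be at most 3, never 5.
rng = np.random.default_rng(1)
H = rng.standard_normal((3, 5))        # sensors x health parameters
rank = np.linalg.matrix_rank(H)
print(rank)
```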

  5. Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay

    2012-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.
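At its core, the MBEC idea is closing the loop on an estimated rather than a measured quantity. A minimal scalar sketch (invented dynamics and gains, not CMAPSS40k): integral control drives the on-board model's thrust estimate to the setpoint.

```python
# Scalar engine surrogate: x+ = a*x + b*u, with unmeasured thrust = c*x.
# The controller never sees thrust directly; it acts on the on-board
# model's estimate of it (here the model is exact for simplicity).
a, b = 0.95, 0.5
c_thrust = 2.0
setpoint = 10.0           # desired thrust
ki = 0.1                  # integral gain on estimated-thrust error

x, u, integ = 0.0, 0.0, 0.0
for _ in range(1000):
    x = a * x + b * u
    thrust_est = c_thrust * x          # on-board model estimate
    integ += setpoint - thrust_est     # integral action
    u = ki * integ

print(round(c_thrust * x, 3))          # converges to the 10.0 setpoint
```

Integral action guarantees zero steady-state error on the estimated thrust; in the real MBEC scheme the estimate itself is kept honest by the OTKF as the engine degrades.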

  6. Fast ferrite tuner for the BNL synchrotron light source

    SciTech Connect

    Pivit, E.; Hanna, S.M.; Keane, J.

    1991-01-01

    A new type of ferrite tuner has been tested at BNL. The ferrite tuner uses garnet slabs partially filling a stripline. One of the important features of the tuner is that the ferrite is perpendicularly biased for operation above FMR, thus reducing the magnetic losses. A unique design was adopted to achieve efficient cooling. The principle of operation of the tuner as well as our preliminary results on tuning a 52 MHz cavity are reported. Optimized conditions under which we demonstrated linear tunability of 80 kHz are described. The tuner's losses and its effect on higher-order modes in the cavity are discussed.

  7. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.

  8. Circular piezoelectric bender laser tuners

    NASA Technical Reports Server (NTRS)

    Mcelroy, J. H.; Thompson, P. E.; Walker, H. E.; Johnson, E. H.; Radecki, D. J.; Reynolds, R. S.

    1972-01-01

    A circular piezoelectric bender laser tuner, intended to replace conventional laser tuners when mirror diameters up to 0.50 inch are sufficient, is described. The circular piezoelectric bender laser tuner offers much higher displacement per applied volt and permits laser control circuits to be fabricated using standard operational amplifiers, rather than the expensive high-voltage amplifiers required by conventional tuners. The cost of the device is more than one order of magnitude lower than conventional tuners, and the device is very rugged, with all mechanical resonances easily designed to be greater than 3 kHz. In addition to its use as a laser frequency tuner, the circular bender tuner should find many applications in interferometers and similar devices.

  9. Coaxial stub tuner

    NASA Technical Reports Server (NTRS)

    Chern, Shy-Shiun (Inventor)

    1981-01-01

    A coaxial stub tuner assembly is comprised of a short circuit branch diametrically opposite an open circuit branch. The stub of the short circuit branch is tubular, and the stub of the open circuit branch is a rod which extends through the tubular stub into the open circuit branch. The rod is threaded at least at its outer end, and the tubular stub is internally threaded to receive the threads of the rod. The open circuit branch can be easily tuned by turning the threaded rod in the tubular stub to adjust the length of the rod extending into the open circuit branch.

  10. Selective Optimization

    DTIC Science & Technology

    2015-07-06

    When solved with commercial optimization solvers, they typically exhibit extremely poor performance. We develop a variety of effective model and algorithm enhancement techniques for this class of problems, and develop strengthened formulations and algorithmic techniques which perform significantly better than standard MIP.

  11. Test of a coaxial blade tuner at HTS FNAL

    SciTech Connect

    Pischalnikov, Y.; Barbanotti, S.; Harms, E.; Hocker, A.; Khabiboulline, T.; Schappert, W.; Bosotti, A.; Pagani, C.; Paparella, R. (LASA, Segrate)

    2011-03-01

    A coaxial blade tuner has been selected for the 1.3 GHz SRF cavities of the Fermilab SRF Accelerator Test Facility. Results from tuner cold tests in the Fermilab Horizontal Test Stand are presented. Fermilab is constructing the SRF Accelerator Test Facility, a facility for accelerator physics research and development. This facility will contain a total of six cryomodules, each containing eight 1.3 GHz nine-cell elliptical cavities. Each cavity will be equipped with a Slim Blade Tuner designed by INFN Milan. The blade tuner incorporates both a stepper motor and piezo actuators to allow for both slow and fast cavity tuning. The stepper motor allows the cavity frequency to be statically tuned over a range of 500 kHz with an accuracy of several Hz. The piezos provide up to 2 kHz of dynamic tuning for compensation of Lorentz force detuning and variations in the He bath pressure. The first eight blade tuners were built at INFN Milan, but the remainder are being manufactured commercially following the INFN design. To date, more than 40 of the commercial tuners have been delivered.

  12. Electromagnetic SCRF Cavity Tuner

    SciTech Connect

    Kashikhin, V.; Borissov, E.; Foster, G.W.; Makulski, A.; Pischalnikov, Y.; Khabiboulline, T. (Fermilab)

    2009-05-01

    A novel prototype of an SCRF cavity tuner is being designed and tested at Fermilab. This is a superconducting C-type iron-dominated magnet having a 10 mm gap, axial symmetry, and a 1 tesla field. Inside the gap is mounted a superconducting coil capable of moving ±1 mm and producing a longitudinal force up to ±1.5 kN. The static force applied to the RF cavity flanges provides long-term cavity geometry tuning to a nominal frequency. The same coil, powered by a fast AC current pulse, delivers mechanical perturbation for fast cavity tuning. This fast mechanical perturbation could be used to compensate dynamic RF cavity detuning caused by cavity Lorentz forces and microphonics. A special configuration of the magnet system was designed and tested.
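A quick F = B·I·L sanity check on the quoted numbers (only the 1 T field and 1.5 kN force come from the abstract; the turn count and mean turn length below are assumed for illustration):

```python
# Force on a current-carrying coil in the gap field: F = B * I * L_total,
# where L_total is the total conductor length in the field.
B = 1.0          # tesla, field in the magnet gap (from the abstract)
F = 1.5e3        # newton, required longitudinal force (from the abstract)

IL_total = F / B                 # required ampere-metres of conductor
# e.g. a hypothetical 100-turn coil with 0.5 m mean turn length:
I = IL_total / (100 * 0.5)       # current per turn, amperes
print(IL_total, I)
```

With these assumed coil dimensions the quoted force needs only tens of amperes, which is comfortably within superconducting-coil territory.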

  13. LEB tuner made out of titanium alloy

    SciTech Connect

    Goren, Y.; Campbell, B.

    1991-09-01

    A proposed design of a closed shell tuner for the LEB cavity is presented. The tuner is made out of Ti alloy, which has a high electrical resistivity as well as very good mechanical strength. Using this alloy results in a substantial reduction in eddy current heating as well as allowing for faster frequency control.

  14. Inductive tuners for microwave driven discharge lamps

    DOEpatents

    Simpson, James E.

    1999-01-01

    An RF powered electrodeless lamp utilizing an inductive tuner in the waveguide which couples the RF power to the lamp cavity, for reducing reflected RF power and causing the lamp to operate efficiently.

  15. ANT tuner retrofit for LEB cavity

    SciTech Connect

    Walling, L.; Goren, Y.; Kwiatkowski, S.

    1994-03-01

    This report describes a ferrite tuner design for the LEB cavity that uses techniques for bonding ferrite to metallic cooling plates established in the high-power rf and microwave industry. A test tuner was designed to fit into the existing LEB-built magnet and onto the Grimm LEB cavity. It will require a new vacuum window to attain maximal tuning range and high voltage capability, and a new center conductor of longer length with a different vacuum window connection than the Grimm center conductor. However, the new center conductor will be essentially identical to the Grimm center conductor in its basic construction and in the way it connects to the stand for support. The tuner is mechanically very similar to high-power stacked circulators built by ANT of Germany and was designed according to ANT's established engineering and design criteria and SSC LEB tuning and power requirements. The tuner design incorporates thin tiles of ferrite glued, using a high-radiation-resistance epoxy, to copper-plated stainless steel cooling plates of thickness 6.5 mm with water cooling channels inside the plates. The cooling plates constitute 16 pie-shaped segments arranged in a disk. They are electrically isolated from each other to suppress eddy currents. Five of these disks are arranged in parallel with high-pressure rf contacts between the plates at the outer radius. The end walls are slotted copper-plated stainless steel of thickness 3 mm.

  16. Mechanical design upgrade of the APS storage ring rf cavity tuner

    SciTech Connect

    Jones, J.; Bromberek, D.; Kang, Y.

    1997-08-01

    The Advanced Photon Source (APS) storage ring (SR) rf system employs four banks of four spherical, single-cell resonant cavities. Each cavity is tuned by varying the cavity volume through insertion/retraction of a copper piston located at the circumference of the cavity and oriented perpendicular to the accelerator beam. During the commissioning of the APS SR, the tuners and cavity tuner ports were prone to extensive arcing and overheating. The existing tuners were modified to eliminate the problems, and two new, redesigned tuners were installed. In both cases marked improvements were obtained in the tuner mechanical performance. As measured by tuner piston and flange surface temperatures, tuner heating has been reduced by a factor of five in the new version. Redesign considerations discussed include tuner piston-to-housing alignment, tuner piston and housing materials and cooling configurations, and tuner piston sliding electrical contacts. The tuner redesign is also distinguished by a modular, more maintainable assembly.

  17. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source

    SciTech Connect

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-15

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). The first stage of this source can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I_FC by the mobile plate tuner. The I_FC is affected by the position of the mobile plate tuner in the chamber, which acts like a circular cavity resonator. We aim to clarify the relation between the I_FC and the ion saturation current in the ECRIS against the position of the mobile plate tuner. We obtained the result that the variation of the plasma density contributes largely to the variation of the I_FC when we change the position of the mobile plate tuner.
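The "circular cavity resonator" picture can be made concrete: moving the plate changes the effective cavity length L, sweeping TE-mode resonances across the fixed 2.45 GHz heating frequency. The dimensions below are assumed for illustration, not the actual source geometry:

```python
import numpy as np

# TE11p resonance of a circular cavity of radius a and length L:
# f = (c / 2*pi) * sqrt((x'_11 / a)^2 + (p*pi / L)^2)
c = 2.998e8       # m/s
a = 0.05          # cavity radius, m (assumed)
x11p = 1.841      # first root of J1', giving the TE11 cutoff

def f_TE11p(L, p=1):
    """Resonant frequency (Hz) of the TE11p mode for cavity length L (m)."""
    return (c / (2 * np.pi)) * np.sqrt((x11p / a)**2 + (p * np.pi / L)**2)

for L in (0.08, 0.10, 0.12):
    print(f"L = {L:.2f} m -> f = {f_TE11p(L)/1e9:.3f} GHz")
```

Even a few centimetres of plate travel moves the resonance by hundreds of megahertz, which is why the beam current is so sensitive to plate position.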

  18. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source.

    PubMed

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-01

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). The first stage of this source can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I(FC) by the mobile plate tuner. The I(FC) is affected by the position of the mobile plate tuner in the chamber, which acts like a circular cavity resonator. We aim to clarify the relation between the I(FC) and the ion saturation current in the ECRIS against the position of the mobile plate tuner. We obtained the result that the variation of the plasma density contributes largely to the variation of the I(FC) when we change the position of the mobile plate tuner.

  19. Laser tuners using circular piezoelectric benders

    NASA Technical Reports Server (NTRS)

    Mcelroy, J. H.; Thompson, P. E.; Walker, H. E.; Johnson, E. H.; Radecki, D. J.; Reynolds, R. S.

    1975-01-01

    The paper presents the results of an experimental evaluation of a new type of piezoelectric ceramic device designed for use as a laser mirror tuner. Thin plates made from various materials were assembled into a circular bimorph configuration and tested for linearity of movement, maximum travel, and resonant frequency for varying conditions of clamping torque and mirror loading values. Most of the devices tested could accept mirror diameters up to approximately 1.3 cm and maintain a resonant frequency above 2 kHz. Typical mirror translation without measurable tilt was plus or minus 20 micrometers or greater for applied voltages of less than plus or minus 300 V.

  1. Fast Tuner R&D for RIA

    SciTech Connect

    Rusnak, B; Shen, S

    2003-08-19

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Where most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. System studies indicate that for the low-β driver linac alone, running the cavities with no fast tuner may cost at least 50% more than an RF system employing a voltage-controlled reactance (VCX) or other type of fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab, is presented.

  2. Feedback controlled hybrid fast ferrite tuners

    SciTech Connect

    Remsen, D.B.; Phelps, D.A.; deGrassie, J.S.; Cary, W.P.; Pinsker, R.I.; Moeller, C.P.; Arnold, W.; Martin, S.; Pivit, E.

    1993-09-01

    A low power ANT-Bosch fast ferrite tuner (FFT) was successfully tested into (1) the lumped circuit equivalent of an antenna strap with dynamic plasma loading, and (2) a plasma loaded antenna strap in DIII-D. When the FFT accessible mismatch range was phase-shifted to encompass the plasma-induced variation in reflection coefficient, the 50 Ω source was matched (to within the desired 1.4:1 voltage standing wave ratio). The time required to achieve this match (i.e., the response time) was typically a few hundred milliseconds, mostly due to a relatively slow network analyzer-computer system. The response time for the active components of the FFT was 10 to 20 msec, or much faster than the present state-of-the-art for dynamic stub tuners. Future FFT tests are planned that will utilize the DIII-D computer (capable of submillisecond feedback control), as well as several upgrades to the active control circuit, to produce an FFT feedback control system with a response time approaching 1 msec.
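For reference, the 1.4:1 VSWR target in the abstract corresponds to a reflection-coefficient magnitude of about 0.17; the standard conversions are:

```python
# VSWR <-> reflection-coefficient magnitude, as used in antenna matching.
def vswr(gamma):
    """Voltage standing wave ratio from |reflection coefficient|."""
    return (1 + abs(gamma)) / (1 - abs(gamma))

def gamma_from_vswr(s):
    """|Reflection coefficient| from VSWR s (s >= 1)."""
    return (s - 1) / (s + 1)

print(round(gamma_from_vswr(1.4), 3))   # the 1.4:1 matching target
```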

  3. Characterization of CNRS Fizeau wedge laser tuner

    NASA Astrophysics Data System (ADS)

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.
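The dc-level-discriminator-plus-counter scheme described above can be sketched on synthetic photodiode-array data: threshold the scan, then take the midpoint of the above-threshold run as the fringe center. The fringe shape and position here are invented for illustration:

```python
import numpy as np

# Synthetic 512-pixel scan with one Gaussian fringe, standing in for the
# Reticon RL-512 G photodiode array output.
pixels = np.arange(512)
scan = np.exp(-0.5 * ((pixels - 317.0) / 6.0)**2)

# DC level discriminator: flag samples above a fixed threshold.
above = np.flatnonzero(scan > 0.5)

# Counter circuit analogue: fringe center = midpoint of the flagged run.
center = (above[0] + above[-1]) // 2
print(center)
```

The hardware does the same thing with a comparator and a counter at scan rate; the midpoint-of-crossings estimate is insensitive to the fringe's absolute intensity as long as it clears the threshold.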

  4. Characterization of CNRS Fizeau wedge laser tuner

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  5. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2011-03-01

    Analysis and modeling are presented for a fast microwave tuner to operate at 700 MHz which incorporates ferroelectric elements whose dielectric permittivity can be rapidly altered by application of an external voltage. This tuner could be used to correct unavoidable fluctuations in the resonant frequency of superconducting cavities in accelerator structures, thereby greatly reducing the RF power needed to drive the cavities. A planar test version of the tuner has been tested at low levels of RF power, but at 1300 MHz to minimize the physical size of the test structure. This test version comprises one-third of the final version. The tests show performance in good agreement with simulations, but with losses in the ferroelectric elements that are too large for practical use, and with issues in bonding of ferroelectric elements to the metal walls of the tuner structure.

  6. Fast Ferroelectric L-Band Tuner for ILC Cavities

    SciTech Connect

    Hirshfield, Jay L

    2010-03-15

    Design, analysis, and low-power tests are described on a 1.3 GHz ferroelectric tuner that could find application in the International Linear Collider or in Project X at Fermi National Accelerator Laboratory. The tuner configuration utilizes a three-deck sandwich imbedded in a WR-650 waveguide, in which ferroelectric bars are clamped between conducting plates that allow the tuning bias voltage to be applied. Use of a reduced one-third structure allowed tests of critical parameters of the configuration, including phase shift, loss, and switching speed. Issues that were revealed that require improvement include reducing loss tangent in the ferroelectric material, development of a reliable means of brazing ferroelectric elements to copper parts of the tuner, and simplification of the mechanical design of the configuration.

  7. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-07-03

    Design, analysis, and low-power tests are described on a ferroelectric tuner concept that could be used for controlling external coupling to RF cavities for the superconducting Energy Recovery Linac (ERL) in the electron cooler of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner configuration utilizes several small donut-shaped ferroelectric assemblies, which allow the design to be simpler and more flexible, as compared to previous designs. Design parameters for 704 and 1300 MHz versions of the tuner are given. Simulation results point to efficient performance that could reduce by a factor-of-ten the RF power levels required for driving superconducting cavities in the BNL ERL.

  8. A Fast Double Mode Tuner for Antenna Matching

    NASA Astrophysics Data System (ADS)

    Martin, S.; Arnold, W.; Pivit, E.

    1992-01-01

    To match a microwave transmitter to different loading conditions, two networks are presented. Based on a Fast Ferrite Tuner (FFT) a Double Stub Tuner (DST) is developed. The principle of operation of an FFT, which consists of a stripline partially filled with microwave ferrites, may be described by tunable reactances. The DST consists of two FFTs connected by a transformation line. An advanced development is the Double Mode Tuner (DMT). It uses two different modes, which can be tuned independently by varying the DC magnetic field. Thus series and parallel reactances are introduced into a coaxial system and a T-matching network results. Both matching networks can be tuned within 20 ms and handle power levels up to 2 MW. Typical applications are in the frequency range from 25 MHz to 300 MHz with load reflection coefficients from 0.0 to 0.5 at all phases.

  9. Fast 704 MHz Ferroelectric Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-04-12

    The Omega-P SBIR project described in this Report has as its goal the development, test, and evaluation of a fast electrically-controlled L-band tuner for BNL Energy Recovery Linac (ERL) in the Electron Ion Collider (EIC) upgrade of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner, that employs an electrically-controlled ferroelectric component, is to allow fast compensation to cavity resonance changes. In ERLs, there are several factors which significantly affect the amount of power required from the wall-plug to provide the RF-power level necessary for the operation. When beam loading is small, the power requirements are determined by (i) ohmic losses in cavity walls, (ii) fluctuations in amplitude and/or phase for beam currents, and (iii) microphonics. These factors typically require a substantial change in the coupling between the cavity and the feeding line, which results in an intentional broadening of the cavity bandwidth, which in turn demands a significant amount of additional RF power. If beam loading is not small, there is a variety of beam-drive phase instabilities to be managed, and microphonics will still remain an issue, so there remain requirements for additional power. Moreover ERL performance is sensitive to changes in beam arrival time, since any such change is equivalent to phase instability with its vigorous demands for additional power. In this Report, we describe the new modular coaxial tuner, with specifications suitable for the 704 MHz ERL application. The device would allow changing the RF-coupling during the cavity filling process in order to effect significant RF power savings, and also will provide rapid compensation for beam imbalance and allow for fast stabilization against phase fluctuations caused by microphonics, beam-driven instabilities, etc. The tuner is predicted to allow a reduction of about ten times in the required power from the RF source, as compared to a compensation system
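The power-saving argument can be illustrated with the standard generator-power formula for a detuned, lightly beam-loaded cavity: at a fixed microphonic detuning there is an optimum loaded Q, and a coupling far from that optimum wastes power. The numbers below are illustrative, not the BNL ERL design values:

```python
import numpy as np

# Forward power needed to hold cavity voltage V at detuning df, for a
# cavity with geometric shunt impedance R/Q and loaded quality factor QL
# (zero beam current, standard matched-coupler formula).
f0 = 704e6          # Hz, cavity frequency (from the abstract)
V = 2.0e6           # V, cavity voltage (assumed)
RoverQ = 400.0      # ohm, R/Q (assumed)
df = 30.0           # Hz, microphonic detuning to ride through (assumed)

def generator_power(QL, detune_hz):
    return V**2 / (4 * RoverQ * QL) * (1 + (2 * QL * detune_hz / f0)**2)

QL = np.logspace(6, 9, 400)
P = generator_power(QL, df)
QL_opt = QL[np.argmin(P)]     # optimum loaded Q for this detuning
print(f"optimal QL ~ {QL_opt:.2e}, power {P.min()/1e3:.2f} kW")
```

The analytic optimum is QL = f0 / (2·df); a fast tuner that tracks the detuning lets the cavity sit near this optimum instead of being intentionally over-coupled, which is where the order-of-magnitude power saving comes from.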

  10. Waveguide Stub Tuner Analysis for CEBAF Machine Application

    SciTech Connect

    Haipeng Wang; Michael Tiefenback

    2004-08-01

    Three-stub WR650 waveguide tuners have been used on the CEBAF superconducting cavities for two changes of the external quality factor (Qext): increasing the Qext from 3.4-7.6 x 10{sup 6} to 8 x 10{sup 6} on 5-cell cavities to reduce klystron power at operating gradients, and decreasing the Qext from 1.7-2.4 x 10{sup 7} to 8 x 10{sup 6} on 7-cell cavities to simplify control of Lorentz force detuning. To understand the reactive tuning effects in machine operations with beam current and mechanical tuning, a network analysis model was developed. The S parameters of the stub tuner were simulated by MAFIA and measured on the bench. We used this stub tuner model to study tuning range, sensitivity, and frequency pulling, as well as cold waveguide (WG) and window heating problems. Detailed experimental results are compared against this model. Pros and cons of this stub tuner application are summarized.
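    The network-analysis approach described above can be illustrated with a toy transmission-line model: each stub acts as a shunt susceptance, the sections between stubs are cascaded as ABCD matrices, and the input reflection coefficient is read off from the cascaded matrix. This is a sketch only; the normalized impedance, stub susceptances, and line lengths below are illustrative assumptions, not the WR650 tuner's measured S parameters.

```python
import cmath

Z0 = 1.0  # normalized line impedance

def line(theta):
    """ABCD matrix of a lossless line section of electrical length theta."""
    return [[cmath.cos(theta), 1j * Z0 * cmath.sin(theta)],
            [1j * cmath.sin(theta) / Z0, cmath.cos(theta)]]

def shunt(jB):
    """ABCD matrix of a shunt susceptance (one stub)."""
    return [[1, 0], [jB, 1]]

def cascade(*ms):
    """Multiply ABCD matrices left to right to model the cascaded network."""
    out = [[1, 0], [0, 1]]
    for m in ms:
        out = [[out[0][0] * m[0][0] + out[0][1] * m[1][0],
                out[0][0] * m[0][1] + out[0][1] * m[1][1]],
               [out[1][0] * m[0][0] + out[1][1] * m[1][0],
                out[1][0] * m[0][1] + out[1][1] * m[1][1]]]
    return out

def s11(abcd, ZL=1.0):
    """Input reflection coefficient of the network terminated in load ZL."""
    (A, B), (C, D) = abcd
    Zin = (A * ZL + B) / (C * ZL + D)
    return (Zin - Z0) / (Zin + Z0)

# Three illustrative stubs separated by eighth-wave line sections.
net = cascade(shunt(0.5j), line(cmath.pi / 4),
              shunt(0.5j), line(cmath.pi / 4), shunt(0.5j))
```

With all stub susceptances set to zero the model reduces to a matched line (zero reflection), a useful sanity check on the cascading arithmetic.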

  11. A hydrogen maser with cavity auto-tuner for timekeeping

    NASA Technical Reports Server (NTRS)

    Lin, C. F.; He, J. W.; Zhai, Z. C.

    1992-01-01

    A hydrogen maser frequency standard for timekeeping was developed at the Shanghai Observatory. The maser employs a fast cavity auto-tuner, which can detect and compensate for the frequency drift of the high-Q resonant cavity with a short time constant by means of a signal injection method, so that the long-term frequency stability of the maser standard is greatly improved. The cavity auto-tuning system and some maser data obtained from the atomic time comparison are described.

  13. Testing of the new tuner design for the CEBAF 12 GeV upgrade SRF cavities

    SciTech Connect

    Edward Daly; G. Davis; William Hicks

    2005-05-01

    The new tuner design for the 12 GeV Upgrade SRF cavities consists of a coarse mechanical tuner and a fine piezoelectric tuner. The mechanism provides a 30:1 mechanical advantage, is pre-loaded at room temperature and tunes the cavities in tension only. All of the components are located in the insulating vacuum space and attached to the helium vessel, including the motor, harmonic drive and piezoelectric actuators. The requirements and detailed design are presented. Measurements of range and resolution of the coarse tuner are presented and discussed.

  14. Optimizing Site Selection for HEDS

    NASA Astrophysics Data System (ADS)

    Marshall, J. R.

    1999-01-01

    MSP 2001 will be conducting environmental assessment for the Human Exploration and Development of Space (HEDS) Program in order to safeguard future human exploration of the planet, in addition to the geological studies being addressed by the APEX payload. In particular, the MECA experiment (see other abstracts, this volume) will address chemical toxicity of the soil, the presence of adhesive or abrasive soil dust components, and the geoelectrical-triboelectrical character of the surface environment. The attempt will be to quantify hazards to humans and machinery deriving from compounds that poison, corrode, abrade, invade (lungs or machinery), contaminate, or electrically interfere with the human presence. The DART experiment will also address the size and electrical nature of airborne dust. Photo-imaging of the local scene with RAC and Pancam will be able to assess dust-raising events such as local thermal vorticity-driven dust devils. The need to introduce discussion of HEDS landing site requirements stems from potential conflict, but also potential synergism, with other '01 site requirements. The In-Situ Resource Utilization (ISRU) mission component desires as much solar radiation as possible, with some very limited amount of dust available; the planetary-astrobiology mission component desires sufficient rock abundance without inhibiting rover activities (and an interesting geological niche if available); the radiation component may again have special requirements, as will the engineers concerned with mission safety and mission longevity. The '01 mission affords an excellent opportunity to emphasize HEDS landing site requirements, given the constraint that both recent missions (Pathfinder, Mars '98) and future missions (MSP '03 & '05) have had or will have strong geological science drivers in the site selection process. What type of landing site best facilitates investigation of the physical, chemical, and behavioral properties of soil and dust? There are

  15. Self-extinction through optimizing selection

    PubMed Central

    Parvinen, Kalle; Dieckmann, Ulf

    2013-01-01

    Evolutionary suicide is a process in which selection drives a viable population to extinction. So far, such selection-driven self-extinction has been demonstrated in models with frequency-dependent selection. This is not surprising, since frequency-dependent selection can disconnect individual-level and population-level interests through environmental feedback. Hence it can lead to situations akin to the tragedy of the commons, with adaptations that serve the selfish interests of individuals ultimately ruining a population. For frequency-dependent selection to play such a role, it must not be optimizing. Together, all published studies of evolutionary suicide have created the impression that evolutionary suicide is not possible with optimizing selection. Here we disprove this misconception by presenting and analyzing an example in which optimizing selection causes self-extinction. We then take this line of argument one step further by showing, in a further example, that selection-driven self-extinction can occur even under frequency-independent selection. PMID:23583808

  16. Self-extinction through optimizing selection.

    PubMed

    Parvinen, Kalle; Dieckmann, Ulf

    2013-09-21

    Evolutionary suicide is a process in which selection drives a viable population to extinction. So far, such selection-driven self-extinction has been demonstrated in models with frequency-dependent selection. This is not surprising, since frequency-dependent selection can disconnect individual-level and population-level interests through environmental feedback. Hence it can lead to situations akin to the tragedy of the commons, with adaptations that serve the selfish interests of individuals ultimately ruining a population. For frequency-dependent selection to play such a role, it must not be optimizing. Together, all published studies of evolutionary suicide have created the impression that evolutionary suicide is not possible with optimizing selection. Here we disprove this misconception by presenting and analyzing an example in which optimizing selection causes self-extinction. We then take this line of argument one step further by showing, in a further example, that selection-driven self-extinction can occur even under frequency-independent selection. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. DESIGN CONSIDERATIONS FOR THE MECHANICAL TUNER OF THE RHIC ELECTRON COOLER RF CAVITY.

    SciTech Connect

    RANK, J.; BEN-ZVI,I.; HAHN,G.; MCINTYRE,G.; DALY,E.; PREBLE,J.

    2005-05-16

    The ECX Project, Brookhaven Lab's predecessor to the RHIC e-Cooler, includes a prototype RF tuner mechanism capable of both coarse and fast tuning. This tuner concept, adapted originally from a DESY design, has longer stroke and significantly higher loads attributable to the very stiff ECX cavity shape. Structural design, kinematics, controls, thermal and RF issues are discussed and certain improvements are proposed.

  18. Optimel: Software for selecting the optimal method

    NASA Astrophysics Data System (ADS)

    Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina

    Optimel automates the process of selecting a solution method from the optimization methods domain. Optimel offers practical novelty: it saves time and money in exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. Optimel also offers theoretical novelty, because a new method of knowledge structuring was used to obtain the domain. The Optimel domain covers an extended set of methods and their properties, which allows identifying the level of scientific studies, enhancing the user's expertise, expanding the prospects the user faces, and opening up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.

  19. State-space self-tuner for on-line adaptive control

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.

    1994-01-01

    Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported in. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost and high performance microprocessors. Recently, we have developed a new hybrid state-space self tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.

  20. Feature Selection via Chaotic Antlion Optimization

    PubMed Central

    Zawbaa, Hossam M.; Emary, E.; Grosan, Crina

    2016-01-01

    Background: Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine learning task. In biology, for instance, the advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the more informative markers along with performing a high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate the feature selection problem as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the data training fitting) while minimizing the number of features used. Results: We propose an optimization approach for the feature selection problem that considers a “chaotic” version of the antlion optimizer method, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I that limits the random walk range of the ants/prey. This variable is increased iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. The quasi-linear decrease in the variable I may lead to immature convergence in some cases and trapping in local minima in other cases. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics. PMID:26963715
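    The role of the shrink ratio I and its chaotic replacement can be sketched in a few lines. The logistic map used here is one common choice of chaotic map; the particular schedule, modulation, and constants are illustrative assumptions, not the authors' exact formulation.

```python
def logistic_map(x, r=4.0):
    # One step of the logistic map, a standard chaotic map on (0, 1)
    return r * x * (1.0 - x)

def ratio_linear(t, T):
    # Quasi-linear schedule for the ratio I described in the abstract:
    # the random-walk range of the ants/prey shrinks as iterations progress
    return 1.0 + 100.0 * (t / T)

def ratio_chaotic(t, T, chaos):
    # Hypothetical chaotic variant: modulate the linear schedule with a
    # chaotic sequence so exploration does not decay strictly monotonically
    return ratio_linear(t, T) * (0.5 + chaos)

T = 100
x = 0.7  # chaotic state in (0, 1), chosen away from the map's fixed points
schedule = []
for t in range(T):
    x = logistic_map(x)
    schedule.append(ratio_chaotic(t, T, x))
```

The chaotic modulation keeps the schedule broadly increasing (so exploitation still dominates late in the run) while injecting non-monotonic bursts of exploration that can help escape local minima.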

  1. Optimizing Clinical Research Participant Selection with Informatics.

    PubMed

    Weng, Chunhua

    2015-11-01

    Clinical research participants are often not reflective of real-world patients due to overly restrictive eligibility criteria. Meanwhile, unselected participants introduce confounding factors and reduce research efficiency. Biomedical informatics, especially Big Data increasingly made available from electronic health records, offers promising aids to optimize research participant selection through data-driven transparency.

  2. Tests of a prototype magnetostrictive tuner for superconducting cavities

    SciTech Connect

    Benesch, J.F.; Wiseman, M.

    1996-10-01

    The Continuous Electron Beam Accelerator (CEBA) uses mechanical tuners at 2 K, driven by room temperature stepping motors in a feedback loop, to maintain cavity frequency at 1497 MHz. A modification of the system was designed, replacing a passive section of the mechanical tuner with a magnetostrictive tuning element consisting of a Ni rod and an industrially supplied 0.25 T superconducting solenoid. This assembly was tested with several magnetic shield configurations designed to keep the stray flux at the Nb cavity below 1 {mu}T when the cavity was normal, in order to maintain cavity Q. Results of the tests, including the change in cavity performance when the cavity was locally quenched near the end of the solenoid, showed that a multi-layer shield of 6 mm steel, with sheets of mu metal, niobium, and mu metal spaced appropriately outside the thick steel, was effective in containing the flux, both remanent and current-driven, preventing any change in cavity Q upon cooldown or quench with an external heater near the solenoid end. Hysteresis attributed to the Ni magnetostrictive element was observed.

  3. Tuner of a Second Harmonic Cavity of the Fermilab Booster

    SciTech Connect

    Terechkine, I.; Duel, K.; Madrak, R.; Makarov, A.; Romanov, G.; Sun, D.; Tan, C.-Y.

    2015-05-17

    Introducing a second harmonic cavity in the accelerating system of the Fermilab Booster promises significant reduction of the particle beam loss during the injection, transition, and extraction stages. To follow the changing energy of the beam during acceleration cycles, the cavity is equipped with a tuner that employs perpendicularly biased AL800 garnet material as the frequency tuning media. The required tuning range of the cavity is from 75.73 MHz at injection to 105.64 MHz at extraction. This large range necessitates the use of a relatively low bias magnetic field at injection, which could lead to high RF loss power density in the garnet, or a strong bias magnetic field at extraction, which could result in high power consumption in the tuner's bias magnet. The required 15 Hz repetition rate of the device and high sensitivity of the local RF power loss to the level of the magnetic field added to the challenges of the bias system design. In this report, the main features of a proposed prototype of the second harmonic cavity tuner are presented.

  4. Occluded object imaging via optimal camera selection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Tong, Xiaomin; Ma, Wenguang; Yu, Rui

    2013-12-01

    High-performance occluded object imaging in cluttered scenes is a significant and challenging task for many computer vision applications. Recently, camera array synthetic aperture imaging has proved to be an effective way of seeing objects through occlusion. However, the imaging quality of the occluded object is often significantly degraded by the shadows of the foreground occluder. Although some works have been presented to label the foreground occluder via object segmentation or 3D reconstruction, these methods fail in the case of complicated occluders and severe occlusion. In this paper, we present a novel optimal camera selection algorithm to solve the above problem. The main characteristics of this algorithm include: (1) Instead of synthetic aperture imaging, we formulate the occluded object imaging problem as an optimal camera selection and mosaicking problem. To the best of our knowledge, our proposed method is the first one for occluded object mosaicking. (2) A greedy optimization framework is presented to propagate the visibility information among various depth focus planes. (3) A multiple-label energy minimization formulation is designed in each plane to select the optimal camera. The energy is estimated in the synthetic aperture image volume and integrates multi-view intensity consistency, previous visibility properties, and camera view smoothness, and is minimized via graph cuts. We compare our method with state-of-the-art synthetic aperture imaging algorithms, and extensive experimental results with qualitative and quantitative analysis demonstrate the effectiveness and superiority of our approach.

  5. Active Learning With Optimal Instance Subset Selection.

    PubMed

    Fu, Yifan; Zhu, Xingquan; Elmagarmid, A K

    2013-04-01

    Active learning (AL) traditionally relies on some instance-based utility measures (such as uncertainty) to assess individual instances and label the ones with the maximum values for training. In this paper, we argue that such approaches cannot produce good labeling subsets mainly because instances are evaluated independently without considering their interactions, and individuals with maximal ability do not necessarily form an optimal instance subset for learning. Alternatively, we propose to achieve AL with optimal subset selection (ALOSS), where the key is to find an instance subset with a maximum utility value. To achieve the goal, ALOSS simultaneously considers the following: 1) the importance of individual instances and 2) the disparity between instances, to build an instance-correlation matrix. As a result, AL is transformed to a semidefinite programming problem to select a k-instance subset with a maximum utility value. Experimental results demonstrate that ALOSS outperforms state-of-the-art approaches for AL.
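    The subset-utility idea behind ALOSS, individual importance plus pairwise disparity scored over a whole candidate set, can be sketched as follows. The paper solves the selection as a semidefinite program; the greedy loop here is a deliberate simplification for illustration, and the importance/disparity inputs are hypothetical.

```python
import itertools

def subset_utility(S, importance, disparity):
    """Utility of an instance subset S: sum of individual importance plus
    pairwise disparity between the selected instances."""
    u = sum(importance[i] for i in S)
    u += sum(disparity[i][j] for i, j in itertools.combinations(S, 2))
    return u

def select_subset(k, importance, disparity):
    """Greedily grow a k-instance subset maximizing the subset utility
    (a simplified stand-in for the paper's semidefinite programming step)."""
    S, candidates = [], set(range(len(importance)))
    while len(S) < k:
        best = max(candidates,
                   key=lambda i: subset_utility(S + [i], importance, disparity))
        S.append(best)
        candidates.remove(best)
    return sorted(S)
```

Because disparity rewards dissimilar pairs, the selection can skip the second-most-important instance when it is nearly redundant with the first, which is exactly the interaction effect the abstract argues instance-wise ranking misses.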

  6. Optimization methods for activities selection problems

    NASA Astrophysics Data System (ADS)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Every student in Malaysia must join co-curricular activities, and these activities bring many benefits to the students, who learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods: the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on three chosen criteria: soft skills, interesting activities, and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0, with two priorities considered. The first priority, minimizing the budget for the activities, is achieved since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.
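    The AHP weighting step described above can be sketched with the standard geometric-mean approximation to the principal eigenvector of a pairwise comparison matrix. The matrix entries below are illustrative assumptions (the survey-derived judgments are not given in the abstract); only the ranking, soft skills first, mirrors the reported result.

```python
import math

criteria = ["soft skills", "interesting activities", "performance"]

# Hypothetical pairwise comparison matrix: A[i][j] says how much more
# important criterion i is than criterion j on Saaty's 1-9 scale.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

# Geometric-mean approximation to the AHP principal-eigenvector weights.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
total = sum(gm)
weights = [g / total for g in gm]
```

With these judgments the weights come out roughly 0.65 / 0.23 / 0.12, so soft skills dominates, consistent with the abstract's finding.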

  7. Optimized periocular template selection for human recognition.

    PubMed

    Bakshi, Sambit; Sa, Pankaj K; Majhi, Banshidhar

    2013-01-01

    A novel approach is proposed for optimally selecting a rectangular template around the periocular region for human recognition. A template somewhat larger than the optimal one can be slightly more potent for recognition, but it heavily slows down the biometric system by making feature extraction computationally intensive and increasing the database size. A smaller template, on the contrary, cannot yield desirable recognition, though it performs faster due to the lower computation for feature extraction. The proposed research aims to optimize these two contradictory objectives: (a) minimizing the size of the periocular template and (b) maximizing recognition through the template. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases, and satisfactory results have been achieved. The template thus obtained can be used for recognition of individuals in an organization and can be generalized to recognize every citizen of a nation.

  8. Optimal remediation policy selection under general conditions

    SciTech Connect

    Wang, M.; Zheng, C.

    1997-09-01

    A new simulation-optimization model has been developed for the optimal design of ground-water remediation systems under a variety of field conditions. The model couples genetic algorithm (GA), a global search technique inspired by biological evolution, with MODFLOW and MT3D, two commonly used ground-water flow and solute transport codes. The model allows for multiple management periods in which optimal pumping/injection rates vary with time to reflect the changes in the flow and transport conditions during the remediation process. The objective function of the model incorporates multiple cost terms including the drilling cost, the installation cost, and the costs to extract and treat the contaminated ground water. The simulation-optimization model is first applied to a typical two-dimensional pump-and-treat example with one and three management periods to demonstrate the effectiveness and robustness of the new model. The model is then applied to a large-scale three-dimensional field problem to determine the minimum pumping needed to contain an existing contaminant plume. The optimal solution as determined in this study is compared with a previous solution based on trial-and-error selection.
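    The coupled simulation-optimization loop described above can be sketched with a toy genetic algorithm searching over well pumping rates, where a stand-in "simulator" plays the role of the MODFLOW/MT3D flow-and-transport run. The cost terms, the containment check, and all constants are illustrative placeholders, not the paper's model.

```python
import random

def simulate_plume_escape(rates):
    """Hypothetical surrogate for the transport simulation: total pumping
    below a threshold fails to contain the plume."""
    return max(0.0, 100.0 - sum(rates))

def cost(rates):
    drilling = 10.0 * sum(1 for r in rates if r > 0)   # per active well
    operation = sum(rates)                              # extract-and-treat cost
    penalty = 1000.0 * simulate_plume_escape(rates)     # containment violated
    return drilling + operation + penalty

def ga(n_wells=3, pop_size=20, generations=60):
    """Minimal GA: rank by cost, keep the better half, refill the population
    with uniform-crossover children plus Gaussian mutation."""
    random.seed(1)  # deterministic toy run
    pop = [[random.uniform(0.0, 60.0) for _ in range(n_wells)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            i = random.randrange(n_wells)
            child[i] = max(0.0, child[i] + random.gauss(0.0, 5.0))  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = ga()
```

In the real model each `cost` evaluation triggers a full flow-and-transport simulation, which is why the GA's gradient-free, population-based search is attractive despite its many evaluations.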

  9. Optimization of solar-selective paint coatings

    NASA Astrophysics Data System (ADS)

    McChesney, M. A.; Zimmer, P. B.; Lin, R. J. H.

    1982-06-01

    The objective was the development of low-cost, high-performance, solar-selective paint coatings for solar flat-plate collector (FPC) use and passive thermal wall application. Thickness-sensitive selective paint (TSSP) coating development was intended to demonstrate large-scale producibility. Thickness-insensitive selective paint (TISP) coating development was intended to develop and optimize the coating for passive solar systems and FPC applications. Low-cost, high-performance TSSP coatings and processes were developed that demonstrated large-scale producibility and met all program goals. Dip, spray, roll, laminating, and gravure processes were investigated and used to produce final samples. High-speed gravure coating was selected as the most promising process for solar foil fabrication. Development and optimization of TISP coatings was not completely successful. A variation in the reflective metal pigment was suspected of being the primary problem, although other variables may have contributed. The optical properties achieved for these coatings on the previous program could not be consistently reproduced.

  10. A Broadband and Low Cost Monolithic BiCMOS Tuner Chip

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Hui; Cheng, Ting-Yuan; Chang, Chun-Yen

    2004-11-01

    This circuit is a single chip on a BiCMOS wafer used in cable TV set-top converters, cable modems, cable TV tuners, and digital TVs. It has a double conversion tuner (DCT) structure, which has better performance characteristics than the single conversion tuner (SCT). The single chip comprises two LNAs, one AGC, two LO buffers, two VCOs, two resistor-type and double-balanced Gilbert cell mixers, two synthesizers, and ESD protection. Its total gain is 45.70 dB, and its total noise figure is 4.3-7.9 dB. The RF range covers not only 50-860 MHz but also 50-1000 MHz, so the chip can be applied in cable TV tuners and cable modems. The power consumption is 0.885 W at 3 V and the die size is 4.8 mm2.

  11. The use of an autochromatic tuner for the measurement of vocal fundamental frequency.

    PubMed

    Solberg, L C; Fowler, L P; Walker, V G

    1991-02-01

    Comparative measures of vocal fundamental frequency were made with three different instruments: the Visi-Pitch, FLORIDA 1, and a Korg Autochromatic Tuner. The purpose of these measures was to determine whether an autochromatic tuner, a relatively inexpensive device designed to assist musicians in quickly tuning their instruments, would provide a valid and reliable measure of vocal fundamental frequency. Subjects for this study included 15 males and 15 females. They were recorded while sustaining vowels, reading a phonetically balanced passage, and singing to their lowest and highest pitch levels. Spearman's rho correlations indicated that measures taken with the autochromatic tuner correlated significantly with measures taken with the other instruments. The results indicate that the use of an autochromatic tuner to measure vocal fundamental frequency is an effective and inexpensive alternative to other methods for clinical purposes.

  12. Tests of a tuner for a 325 MHz SRF spoke resonator

    SciTech Connect

    Pishchalnikov, Y.; Borissov, E.; Khabiboulline, T.; Madrak, R.; Pilipenko, R.; Ristori, L.; Schappert, W.; /Fermilab

    2011-03-01

    Fermilab is developing 325 MHz SRF spoke cavities for the proposed Project X. A compact fast/slow tuner has been developed for final tuning of the resonance frequency of the cavity after cooling down to operating temperature and to compensate microphonics and Lorentz force detuning [2]. The modified tuner design and results of 4.5K tests of the first prototype are presented. The performance of lever tuners for the SSR1 spoke resonator prototype has been measured during recent CW and pulsed tests in the Fermilab SCTF. The tuner met or exceeded all design goals and has been used to successfully: (1) Bring the cold cavity to the operating frequency; (2) Compensate for dynamic Lorentz force detuning; and (3) Compensate for frequency detuning of the cavity due to changes in the He bath pressure.

  13. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  14. Selected Isotopes for Optimized Fuel Assembly Tags

    SciTech Connect

    Gerlach, David C.; Mitchell, Mark R.; Reid, Bruce D.; Gesh, Christopher J.; Hurley, David E.

    2008-10-01

    In support of our ongoing signatures project, we present information on 3 isotopes selected for possible application in optimized tags that could be applied to fuel assemblies to provide an objective measure of burnup. (1) Important factors for an optimized tag are compatibility with the reactor environment (corrosion resistance), low radioactive activation, at least 2 stable isotopes, a moderate neutron absorption cross-section (which gives significant changes in isotope ratios over typical fuel assembly irradiation levels), and ease of measurement in the SIMS machine. (2) Of the candidate isotopes presented in the 3rd FY08 Quarterly Report, the most promising appear to be titanium, hafnium, and platinum. The other candidate isotopes (iron, tungsten) exhibited inadequate corrosion resistance and/or had neutron capture cross-sections either too high or too low for the burnup range of interest.

  15. Proof-of-principle Experiment of a Ferroelectric Tuner for the 1.3 GHz Cavity

    SciTech Connect

    Choi,E.M.; Hahn, H.; Shchelkunov, S. V.; Hirshfield, J.; Kazakov, S.

    2009-01-01

    A novel tuner has been developed by the Omega-P company to achieve fast control of the accelerator RF cavity frequency. The tuner is based on a ferroelectric component whose dielectric constant varies as a function of applied voltage. Tests using a Brookhaven National Laboratory (BNL) 1.3 GHz electron gun cavity have been carried out as a proof-of-principle experiment for the ferroelectric tuner. Two different methods were used to determine the frequency change achieved with the ferroelectric tuner (FT). The first method is based on an S11 measurement at the tuner port to find the reactive impedance change when the voltage is applied; the reactive impedance change is then used to estimate the cavity frequency shift. The second method is a direct S21 measurement of the frequency shift in the cavity with the tuner connected. The estimated frequency change from the reactive impedance measurement due to 5 kV is in the range between 3.2 kHz and 14 kHz, while 9 kHz is the result from the direct measurement. The two methods are in reasonable agreement. A detailed description of the experiment and the analysis is given in the paper.

  16. Selectively-informed particle swarm optimization

    PubMed Central

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315

  17. Selectively-informed particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-03-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors.

  18. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
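    The constrained discrete optimization over test counts can be illustrated with a toy search (the uncertainty surrogate, costs, and budget below are hypothetical stand-ins for the paper's calibration/validation uncertainty model).

```python
from itertools import product

def select_tests(cost_cal, cost_val, budget, max_tests=20):
    """Return (n_cal, n_val) minimizing a stand-in uncertainty measure
    subject to a total budget constraint."""
    def uncertainty(n_cal, n_val):
        # Illustrative surrogate: each test type shrinks its uncertainty
        # contribution with diminishing returns, roughly as 1/(1+n).
        return 1.0 / (1 + n_cal) + 0.5 / (1 + n_val)

    best = None
    for n_cal, n_val in product(range(max_tests + 1), repeat=2):
        if n_cal * cost_cal + n_val * cost_val <= budget:
            u = uncertainty(n_cal, n_val)
            if best is None or u < best[0]:
                best = (u, n_cal, n_val)
    return best[1], best[2]
```

With expensive calibration tests (cost 2) and cheap validation tests (cost 1) under a budget of 4, the search trades one calibration test against two validation tests.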

  19. Optimal test selection for prediction uncertainty reduction

    SciTech Connect

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  20. Optimal selection of biochars for remediating metals ...

    EPA Pesticide Factsheets

    Approximately 500,000 abandoned mines across the U.S. pose a considerable, pervasive risk to human health and the environment due to possible exposure to the residuals of heavy metal extraction. Historically, a variety of chemical and biological methods have been used to reduce the bioavailability of the metals at mine sites. Biochar, with its potential to complex and immobilize heavy metals, is an emerging alternative for reducing bioavailability. Furthermore, biochar has been reported to improve soil conditions for plant growth and can be used for promoting the establishment of a soil-stabilizing native plant community to reduce offsite movement of metal-laden waste materials. Because biochar properties depend upon feedstock selection, pyrolysis production conditions, and activation procedures used, they can be designed to meet specific remediation needs. As a result, biochar with specific properties can be produced to correspond to specific soil remediation situations. However, techniques are needed to optimally match biochar characteristics with metals-contaminated soils to effectively reduce metal bioavailability. Here we present experimental results used to develop a generalized method for evaluating the ability of biochar to reduce metal bioavailability in mine spoil soil from an abandoned Cu and Zn mine. Thirty-eight biochars were produced from approximately 20 different feedstocks via slow pyrolysis or gasification, and were allowed to react with a f

  1. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  2. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  3. Tuner control system of Spoke012 SRF cavity for C-ADS injector I

    NASA Astrophysics Data System (ADS)

    Liu, Na; Sun, Yi; Wang, Guang-Wei; Mi, Zheng-Hui; Lin, Hai-Ying; Wang, Qun-Yao; Liu, Rong; Ma, Xin-Peng

    2016-09-01

    A new tuner control system for spoke superconducting radio frequency (SRF) cavities has been developed and applied to cryomodule I of the C-ADS injector I at the Institute of High Energy Physics, Chinese Academy of Sciences. We have successfully implemented the tuner controller based on Programmable Logic Controller (PLC) for the first time and achieved a cavity tuning phase error of ±0.7° (about ±4 Hz peak to peak) in the presence of electromechanical coupled resonance. This paper presents preliminary experimental results based on the PLC tuner controller under proton beam commissioning. Supported by Proton linac accelerator I of China Accelerator Driven sub-critical System (Y12C32W129)

  4. Perpendicularly Biased YIG Tuners for the Fermilab Recycler 52.809 MHz Cavities

    SciTech Connect

    Madrak, R.; Kashikhin, V.; Makarov, A.; Wildman, D.

    2013-09-13

    For NOvA and future experiments requiring high intensity proton beams, Fermilab is in the process of upgrading the existing accelerator complex for increased proton production. One such improvement is to reduce the Main Injector cycle time, by performing slip stacking, previously done in the Main Injector, in the now repurposed Recycler Ring. Recycler slip stacking requires new tuneable RF cavities, discussed separately in these proceedings. These are quarter wave cavities resonant at 52.809 MHz with a 10 kHz tuning range. The 10 kHz range is achieved by use of a tuner which has an electrical length of approximately one half wavelength at 52.809 MHz. The tuner is constructed from 3-1/8 in. diameter rigid coaxial line, with 5 inches of its length containing perpendicularly biased, Al-doped Yttrium Iron Garnet (YIG). The tuner design, measurements, and high power test results are presented.
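    The half-wavelength electrical length quoted above is easy to sanity-check: for an air-dielectric TEM line it corresponds to roughly 2.84 m at 52.809 MHz, with dielectric or ferrite loading shortening the physical length. A quick sketch, not a design calculation:

```python
C = 299_792_458.0  # speed of light, m/s

def half_wavelength(freq_hz: float, eps_rel: float = 1.0) -> float:
    """Physical length of one electrical half wavelength in a TEM line
    with effective relative permittivity eps_rel."""
    return C / (freq_hz * eps_rel ** 0.5) / 2.0

length_air = half_wavelength(52.809e6)  # ~2.84 m for an air dielectric
```

Loading the line raises the effective permittivity, so the YIG-filled section is electrically much longer per unit physical length than air line.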

  5. GaAs up converter integrated circuit for a double conversion cable TV set-top tuner

    NASA Astrophysics Data System (ADS)

    Scheinberg, N.; Michels, R.; Fedoroff, V.; Stoffman, D.; Li, K.; Kent, S.; Waight, M.; Marz, D.

    1994-06-01

    A GaAs up converter integrated circuit used for a double conversion cable TV 'set-top' tuner is described. The up converter IC converts the 50 to 550 MHz band to an IF of 700 MHz. The IC meets the linearity and noise figure requirements for a cable TV tuner. It includes an AGC and image reject filter. The reduced component count achieved by using an integrated circuit and the resulting reduction in the size of the tuner, provides potential cost savings over a discrete implementation.

  6. Water Quality Optimization through Selective Withdrawal.

    DTIC Science & Technology

    1983-03-01

    river. 16. Kaplan noted that Staha and Himmelblau compared the COMET algorithm to three nonlinear programming codes for 25 test problems. The...Mathematics, Vol 9. Staha, R. L. and Himmelblau, D. M. 1972. "Constrained Optimization Via Moving Exterior Truncations," presented at the Society for

  7. Optimized source selection for intracavitary low dose rate brachytherapy

    SciTech Connect

    Nurushev, T.; Kim, Jinkoo

    2005-05-01

    A procedure has been developed for automating optimal selection of sources from an available inventory for the low dose rate brachytherapy, as a replacement for the conventional trial-and-error approach. The method of optimized constrained ratios was applied for clinical source selection for intracavitary Cs-137 implants using Varian BRACHYVISION software as initial interface. However, this method can be easily extended to another system with isodose scaling and shaping capabilities. Our procedure provides optimal source selection results independent of the user experience and in a short amount of time. This method also generates statistics on frequently requested ideal source strengths aiding in ordering of clinically relevant sources.

  8. Optimization of a crossing system using mate selection.

    PubMed

    Li, Yongjun; van der Werf, Julius H J; Kinghorn, Brian P

    2006-01-01

    A simple model based on one single identified quantitative trait locus (QTL) in a two-way crossing system was used to demonstrate the power of mate selection algorithms as a natural means of opportunistic line development for optimization of crossbreeding programs over multiple generations. Mate selection automatically invokes divergent selection in two parental lines for an over-dominant QTL and increased frequency of the favorable allele toward fixation in the sire-line for a fully-dominant QTL. It was concluded that an optimal strategy of line development could be found by mate selection algorithms for a given set of parameters such as genetic model of QTL, breeding objective and initial frequency of the favorable allele in the base populations, etc. The same framework could be used in other scenarios, such as programs involving crossing to exploit breed effects and heterosis. In contrast to classical index selection, this approach to mate selection can optimize long-term responses.

  9. Optimization of a crossing system using mate selection

    PubMed Central

    Li, Yongjun; Werf, Julius HJ van der; Kinghorn, Brian P

    2006-01-01

    A simple model based on one single identified quantitative trait locus (QTL) in a two-way crossing system was used to demonstrate the power of mate selection algorithms as a natural means of opportunistic line development for optimization of crossbreeding programs over multiple generations. Mate selection automatically invokes divergent selection in two parental lines for an over-dominant QTL and increased frequency of the favorable allele toward fixation in the sire-line for a fully-dominant QTL. It was concluded that an optimal strategy of line development could be found by mate selection algorithms for a given set of parameters such as genetic model of QTL, breeding objective and initial frequency of the favorable allele in the base populations, etc. The same framework could be used in other scenarios, such as programs involving crossing to exploit breed effects and heterosis. In contrast to classical index selection, this approach to mate selection can optimize long-term responses. PMID:16492372

  10. Optimal Selection of Army Military Construction Projects

    DTIC Science & Technology

    2002-06-01

    The second column defines the categories for each project. For example, a pier (category: waterfront restoration) receives ten points even though...world information system project selection (from a set of 28) for the Dubai Medical Center in the State of Dubai in the United Arab Emirates. After

  11. Producer breeding objectives and optimal sire selection.

    PubMed

    Tozer, P R; Stokes, J R

    2002-12-01

    Information from an online survey of dairy producers was used to determine how important producers perceived three different objectives in the breeding problem. The objectives were: maximizing expected net merit of the progeny, minimizing the expected progeny inbreeding coefficient, and minimizing semen expenditure. Producers were asked to rank the three objectives and then to weight the importance of each objective relative to the others. This information was then used to determine weights to be used in a multiple-objective integer program designed to select individual mates for a herd of 76 Jersey cows with known genetic background and cow net merit. The results of the multiple-objective models show that rank and relative importance of producer objectives can affect the portfolio of sires selected. Producers whose primary objective was to maximize expected net merit had a range of average expected progeny net merit of $306 to $310, but the level of expected progeny inbreeding was from 6.99 to 10.45%, with a semen cost per conception of $35 to $41. For producers who selected minimizing progeny inbreeding as the primary goal in their breeding programs, the range of inbreeding was from 6.11 to 6.60%, with lower net merit range of $274 to $301 and semen expenditure of $30 to $37 per conception. One producer selected minimizing semen cost as the primary objective. For that producer's portfolio, the semen cost was $27 per conception and net merit was $288, with a progeny inbreeding coefficient of 10.68%. The results of this research suggest that producer information and goals have a substantial impact on the portfolio of sires selected by that producer to attain these goals.
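    The multiple-objective selection described above can be sketched as a per-cow weighted argmax. This is a deliberate simplification of the paper's multiple-objective integer program; the weights, herd data, and the cow-by-sire inbreeding table below are hypothetical.

```python
def assign_sires(cows, sires, inbreeding, w_merit=1.0, w_inbreed=10.0,
                 w_cost=1.0):
    """Greedy per-cow mate selection.

    cows: iterable of cow ids.
    sires: dict sire -> (expected progeny net merit $, semen cost $).
    inbreeding: dict (cow, sire) -> expected progeny inbreeding (%).
    Higher weighted score = better; inbreeding and cost are penalized.
    """
    plan = {}
    for cow in cows:
        plan[cow] = max(
            sires,
            key=lambda s: (w_merit * sires[s][0]
                           - w_inbreed * inbreeding[(cow, s)]
                           - w_cost * sires[s][1]))
    return plan
```

Raising w_inbreed shifts the portfolio toward outcross sires at some cost in net merit, mirroring the trade-offs in the producer survey results.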

  12. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
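    The optimality of an impulse input for FIR identification has a simple noise-free intuition: the channel's response to an impulse at the start of the observation interval is exactly its tap sequence (the channel taps below are hypothetical).

```python
def fir_output(taps, inputs):
    """Output of an FIR channel: discrete convolution of taps with inputs."""
    out = []
    for n in range(len(inputs)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * inputs[n - k]
        out.append(acc)
    return out

taps = [0.9, 0.3, -0.1]                  # hypothetical channel
impulse = [1.0, 0.0, 0.0]                # impulse at the interval start
identified = fir_output(taps, impulse)   # equals taps in the noise-free case
```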

  13. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    SciTech Connect

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami; Vanderstraeten, Reynald; Liao, Li; Jiang, Shengpeng; Li, Heng; Poenisch, Falk; Zhu, X. Ronald; Sahoo, Narayan; Gillin, Michael; Zhang, Xiaodong

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm, and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved controls over isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.

  14. Optimization of ultrasonic transducers for selective guided wave actuation

    NASA Astrophysics Data System (ADS)

    Miszczynski, Mateusz; Packo, Pawel; Zbyrad, Paulina; Stepinski, Tadeusz; Uhl, Tadeusz; Lis, Jerzy; Wiatr, Kazimierz

    2016-04-01

    The application of guided waves using surface-bonded piezoceramic transducers for nondestructive testing (NDT) and Structural Health Monitoring (SHM) has shown great potential. However, due to the difficulty in identifying individual wave modes, which results from their dispersive and multi-modal nature, selective mode excitation methods are highly desired. The presented work focuses on an optimization-based approach to the design of a piezoelectric transducer for selective guided wave generation. The concept of the presented framework involves a Finite Element Method (FEM) model in the optimization process. The material of the transducer is optimized in the topological sense with the aim of tuning the piezoelectric properties for actuation of specific guided wave modes.

  15. Optimized Selective Coatings for Solar Collectors

    NASA Technical Reports Server (NTRS)

    Mcdonald, G.; Curtis, H. B.

    1967-01-01

    The spectral reflectance properties of black nickel electroplated over stainless steel and of black copper produced by oxidation of copper sheet were measured for various plating times of black nickel and for various lengths of time of oxidation of the copper sheet, and compared to black chrome over nickel and to converted zinc. It was determined that there was an optimum time for both plating of black nickel and for the oxidation of copper black. At this time the solar selective properties show high absorptance in the solar spectrum and low emittance in the infrared. The conditions are compared for production of optimum optical properties for black nickel, black copper, black chrome, and two black zinc conversions which at the same conditions had absorptances of 0.84, 0.90, 0.95, 0.84, and 0.92, respectively, and emittances of 0.18, 0.08, 0.09, 0.10, and 0.08, respectively.

  16. Digital logic optimization using selection operators

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor); Cameron, Eric G. (Inventor); Gambles, Jody W. (Inventor)

    2004-01-01

    According to the invention, a digital design method for manipulating a digital circuit netlist is disclosed. In one step, a first netlist is loaded. The first netlist is comprised of first basic cells that are comprised of first kernel cells. The first netlist is manipulated to create a second netlist. The second netlist is comprised of second basic cells that are comprised of second kernel cells. A percentage of the first and second kernel cells are selection circuits. There is less chip area consumed in the second basic cells than in the first basic cells. The second netlist is stored. In various embodiments, the percentage could be 2% or more, 5% or more, 10% or more, 20% or more, 30% or more, or 40% or more.

  17. A frequency tuner for resonant inverters suitable for magnetic hyperthermia applications

    NASA Astrophysics Data System (ADS)

    Mazon, E. E.; Sámano, A. H.; Calleja, H.; Quintero, L. H.; Paz, J. A.; Cano, M. E.

    2017-09-01

    In this study, a frequency tuner system is developed for generating variable frequency magnetic fields for magnetic hyperthermia applications. The tuning device contains three specially designed phase lock loop devices that drive a resonant inverter working in the frequency band of 180-525 kHz. This tuner system can be adapted for other resonant inverters employed in the studies of ferrofluids with superparamagnetic nanoparticles. The performance of the whole system is also examined. Our findings were in agreement with the theoretical expectations of phase locking and frequency tuning. The system is tested for samples of a solid magnetic material of cylindrical shape and ferrofluids with differing concentrations of powdered magnetite. The observations indicate significant frequency changes of the magnetic field due to heating of the samples. These frequency variations can be a source of errors, which should not be neglected in experiments determining the specific absorption rate or power dissipated density.

  18. Selecting optimal partitioning schemes for phylogenomic datasets.

    PubMed

    Lanfear, Robert; Calcott, Brett; Kainer, David; Mayer, Christoph; Stamatakis, Alexandros

    2014-04-17

    Partitioning involves estimating independent models of molecular evolution for different subsets of sites in a sequence alignment, and has been shown to improve phylogenetic inference. Current methods for estimating best-fit partitioning schemes, however, are only computationally feasible with datasets of fewer than 100 loci. This is a problem because datasets with thousands of loci are increasingly common in phylogenetics. We develop two novel methods for estimating best-fit partitioning schemes on large phylogenomic datasets: strict and relaxed hierarchical clustering. These methods use information from the underlying data to cluster together similar subsets of sites in an alignment, and build on clustering approaches that have been proposed elsewhere. We compare the performance of our methods to each other, and to existing methods for selecting partitioning schemes. We demonstrate that while strict hierarchical clustering has the best computational efficiency on very large datasets, relaxed hierarchical clustering provides scalable efficiency and returns dramatically better partitioning schemes as assessed by common criteria such as AICc and BIC scores. These two methods provide the best current approaches to inferring partitioning schemes for very large datasets. We provide free open-source implementations of the methods in the PartitionFinder software. We hope that the use of these methods will help to improve the inferences made from large phylogenomic datasets.
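    The candidate partitioning schemes produced by these clustering methods are scored with information criteria such as AICc and BIC. A minimal sketch of the AICc computation and the merge decision it drives follows; the log-likelihoods and parameter counts are hypothetical numbers, not results from the paper.

```python
def aicc(ln_likelihood: float, k: int, n: int) -> float:
    """Corrected Akaike information criterion: lower is better.
    k = free parameters, n = sample size (alignment sites)."""
    if n - k - 1 <= 0:
        raise ValueError("AICc requires n > k + 1")
    return -2.0 * ln_likelihood + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

# Merging two subsets into one model is accepted when the merged scheme
# scores lower: a poorer fit can still win if it saves enough parameters.
separate = aicc(-1000.0, k=20, n=500)  # two separate subset models
merged = aicc(-1005.0, k=10, n=500)    # one shared model, fewer parameters
better_to_merge = merged < separate
```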

  19. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
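    One common form of asymptotic allocation rule for top-m selection places a reference level c midway between the m-th and (m+1)-th best sample means and gives each design a budget fraction proportional to (sigma_i/delta_i)^2 with delta_i = mean_i - c, so designs near the selection boundary receive the most samples. The sketch below illustrates that idea under simplifying assumptions (distinct means, known variances); it is not a faithful reproduction of the published procedure.

```python
def ocba_m_fractions(means, stds, m):
    """Budget fractions for selecting the m designs with smallest means.
    Assumes all sample means are distinct from the reference level c."""
    order = sorted(range(len(means)), key=lambda i: means[i])
    c = 0.5 * (means[order[m - 1]] + means[order[m]])  # boundary level
    raw = [(stds[i] / (means[i] - c)) ** 2 for i in range(len(means))]
    total = sum(raw)
    return [r / total for r in raw]
```

For means [1, 2, 4, 5] with equal variances and m = 2, the boundary designs (means 2 and 4) each get 40% of the budget, the clear-cut designs only 10% each.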

  20. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.

  1. A technique for monitoring fast tuner piezoactuator preload forces for superconducting rf cavities

    SciTech Connect

    Pischalnikov, Y.; Branlard, J.; Carcagno, R.; Chase, B.; Edwards, H.; Orris, D.; Makulski, A.; McGee, M.; Nehring, R.; Poloubotko, V.; Sylvester, C.; /Fermilab

    2007-06-01

    The technology for mechanically compensating Lorentz Force detuning in superconducting RF cavities has already been developed at DESY. One technique is based on commercial piezoelectric actuators and was successfully demonstrated on TESLA cavities [1]. Piezo actuators for fast tuners can operate in a frequency range up to several kHz; however, it is very important to maintain a constant static force (preload) on the piezo actuator in the range of 10 to 50% of its specified blocking force. Determining the preload force during cool-down, warm-up, or re-tuning of the cavity is difficult without instrumentation, and exceeding the specified range can permanently damage the piezo stack. A technique based on strain gauge technology for superconducting magnets has been applied to fast tuners for monitoring the preload on the piezoelectric assembly. The design and testing of piezo actuator preload sensor technology is discussed. Results from measurements of preload sensors installed on the tuner of the Capture Cavity II (CCII) [2] tested at FNAL are presented. These results include measurements during cool-down, warm-up, and cavity tuning along with dynamic Lorentz force compensation.

  2. Selection of Structures with Grid Optimization, in Multiagent Data Warehouse

    NASA Astrophysics Data System (ADS)

    Gorawski, Marcin; Bańkowski, Sławomir; Gorawski, Michał

    The query optimization problem is quite similar in database and data warehouse management systems. Changes to join sequences, projections, and selections, usage of indexes, and aggregations are all decided during the analysis of an execution schedule. The main goal of these changes is to decrease the query response time. The optimization operation is often dedicated to a single node. This paper proposes optimization for grid or cluster data warehouses/databases. Tests were conducted in a multi-agent environment, and the optimization focused not only on a single node but on the whole system as well. A new multi-criteria optimization idea based on user-given parameters is proposed here. Depending on query time, admissible result errors, and the level of system usage, task results were obtained along with grid optimization.

  3. Fast Simulation and Optimization Tool to Explore Selective Neural Stimulation

    PubMed Central

    Dali, Mélissa; Rossel, Olivier; Guiraud, David

    2016-01-01

    In functional electrical stimulation, selective stimulation of axons is desirable to activate a specific target, in particular a muscular function. This implies stimulating one fascicle without activating neighboring ones, i.e., being spatially selective. Spatial selectivity is achieved by the use of multicontact cuff electrodes over which the stimulation current is distributed. Because of the large number of parameters involved, numerical simulations provide a way to find and optimize the electrode configuration. The present work offers a computationally efficient scheme and an associated tool chain capable of simulating the electrode-nerve interface and finding the best spread of current to achieve spatial selectivity. PMID:27990231

  4. Fast Simulation and Optimization Tool to Explore Selective Neural Stimulation.

    PubMed

    Dali, Mélissa; Rossel, Olivier; Guiraud, David

    2016-06-13

    In functional electrical stimulation, selective stimulation of axons is desirable to activate a specific target, in particular a muscular function. This implies stimulating one fascicle without activating neighboring ones, i.e., being spatially selective. Spatial selectivity is achieved by the use of multicontact cuff electrodes over which the stimulation current is distributed. Because of the large number of parameters involved, numerical simulations provide a way to find and optimize the electrode configuration. The present work offers a computationally efficient scheme and an associated tool chain capable of simulating the electrode-nerve interface and finding the best spread of current to achieve spatial selectivity.

  5. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    PubMed

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation abilities. Discrete biogeography based optimization (DBBO) is then obtained by integrating the discrete migration model and the discrete mutation model. Finally, DBBO is used for feature selection, with three classifiers employed under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, the proposed method is tested on four benchmark breast cancer datasets. In comparison with a genetic algorithm, particle swarm optimization, a differential evolution algorithm, and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature in terms of the quality of the solutions obtained.

  6. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    DTIC Science & Technology

    2016-09-01

    PUBLIC SECTOR RESEARCH & DEVELOPMENT PORTFOLIO SELECTION PROCESS: A CASE STUDY OF QUANTITATIVE SELECTION AND OPTIMIZATION, by Jason A. Schwartz... describing how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost

  7. Training set optimization under population structure in genomic selection

    USDA-ARS?s Scientific Manuscript database

    The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...

  8. Multidimensional Adaptive Testing with Optimal Design Criteria for Item Selection

    ERIC Educational Resources Information Center

    Mulder, Joris; van der Linden, Wim J.

    2009-01-01

    Several criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the…

  9. Optimal Financial Aid Policies for a Selective University.

    ERIC Educational Resources Information Center

    Ehrenberg, Ronald G.; Sherman, Daniel R.

    1984-01-01

    This paper provides a model of optimal financial aid policies for a selective university. The model implies that the financial aid package to be offered to each category of admitted applicants depends on the elasticity of the fraction who accept offers of admission with respect to the financial aid package offered them. (Author/SSH)

  10. [Optimal selection method of technologies of medical wastes treatment].

    PubMed

    Zhou, Feng; Liu, Yong; Guo, Huai-cheng; Wang, Li-jing

    2006-06-01

    This paper investigates the definition, production, characteristics, and technical requirements of medical wastes (MW), which are decisive for properly selecting methods for medical waste treatment (MWT). Based on this, the advantages, disadvantages, and suitability of various treatment options are qualitatively analyzed and broadly compared, and four kinds of technologies, namely thermal treatment, autoclaving, chemical disinfection, and microwave disinfection, are chosen for further study. Moreover, a hierarchical decision-making model considering disposal status, economic level, policies, and international trends is set up, and the four candidate methods are assessed with it. The result indicates that thermal treatment technology is the optimal choice for medical waste treatment in Hangzhou city. The resulting optimal selection method for medical waste treatment provides strong support for choosing the best technology and should contribute to related research as well.

  11. Optimal selection of nodes to propagate influence on networks

    NASA Astrophysics Data System (ADS)

    Sun, Yifan

    2016-11-01

    How to optimize the spreading process on networks is a hot issue in complex networks, marketing, epidemiology, finance, etc. In this paper, we investigate the problem of locally optimizing the spreading: identifying a fixed number of nodes as seeds that maximize the propagation of influence to their direct neighbors. All nodes except the selected seeds are assumed not to spread their influence to their neighbors. This problem can be mapped onto a spin glass model with a fixed magnetization. We provide a message-passing algorithm, based on replica-symmetric mean-field theory in statistical physics, that can find a nearly optimal set of seeds. Extensive numerical results on computer-generated random networks and real-world networks demonstrate that this algorithm performs better than several other optimization algorithms.
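
    The paper's method is message-passing on a spin-glass mapping; as a rough illustration of the underlying combinatorial problem (maximizing influence over direct neighbors only), here is a simple greedy max-coverage sketch on a toy graph. The greedy rule and the graph are illustrative stand-ins, not the paper's algorithm:

```python
def greedy_seeds(adj, k):
    """Pick k seeds so the union of their direct neighbourhoods is as
    large as possible (greedy max-coverage).  Only a simple baseline
    for the local-spreading problem described above."""
    covered, seeds = set(), []
    for _ in range(k):
        # Pick the node adding the most not-yet-covered neighbours.
        best = max((n for n in adj if n not in seeds),
                   key=lambda n: len(set(adj[n]) - covered))
        seeds.append(best)
        covered |= set(adj[best])
    return seeds, covered

# Toy graph: node 0 touches 1, 2, 3; node 4 touches 5 and 6.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: [5, 6], 5: [4], 6: [4]}
seeds, covered = greedy_seeds(adj, 2)
print(seeds, sorted(covered))   # -> [0, 4] [1, 2, 3, 5, 6]
```

    Greedy coverage is a common baseline for this family of problems; the message-passing approach in the paper is what handles the correlations that greedy choices miss.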

  12. Efficient and scalable Pareto optimization by evolutionary local selection algorithms.

    PubMed

    Menczer, F; Degeratu, M; Street, W N

    2000-01-01

    Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection, and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
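
    A minimal sketch of the local-selection idea described above: fitness (here, "energy") accumulates from a finite shared resource, and each individual is compared only to a fixed threshold, never to the other agents. Bin keys, energy units, and parameter values are assumptions for illustration:

```python
def local_selection_step(population, resources, threshold, cost):
    """One generation of local selection: each individual draws energy
    from a finite shared resource bin, pays a living cost, and then
    reproduces or dies by comparing its own accumulated energy to a
    fixed threshold -- never to the other agents."""
    next_gen = []
    for agent in population:
        niche = agent["niche"]
        take = min(resources.get(niche, 0.0), 1.0)   # finite, shared
        resources[niche] = resources.get(niche, 0.0) - take
        agent["energy"] += take - cost
        if agent["energy"] >= threshold:             # reproduce: split energy
            agent["energy"] /= 2.0
            next_gen.extend([dict(agent), dict(agent)])
        elif agent["energy"] > 0.0:                  # survive
            next_gen.append(agent)
        # else: the agent dies and is dropped
    return next_gen

pop = [{"niche": i % 2, "energy": 1.0} for i in range(4)]
res = {0: 10.0, 1: 0.5}   # niche 1 is resource-poor
pop = local_selection_step(pop, res, threshold=2.0, cost=0.1)
print(len(pop), res)
```

    Because agents in crowded niches deplete their bin faster, diversity is maintained without any explicit fitness-sharing computation, which is the efficiency argument made above.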

  13. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  14. Tuner and radiation shield for planar electron paramagnetic resonance microresonators

    SciTech Connect

    Narkowicz, Ryszard; Suter, Dieter

    2015-02-15

    Planar microresonators provide a large boost of sensitivity for small samples. They can be manufactured lithographically to a wide range of target parameters. The coupler between the resonator and the microwave feedline can be integrated into this design. To optimize the coupling and to compensate manufacturing tolerances, it is sometimes desirable to have a tuning element available that can be adjusted when the resonator is connected to the spectrometer. This paper presents a simple design that allows one to bring undercoupled resonators into the condition for critical coupling. In addition, it also reduces radiation losses and thereby increases the quality factor and the sensitivity of the resonator.

  15. Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs)

    PubMed Central

    2015-01-01

    A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%). PMID:26819671

  16. Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs).

    PubMed

    Handlon, Anthony L; Schaller, Lee T; Leesnitzer, Lisa M; Merrihew, Raymond V; Poole, Chuck; Ulrich, John C; Wilson, Joseph W; Cadilla, Rodolfo; Turnbull, Philip

    2016-01-14

    A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%).

  17. Optimized LOWESS normalization parameter selection for DNA microarray data

    PubMed Central

    Berger, John A; Hautaniemi, Sampsa; Järvinen, Anna-Kaarina; Edgren, Henrik; Mitra, Sanjit K; Astola, Jaakko

    2004-01-01

    Background Microarray data normalization is an important step for obtaining data that are reliable and usable for subsequent analysis. One of the most commonly utilized normalization techniques is the locally weighted scatterplot smoothing (LOWESS) algorithm. However, a much overlooked concern with the LOWESS normalization strategy deals with choosing the appropriate parameters. Parameters are usually chosen arbitrarily, which may reduce the efficiency of the normalization and result in non-optimally normalized data. Thus, there is a need to explore LOWESS parameter selection in greater detail. Results and discussion In this work, we discuss how to choose parameters for the LOWESS method. Moreover, we present an optimization approach for obtaining the fraction of data points utilized in the local regression and analyze results for local print-tip normalization. The optimization procedure determines the bandwidth parameter for the local regression by minimizing a cost function that represents the mean-squared difference between the LOWESS estimates and the normalization reference level. We demonstrate the utility of the systematic parameter selection using two publicly available data sets. The first data set consists of three self versus self hybridizations, which allow for a quantitative study of the optimization method. The second data set contains a collection of DNA microarray data from a breast cancer study utilizing four breast cancer cell lines. Our results show that different parameter choices for the bandwidth window yield dramatically different calibration results in both studies. Conclusions Results derived from the self versus self experiment indicate that the proposed optimization approach is a plausible solution for estimating the LOWESS parameters, while results from the breast cancer experiment show that the optimization procedure is readily applicable to real-life microarray data normalization. In summary, the systematic approach to obtain critical
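
    The optimization described above, choosing the bandwidth fraction that minimizes the mean-squared difference between the smoothed estimate and the normalization reference level, can be sketched as follows. A tricube-weighted kernel smoother stands in for full LOWESS local regression, and the grid of candidate fractions is arbitrary:

```python
import random

def kernel_smooth(x, y, frac):
    """Tricube-weighted local mean: a lightweight stand-in for the
    LOWESS local regression used in the paper."""
    n = len(x)
    span = max(2, int(frac * n))
    est = []
    for xi in x:
        # Bandwidth = distance to the span-th nearest point.
        d = sorted(abs(xj - xi) for xj in x)[span - 1] or 1e-12
        w = [max(0.0, 1 - (abs(xj - xi) / d) ** 3) ** 3 for xj in x]
        est.append(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))
    return est

def best_frac(x, y, reference=0.0, grid=(0.2, 0.3, 0.4, 0.5, 0.7)):
    """Pick the bandwidth fraction whose estimate is closest, in mean
    squared error, to the normalization reference level, mirroring the
    cost function described in the abstract."""
    def mse(frac):
        est = kernel_smooth(x, y, frac)
        return sum((e - reference) ** 2 for e in est) / len(est)
    return min(grid, key=mse)

# Synthetic self-vs-self data: the true log-ratio is ~0 everywhere.
random.seed(1)
x = [i / 50 for i in range(50)]
y = [random.gauss(0.0, 0.1) for _ in x]
print(best_frac(x, y))
```

    On real two-channel data the reference level would come from the normalization model rather than being identically zero; the self-vs-self setup is what makes the cost function directly computable, as the abstract's first data set illustrates.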

  18. Optimization of the selective frequency damping parameters using model reduction

    NASA Astrophysics Data System (ADS)

    Cunha, Guilherme; Passaggia, Pierre-Yves; Lazareff, Marc

    2015-09-01

    In the present work, an optimization methodology to compute the best control parameters, χ and Δ, for the selective frequency damping method is presented. The optimization does not presuppose any a priori knowledge of the flow physics or of the underlying numerical methods, and it is especially suited to simulations requiring large numbers of grid elements and processors. It yields an optimal convergence rate to a steady state of the damped Navier-Stokes system. This is achieved using Dynamic Mode Decomposition, a snapshot-based method, to estimate the eigenvalues associated with globally unstable dynamics. Validation test cases are presented for the numerical configurations of a laminar flow past a 2D cylinder, a separated boundary layer over a shallow bump, and a 3D turbulent stratified Poiseuille flow.

  19. Improved Clonal Selection Algorithm Combined with Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Wang, Wei; Dai, Hongwei; Li, Fangjia; Tang, Zheng

    Both the clonal selection algorithm (CSA) and ant colony optimization (ACO) are inspired by natural phenomena and are effective tools for solving complex problems. CSA can exploit and explore the solution space in parallel and effectively; however, it cannot use enough environmental feedback information and thus performs many redundant searches. On the other hand, ACO is based on the concept of indirect cooperative foraging via secreted pheromones. Its positive feedback ability is attractive, but its convergence is slow because of the small initial pheromone deposits. In this paper, we propose a pheromone linker to combine these two algorithms. The proposed hybrid clonal selection and ant colony optimization (CSA-ACO) algorithm exploits the strengths of both algorithms while overcoming their inherent disadvantages. Simulation results on traveling salesman problems demonstrate the merit of the proposed algorithm over some traditional techniques.

  20. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    PubMed Central

    Faltermeier, Rupert; Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient over a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250

  1. Adjustable Bearing System with Selectively Optimized Installational Clearances

    DTIC Science & Technology

    1997-06-30

    Navy Case No. 78,325. PATENTS. ADJUSTABLE BEARING SYSTEM WITH SELECTIVELY OPTIMIZED INSTALLATIONAL CLEARANCES. BACKGROUND OF THE... clearance conditions. ... small range of clearances within which to accommodate various operational conditions. Thus, a very tight clearance is extremely difficult to achieve for certain installations or conditions such as quiet submarine control surface operation and

  2. Feature selection for optimized skin tumor recognition using genetic algorithms.

    PubMed

    Handels, H; Ross, T; Kreusch, J; Wolff, H H; Pöppl, S J

    1999-07-01

    In this paper, a new approach to computer-supported diagnosis of skin tumors in dermatology is presented. High-resolution skin surface profiles are analyzed to recognize malignant melanomas and nevocytic nevi (moles) automatically. In the first step, several types of features are extracted by 2D image analysis methods characterizing the structure of skin surface profiles: texture features based on cooccurrence matrices, Fourier features, and fractal features. Then, feature selection algorithms are applied to determine suitable feature subsets for the recognition process. Feature selection is formulated as an optimization problem, and several approaches, including heuristic strategies, greedy algorithms, and genetic algorithms, are compared. As the quality measure for feature subsets, the classification rate of the nearest-neighbor classifier computed with the leaving-one-out method is used. Genetic algorithms show the best results. Finally, neural networks with error back-propagation as the learning paradigm are trained using the selected feature sets. Different network topologies, learning parameters, and pruning algorithms are investigated to optimize the classification performance of the neural classifiers. With the optimized recognition system, a classification performance of 97.7% is achieved.
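
    A compact sketch of the winning combination reported above, a genetic algorithm scored by the leave-one-out nearest-neighbor classification rate, run on synthetic data. The GA operators and parameter values are illustrative choices, not those of the paper:

```python
import random

def loo_knn_accuracy(X, y, mask):
    """Leave-one-out 1-NN classification rate on the feature subset
    given by `mask` -- the subset quality measure named above."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    hits = 0
    for i in range(len(X)):
        best, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best_d:
                best, best_d = j, d
        hits += y[best] == y[i]
    return hits / len(X)

def ga_select(X, y, n_feat, pop=12, gens=15, p_mut=0.1, seed=0):
    """A minimal GA over feature masks (bit strings): elitism,
    truncation parent choice, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    P = [[rng.random() < 0.5 for _ in range(n_feat)] for _ in range(pop)]
    fit = lambda m: loo_knn_accuracy(X, y, m)
    for _ in range(gens):
        scored = sorted(P, key=fit, reverse=True)
        P = scored[:2]                           # keep the two elites
        while len(P) < pop:
            a, b = rng.sample(scored[:6], 2)     # parents from the top
            cut = rng.randrange(1, n_feat)
            child = a[:cut] + b[cut:]
            P.append([(not g) if rng.random() < p_mut else g
                      for g in child])
    return max(P, key=fit)

# Toy data: only feature 0 separates the classes; 1 and 2 are noise.
rng = random.Random(42)
X = [[cls * 4 + rng.gauss(0, 0.3), rng.gauss(0, 1), rng.gauss(0, 1)]
     for cls in (0, 1) for _ in range(10)]
y = [cls for cls in (0, 1) for _ in range(10)]
best = ga_select(X, y, n_feat=3)
print(best, loo_knn_accuracy(X, y, best))
```

    The leave-one-out wrapper makes each fitness evaluation expensive, which is exactly why the paper compares cheaper heuristic and greedy strategies against the GA.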

  3. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
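
    As a self-contained illustration of the idea above, that the choice of model and its hyperparameters form one joint search space, the sketch below uses plain random search; Hyperopt's `fmin` would replace the sampler with TPE, but the sample / evaluate / keep-the-best loop is the same shape. The space encoding and the loss surface are made up for this example:

```python
import random

def sample(space, rng):
    """Draw one configuration from a nested search space.  A 'choice'
    node picks a branch (e.g. which classifier), which is how model
    selection becomes one large hyperparameter optimization problem."""
    if space[0] == "choice":
        return sample(rng.choice(space[1]), rng)
    _, lo, hi, name = space              # ("uniform", lo, hi, name)
    return {name: rng.uniform(lo, hi)}

def minimize(objective, space, max_evals=200, seed=0):
    """Random-search stand-in for sequential model-based optimization."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(max_evals):
        cfg = sample(space, rng)
        loss = objective(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Joint space: choose a "model" and its hyperparameter together.
space = ("choice", [("uniform", 0.0, 10.0, "svm_C"),
                    ("uniform", 0.0, 10.0, "knn_k")])

def loss(cfg):                           # hypothetical loss surface
    if "svm_C" in cfg:
        return (cfg["svm_C"] - 3.0) ** 2
    return 1.0 + (cfg["knn_k"] - 5.0) ** 2

cfg, val = minimize(loss, space)
print(cfg, round(val, 4))                # the "svm" branch wins
```

    TPE's advantage over this random sampler is precisely its efficiency per function evaluation, which matters when each evaluation trains a slow model.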

  4. Theoretical Analysis of Triple Liquid Stub Tuner Impedance Matching for ICRH on Tokamaks

    NASA Astrophysics Data System (ADS)

    Du, Dan; Gong, Xueyu; Yin, Lan; Xiang, Dong; Li, Jingchun

    2015-12-01

    Impedance matching is crucial for continuous-wave operation of ion cyclotron resonance heating (ICRH) antennae with high power injected into plasmas. A sudden increase in the reflected radio frequency power due to an impedance mismatch of the ICRH system is an issue that must be solved for present-day and future fusion reactors. This paper presents a method for theoretical analysis of ICRH system impedance matching for a triple liquid stub tuner under plasma operational conditions. The relationship of the antenna input impedance to the plasma parameters and operating frequency is first obtained using a global solution. Then, the relations of the plasma parameters and operating frequency to the matching liquid heights are obtained indirectly through numerical simulation according to transmission line theory and the matching conditions. The method provides an alternative theoretical route, rather than measurements, to study triple liquid stub tuner impedance matching for ICRH, which may benefit the design of ICRH systems on tokamaks. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2014GB108002, 2013GB107001), the National Natural Science Foundation of China (Nos. 11205086, 11205053, 11375085, and 11405082), the Construct Program of Fusion and Plasma Physics Innovation Team in Hunan Province, China (No. NHXTD03), and the Natural Science Foundation of Hunan Province, China (No. 2015JJ4044).

  5. ICRF antenna matching system with ferrite tuners for the Alcator C-Mod tokamak

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Binus, A.; Wukitch, S. J.; Koert, P.; Murray, R.; Pfeiffer, A.

    2015-12-01

    Real-time fast ferrite tuning (FFT) has been successfully implemented on the ICRF antennas on Alcator C-Mod. The former prototypical FFT system on the E-port 2-strap antenna has been upgraded using new ferrite tuners that have been designed specifically for the operational parameters of the Alcator C-Mod ICRF system (˜ 80 MHz). Another similar FFT system, with two ferrite tuners and one fixed-length stub, has been installed on the transmission line of the D-port 2-strap antenna. These two systems share a Linux-server-based real-time controller. These FFT systems are able to achieve and maintain the reflected power to the transmitters to less than 1% in real time during the plasma discharges under almost all plasma conditions, and help ensure reliable high power operation of the antennas. The innovative field-aligned (FA) 4-strap antenna on J-port has been found to have an interesting feature of loading insensitivity vs. plasma conditions. This feature allows us to significantly improve the matching for the FA J-port antenna by installing carefully designed stubs on the two transmission lines. The reduction of the RF voltages in the transmission lines has enabled the FA J-port antenna to deliver 3.7 MW RF power to plasmas out of the 4 MW source power in high performance I-mode plasmas.

  6. Optimization of topical gels with betamethasone dipropionate: selection of gel forming and optimal cosolvent system.

    PubMed

    Băiţan, Mariana; Lionte, Mihaela; Moisuc, Lăcrămioara; Gafiţanu, Eliza

    2011-01-01

    The purpose of these studies was to develop a 0.05% betamethasone gel with physical-chemical stability and good release properties. The preliminary studies were designed to select the gel-forming agents and the excipients compatible with betamethasone dipropionate. In order to formulate a clear gel without particles of suspended drug substance, a solvent system for the drug substance was selected. The content of drug substance released, together with rheological and in vitro release tests, served as the tools for selecting the optimal formulation. A stable carbomer gel was obtained by solubilizing betamethasone dipropionate in a vehicle composed of 40% PEG 400, 10% ethanol, and 5% Transcutol.

  7. State-Selective Excitation of Quantum Systems via Geometrical Optimization.

    PubMed

    Chang, Bo Y; Shin, Seokmin; Sola, Ignacio R

    2015-09-08

    We lay out the foundations of a general method of quantum control via geometrical optimization. We apply the method to state-selective population transfer using ultrashort transform-limited pulses between manifolds of levels that may represent, e.g., state-selective transitions in molecules. Assuming that certain states can be prepared, we develop three implementations: (i) preoptimization, which implies engineering the initial state within the ground manifold or electronic state before the pulse is applied; (ii) postoptimization, which implies engineering the final state within the excited manifold or target electronic state, after the pulse; and (iii) double-time optimization, which uses both types of time-ordered manipulations. We apply the schemes to two important dynamical problems: To prepare arbitrary vibrational superposition states on the target electronic state and to select weakly coupled vibrational states. Whereas full population inversion between the electronic states only requires control at initial time in all of the ground vibrational levels, only very specific superposition states can be prepared with high fidelity by either pre- or postoptimization mechanisms. Full state-selective population inversion requires manipulating the vibrational coherences in the ground electronic state before the optical pulse is applied and in the excited electronic state afterward, but not during all times.

  8. A new approach to the optimal target selection problem

    NASA Astrophysics Data System (ADS)

    Elson, E. C.; Bassett, B. A.; van der Heyden, K.; Vilakazi, Z. Z.

    2007-03-01

    Context: This paper addresses a common problem in astronomy and cosmology: to optimally select a subset of targets from a larger catalog. A specific example is the selection of targets from an imaging survey for multi-object spectrographic follow-up. Aims: We present a new heuristic optimisation algorithm, HYBRID, for this purpose and undertake detailed studies of its performance. Methods: HYBRID combines elements of the simulated annealing, MCMC and particle-swarm methods and is particularly successful in cases where the survey landscape has multiple curvature or clustering scales. Results: HYBRID consistently outperforms the other methods, especially in high-dimensionality spaces with many extrema. This means many fewer simulations must be run to reach a given performance confidence level and implies very significant advantages in solving complex or computationally expensive optimisation problems. Conclusions: HYBRID outperforms both MCMC and SA in all cases, including optimisation of high-dimensional continuous surfaces, indicating that HYBRID is useful far beyond the specific problem of optimal target selection. Future work will apply HYBRID to target selection for the new 10 m Southern African Large Telescope in South Africa.

  9. An R-D optimized transcoding resilient motion vector selection

    NASA Astrophysics Data System (ADS)

    Aminlou, Alireza; Semsarzadeh, Mehdi; Fatemi, Omid

    2014-12-01

    Selection of the motion vector (MV) has a significant impact on the quality of an encoded, and particularly a transcoded, video in terms of rate-distortion (R-D) performance. The conventional motion estimation process in most existing video encoders ignores the rate of the residuals, utilizing only the rate and distortion of the motion compensation step. This approach implies that the selected MV depends on the quantization parameter. Hence, an MV selected for high-bit-rate compression may not be suitable for low bit rates when transcoding the video with the motion-information-reuse technique, resulting in R-D performance degradation. In this paper, we propose an R-D optimized motion selection criterion that takes into account the effect of the residual rate in the MV selection process. Based on the proposed criterion, a new two-piece Lagrange multiplier selection is introduced for the motion estimation process. Analytical evaluations indicate that our proposed scheme results in MVs that are less sensitive to changes in bit rate or quantization parameter. As a result, MVs in the encoded bitstream may be used even after the encoded sequence has been transcoded to a lower bit rate using re-quantization. Simulation results indicate that the proposed technique improves the quality of coding and transcoding without any computational overhead.
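
    The criterion change described above can be sketched in a few lines: the conventional cost drops the residual-rate term, while a residual-aware Lagrangian cost includes it, which can flip which MV wins. All numbers below are hypothetical:

```python
def rd_cost(dist, rate_mv, rate_resid, lam):
    """Lagrangian rate-distortion cost J = D + lambda * (R_mv + R_resid).
    Setting rate_resid to zero recovers the conventional criterion."""
    return dist + lam * (rate_mv + rate_resid)

def select_mv(candidates, lam, use_resid=True):
    """candidates: (mv, dist, rate_mv, rate_resid) tuples with made-up
    numbers; returns the mv minimizing the chosen cost."""
    return min(candidates,
               key=lambda c: rd_cost(c[1], c[2],
                                     c[3] if use_resid else 0.0, lam))[0]

# mv_b has slightly worse distortion but a much cheaper residual, so
# only the residual-aware cost prefers it.
cands = [("mv_a", 100.0, 8.0, 40.0),
         ("mv_b", 110.0, 8.0, 10.0)]
print(select_mv(cands, lam=1.0, use_resid=False))  # -> mv_a
print(select_mv(cands, lam=1.0, use_resid=True))   # -> mv_b
```

    The two-piece Lagrange multiplier in the paper refines how `lam` itself is chosen; this sketch only shows why the residual-rate term matters for transcoding-resilient MVs.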

  10. Field of view selection for optimal airborne imaging sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Barnard, P. Werner; Fildis, Halidun; Erbudak, Mustafa; Senger, Tolga; Alpman, Mehmet E.

    2014-05-01

    The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has a major impact on the overall performance of the system. A market survey of published data on sensors used in stabilized airborne targeting systems shows a trend of ever narrowing FOVs housed in smaller and lighter volumes. This approach promotes the ever increasing geometric resolution provided by narrower FOVs, while seemingly ignoring the influence the FOV selection has on the sensor's sensitivity, the effects of diffraction, the influence of sight line jitter, and collectively the overall system performance. This paper presents a trade-off methodology to select the optimal FOV for an imaging sensor that is limited in aperture diameter by mechanical constraints (such as space/volume available and window size) by balancing the influences FOV has on sensitivity and resolution, thereby optimizing the system's performance. The methodology may be applied to staring array based imaging sensors across all wavebands, from visible/day cameras through to long wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given that highlight the performance advantages to be gained by maximizing the aperture diameter and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.

  11. Optimal control of mode-selective femtochemistry in multidimensional systems

    SciTech Connect

    Mitric, Roland; Bonacic-Koutecky, Vlasta

    2007-09-15

    We present a strategy for optimal control of the ground-state dynamics in multidimensional systems based on a combination of the semiclassical Wigner distribution approach with direct quantum chemical molecular dynamics (MD) 'on the fly'. This allows one to treat all degrees of freedom without the need for precalculation of global potential energy surfaces. We demonstrate the scope of our theoretical procedure on two prototype systems representing rigid symmetrical molecules (Na{sub 3}F) and flexible biomolecules with low-frequency modes (glycine). We show that the ground-state isomerization process can be selectively driven by ultrashort laser pulses with different shapes which are characteristic of the prototype systems. Thus, our method opens perspectives for control of the functionality of biomolecules. Moreover, assignment of the underlying processes to pulse shapes based on MD allows one to use optimal control as a tool for analysis.

  12. Development of a movable plunger tuner for the high-power RF cavity for the PEP-II B-factory

    SciTech Connect

    Schwarz, H.D.; Fant, K.; Judkins, J.G.

    1997-05-01

    A 10 cm diameter by 5 cm travel plunger tuner was developed for the PEP-II RF copper cavity system. The single-cell cavity, including the tuner, is designed to operate at up to 150 kW of dissipated RF power. The tuner's spring fingers are specially placed 8.5 cm away from the inside wall of the cavity to avoid fundamental and higher order mode resonances. The spring fingers are made of dispersion-strengthened copper to accommodate relatively high heating. The design, alignment, testing and performance of the tuner are described.

  13. Designing Pareto-optimal selection systems: formalizing the decisions required for selection system development.

    PubMed

    De Corte, Wilfried; Sackett, Paul R; Lievens, Filip

    2011-09-01

    The article presents an analytic method for designing Pareto-optimal selection systems where the applicants belong to a mixture of candidate populations. The method is useful in both applied and research settings. In an applied context, the present method is the first to assist the selection practitioner when deciding on 6 major selection design issues: (1) the predictor subset, (2) the selection rule, (3) the selection staging, (4) the predictor sequencing, (5) the predictor weighting, and (6) the stage retention decision issue. From a research perspective, the method offers a unique opportunity for studying the impact and relative importance of different strategies for reducing adverse impact. PsycINFO Database Record (c) 2011 APA, all rights reserved

  14. Key elements in optimizing catalyst selections for resid FCC units

    SciTech Connect

    Yanik, S.J.; O'Connor, P.

    1995-09-01

    Achieving the optimum activity and yield structure from a commercial Resid FCC Unit (RFCC) is essential to maximizing profitability in today's modern refinery. Proper catalyst selection is a key element in this optimization. This paper is written to provide FCC Process Engineers with an understanding of some basic elements of RFCC operation. The necessity of using realistic evaluation methods to assure proper RFCC catalyst selection is explained. The differences between Activity limited and Delta Coke limited RFCC operations are elucidated and the related catalyst performance requirements are discussed. The effect of the catalyst-to-oil ratio on conversion and on catalyst site utilization and poisoning plays a key role in the transition of an RFCC unit from a Catalyst Activity limited regime to a Cat-to-Oil limited regime. For the Activity limited operation, the catalyst's resistance to poisons with the appropriate feedstock will be the most important selection criterion. For the Delta Coke limited operation, a reduction of the commercial delta coke of the catalyst will be crucial. The types of commercial delta coke are discussed and methods for their evaluation are suggested. In both cases the use of realistic catalyst evaluation methods and feedstock will be essential in order to arrive at the correct catalyst selection. Finally, commercial data comparisons illustrate the improvements in product value that can be achieved when the proper catalyst is chosen.

  15. Optimal Selection of Threshold Value 'r' for Refined Multiscale Entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2015-12-01

    The refined multiscale entropy (RMSE) technique was introduced to evaluate the complexity of a time series over multiple scale factors 't'. Here, the threshold value 'r' is set to 0.15 times the SD of the filtered scaled time series. The use of a fixed threshold value 'r' in RMSE sometimes assigns closely resembling entropy values to certain time series at certain temporal scale factors and is unable to distinguish different time series optimally. The present study evaluates the RMSE technique by varying the threshold value 'r' from 0.05 to 0.25 times the SD of the filtered scaled time series and finding the optimal 'r' values for each scale factor at which different time series can be distinguished most effectively. The proposed RMSE was evaluated over HRV time series of normal sinus rhythm subjects, patients suffering from sudden cardiac death and congestive heart failure, and healthy adult male, healthy adult female and mid-aged female groups, as well as over a synthetic simulated database, for data lengths 'N' of 3000, 3500 and 4000. The proposed RMSE results in improved discrimination among different time series. To enhance computational capability, empirical mathematical equations have been formulated for the optimal selection of threshold values 'r' as a function of the SD of the filtered scaled time series and data length 'N' for each scale factor 't'.
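To illustrate how the tolerance 'r' enters, here is plain sample entropy on a coarse-grained series (ordinary multiscale entropy; the authors' refined variant additionally low-pass filters before downsampling, which is omitted here):

```python
import numpy as np

def coarse_grain(x, t):
    """Average non-overlapping windows of length t (scale factor 't')."""
    n = len(x) // t
    return np.asarray(x[: n * t], float).reshape(n, t).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy with tolerance r = r_factor * SD(x); raising or
    lowering r_factor is the knob the paper studies."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    N = len(x)
    xm = np.array([x[i:i + m] for i in range(N - m)])
    xm1 = np.array([x[i:i + m + 1] for i in range(N - m)])
    # count template matches within Chebyshev distance r (minus self-matches)
    B = sum(int(np.sum(np.max(np.abs(xm - v), axis=1) <= r)) - 1 for v in xm)
    A = sum(int(np.sum(np.max(np.abs(xm1 - v), axis=1) <= r)) - 1 for v in xm1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
regular = np.sin(2 * np.pi * np.arange(600) / 30)   # predictable series
noise = rng.standard_normal(600)                    # irregular series
e_reg = sample_entropy(coarse_grain(regular, 2))
e_noise = sample_entropy(coarse_grain(noise, 2))
```

A regular signal yields many repeated templates and hence low entropy; white noise yields high entropy, regardless of the particular 'r' chosen within a sensible range.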

  16. Optimizing Hammermill Performance Through Screen Selection and Hammer Design

    SciTech Connect

    Neal A. Yancey; Tyler L. Westover; Christopher T. Wright

    2013-01-01

    Background: Mechanical preprocessing, which includes particle size reduction and mechanical separation, is one of the primary operations in the feedstock supply system for a lignocellulosic biorefinery. It is the means by which raw biomass from the field or forest is mechanically transformed into an on-spec feedstock with characteristics better suited for the fuel conversion process. Results: This work provides a general overview of the objectives and methodologies of mechanical preprocessing and then presents experimental results illustrating (1) improved size reduction via optimization of hammer mill configuration, (2) improved size reduction via pneumatic-assisted hammer milling, and (3) improved control of particle size and particle size distribution through proper selection of grinder process parameters. Conclusion: Optimal grinder configuration for maximal process throughput and efficiency is strongly dependent on feedstock type and properties, such as moisture content. Tests conducted using a HG200 hammer grinder indicate that increasing the tip speed, optimizing hammer geometry, and adding pneumatic assist can increase grinder throughput as much as 400%.

  17. Brachytherapy for clinically localized prostate cancer: optimal patient selection.

    PubMed

    Kollmeier, Marisa A; Zelefsky, Michael J

    2011-10-01

    The objective of this review is to present an overview of each modality and delineate how to best select patients who are optimal candidates for these treatment approaches. Prostate brachytherapy as a curative modality for clinically localized prostate cancer has become increasingly utilized over the past decade; 25% of all early cancers are now treated this way in the United States (1). The popularity of this treatment strategy lies in the highly conformal nature of radiation dose, low morbidity, patient convenience, and high efficacy rates. Prostate brachytherapy can be delivered by either a permanent interstitial radioactive seed implantation (low dose rate [LDR]) or a temporary interstitial insertion of iridium-192 (Ir192) afterloading catheters. The objective of both of these techniques is to deliver a high dose of radiation to the prostate gland while exposing normal surrounding tissues to minimal radiation dose. Brachytherapy techniques are ideal to achieve this goal given the close proximity of the radiation source to tumor and sharp fall off of the radiation dose cloud proximate to the source. Brachytherapy provides a powerful means of delivering dose escalation above and beyond that achievable with intensity-modulated external beam radiotherapy alone. Careful selection of appropriate patients for these therapies, however, is critical for optimizing both disease-related outcomes and treatment-related toxicity.

  18. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
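One way to read the proposed idea in code: linearize the system, take the fastest eigenvalue from the modal analysis, and size the subinterval from it. A sketch (the 2/|λ| bound is the explicit-Euler stability limit; the safety factor and toy matrix are assumptions, not the paper's values):

```python
import numpy as np

def select_subinterval(A, safety=0.1):
    """Pick a subinterval time step from the fastest mode of the
    linearized system x' = A x. For explicit Euler, stability requires
    dt < 2 / |lambda_max|; a safety factor leaves margin for accuracy."""
    eigvals = np.linalg.eigvals(A)
    fastest = max(abs(eigvals))          # magnitude of the fastest mode
    return safety * 2.0 / fastest

# stiff toy system: a slow mode (-1) and a fast local mode (-100)
A = np.array([[-1.0, 0.0],
              [0.0, -100.0]])
dt = select_subinterval(A)
```

The subinterval is dictated entirely by the fast local mode (here 0.1 × 2/100 = 0.002), while the full integration step can remain sized for the slow dynamics, which is where the computational saving comes from.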

  19. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  20. A CMOS Sub-GHz Wideband Low-Noise Amplifier for Digital TV Tuner Applications

    NASA Astrophysics Data System (ADS)

    Cha, Hyouk-Kyu

    A high-performance, highly integrated sub-GHz wideband differential low-noise amplifier (LNA) for terrestrial and cable digital TV tuner applications is realized in 0.18 µm CMOS technology. A noise-canceling topology using a feed-forward current-reuse common-source stage is presented to obtain low noise characteristics and high gain while achieving good wideband input matching within 48-860 MHz. In addition, linearization methods are appropriately utilized to improve the linearity. The implemented LNA achieves a power gain of 20.9 dB, a minimum noise figure of 2.8 dB, and an OIP3 of 24.2 dBm. The chip consumes 32 mA of current from a 1.8 V power supply and the core die size is 0.21 mm².

  1. Digital Down Conversion Technology for Tevatron Beam Line Tuner at FNAL

    SciTech Connect

    Schappert, W.; Lorman, E.; Scarpine, V.; Ross, M.C.; Sebek, J.; Straumann, T.; /Fermilab /SLAC

    2008-03-17

    Fermilab is presently in Run II collider operations and is developing instrumentation to improve luminosity. Improving the orbit matching between accelerator components using a Beam Line Tuner (BLT) can improve the luminosity. Digital Down Conversion (DDC) has been proposed as a method for making more accurate beam position measurements. Fermilab has implemented a BLT system using a DDC technique to measure orbit oscillations during injections from the Main Injector to the Tevatron. The output of a fast ADC is downconverted and filtered in software. The system measures the x and y positions, the intensity, and the time of arrival for each proton or antiproton bunch, on a turn-by-turn basis, during the first 1024 turns immediately following injection. We present results showing position, intensity, and time of arrival for both injected and coasting beam. Initial results indicate a position resolution of ~20 to 40 microns and a phase resolution of ~25 ps.
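Digital down conversion itself is standard practice: multiply the ADC samples by a complex local oscillator at the carrier frequency, then low-pass filter, leaving the slowly varying amplitude and phase. A minimal sketch (sample rate, frequency and phase are illustrative, not the Tevatron system's):

```python
import numpy as np

fs = 1000.0                      # sample rate, Hz (illustrative)
f0 = 50.0                        # carrier frequency under study, Hz
phi = 0.7                        # true signal phase, rad
n = np.arange(1000)
x = np.cos(2 * np.pi * f0 * n / fs + phi)    # real ADC samples

# mix to baseband with a complex local oscillator ...
lo = np.exp(-2j * np.pi * f0 * n / fs)
baseband = x * lo

# ... then low-pass: averaging over an integer number of carrier
# cycles rejects the 2*f0 image term exactly
z = baseband.mean()
amplitude = 2 * abs(z)           # recovered carrier amplitude
phase = np.angle(z)              # recovered carrier phase, rad
```

A real system would replace the simple mean with a proper FIR low-pass and decimation stage, but the amplitude/phase extraction principle is the same.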

  2. A dual molecular analogue tuner for dissecting protein function in mammalian cells

    PubMed Central

    Brosh, Ran; Hrynyk, Iryna; Shen, Jessalyn; Waghray, Avinash; Zheng, Ning; Lemischka, Ihor R.

    2016-01-01

    Loss-of-function studies are fundamental for dissecting gene function. Yet, methods to rapidly and effectively perturb genes in mammalian cells, and particularly in stem cells, are scarce. Here we present a system for simultaneous conditional regulation of two different proteins in the same mammalian cell. This system harnesses the plant auxin and jasmonate hormone-induced degradation pathways, and is deliverable with only two lentiviral vectors. It combines RNAi-mediated silencing of two endogenous proteins with the expression of two exogenous proteins whose degradation is induced by external ligands in a rapid, reversible, titratable and independent manner. By engineering molecular tuners for NANOG, CHK1, p53 and NOTCH1 in mammalian stem cells, we have validated the applicability of the system and demonstrated its potential to unravel complex biological processes. PMID:27230261

  3. Ant colony optimization with selective evaluation for feature selection in character recognition

    NASA Astrophysics Data System (ADS)

    Oh, Il-Seok; Lee, Jin-Seon

    2010-01-01

    This paper analyzes the size characteristics of the character recognition domain with the aim of developing a feature selection algorithm adequate for the domain. Based on the results, we further analyze the timing requirements of three popular feature selection algorithms: the greedy algorithm, the genetic algorithm, and ant colony optimization (ACO). For a rigorous timing analysis, we adopt the concept of the atomic operation. We propose a novel scheme called selective evaluation to improve the convergence of ACO. The scheme cuts down the computational load by excluding the evaluation of unnecessary or less promising candidate solutions. The scheme is realizable in ACO thanks to the pheromone trail, valuable information that helps identify those solutions. Experimental results showed that ACO with selective evaluation was promising both in timing requirements and recognition performance.
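The core trick can be mimicked outside a full ACO loop: score candidate feature subsets by their pheromone sum and spend fitness evaluations only on the most promising fraction. A toy sketch (the pheromone values, fitness function and fractions are made-up, and the full ACO construction/update cycle is omitted):

```python
import random

def selective_evaluation(pheromone, fitness, n_candidates=50, frac=0.2, seed=0):
    """Sketch of selective evaluation: build candidate feature subsets
    by pheromone-guided sampling, rank them by pheromone score, and
    evaluate the fitness of only the top fraction."""
    rng = random.Random(seed)
    n_feat = len(pheromone)
    top = max(pheromone)
    candidates = []
    for _ in range(n_candidates):
        # include each feature with probability proportional to its pheromone
        subset = frozenset(i for i in range(n_feat)
                           if rng.random() < pheromone[i] / top)
        candidates.append(subset)
    # the pheromone score identifies promising candidates without evaluation
    candidates.sort(key=lambda s: sum(pheromone[i] for i in s), reverse=True)
    n_eval = max(1, int(frac * n_candidates))
    evaluated = [(fitness(s), s) for s in candidates[:n_eval]]
    return max(evaluated, key=lambda t: t[0]), n_eval

pheromone = [0.9, 0.8, 0.1, 0.1, 0.1, 0.1]      # learned trail (made up)
useful = {0, 1}                                  # features that actually help
fitness = lambda s: len(s & useful) - 0.1 * len(s)
(best_fit, best_subset), n_eval = selective_evaluation(pheromone, fitness)
```

Only 20% of the candidates are ever passed to the (in practice expensive) recognizer-based fitness function, which is the timing saving the paper reports.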

  4. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models. Conclusions We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
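The paper uses a k-NN estimator; a simpler histogram-based Jensen-Shannon divergence between two 1-D predictive samples conveys the design criterion just as well (binning choices and the sample distributions below are assumptions for illustration):

```python
import numpy as np

def js_divergence(samples_a, samples_b, bins=30):
    """Histogram estimate of the Jensen-Shannon divergence between two
    1-D predictive samples. 0 means indistinguishable predictions; the
    maximum, log 2, means an experiment that fully separates the models."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    p, _ = np.histogram(samples_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_b, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0              # 0 * log(0/b) = 0 by convention
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(1)
model_1 = rng.normal(0.0, 1.0, 5000)     # predictive draws, model 1
model_2 = rng.normal(3.0, 1.0, 5000)     # predictive draws, model 2
jsd_same = js_divergence(model_1, model_1)
jsd_between = js_divergence(model_1, model_2)
```

In the design loop one would compute this divergence per candidate experiment and pick the experiment with the largest value, i.e. the one whose predicted outcomes differ most between the competing models.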

  5. Portfolio optimization for seed selection in diverse weather scenarios

    PubMed Central

    Brdar, Sanja; Panić, Marko; Šašić, Isidora; Despotović, Danica; Knežević, Milivoje; Crnojević, Vladimir

    2017-01-01

    The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties, that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017. PMID:28863173

  6. Portfolio optimization for seed selection in diverse weather scenarios.

    PubMed

    Marko, Oskar; Brdar, Sanja; Panić, Marko; Šašić, Isidora; Despotović, Danica; Knežević, Milivoje; Crnojević, Vladimir

    2017-01-01

    The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties, that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017.
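The yield-versus-stability trade-off described in the abstract is classic mean-variance portfolio selection; a minimal sketch over a toy yield matrix (the varieties, yields and risk-aversion weight are invented, and the real solution optimizes over continuous weights rather than a short candidate list):

```python
import numpy as np

def portfolio_score(weights, yields, risk_aversion=1.0):
    """Mean-variance objective: expected yield across weather scenarios
    minus a penalty on yield variance (instability)."""
    w = np.asarray(weights, float)
    mu = yields.mean(axis=1)              # mean yield per variety
    cov = np.cov(yields)                  # covariance across scenarios
    return float(w @ mu - risk_aversion * (w @ cov @ w))

# rows: varieties, columns: historical weather scenarios (made-up yields)
yields = np.array([
    [50.0, 52.0, 49.0, 51.0],   # steady variety
    [70.0, 30.0, 65.0, 35.0],   # high peaks, very unstable
    [55.0, 54.0, 56.0, 53.0],   # steady, slightly better mean
])

# compare a few candidate allocations summing to 1
candidates = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.5, 0, 0.5)]
best = max(candidates,
           key=lambda w: portfolio_score(w, yields, risk_aversion=10.0))
```

With a strong stability penalty, the blend of the two steady, negatively correlated varieties beats any single variety, which is the diversification effect the portfolio framing buys.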

  7. Optimizing Site Selection in Urban Areas in Northern Switzerland

    NASA Astrophysics Data System (ADS)

    Plenkers, K.; Kraft, T.; Bethmann, F.; Husen, S.; Schnellmann, M.

    2012-04-01

    There is a need to observe weak seismic events (M<2) in areas close to potential nuclear-waste repositories or nuclear power plants, in order to analyze the underlying seismo-tectonic processes and estimate their seismic hazard. We are therefore densifying the existing Swiss Digital Seismic Network in northern Switzerland with an additional 20 stations. The new network, which will be in operation by the end of 2012, aims at observing seismicity in northern Switzerland with a completeness of M_c=1.0 and a location error < 0.5 km in epicenter and < 2 km in focal depth. Monitoring of weak seismic events in this region is challenging, because the area of interest is densely populated and the geology is dominated by the Swiss molasse basin. An optimal network design and a thoughtful choice of station sites are, therefore, mandatory. To help with decision making we developed a step-wise approach to find the optimum network configuration. Our approach is based on standard network optimization techniques regarding the localization error. As a new feature, our approach uses an ambient noise model to compute expected signal-to-noise ratios for a given site. The ambient noise model uses information on land use and major infrastructures such as highways and train lines. We ran a series of network optimizations with an increasing number of stations until the requirements regarding localization error and magnitude of completeness were reached. The resulting network geometry serves as input for the site selection. Site selection is done using a newly developed multi-step assessment scheme that takes into account local noise level, geology, infrastructure, and the costs necessary to realize the station. The assessment scheme weights the different parameters and the most promising sites are identified. In a first step, all potential sites are classified based on information from topographic maps and site inspection. In a second step, local noise conditions are measured at selected sites. We

  8. Influenza B vaccine lineage selection--an optimized trivalent vaccine.

    PubMed

    Mosterín Höpping, Ana; Fonville, Judith M; Russell, Colin A; James, Sarah; Smith, Derek J

    2016-03-18

    Epidemics of seasonal influenza viruses cause considerable morbidity and mortality each year. Various types and subtypes of influenza circulate in humans and evolve continuously such that individuals at risk of serious complications need to be vaccinated annually to keep protection up to date with circulating viruses. The influenza vaccine in most parts of the world is a trivalent vaccine, including an antigenically representative virus of recently circulating influenza A/H3N2, A/H1N1, and influenza B viruses. However, since the 1970s influenza B has split into two antigenically distinct lineages, only one of which is represented in the annual trivalent vaccine at any time. We describe a lineage selection strategy that optimizes protection against influenza B using the standard trivalent vaccine as a potentially cost effective alternative to quadrivalent vaccines.

  9. Selection of an optimal treatment method for acute periodontitis disease.

    PubMed

    Aliev, Rafik A; Aliyev, B F; Gardashova, Latafat A; Huseynov, Oleg H

    2012-04-01

    The present paper is devoted to the selection of an optimal treatment method for acute periodontitis by using a fuzzy Choquet integral-based approach. We consider the application of different treatment methods depending on the development stages and symptoms of the disease. The effectiveness of application of different treatment methods in each stage of the disease is linguistically evaluated by a dentist. The stages of the disease are also linguistically described by a dentist. The dentist's linguistic evaluations are represented by fuzzy sets. The total effectiveness of each considered treatment method is calculated by using a fuzzy Choquet integral with a fuzzy number-valued integrand and a fuzzy number-valued fuzzy measure. The most effective treatment method is determined by using a fuzzy ranking method.
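The crisp discrete Choquet integral underlying the fuzzy version can be sketched as follows; the fuzzy-number arithmetic of the paper is omitted, and the criterion scores and measure values are invented for illustration:

```python
def choquet(values, measure):
    """Discrete Choquet integral of `values` (criterion -> score) with
    respect to a monotone set function `measure` (frozenset -> weight).
    Sort the scores ascending and weight each increment by the measure
    of the set of criteria still at or above that level."""
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(values)
    for crit, v in items:
        total += (v - prev) * measure[frozenset(remaining)]
        prev = v
        remaining.remove(crit)
    return total

# hypothetical effectiveness of one treatment method per disease stage
values = {"stage1": 0.2, "stage2": 0.5}
measure = {
    frozenset({"stage1", "stage2"}): 1.0,
    frozenset({"stage2"}): 0.6,
    frozenset({"stage1"}): 0.3,
}
score = choquet(values, measure)   # 0.2*1.0 + (0.5-0.2)*0.6 = 0.38
```

Unlike a weighted average, the non-additive measure lets criteria interact (here the pair is worth more than the sum of the singletons), which is why the Choquet integral is used for this kind of aggregation.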

  10. Optimized bioregenerative space diet selection with crew choice

    NASA Technical Reports Server (NTRS)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
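The structure described in the abstract (score a menu under nutrient and mass constraints, with the sodium cap deciding feasibility) is a small constrained-selection problem; a brute-force toy version (foods, costs, sodium contents and acceptability scores are all invented):

```python
from itertools import product

# hypothetical foods: (ESM cost kg/day, sodium mg, acceptability score)
foods = {
    "wheat": (2.0, 5, 7),
    "salty stew": (3.0, 900, 9),
    "greens": (1.5, 40, 5),
    "dessert": (2.5, 120, 9),
}

def best_menu(foods, sodium_cap=1200, max_servings=2):
    """Enumerate serving counts per food, drop menus over the sodium
    cap (cf. the infeasible high-sodium scenario), and pick the one
    maximizing acceptability per unit ESM cost."""
    names = list(foods)
    best_score, best = None, None
    for servings in product(range(max_servings + 1), repeat=len(names)):
        if sum(servings) == 0:
            continue
        cost = sum(s * foods[n][0] for s, n in zip(servings, names))
        sodium = sum(s * foods[n][1] for s, n in zip(servings, names))
        accept = sum(s * foods[n][2] for s, n in zip(servings, names))
        if sodium > sodium_cap:
            continue                     # infeasible: sodium limit exceeded
        score = accept / cost
        if best_score is None or score > best_score:
            best_score, best = score, dict(zip(names, servings))
    return best

menu = best_menu(foods)
```

The real model is a linear program over many foods and nutrients rather than an enumeration, but the same mechanism appears: capping total sodium in the crew-choice portion is what restores feasibility in the high-salt scenario.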

  11. Optimized bioregenerative space diet selection with crew choice.

    PubMed

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.

  12. Optimization of killer assays for yeast selection protocols.

    PubMed

    Lopes, C A; Sangorrín, M P

    2010-01-01

    A new optimized semiquantitative yeast killer assay is reported for the first time. The killer activity of 36 yeast isolates belonging to three species, namely, Metschnikowia pulcherrima, Wickerhamomyces anomala and Torulaspora delbrueckii, was tested with a view to potentially using these yeasts as biocontrol agents against the wine spoilage species Pichia guilliermondii and Pichia membranifaciens. The effectiveness of the classical streak-based (qualitative) method and the new semiquantitative technique was compared. The percentage of yeasts showing killer activity was found to be higher by the semiquantitative technique (60%) than by the qualitative method (45%). In all cases, the addition of 1% NaCl to the medium allowed better observation of the killer phenomenon. Important differences were observed in the killer capacity of different isolates belonging to the same killer species. The broadest spectrum of action was detected in isolates of W. anomala NPCC 1023 and 1025, and M. pulcherrima NPCC 1009 and 1013. We also brought experimental evidence supporting the importance of adequate selection of the sensitive isolate to be used in killer evaluation. The new semiquantitative method proposed in this work makes it possible to visualize the relationship between the number of yeasts tested and the growth of the inhibition halo (specific productivity). Hence, this experimental approach could become an interesting tool for killer yeast selection protocols.

  13. Optimized bioregenerative space diet selection with crew choice

    NASA Technical Reports Server (NTRS)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.

  14. Applications of Optimal Building Energy System Selection and Operation

    SciTech Connect

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy

    2011-04-01

    Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by describing three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.

  15. Selective optimization with compensation (SOC) competencies in depression.

    PubMed

    Weiland, Marcus; Dammermann, Claudia; Stoppe, Gabriela

    2011-09-01

    The metamodel of selective optimization with compensation (SOC) aims to integrate scientific knowledge about the nature of development and aging with a focus on successful adaptation. For the first time, the present study examines how SOC competencies and depressive symptoms are associated. In particular, potential state or trait effects of SOC competencies are considered. Fifty-three patients (31 women and 22 men), aged 21 to 73 years, suffering from depression, were interviewed twice during inpatient treatment, first on admission to hospital and later during remission or on discharge, to assess the severity of depression and differences in SOC competencies using standardized scales. For comparison purposes, data from a population-based survey in Germany were used. The SOC scores in the first interview were significantly lower than those of the comparison group (p<0.0001), but in remission no significant difference remained. Younger and older patients showed no significant difference in their SOC competencies, neither regarding the severity of depressive symptoms on admission to the hospital nor during remission. These findings support the hypothesis that SOC ability is dynamic and mood dependent (a state effect). Conversely, there is no indication of life-long reduced SOC competencies or a trait effect that would be associated with an increased vulnerability to developing a depressive disorder. Given the high prevalence of depression, especially in elderly and physically ill patients, (gerontological) studies on SOC competencies should take depression into account. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Making the optimal decision in selecting protective clothing

    SciTech Connect

    Price, J. Mark

    2007-07-01

    Protective clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. To make the optimal decision on which type of protective clothing is best suited for decommissioning or for maintenance and repair work on radioactive systems, a number of interrelated factors must be considered, including protection; personnel contamination; cost; radwaste; comfort; convenience; logistics and radioactive material considerations; the reject rate of laundered clothing; durability; security; personnel safety, including heat stress; and the disposition of gloves and booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials of, or switching to, single-use protective clothing (SUPC) in place of traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch. In other cases, after switching to SUPC for a period of time, some plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelated factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  17. 75 FR 39437 - Optimizing the Security of Biological Select Agents and Toxins in the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... Executive Order 13546--Optimizing the Security of Biological Select Agents and Toxins in the United States... July 2, 2010 Optimizing the Security of Biological Select Agents and Toxins in the United States By the... and productive scientific enterprise that utilizes biological select agents and toxins (BSAT)...

  18. Optimal Estimation of the Average Areal Rainfall and Optimal Selection of Rain Gauge Locations

    NASA Astrophysics Data System (ADS)

    Bastin, G.; Lorent, B.; Duqué, C.; Gevers, M.

    1984-04-01

    We propose a simple procedure for the real-time estimation of the average rainfall over a catchment area. The rainfall is modeled as a two-dimensional random field. The average areal rainfall is computed by a linear unbiased minimum variance estimation method (kriging) which requires knowledge of the variogram of the random field. We propose a time-varying estimator for the variogram which takes into account the influences of both the seasonal variations and the rainfall intensity. Our average areal rainfall estimator has been implemented in practice. We illustrate its application to real data in two river basins in Belgium. Finally, it is shown how the method can be used for the optimal selection of the rain gauge locations in a basin.
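    The kriging step can be sketched in stdlib Python: assume an exponential variogram (the parameters below are illustrative, not the paper's seasonal, intensity-dependent estimates), build the ordinary kriging system for one estimation point, and solve for the gauge weights.

```python
import math

def variogram(h, sill=1.0, corr_len=30.0):
    """Exponential variogram gamma(h); parameters are illustrative only."""
    return sill * (1.0 - math.exp(-h / corr_len)) if h > 0 else 0.0

def solve(A, b):
    """Gaussian elimination with partial pivoting (small kriging systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging_weights(gauges, target):
    """Ordinary kriging system: [gamma_ij 1; 1 0][w; mu] = [gamma_i0; 1]."""
    n = len(gauges)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    A = [[variogram(dist(gauges[i], gauges[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [variogram(dist(g, target)) for g in gauges] + [1.0]
    return solve(A, b)[:n]  # drop the Lagrange multiplier

gauges = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # gauge coordinates, km
w = kriging_weights(gauges, (3.0, 3.0))
print(w, sum(w))  # unbiasedness forces the weights to sum to 1
```

    The areal average is then a weighted combination of such point estimates over the basin; the nearest gauge receives the largest weight, which is also what drives the optimal gauge-placement question.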

  19. Update on RF System Studies and VCX Fast Tuner Work for the RIA Drive Linac

    SciTech Connect

    Rusnak, B; Shen, S

    2003-05-06

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Where most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. While simply overcoupling the coupler to the cavity can mitigate this problem to some degree, system studies indicate that for the low-{beta} driver linac alone, this approach may cost 50% or more than an RF system employing a voltage controlled reactance (VCX) fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab is presented here.

  20. A low power 8th order elliptic low-pass filter for a CMMB tuner

    NASA Astrophysics Data System (ADS)

    Zheng, Gong; Bei, Chen; Xueqing, Hu; Yin, Shi; Foster, Dai Fa

    2011-09-01

    This paper presents an 8th order active-RC elliptic low-pass filter (LPF) for a direct conversion China Mobile Multimedia Broadcasting (CMMB) tuner with a 1 or 4 MHz -3 dB cutoff frequency (f-3dB). By using a novel gain-bandwidth-product (GBW) extension technique in designing the operational amplifiers (op-amps), the proposed filter achieves 71 dB stop-band rejection at 1.7 f-3dB to meet the stringent CMMB adjacent channel rejection (ACR) specifications while dissipating only 2.8 mA/channel from a 3 V supply; its bias current can be further lowered to 2 mA/channel with only 0.5 dB peaking measured at the filter's pass-band edge. Elaborate common-mode (CM) control circuits are applied to the filter op-amps to increase the common-mode rejection ratio (CMRR) and effectively reject large-signal common-mode interference. Measurement results show that the filter has 128 dBμVrms in-band IIP3 and more than 80 dB passband CMRR. Fabricated in a 0.35-μm SiGe BiCMOS process, the proposed filter occupies a 1.19 mm2 die area.

  1. The optimization of diffraction structures based on the principle selection of the main criterion

    NASA Astrophysics Data System (ADS)

    Kravets, O.; Beletskaja, S.; Lvovich, Ya; Lvovich, I.; Choporov, O.; Preobrazhenskiy, A.

    2017-02-01

    The possibilities of optimizing the characteristics of diffractive structures are analysed. A functional block diagram of a subsystem of diffractive structure optimization is shown. Next, a description of the method for the multicriterion optimization of diffractive structures is given. We then consider an algorithm for selecting the main criterion in the process of optimization. The algorithm efficiency is confirmed by an example of optimization of the diffractive structure.

  2. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore, we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
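    One FO iteration of the kind timed above (dose, objective function, gradient) can be sketched with a toy quadratic objective. The dose-influence matrix and prescription below are random placeholders, clinical objectives are more elaborate, and none of the NUMA-aware data placement the paper describes is modeled.

```python
import random

def fo_iteration(D, w, d_pres, step=0.01):
    """One fluence-optimization iteration: compute dose, a least-squares
    objective, its gradient, then take a projected gradient step that keeps
    bixel fluences non-negative."""
    n_vox, n_bix = len(D), len(D[0])
    dose = [sum(D[v][b] * w[b] for b in range(n_bix)) for v in range(n_vox)]
    resid = [dose[v] - d_pres[v] for v in range(n_vox)]
    obj = sum(r * r for r in resid)
    grad = [2.0 * sum(D[v][b] * resid[v] for v in range(n_vox)) for b in range(n_bix)]
    return [max(0.0, w[b] - step * grad[b]) for b in range(n_bix)], obj

random.seed(1)
D = [[random.random() for _ in range(4)] for _ in range(8)]  # toy dose-influence data
d_pres = [1.0] * 8   # prescribed dose per voxel
w = [0.5] * 4        # initial bixel fluences
objs = []
for _ in range(50):
    w, obj = fo_iteration(D, w, d_pres)
    objs.append(obj)
print(objs[0], objs[-1])  # objective decreases over the iterations
```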

  3. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2017-04-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCCs features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCCs samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCCs samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
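    A minimal binary-PSO feature selector in the spirit described can be written with the stdlib. The toy linear fitness below stands in for the paper's SVM correct-classification-rate fitness, and the PSO constants are conventional defaults, not the authors' settings.

```python
import math
import random

def bpso_feature_select(fitness, n_feat, n_part=12, iters=40, seed=7):
    """Binary PSO: each particle is a 0/1 feature mask; velocities pass
    through a sigmoid to give bit-set probabilities; `fitness` is maximized."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(n_part)]
    V = [[0.0] * n_feat for _ in range(n_part)]
    pbest = [x[:] for x in X]
    pscore = [fitness(x) for x in X]
    g = max(range(n_part), key=lambda i: pscore[i])
    gbest, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i in range(n_part):
            for d in range(n_feat):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d] + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            s = fitness(X[i])
            if s > pscore[i]:
                pbest[i], pscore[i] = X[i][:], s
                if s > gscore:
                    gbest, gscore = X[i][:], s
    return gbest, gscore

# Toy fitness: features 0-2 are informative, extra features are penalized
# (a stand-in for an SVM classification rate on a labeled feature set).
fit = lambda m: sum(m[:3]) - 0.2 * sum(m[3:])
best, score = bpso_feature_select(fit, n_feat=10)
print(best, score)
```

    GA and BBO plug into the same loop shape; only the update rule that produces new candidate masks differs.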

  4. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCCs features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCCs samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCCs samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.

  5. To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom

    ERIC Educational Resources Information Center

    Ray, Darrell L.

    2010-01-01

    Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…

  6. To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom

    ERIC Educational Resources Information Center

    Ray, Darrell L.

    2010-01-01

    Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…

  7. An artificial system for selecting the optimal surgical team.

    PubMed

    Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco

    2015-01-01

    We introduce an intelligent system to optimize a team composition based on the team's historical outcomes, and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of the team outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model allows the probability of all possible team compositions to be determined even if there is no recorded procedure for some compositions. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved in procedures with unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.
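    Under an independence assumption, the inclusion-exclusion computation reduces to the expansion below. The per-person probabilities are invented for illustration; real records would induce correlations between members that this sketch ignores.

```python
from itertools import combinations

def team_unfavorable_prob(probs):
    """P(at least one member-linked failure event), by inclusion-exclusion
    over member subsets; member events are assumed independent here."""
    total = 0.0
    for k in range(1, len(probs) + 1):
        for subset in combinations(probs, k):
            prod = 1.0
            for p in subset:
                prod *= p
            total += (-1) ** (k + 1) * prod
    return total

def best_team(candidates, size):
    """Composition with the lowest unfavorable-outcome probability."""
    return min(combinations(candidates.items(), size),
               key=lambda team: team_unfavorable_prob([p for _, p in team]))

# Illustrative per-person probabilities (not derived from real records).
candidates = {"A": 0.10, "B": 0.05, "C": 0.20, "D": 0.08}
team = best_team(candidates, 3)
print([name for name, _ in team])
```

    For independent events the expansion agrees with 1 - prod(1 - p_i), which is a quick consistency check on the implementation.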

  8. Polyhedral Interpolation for Optimal Reaction Control System Jet Selection

    NASA Technical Reports Server (NTRS)

    Gefert, Leon P.; Wright, Theodore

    2014-01-01

    An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.

  9. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    ERIC Educational Resources Information Center

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  10. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    ERIC Educational Resources Information Center

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  11. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  12. Eliminating Scope and Selection Restrictions in Compiler Optimization

    DTIC Science & Technology

    2006-09-01

    ...the "exploration performance" of each such subset is determined, as follows: Let R(s, c) be the runtime of a code sample s when optimized using an optimization configuration c. Then the exploration value of a set of configurations C on a set of code samples S is given by the

  13. Self-Selection, Optimal Income Taxation, and Redistribution

    ERIC Educational Resources Information Center

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  14. Self-Selection, Optimal Income Taxation, and Redistribution

    ERIC Educational Resources Information Center

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  15. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    ...such as the Cauchy, where A is a constant multiple of the identity. In fact, for the Cauchy distribution asymptotically optimal spacing sequences for

  16. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Lang, V. T.; Choi, S. B.

    2015-06-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass.
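    The torque side of such an optimization can be sketched from the Herschel-Bulkley integral for a disc-type MRB. All material constants, dimensions, and the steel-cylinder mass proxy below are illustrative assumptions; the paper's finite-element model resolves the actual field-dependent yield stress across the gap.

```python
import math

def disc_mrb_torque(r_i, r_o, gap, omega, tau_y, K, n, faces=2):
    """Herschel-Bulkley braking torque of a disc-type MRB:
    T = faces * int_{r_i}^{r_o} 2*pi*r^2 * (tau_y + K*(r*omega/gap)**n) dr."""
    t_yield = (2.0 * math.pi / 3.0) * tau_y * (r_o**3 - r_i**3)
    t_visc = (2.0 * math.pi * K * (omega / gap)**n
              * (r_o**(3 + n) - r_i**(3 + n)) / (3 + n))
    return faces * (t_yield + t_visc)

def mass_proxy(r_o, width, density=7800.0):
    """Very crude mass proxy: a solid steel cylinder enclosing the brake."""
    return density * math.pi * r_o**2 * width

# Scan outer radii for the smallest disc meeting a 10 N*m torque requirement.
best = None
for i in range(61):
    r = 0.02 + 0.001 * i  # 20 mm to 80 mm outer radius
    T = disc_mrb_torque(r_i=0.01, r_o=r, gap=1e-3, omega=50.0,
                        tau_y=40e3, K=0.2, n=0.8)
    if T >= 10.0:
        best = (r, T, mass_proxy(r, 0.02))
        break
print(best)
```

    Repeating such a scan per brake geometry, and comparing the resulting masses at equal required torque, is the shape of the type-selection argument the paper makes with a full FE-based optimization.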

  17. Optimizing drilling performance using a selected drilling fluid

    SciTech Connect

    Judzis, Arnis; Black, Alan D; Green, Sidney J; Robertson, Homer A; Bland, Ronald G; Curry, David Alexander; Ledgerwood, III, Leroy W.

    2011-04-19

    To improve drilling performance, a drilling fluid is selected based on one or more criteria and to have at least one target characteristic. Drilling equipment is used to drill a wellbore, and the selected drilling fluid is provided into the wellbore during drilling with the drilling equipment. The at least one target characteristic of the drilling fluid includes an ability of the drilling fluid to penetrate into formation cuttings during drilling to weaken the formation cuttings.

  18. Optimized angle selection for radial sampled NMR experiments

    NASA Astrophysics Data System (ADS)

    Gledhill, John M.; Joshua Wand, A.

    2008-12-01

    Sparse sampling offers tremendous potential for overcoming the time limitations imposed by traditional Cartesian sampling of indirectly detected dimensions of multidimensional NMR data. Unfortunately, several otherwise appealing implementations are accompanied by spectral artifacts that have the potential to contaminate the spectrum with false peak intensity. In radial sampling of linked time evolution periods, the artifacts are easily identified and removed from the spectrum if a sufficient set of radial sampling angles is employed. Robust implementation of the radial sampling approach therefore requires optimization of the set of radial sampling angles collected. Here we describe several methods for such optimization. The approaches described take advantage of various aspects of the general simultaneous multidimensional Fourier transform in the analysis of multidimensional NMR data. Radially sampled data are primarily contaminated by ridges extending from authentic peaks. Numerical methods are described that definitively identify artifactual intensity and the optimal set of sampling angles necessary to eliminate it under a variety of scenarios. The algorithms are tested with both simulated and experimentally obtained triple resonance data.

  19. A parallel optimization method for product configuration and supplier selection based on interval

    NASA Astrophysics Data System (ADS)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In design and manufacturing, product configuration is an important approach to product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine product configuration and supplier selection, expressing the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was applied to locate the Pareto-optimal solutions of the interval multiobjective optimization model.
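    The non-dominated sorting at the heart of NSGA-II can be sketched for interval-valued objectives. Comparing interval midpoints is one simple ranking rule used here for illustration (the paper's interval comparison may differ), and the configurations below are invented.

```python
def midpoint(iv):
    return (iv[0] + iv[1]) / 2.0

def dominates(a, b):
    """Minimization: a dominates b if no objective midpoint is worse and at
    least one is strictly better (a simple interval-ranking rule)."""
    ma, mb = [midpoint(x) for x in a], [midpoint(x) for x in b]
    return all(x <= y for x, y in zip(ma, mb)) and any(x < y for x, y in zip(ma, mb))

def pareto_front(solutions):
    """Non-dominated set: the ranking step NSGA-II repeatedly builds on."""
    return [s for s in solutions
            if not any(dominates(t["obj"], s["obj"]) for t in solutions if t is not s)]

# Toy configurations: objectives = (cost interval, risk interval), both minimized.
sols = [
    {"name": "cfg-A", "obj": [(8, 12), (0.2, 0.4)]},
    {"name": "cfg-B", "obj": [(9, 11), (0.1, 0.3)]},
    {"name": "cfg-C", "obj": [(5, 7), (0.5, 0.7)]},
    {"name": "cfg-D", "obj": [(10, 14), (0.6, 0.8)]},
]
front = pareto_front(sols)
print(sorted(s["name"] for s in front))
```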

  20. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Choi, S. B.

    2012-01-01

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel-Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given.

  1. Optimizing the yield and selectivity of high purity nanoparticle clusters

    NASA Astrophysics Data System (ADS)

    Pease, Leonard F.

    2011-05-01

    Here we investigate the parameters that govern the yield and selectivity of small clusters composed of nanoparticles using a Monte Carlo simulation that accounts for spatial and dimensional distributions in droplet and nanoparticle density and size. Clustering nanoparticles presents a powerful paradigm with which to access properties not otherwise available using individual molecules, individual nanoparticles or bulk materials. However, the governing parameters that precisely tune the yield and selectivity of clusters fabricated via an electrospray droplet evaporation method followed by purification with differential mobility analysis (DMA) remain poorly understood. We find that the product of the electrospray droplet mean diameter to the third power and nanoparticle concentration governs the yield of individual clusters, while the ratio of the nanoparticle standard deviation to the mean diameter governs the selectivity. The resulting, easily accessible correlations may be used to minimize undesirable clustering, such as protein aggregation in the biopharmaceutical industry, and maximize the yield of a particular type of cluster for nanotechnology and energy applications.
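    A stripped-down version of such a Monte Carlo can be written with the stdlib: droplet diameters are drawn from a clipped normal, and the nanoparticle count per droplet is Poisson in the droplet volume. The diameter, spread, and concentration values below are illustrative, and the DMA purification step is not modeled.

```python
import math
import random
from collections import Counter

def poisson(rng, lam):
    """Knuth's Poisson sampler; fine for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def cluster_yield(mean_diam_nm, cv, conc_per_um3, n_droplets=20000, seed=3):
    """Fraction of non-empty droplets producing each cluster size: droplet
    diameters are clipped-normal, particle counts Poisson in droplet volume."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_droplets):
        d_um = max(1e-3, rng.gauss(mean_diam_nm, cv * mean_diam_nm)) * 1e-3
        vol = math.pi * d_um**3 / 6.0
        k = poisson(rng, conc_per_um3 * vol)
        if k:
            counts[k] += 1
    total = sum(counts.values())
    return {k: counts[k] / total for k in sorted(counts)}

dist = cluster_yield(mean_diam_nm=150.0, cv=0.2, conc_per_um3=500.0)
print({k: round(v, 3) for k, v in list(dist.items())[:4]})
```

    Sweeping mean_diam_nm and conc_per_um3 in this sketch reproduces the qualitative finding above: the product of mean droplet diameter cubed and particle concentration sets the yield, while the diameter spread sets the selectivity.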

  2. Optimization of selection for growth in Menz sheep while minimizing inbreeding depression in fitness traits.

    PubMed

    Gizaw, Solomon; Getachew, Tesfaye; Haile, Aynalem; Rischkowsky, Barbara; Sölkner, Johann; Tibbo, Markos

    2013-06-19

    The genetic trends in fitness (inbreeding, fertility and survival) of a closed nucleus flock of Menz sheep under selection during ten years for increased body weight were investigated to evaluate the consequences of selection for body weight on fitness. A mate selection tool was used to optimize in retrospect the actual selection and matings conducted over the project period to assess if the observed genetic gains in body weight could have been achieved with a reduced level of inbreeding. In the actual selection, the genetic trends for yearling weight, fertility of ewes and survival of lambs were 0.81 kg, -0.00026% and 0.016% per generation. The average inbreeding coefficient remained zero for the first few generations and then tended to increase over generations. The genetic gains achieved with the optimized retrospective selection and matings were highly comparable with the observed values, the correlation between the average breeding values of lambs born from the actual and optimized matings over the years being 0.99. However, the level of inbreeding with the optimized mate selections remained zero until late in the years of selection. Our results suggest that an optimal selection strategy that considers both genetic merits and coancestry of mates should be adopted to sustain the Menz sheep breeding program.
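    The coancestry bookkeeping behind such mate selection can be sketched with Wright's recursion on a toy pedigree. The five-animal pedigree below is invented; an actual mate selection tool would minimize kin() across candidate pairs jointly with breeding values.

```python
# Pedigree: id -> (sire, dam); ids are ordered so parents precede offspring.
PED = {1: (None, None), 2: (None, None), 3: (1, 2), 4: (1, 2), 5: (3, 4)}

def kin(a, b):
    """Wright's coancestry f(a, b); f(x, x) = 0.5 * (1 + f(sire, dam))."""
    if a is None or b is None:
        return 0.0
    if a == b:
        s, d = PED[a]
        return 0.5 * (1.0 + kin(s, d))
    x, y = (a, b) if a < b else (b, a)  # recurse through the younger animal
    s, d = PED[y]
    return 0.5 * (kin(x, s) + kin(x, d))

def inbreeding(i):
    """Inbreeding coefficient F of an individual = coancestry of its parents."""
    s, d = PED[i]
    return kin(s, d)

# Mating full sibs 3 x 4 gives F = 0.25; unrelated founders give 0.
print(inbreeding(5), kin(3, 4), kin(1, 2))
```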

  3. A scenario optimization model for dynamic reserve site selection

    Treesearch

    Stephanie A. Snyder; Robert G. Haight; Charles S. ReVelle

    2004-01-01

    Conservation planners are called upon to make choices and trade-offs about the preservation of natural areas for the protection of species in the face of development pressures. We addressed the problem of selecting sites for protection over time with the objective of maximizing species representation, with uncertainty about future site development, and with periodic...

  4. Optimal parametrization of electrodynamical battery model using model selection criteria

    NASA Astrophysics Data System (ADS)

    Suárez-García, Andrés; Alfonsín, Víctor; Urréjola, Santiago; Sánchez, Ángel

    2015-07-01

    This paper describes the mathematical parametrization of an electrodynamical battery model using different model selection criteria. A good modeling technique is needed by battery management units in order to increase battery lifetime. The elements of battery models can be mathematically parametrized to enhance their implementation in simulation environments. In this work, the best mathematical parametrizations are selected using three model selection criteria: the coefficient of determination (R2), the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The R2 criterion only takes into account the error of the mathematical parametrizations, whereas AIC and BIC also consider complexity. A commercial 40 Ah lithium iron phosphate (LiFePO4) battery is modeled and then simulated for comparison. The OpenModelica open-source modeling and simulation environment is used for the battery simulations. The mean percent error of the simulations is 0.0985% for the models parametrized with R2, 0.2300% for the AIC ones, and 0.3756% for the BIC ones. As expected, R2 selected the most precise, most complex and slowest mathematical parametrizations. The AIC criterion chose parametrizations with similar accuracy, but simpler and faster than the R2 ones.
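    The three criteria can be sketched on a synthetic fit. The quadratic data and the Gaussian-likelihood forms of AIC and BIC below are standard textbook choices, not taken from the paper; R2 rewards fit alone, while AIC and BIC add complexity penalties of 2k and k*ln(n).

```python
import math

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations (stdlib only)."""
    n = deg + 1
    M = [[sum(x**(i + j) for x in xs) for j in range(n)]
         + [sum(y * x**i for x, y in zip(xs, ys))] for i in range(n)]
    for c in range(n):  # Gaussian elimination with partial pivoting
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (M[r][n] - sum(M[r][k] * coef[k] for k in range(r + 1, n))) / M[r][r]
    return coef

def criteria(xs, ys, deg):
    """R^2, AIC, BIC under a Gaussian error model:
    AIC = n*ln(RSS/n) + 2k, BIC = n*ln(RSS/n) + k*ln(n), k = deg + 2."""
    coef = polyfit(xs, ys, deg)
    pred = [sum(c * x**i for i, c in enumerate(coef)) for x in xs]
    rss = sum((y - p)**2 for y, p in zip(ys, pred))
    ybar = sum(ys) / len(ys)
    r2 = 1.0 - rss / sum((y - ybar)**2 for y in ys)
    n, k = len(xs), deg + 2  # coefficients plus a noise-variance parameter
    return r2, n * math.log(rss / n) + 2 * k, n * math.log(rss / n) + k * math.log(n)

xs = [i * 0.5 for i in range(12)]
wobble = [0.05, -0.04, 0.03, -0.05, 0.04, -0.02, 0.05, -0.03, 0.02, -0.04, 0.03, -0.05]
ys = [1.0 + 0.5 * x + 0.3 * x * x + e for x, e in zip(xs, wobble)]
for deg in (1, 2, 3, 4):
    print(deg, [round(v, 3) for v in criteria(xs, ys, deg)])
```

    R2 can only improve as the degree grows, whereas the AIC and BIC penalties stop rewarding extra terms once the residual stops shrinking faster than the penalty, which is the trade-off the abstract's results illustrate.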

  5. L-O-S-T: Logging Optimization Selection Technique

    Treesearch

    Jerry L. Koger; Dennis B. Webster

    1984-01-01

    L-O-S-T is a FORTRAN computer program developed to systematically quantify, analyze, and improve user selected harvesting methods. Harvesting times and costs are computed for road construction, landing construction, system move between landings, skidding, and trucking. A linear programming formulation utilizing the relationships among marginal analysis, isoquants, and...

  6. On some Methods for Constructing Optimal Subset Selection Procedures

    DTIC Science & Technology

    1976-09-01

    Reproduction in whole or in part is permitted for any purpose of the United States Government.

  7. Selecting radiotherapy dose distributions by means of constrained optimization problems.

    PubMed

    Alfonso, J C L; Buttazzo, G; García-Archilla, B; Herrero, M A; Núñez, L

    2014-05-01

    The main steps in planning radiotherapy consist in selecting for any patient diagnosed with a solid tumor (i) a prescribed radiation dose on the tumor, (ii) bounds on the radiation side effects on nearby organs at risk and (iii) a fractionation scheme specifying the number and frequency of therapeutic sessions during treatment. The goal of any radiotherapy treatment is to deliver on the tumor a radiation dose as close as possible to that selected in (i), while at the same time conforming to the constraints prescribed in (ii). To this day, considerable uncertainties remain concerning the best manner in which such issues should be addressed. In particular, the choice of a prescription radiation dose is mostly based on clinical experience accumulated on the particular type of tumor considered, without any direct reference to quantitative radiobiological assessment. Interestingly, mathematical models for the effect of radiation on biological matter have existed for quite some time, and are widely acknowledged by clinicians. However, the difficulty of obtaining accurate in vivo measurements of the radiobiological parameters involved has severely restricted their direct application in current clinical practice. In this work, we first propose a mathematical model to select radiation dose distributions as solutions (minimizers) of suitable variational problems, under the assumption that key radiobiological parameters for tumors and organs at risk involved are known. Second, by analyzing the dependence of such solutions on the parameters involved, we then discuss the manner in which the use of those minimizers can improve current decision-making processes to select clinical dosimetries when (as is generally the case) only partial information on model radiosensitivity parameters is available. A comparison of the proposed radiation dose distributions with those actually delivered in a number of clinical cases strongly suggests that solutions of our mathematical model can be

  8. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms based on a sensitivity analysis are investigated, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. The fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, PEM fuel cell performance can be predicted with good accuracy.
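The exhaustive brute-force search mentioned above can be sketched as follows. This is a hedged illustration, not the authors' code: the toy sensitivity matrix and the minimum-singular-value score (standing in for their sensitivity and noise-resistance criterion) are assumptions.

```python
import itertools
import numpy as np

def best_sensor_subset(S, m):
    """Exhaustive (brute-force) search: choose m sensor rows of the
    sensitivity matrix S whose sub-matrix best conditions the estimation
    of the health parameters (largest minimum singular value)."""
    n_sensors = S.shape[0]
    best, best_score = None, -np.inf
    for subset in itertools.combinations(range(n_sensors), m):
        # Smallest singular value of the selected rows: larger is better
        sigma_min = np.linalg.svd(S[list(subset), :], compute_uv=False)[-1]
        if sigma_min > best_score:
            best, best_score = subset, sigma_min
    return best, best_score

# Toy sensitivity matrix: 5 candidate sensors, 2 health parameters
S = np.array([
    [1.0, 0.0],
    [0.9, 0.1],   # nearly redundant with sensor 0
    [0.0, 1.0],
    [0.1, 0.9],   # nearly redundant with sensor 2
    [0.5, 0.5],
])
subset, score = best_sensor_subset(S, 2)
```

The search correctly avoids pairing two nearly redundant sensors, which is exactly the failure mode an exhaustive criterion guards against.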

  9. Selection of Reserves for Woodland Caribou Using an Optimization Approach

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation. PMID:22363702

  10. Selection of reserves for woodland caribou using an optimization approach.

    PubMed

    Schneider, Richard R; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation.

  11. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    NASA Technical Reports Server (NTRS)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply-cargo weight and volume allocations.

  12. Optimization of gene sequences under constant mutational pressure and selection

    NASA Astrophysics Data System (ADS)

    Kowalczuk, M.; Gierlik, A.; Mackiewicz, P.; Cebrat, S.; Dudek, M. R.

    1999-12-01

    We have analyzed the influence of constant mutational pressure and selection on the nucleotide composition of DNA sequences of various sizes, represented by the genes of the Borrelia burgdorferi genome. With the help of MC simulations we have found that longer DNA sequences accumulate far fewer base substitutions per unit sequence length than short sequences. This leads us to the conclusion that the accuracy of replication may determine the size of the genome.

  13. Application’s Method of Quadratic Programming for Optimization of Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro

    Investors and fund managers face the problem of optimal portfolio selection: determining the kind and the quantity of investments among several brands. We have developed a method to obtain an optimal stock portfolio two to three times more rapidly than the conventional method with efficient universal optimization. The method is characterized by the quadratic matrix of the utility function and constraint matrices divided into several sub-matrices by focusing on the structure of these matrices.

  14. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2007-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.

  15. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2008-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, defensible sensor suite to address system health assessment requirements.

  16. Optimal band selection for dimensionality reduction of hyperspectral imagery

    NASA Technical Reports Server (NTRS)

    Stearns, Stephen D.; Wilson, Bruce E.; Peterson, James R.

    1993-01-01

    Hyperspectral images have many bands requiring significant computational power for machine interpretation. During image pre-processing, regions of interest that warrant full examination need to be identified quickly. One technique for speeding up the processing is to use only a small subset of bands to determine the 'interesting' regions. The problem addressed here is how to determine the fewest bands required to achieve a specified performance goal for pixel classification. The band selection problem has been addressed previously by Chen et al., Ghassemian et al., Henderson et al., and Kim et al. Some popular techniques for reducing the dimensionality of a feature space, such as principal components analysis, reduce dimensionality by computing new features that are linear combinations of the original features. However, such approaches require measuring and processing all the available bands before the dimensionality is reduced. Our approach, adapted from previous multidimensional signal analysis research, is simpler and achieves dimensionality reduction by selecting bands. Feature selection algorithms are used to determine which combination of bands has the lowest probability of pixel misclassification. Two elements required by this approach are a choice of objective function and a choice of search strategy.
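A band selection search of the kind described can be sketched as a greedy forward selection against a classification-error objective. This is an illustrative stand-in, not the authors' algorithm: the nearest-centroid error estimate, the target error, and the toy data are all assumptions.

```python
import numpy as np

def misclassification_rate(X, y, bands):
    """Resubstitution error of a nearest-centroid classifier using only
    the selected bands (a crude stand-in for the objective function)."""
    Xs = X[:, bands]
    centroids = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    errors = 0
    for xi, yi in zip(Xs, y):
        pred = min(centroids, key=lambda c: np.linalg.norm(xi - centroids[c]))
        errors += pred != yi
    return errors / len(y)

def forward_select(X, y, target_error):
    """Greedy search strategy: add the band that most reduces the error
    until the specified performance goal is met (or bands run out)."""
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining:
        best = min(remaining,
                   key=lambda b: misclassification_rate(X, y, selected + [b]))
        selected.append(best)
        remaining.remove(best)
        if misclassification_rate(X, y, selected) <= target_error:
            break
    return selected

# Toy data: band 2 carries the class signal; bands 0-1 are noise
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (60, 3))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, 2] += 4.0          # strong class separation in band 2 only
bands = forward_select(X, y, target_error=0.05)
```

The two elements the abstract names map directly onto the two functions: the objective (misclassification rate) and the search strategy (greedy forward selection).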

  17. Optimizing the sequence of diameter distributions and selection harvests for uneven-aged stand management

    Treesearch

    Robert G. Haight; J. Douglas Brodie; Darius M. Adams

    1985-01-01

    The determination of an optimal sequence of diameter distributions and selection harvests for uneven-aged stand management is formulated as a discrete-time optimal-control problem with bounded control variables and free-terminal point. An efficient programming technique utilizing gradients provides solutions that are stable and interpretable on the basis of economic...

  18. Automated selection of appropriate pheromone representations in ant colony optimization.

    PubMed

    Montgomery, James; Randall, Marcus; Hendtlass, Tim

    2005-01-01

    Ant colony optimization (ACO) is a constructive metaheuristic that uses an analogue of ant trail pheromones to learn about good features of solutions. Critically, the pheromone representation for a particular problem is usually chosen intuitively rather than by following any systematic process. In some representations, distinct solutions appear multiple times, increasing the effective size of the search space and potentially misleading ants as to the true learned value of those solutions. In this article, we present a novel system for automatically generating appropriate pheromone representations, based on the characteristics of the problem model that ensures unique pheromone representation of solutions. This is the first stage in the development of a generalized ACO system that could be applied to a wide range of problems with little or no modification. However, the system we propose may be used in the development of any problem-specific ACO algorithm.

  19. Selection of optimal composition-control parameters for friable materials

    SciTech Connect

    Pak, Yu.N.; Vdovkin, A.V.

    1988-05-01

    A method for composition analysis of coal and minerals is proposed which uses scattered gamma radiation and does away with preliminary sample preparation to ensure homogeneous particle density, surface area, and size. Reduction of the error induced by material heterogeneity has previously been achieved by rotation of the control object during analysis. A further refinement is proposed which addresses the necessity that the contribution of the radiation scattered from each individual surface to the total intensity be the same. This is achieved by providing a constant linear rate of travel for the irradiated spot through back-and-forth motion of the sensor. An analytical expression is given for the laws of motion for the sensor and test tube which provides for uniform irradiated area movement along a path analogous to the Archimedes spiral. The relationships obtained permit optimization of measurement parameters in analyzing friable materials which are not uniform in grain size.

  20. About the use of vector optimization for company's contractors selection

    NASA Astrophysics Data System (ADS)

    Medvedeva, M. A.; Medvedev, M. A.

    2017-07-01

    For the effective functioning of an enterprise it is necessary to make the right choice of partners: suppliers of raw materials, buyers of finished products, and others with which the company interacts in the course of its business. However, the presence of a large number of enterprises on the market makes choosing the most appropriate among them very difficult and requires the ability to assess possible partners objectively, based on a multilateral analysis of their activities. This analysis can be carried out by solving a multiobjective mathematical programming problem using the methods of vector optimization. The present work addresses the theoretical foundations of this approach and describes an algorithm realizing the proposed method on a practical example.

  1. Optimizing purebred selection for crossbred performance using QTL with different degrees of dominance

    PubMed Central

    Dekkers, Jack CM; Chakraborty, Reena

    2004-01-01

    A method was developed to optimize simultaneous selection for a quantitative trait with a known QTL within a male and a female line to maximize crossbred performance from a two-way cross. Strategies to maximize cumulative discounted response in crossbred performance over ten generations were derived by optimizing weights in an index of a QTL and phenotype. Strategies were compared to selection on purebred phenotype. Extra responses were limited for QTL with additive and partial dominance effects, but substantial for QTL with over-dominance, for which optimal QTL selection resulted in differential selection in male and female lines to increase the frequency of heterozygotes and polygenic responses. For over-dominant QTL, maximization of crossbred performance one generation at a time resulted in similar responses as optimization across all generations and simultaneous optimal selection in a male and female line resulted in greater response than optimal selection within a single line without crossbreeding. Results show that strategic use of information on over-dominant QTL can enhance crossbred performance without crossbred testing. PMID:15107268

  2. A method to optimize selection on multiple identified quantitative trait loci

    PubMed Central

    Chakraborty, Reena; Moreau, Laurence; Dekkers, Jack CM

    2002-01-01

    A mathematical approach was developed to model and optimize selection on multiple known quantitative trait loci (QTL) and polygenic estimated breeding values in order to maximize a weighted sum of responses to selection over multiple generations. The model allows for linkage between QTL with multiple alleles and arbitrary genetic effects, including dominance, epistasis, and gametic imprinting. Gametic phase disequilibrium between the QTL and between the QTL and polygenes is modeled but polygenic variance is assumed constant. Breeding programs with discrete generations, differential selection of males and females and random mating of selected parents are modeled. Polygenic EBV obtained from best linear unbiased prediction models can be accommodated. The problem was formulated as a multiple-stage optimal control problem and an iterative approach was developed for its solution. The method can be used to develop and evaluate optimal strategies for selection on multiple QTL for a wide range of situations and genetic models. PMID:12081805

  3. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human
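Steps (i)-(iii) above can be sketched with a minimal particle swarm in which the inner plan-optimization solver and the DVH-based evaluation function are replaced by a mock quadratic surrogate; the surrogate, its optimum at relative weights (0.7, 0.3), and all constants are assumptions for illustration only.

```python
import numpy as np

def mock_plan_quality(weights):
    """Stand-in for the inner plan-optimization solver plus evaluation;
    in practice this would run a full IMRT optimization and score key
    DVH points. The (hypothetical) best trade-off is at weights (0.7, 0.3)."""
    w = weights / weights.sum()          # importance weights are relative
    return float((w[0] - 0.7) ** 2 + (w[1] - 0.3) ** 2)

def pso(evaluate, dim=2, n_particles=12, iters=60, seed=0):
    """Minimal PSO over the objective-function weighting factors."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.05, 1.0, (n_particles, dim))   # step (i): random swarm
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([evaluate(p) for p in x])   # step (ii): score each
    g = pbest[pbest_val.argmin()].copy()             # population global best
    for _ in range(iters):                           # step (iii): update
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0.05, 1.0)
        vals = np.array([evaluate(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, evaluate(g)

best_w, best_val = pso(mock_plan_quality)
```

The expensive part in the real method is that every particle evaluation requires solving a full plan optimization, which is why the swarm is kept small.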

  4. Optimizing drug exposure to minimize selection of antibiotic resistance.

    PubMed

    Olofsson, Sara K; Cars, Otto

    2007-09-01

    The worldwide increase in antibiotic resistance is a concern for public health. The fact that the choice of dose and treatment duration can affect the selection of antibiotic-resistant mutants is becoming more evident, and an increased number of studies have used pharmacodynamic models to describe the drug exposure and pharmacodynamic breakpoints needed to minimize and predict the development of resistance. However, there remains a lack of sufficient data, and future work is needed to fully characterize these target drug concentrations. More knowledge is also needed of drug pharmacodynamics versus bacteria with different resistance mutations and susceptibility levels. The dosing regimens should exhibit high efficacy not only against susceptible wild-type bacteria but, preferably, also against mutated bacteria that may exist in low numbers in "susceptible" populations. Thus, to prolong the life span of existing and new antibiotics, it is important that dosing regimens be carefully selected on the basis of pharmacokinetic and pharmacodynamic properties that prevent emergence of preexisting and newly formed mutants.

  5. Optimizing the Selectivity of Surface-Adsorbing Multivalent Polymers

    PubMed Central

    2014-01-01

    Multivalent polymers are macromolecules containing multiple chemical moieties designed to bind to complementary moieties on a target; for example, a protein with multiple ligands that have affinity for receptors on a cell surface. Though the individual ligand–receptor bonds are often weak, the combinatorial entropy associated with the different possible ligand–receptor pairs leads to a binding transition that can be very sharp with respect to control parameters, such as temperature or surface receptor concentration. We use mean-field self-consistent field theory to study the binding selectivity of multivalent polymers to receptor-coated surfaces. Polymers that have their ligands clustered into a contiguous domain, either located at the chain end or chain midsection, exhibit cooperative surface adsorption and superselectivity when the polymer concentration is low. On the other hand, when the ligands are uniformly spaced along the chain backbone, selectivity is substantially reduced due to the lack of binding cooperativity and due to crowding of the surface by the inert polymer segments in the chain backbone. PMID:25400296

  6. Risk based treatment selection and optimization of contaminated site remediation

    SciTech Connect

    Heitzer, A.; Scholz, R.W.

    1995-12-31

    During the past few years numerous remediation technologies for the cleanup of contaminated sites have been developed. Because of the associated uncertainties concerning treatment reliability it is important to develop strategies to characterize their risks to achieve the cleanup requirements. For this purpose it is necessary to integrate existing knowledge on treatment efficacy and efficiency into the planning process for the management of contaminated sites. Based on field-scale experience data for the remediation of soils contaminated with petroleum hydrocarbons, two treatment technologies, biological land treatment and physico-chemical soil washing, were analyzed with respect to their general performance risks to achieve given cleanup standards. For a specific contamination scenario, efficient application ranges were identified using the method of linear optimization in combination with sensitivity analysis. Various constraints including cleanup standards, available financial budget, amount of contamination and others were taken into account. While land treatment was found to be most efficient at higher cleanup standards and less contaminated soils, soil washing exhibited better efficiency at lower cleanup standards and more heavily contaminated soils. These results compare favorably with practical experience and indicate the utility of this approach to support decision making and planning processes for the general management of contaminated sites. In addition, the method allows for the simultaneous integration of various aspects such as risk based characteristics of treatment technologies, cleanup standards and more general ecological and economical remedial action objectives.

  7. Stationary phase optimized selectivity liquid chromatography: Basic possibilities of serially connected columns using the "PRISMA" principle.

    PubMed

    Nyiredy, Sz; Szucs, Zoltán; Szepesy, L

    2007-07-20

    A new procedure (stationary phase optimized selectivity liquid chromatography: SOS-LC) is described for the optimization of the HPLC stationary phase, using serially connected columns and the principle of the "PRISMA" model. The retention factors (k) of the analytes were determined on three different stationary phases. From these data, the k values were predicted for theoretical combinations of the stationary phases. These predictions resulted in numerous intermediate theoretical separations, from among which only the optimal one was assembled and tested. The overall selectivity of this separation was better than that of any individual base stationary phase. SOS-LC is independent of the mechanism and the scale of separation.

  8. SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

    PubMed Central

    Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.

    2015-01-01

    We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min over b ∈ ℝ^p of (1/2)‖y − Xb‖²_ℓ2 + λ1|b|(1) + λ2|b|(2) + ⋯ + λp|b|(p), where λ1 ≥ λ2 ≥ ⋯ ≥ λp ≥ 0 and |b|(1) ≥ |b|(2) ≥ ⋯ ≥ |b|(p) are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289–300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values λBH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the α-quantile of the standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
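The λBH sequence and the sorted-ℓ1 regularizer above are straightforward to compute. A minimal Python sketch (illustrative only; the authors' solver additionally needs a proximal algorithm for the sorted-ℓ1 norm, which is omitted here):

```python
from statistics import NormalDist
import numpy as np

def bh_lambdas(p, q):
    """BH critical values lambda_BH(i) = z(1 - i*q/(2p)), where z(a) is
    the standard normal quantile function; the sequence is decreasing."""
    nd = NormalDist()
    return np.array([nd.inv_cdf(1 - i * q / (2 * p)) for i in range(1, p + 1)])

def sorted_l1_penalty(b, lambdas):
    """SLOPE regularizer sum_i lambda_i * |b|_(i): the largest lambda
    multiplies the largest absolute coefficient, and so on down the ranks."""
    abs_desc = np.sort(np.abs(b))[::-1]
    return float(np.dot(lambdas, abs_desc))

def slope_objective(b, X, y, lambdas):
    """The convex SLOPE objective: 0.5*||y - Xb||^2 plus the sorted-l1 term."""
    return 0.5 * float(np.sum((y - X @ b) ** 2)) + sorted_l1_penalty(b, lambdas)

lam = bh_lambdas(p=5, q=0.1)
pen = sorted_l1_penalty(np.array([0.0, 3.0, 0.0, -1.0, 0.0]), lam)
```

Note how the rank-dependent penalty mirrors the BH procedure: the strongest coefficient faces the most stringent threshold z(1 − q/(2p)).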

  9. Determination of an Optimal Recruiting-Selection Strategy to Fill a Specified Quota of Satisfactory Personnel.

    ERIC Educational Resources Information Center

    Sands, William A.

    Managers of military and civilian personnel systems justifiably demand an estimate of the payoff in dollars and cents, which can be expected to result from the implementation of a proposed selection program. The Cost of Attaining Personnel Requirements (CAPER) Model provides an optimal recruiting-selection strategy for personnel decisions which…

  11. Selection for optimal crew performance - Relative impact of selection and training

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.

    1987-01-01

    An empirical study supporting Helmreich's (1986) theoretical work on the distinct manner in which training and selection impact crew coordination is presented. Training is capable of changing attitudes, while selection screens for stable personality characteristics. Training appears least effective for leadership, an area strongly influenced by personality. Selection is least effective for influencing attitudes about personal vulnerability to stress, which appear to be trained in resource management programs. Because personality correlates with attitudes before and after training, it is felt that selection may be necessary even with a leadership-oriented training curriculum.

  12. On the complexity of discrete feature selection for optimal classification.

    PubMed

    Peña, Jose M; Nilsson, Roland

    2010-08-01

    Consider a classification problem involving only discrete features that are represented as random variables with some prescribed discrete sample space. In this paper, we study the complexity of two feature selection problems. The first problem consists in finding a feature subset of a given size k that has minimal Bayes risk. We show that for any increasing ordering of the Bayes risks of the feature subsets (consistent with an obvious monotonicity constraint), there exists a probability distribution that exhibits that ordering. This implies that solving the first problem requires an exhaustive search over the feature subsets of size k. The second problem consists of finding the minimal feature subset that has minimal Bayes risk. In the light of the complexity of the first problem, one may think that solving the second problem requires an exhaustive search over all of the feature subsets. We show that, under mild assumptions, this is not true. We also study the practical implications of our solutions to the second problem.
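The exhaustive search that the first result makes necessary can be sketched directly for empirical data. The XOR-style toy data set below is an assumption, chosen because it shows why single-feature scores can mislead: features 0 and 1 are informative only jointly.

```python
import itertools
from collections import Counter

def empirical_bayes_risk(samples, labels, subset):
    """Empirical Bayes risk using only the features in `subset`: for each
    observed feature configuration the Bayes rule picks the majority
    class, so the risk is the fraction of minority-label samples."""
    by_config = {}
    for x, y in zip(samples, labels):
        cfg = tuple(x[i] for i in subset)
        by_config.setdefault(cfg, Counter())[y] += 1
    errors = sum(sum(c.values()) - max(c.values()) for c in by_config.values())
    return errors / len(samples)

def best_subset_of_size(samples, labels, k):
    """Exhaustive search over all size-k feature subsets, which the
    paper's first result shows cannot be avoided in general."""
    d = len(samples[0])
    return min(itertools.combinations(range(d), k),
               key=lambda s: empirical_bayes_risk(samples, labels, s))

# XOR-style data: label = feature0 XOR feature1; feature 2 is noise
samples = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1),
           (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
labels = [0, 1, 1, 0, 0, 1, 1, 0]
best = best_subset_of_size(samples, labels, 2)
```

Here the pair {0, 1} achieves zero risk even though each of those features is useless alone, which is exactly the dependency structure that forces exhaustive search in the worst case.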

  13. Storage of human biospecimens: selection of the optimal storage temperature.

    PubMed

    Hubel, Allison; Spindler, Ralf; Skubitz, Amy P N

    2014-06-01

    Millions of biological samples are currently kept at low temperatures in cryobanks/biorepositories for long-term storage. The quality of a biospecimen when thawed, however, is determined not only by the processing of the biospecimen but by the storage conditions as well. The overall objective of this article is to describe the scientific basis for selecting a storage temperature for a biospecimen based on current scientific understanding. To that end, this article reviews some physical basics of temperature, nucleation, and ice crystal growth in biological samples stored at low temperatures (-20°C to -196°C), and our current understanding of the role of temperature in the activity of degradative molecules present in biospecimens. The scientific literature relevant to the stability of specific biomarkers in human fluid, cell, and tissue biospecimens is also summarized for the range of temperatures between -20°C and -196°C. These studies demonstrate the importance of storage temperature for the stability of critical biomarkers in fluid, cell, and tissue biospecimens.

  14. Optimizing landfill site selection by using land classification maps.

    PubMed

    Eskandari, M; Homaee, M; Mahmoodi, S; Pazira, E; Van Genuchten, M Th

    2015-05-01

    Municipal solid waste disposal is a major environmental concern throughout the world. Proper landfill siting involves many environmental, economic, technical, and sociocultural challenges. In this study, a new quantitative method for landfill siting that reduces the number of evaluation criteria, simplifies siting procedures, and enhances the utility of available land evaluation maps was proposed. The method is demonstrated by selecting a suitable landfill site near the city of Marvdasht in Iran. The approach involves two separate stages. First, necessary criteria for preliminary landfill siting using four constraints and eight factors were obtained from a land classification map initially prepared for irrigation purposes. Thereafter, the criteria were standardized using a rating approach and then weighted to obtain a suitability map for landfill siting, with ratings in a 0-1 domain and divided into five suitability classes. Results were almost identical to those obtained with a more traditional environmental landfill siting approach. Because of far fewer evaluation criteria, the proposed weighting method was much easier to implement while producing a more convincing database for landfill siting. The classification map also considered land productivity. In the second stage, the six best alternative sites were evaluated for final landfill siting using four additional criteria. Sensitivity analyses were furthermore conducted to assess the stability of the obtained ranking. Results indicate that the method provides a precise siting procedure that should convince all pertinent stakeholders.
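
    The rating-and-weighting stage described above reduces to a weighted sum of standardized criteria scores, with the 0-1 result binned into five suitability classes. A minimal sketch of that computation; the criteria names, ratings, and weights below are hypothetical, not the study's actual factors:

```python
def suitability(ratings, weights):
    """Weighted-sum suitability index in the 0-1 domain.

    ratings: criterion -> standardized rating in [0, 1]
    weights: criterion -> importance weight (normalized to sum to 1 here).
    """
    total = sum(weights.values())
    return sum(ratings[c] * w / total for c, w in weights.items())

def suitability_class(score, n_classes=5):
    """Bin a 0-1 score into one of five equal-width classes (1 = least suitable)."""
    return min(int(score * n_classes) + 1, n_classes)

# Hypothetical candidate site with three criteria.
ratings = {"slope": 0.8, "distance_to_city": 0.6, "groundwater_depth": 0.9}
weights = {"slope": 2.0, "distance_to_city": 1.0, "groundwater_depth": 3.0}
score = suitability(ratings, weights)
print(round(score, 3), suitability_class(score))  # 0.817 5
```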

  15. Applying optimal model selection in principal stratification for causal inference.

    PubMed

    Odondi, Lang'o; McNamee, Roseanne

    2013-05-20

    Noncompliance with treatment allocation is a key source of complication for causal inference. Efficacy estimation is likely to be compounded by the presence of noncompliance in both treatment arms of clinical trials, where the intention-to-treat estimate provides a biased estimator for the true causal estimate even under the homogeneous-treatment-effects assumption. The principal stratification method has been developed to address such posttreatment complications. The present work extends a principal stratification method that adjusts for noncompliance in two-arm trials by developing model selection for covariates predicting compliance to treatment in each arm. We apply the method to analyse data from the Esprit study, which was conducted to ascertain whether unopposed oestrogen (hormone replacement therapy) reduced the risk of further cardiac events in postmenopausal women who survive a first myocardial infarction. We adjust for noncompliance in both treatment arms under a Bayesian framework to produce causal risk ratio estimates for each principal stratum. For mild values of a sensitivity parameter and using separate predictors of compliance in each arm, principal stratification results suggested that compliance with hormone replacement therapy alone would reduce the risk of death and myocardial reinfarction by about 47% and 25%, respectively, whereas compliance with either treatment would reduce the risk of death by 13% and reinfarction by 60% among the most compliant. However, the results were sensitive to the user-defined sensitivity parameter.

  16. Plastic scintillation dosimetry: Optimal selection of scintillating fibers and scintillators

    SciTech Connect

    Archambault, Louis; Arsenault, Jean; Gingras, Luc; Sam Beddar, A.; Roy, Rene; Beaulieu, Luc

    2005-07-15

    Scintillation dosimetry is a promising avenue for evaluating dose patterns delivered by intensity-modulated radiation therapy plans or for the small fields involved in stereotactic radiosurgery. However, increasing the signal has been the goal of many authors. In this paper, a comparison is made between plastic scintillating fibers and plastic scintillators. The collection of scintillation light was measured experimentally for four commercial models of scintillating fibers (BCF-12, BCF-60, SCSF-78, SCSF-3HF) and two models of plastic scintillators (BC-400, BC-408). The emission spectra of all six scintillators were obtained by using an optical spectrum analyzer and they were compared with theoretical behavior. For scintillation in the blue region, the signal intensity of a singly clad scintillating fiber (BCF-12) was 120% of that of the plastic scintillator (BC-400). For the multiclad fiber (SCSF-78), the signal reached 144% of that of the plastic scintillator. The intensity of the green scintillating fibers was lower than that of the plastic scintillator: 47% for the singly clad fiber (BCF-60) and 77% for the multiclad fiber (SCSF-3HF). The collected light was studied as a function of the scintillator length and radius for a cylindrical probe. We found that symmetric detectors with nearly the same spatial resolution in each direction (2 mm in diameter by 3 mm in length) could be made with a signal equivalent to those of the more commonly used asymmetric scintillators. With signal-to-noise ratio considerations in mind, this paper presents a series of comparisons that should provide insight into selection of a scintillator type and volume for development of a medical dosimeter.

  17. Plastic scintillation dosimetry: Optimal selection of scintillating fibers and scintillators.

    PubMed

    Archambault, Louis; Arsenault, Jean; Gingras, Luc; Sam Beddar, A; Roy, René; Beaulieu, Luc

    2005-07-01

    Scintillation dosimetry is a promising avenue for evaluating dose patterns delivered by intensity-modulated radiation therapy plans or for the small fields involved in stereotactic radiosurgery. However, increasing the signal has been the goal of many authors. In this paper, a comparison is made between plastic scintillating fibers and plastic scintillators. The collection of scintillation light was measured experimentally for four commercial models of scintillating fibers (BCF-12, BCF-60, SCSF-78, SCSF-3HF) and two models of plastic scintillators (BC-400, BC-408). The emission spectra of all six scintillators were obtained by using an optical spectrum analyzer and they were compared with theoretical behavior. For scintillation in the blue region, the signal intensity of a singly clad scintillating fiber (BCF-12) was 120% of that of the plastic scintillator (BC-400). For the multiclad fiber (SCSF-78), the signal reached 144% of that of the plastic scintillator. The intensity of the green scintillating fibers was lower than that of the plastic scintillator: 47% for the singly clad fiber (BCF-60) and 77% for the multiclad fiber (SCSF-3HF). The collected light was studied as a function of the scintillator length and radius for a cylindrical probe. We found that symmetric detectors with nearly the same spatial resolution in each direction (2 mm in diameter by 3 mm in length) could be made with a signal equivalent to those of the more commonly used asymmetric scintillators. With signal-to-noise ratio considerations in mind, this paper presents a series of comparisons that should provide insight into selection of a scintillator type and volume for development of a medical dosimeter. © 2005 American Association of Physicists in Medicine.

  19. Visualization of multi-property landscapes for compound selection and optimization

    NASA Astrophysics Data System (ADS)

    de la Vega de León, Antonio; Kayastha, Shilva; Dimova, Dilyana; Schultz, Thomas; Bajorath, Jürgen

    2015-08-01

    Compound optimization generally requires considering multiple properties in concert and reaching a balance between them. Computationally, this process can be supported by multi-objective optimization methods that produce numerical solutions to an optimization task. Since a variety of comparable multi-property solutions are usually obtained, further prioritization is required. However, the underlying multi-dimensional property spaces are typically complex and difficult to rationalize. Herein, an approach is introduced to visualize multi-property landscapes by adapting the concepts of star and parallel coordinates from computer graphics. The visualization method is designed to complement multi-objective compound optimization. We show that visualization makes it possible to further distinguish between numerically equivalent optimization solutions and helps to select drug-like compounds from multi-dimensional property spaces. The methodology is intuitive, applicable to a wide range of chemical optimization problems, and made freely available to the scientific community.

  20. Visualization of multi-property landscapes for compound selection and optimization.

    PubMed

    de la Vega de León, Antonio; Kayastha, Shilva; Dimova, Dilyana; Schultz, Thomas; Bajorath, Jürgen

    2015-08-01

    Compound optimization generally requires considering multiple properties in concert and reaching a balance between them. Computationally, this process can be supported by multi-objective optimization methods that produce numerical solutions to an optimization task. Since a variety of comparable multi-property solutions are usually obtained, further prioritization is required. However, the underlying multi-dimensional property spaces are typically complex and difficult to rationalize. Herein, an approach is introduced to visualize multi-property landscapes by adapting the concepts of star and parallel coordinates from computer graphics. The visualization method is designed to complement multi-objective compound optimization. We show that visualization makes it possible to further distinguish between numerically equivalent optimization solutions and helps to select drug-like compounds from multi-dimensional property spaces. The methodology is intuitive, applicable to a wide range of chemical optimization problems, and made freely available to the scientific community.
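
    Parallel coordinates, as adapted here, render each compound as a polyline across normalized property axes, so numerically equivalent solutions become visually distinguishable. A minimal sketch of that mapping; the compounds and property values below are hypothetical:

```python
def parallel_coords(compounds, properties):
    """Map each compound (dict of property values) to a polyline over the
    property axes, min-max normalized to [0, 1] per axis -- the usual
    parallel-coordinates scaling."""
    los = {p: min(c[p] for c in compounds) for p in properties}
    his = {p: max(c[p] for c in compounds) for p in properties}
    spans = {p: (his[p] - los[p]) or 1.0 for p in properties}  # guard constant axes
    return [[(i, (c[p] - los[p]) / spans[p]) for i, p in enumerate(properties)]
            for c in compounds]

# Hypothetical three-property landscape (potency, solubility, lipophilicity).
compounds = [{"pIC50": 7.2, "logS": -4.1, "logP": 2.5},
             {"pIC50": 6.8, "logS": -3.0, "logP": 3.9},
             {"pIC50": 8.1, "logS": -5.2, "logP": 1.8}]
lines = parallel_coords(compounds, ["pIC50", "logS", "logP"])
print(lines[2])  # [(0, 1.0), (1, 0.0), (2, 0.0)]
```

    Each polyline's vertices can be fed directly to any 2-D plotting backend; the most potent compound here sits at the top of the first axis and the bottom of the other two.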

  1. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    SciTech Connect

    Wallace, G. M.; Fitzgerald, E.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.; Hillairet, J.

    2014-02-12

    Active stub tuning with a fast ferrite tuner (FFT) allows the system to respond dynamically to changes in the plasma impedance, such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the "forbidden region." The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave, particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma, provided by a linear coupling model, with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.

  2. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    NASA Astrophysics Data System (ADS)

    Wallace, G. M.; Fitzgerald, E.; Hillairet, J.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.

    2014-02-01

    Active stub tuning with a fast ferrite tuner (FFT) allows the system to respond dynamically to changes in the plasma impedance, such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ2 in the "forbidden region." The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave, particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma, provided by a linear coupling model, with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.

  3. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning.

    PubMed

    Li, Yongjie; Yao, Dezhong; Yao, Jonathan; Chen, Wufan

    2005-08-07

    Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Despite many efforts, it remains unsatisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of beam angle optimization. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved through generations by cooperation and competition among the individuals. The optimization results for a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the PSO algorithm could be a promising new solution to the beam angle optimization problem, and potentially to other optimization problems in IMRT, though further studies are needed.
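
    The PSO update described above (each particle tracks a personal best while the swarm shares a global best) can be sketched generically. A toy sphere function stands in for the physical planning objective, and all parameter values are illustrative, not the paper's settings:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology).

    Velocity update blends inertia, attraction to the particle's own best
    (cognition), and attraction to the swarm's best (social term)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the planning objective: minimize a sphere function.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
print(best_val)
```

    In BASPSO each particle would instead encode a beam-angle configuration, with the fitness obtained from a CG-optimized intensity map; that inner loop is omitted here.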

  4. Coupling between protein level selection and codon usage optimization in the evolution of bacteria and archaea.

    PubMed

    Ran, Wenqi; Kristensen, David M; Koonin, Eugene V

    2014-03-25

    The relationship between the selection affecting codon usage and selection on protein sequences of orthologous genes in diverse groups of bacteria and archaea was examined by using the Alignable Tight Genome Clusters database of prokaryote genomes. The codon usage bias is generally low, with 57.5% of the gene-specific optimal codon frequencies (Fopt) being below 0.55. This apparent weak selection on codon usage contrasts with the strong purifying selection on amino acid sequences, with 65.8% of the gene-specific dN/dS ratios being below 0.1. For most of the genomes compared, a limited but statistically significant negative correlation between Fopt and dN/dS was observed, which is indicative of a link between selection on protein sequence and selection on codon usage. The strength of the coupling between the protein level selection and codon usage bias showed a strong positive correlation with the genomic GC content. Combined with previous observations on the selection for GC-rich codons in bacteria and archaea with GC-rich genomes, these findings suggest that selection for translational fine-tuning could be an important factor in microbial evolution that drives the evolution of genome GC content away from mutational equilibrium. This type of selection is particularly pronounced in slowly evolving, "high-status" genes. A significantly stronger link between the two aspects of selection is observed in free-living bacteria than in parasitic bacteria and in genes encoding metabolic enzymes and transporters than in informational genes. These differences might reflect the special importance of translational fine-tuning for the adaptability of gene expression to environmental changes. The results of this work establish the coupling between protein level selection and selection for translational optimization as a distinct and potentially important factor in microbial evolution. IMPORTANCE Selection affects the evolution of microbial genomes at many levels, including both

  5. Constraint-selected and search-optimized families of Daubechies wavelet filters computable by spectral factorization

    NASA Astrophysics Data System (ADS)

    Taswell, Carl

    2000-09-01

    A unifying algorithm has been developed to systematize the collection of compact Daubechies wavelets computable by spectral factorization of a symmetric positive polynomial. This collection comprises all classes of real and complex orthogonal and biorthogonal wavelet filters with maximal flatness for their minimal length. The main algorithm incorporates spectral factorization of the Daubechies product filter into analysis and synthesis filters. The spectral factors are found for search-optimized families by examining a desired criterion over combinatorial subsets of roots indexed by binary codes, and for constraint-selected families by imposing sufficient constraints on the roots without any optimizing search for an extremal property. Daubechies wavelet filter families have been systematized to include those constraint-selected by the principle of separably disjoint roots, and those search-optimized for time-domain regularity, frequency-domain selectivity, time-frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the least and most asymmetric and least and most symmetric real and complex orthogonal filters. Biorthogonal symmetric spline and balanced-length filters with linear phase are also computable by these methods. This systematized collection has been developed in the context of a general framework enabling evaluation of the equivalence of constraint-selected and search-optimized families with respect to the filter coefficients and roots and their characteristics. Some of the constraint-selected families have been demonstrated to be equivalent to some of the search-optimized families, thereby obviating the necessity for any search in their computation.

  6. Modeling Network Intrusion Detection System Using Feature Selection and Parameters Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Dong Seong; Park, Jong Sou

    Previous approaches to modeling Intrusion Detection Systems (IDS) have been twofold: improving detection models in terms of (i) feature selection of audit data through wrapper and filter methods and (ii) parameter optimization of the detection model design, based on classification, clustering algorithms, etc. In this paper, we present three approaches to modeling IDS in the context of feature selection and parameter optimization. First, we present Fusion of Genetic Algorithm (GA) and Support Vector Machines (SVM) (FuGAS), which combines GA and SVM through genetic operations and is capable of building an optimal detection model with only the selected important features and optimal parameter values. Second, we present Correlation-based Hybrid Feature Selection (CoHyFS), which utilizes a filter method in conjunction with GA for feature selection in order to reduce long training times. Third, we present Simultaneous Intrinsic Model Identification (SIMI), which adopts Random Forest (RF) and shows better intrusion detection rates and feature selection results, with no additional computational overhead. We show experimental results and analysis of the three approaches on the KDD 1999 intrusion detection datasets.
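
    A GA wrapper of the FuGAS flavor evolves bit masks over the feature set. In the sketch below a toy fitness stands in for what would really be a cross-validated SVM score on audit data; the function names, operators, and parameters are all hypothetical simplifications:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=30, gens=40, p_mut=0.1, seed=0):
    """GA wrapper for feature selection: individuals are bit masks over
    features, evolved with truncation selection, one-point crossover,
    and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]            # keep the better half
        pop = elite[:]
        while len(pop) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_features):            # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            pop.append(child)
    return max(pop, key=fitness)

# Toy fitness: reward two "relevant" features, penalize subset size
# (a stand-in for detection accuracy minus a feature-count cost).
relevant = {0, 3}

def fitness(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    return 2 * len(chosen & relevant) - 0.1 * len(chosen)

print(ga_feature_select(8, fitness))
```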

  7. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis.

    PubMed

    Li, Qiang; Chen, Huiling; Huang, Hui; Zhao, Xuehua; Cai, ZhenNao; Tong, Changfei; Liu, Wenbin; Tian, Xin

    2017-01-01

    In this study, a new predictive framework is proposed by integrating improved grey wolf optimization (IGWO) and a kernel extreme learning machine (KELM), termed IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used to find the optimal feature subset for medical data. In the proposed approach, a genetic algorithm (GA) is first adopted to generate diversified initial positions, and grey wolf optimization (GWO) is then used to update the current positions of the population in the discrete search space, thus obtaining the optimal feature subset for better classification based on KELM. The proposed approach is compared against the original GA and GWO on two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results demonstrate the superiority of the proposed method over the two competitive counterparts.

  8. Rational optimization of tolC as a powerful dual selectable marker for genome engineering

    PubMed Central

    Gregg, Christopher J.; Lajoie, Marc J.; Napolitano, Michael G.; Mosberg, Joshua A.; Goodman, Daniel B.; Aach, John; Isaacs, Farren J.; Church, George M.

    2014-01-01

    Selection has been invaluable for genetic manipulation, although counter-selection has historically exhibited limited robustness and convenience. TolC, an outer membrane pore involved in transmembrane transport in E. coli, has been implemented as a selectable/counter-selectable marker, but counter-selection escape frequency using colicin E1 precludes using tolC for inefficient genetic manipulations and/or with large libraries. Here, we leveraged unbiased deep sequencing of 96 independent lineages exhibiting counter-selection escape to identify loss-of-function mutations, which offered mechanistic insight and guided strain engineering to reduce counter-selection escape frequency by ∼40-fold. We fundamentally improved the tolC counter-selection by supplementing a second agent, vancomycin, which reduces counter-selection escape by 425-fold compared to colicin E1 alone. Combining these improvements in a mismatch repair proficient strain reduced counter-selection escape frequency by 1.3E6-fold in total, making tolC counter-selection as effective as most selectable markers, and adding a valuable tool to the genome editing toolbox. These improvements permitted us to perform stable and continuous rounds of selection/counter-selection using tolC, enabling replacement of 10 alleles without requiring genotypic screening for the first time. Finally, we combined these advances to create an optimized E. coli strain for genome engineering that is ∼10-fold more efficient at achieving allelic diversity than previous best practices. PMID:24452804

  9. Optimization of meander line radiators for frequency selective surfaces by using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Bucuci, Stefania C.; Dumitrascu, Ana; Danisor, Alin; Berescu, Serban; Tamas, Razvan D.

    2015-02-01

    In this paper we propose the use of frequency selective surfaces based on meander line radiators as targets for monitoring slow displacements with synthetic aperture radars. The optimization of the radiators is performed by using genetic algorithms on only two parameters, i.e., gain and size. As an example, we optimized a single meander antenna resonating in the X-band at 9.65 GHz.

  10. Debris Selection and Optimal Path Planning for Debris Removal on the SSO: Impulsive-Thrust Option

    NASA Astrophysics Data System (ADS)

    Olympio, J. T.; Frouvelle, N.

    2013-08-01

    The current paper deals with the mission design of a generic active space debris removal spacecraft. The considered debris are all on a sun-synchronous orbit. A perturbed Lambert's problem, modelling the transfer between two debris, is devised to take the J2 perturbation into account and to quickly evaluate mission scenarios. A robust approach, using global optimisation techniques, is followed to find the optimal debris sequence and mission strategy. Manoeuvre optimization is then performed to refine the selected trajectory scenarios.

  11. Compression of biomedical signals with mother wavelet optimization and best-basis wavelet packet selection.

    PubMed

    Brechet, Laurent; Lucas, Marie-Françoise; Doncarli, Christian; Farina, Dario

    2007-12-01

    We propose a novel scheme for signal compression based on discrete wavelet packet transform (DWPT) decomposition. The mother wavelet and the basis of wavelet packets were optimized and the wavelet coefficients were encoded with a modified version of the embedded zerotree algorithm. This signal-dependent compression scheme was designed by a two-step process. The first (internal optimization) was the best basis selection, performed for a given mother wavelet. For this purpose, three additive cost functions were applied and compared. The second (external optimization) was the selection of the mother wavelet based on the minimal distortion of the decoded signal at a fixed compression ratio. The mother wavelet was parameterized in the multiresolution analysis framework by the scaling filter, which is sufficient to define the entire decomposition in the orthogonal case. The method was tested on two sets of ten electromyographic (EMG) and ten electrocardiographic (ECG) signals that were compressed with compression ratios in the range of 50%-90%. For a 90% compression ratio of EMG (ECG) signals, the percent residual difference after compression decreased from (mean +/- SD) 48.6 +/- 9.9% (21.5 +/- 8.4%) with the discrete wavelet transform (DWT) using the worst-performing wavelet to 28.4 +/- 3.0% (6.7 +/- 1.9%) with DWPT, with optimal basis selection and wavelet optimization. In conclusion, best basis selection and optimization of the mother wavelet through parameterization led to substantial improvement in signal compression performance with respect to DWT and random selection of the mother wavelet. The method provides an adaptive approach for optimal signal representation for compression and can thus be applied to any type of biomedical signal.
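
    Best-basis selection with an additive cost works bottom-up: a node of the wavelet packet tree is kept whenever its own cost does not exceed the summed cost of its best children's bases. A minimal sketch using Haar analysis steps and a Shannon-entropy-style additive cost (one of several cost functions the paper compares; the signal and details here are illustrative):

```python
import math

def haar_split(x):
    """One Haar analysis step: (approximation, detail), each half length."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def entropy_cost(x):
    """Additive (non-normalized) Shannon entropy cost: -sum(v^2 log v^2)."""
    return -sum(v * v * math.log(v * v) for v in x if v != 0.0)

def best_basis(x, max_level):
    """Coifman-Wickerhauser best-basis search: compare each node's cost
    against the summed cost of its best children's bases."""
    if max_level == 0 or len(x) < 2:
        return [x], entropy_cost(x)
    a, d = haar_split(x)
    la, ca = best_basis(a, max_level - 1)
    ld, cd = best_basis(d, max_level - 1)
    here = entropy_cost(x)
    if here <= ca + cd:
        return [x], here          # keep this node as a basis element
    return la + ld, ca + cd       # the children's bases are cheaper

signal = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]
leaves, cost = best_basis(signal, max_level=3)
print(len(leaves), round(cost, 3))  # 4 -16.636
```

    The step signal concentrates its energy into a single deep coefficient, so the search descends on the approximation side while pruning the all-zero detail branches, which is exactly the compaction the compression scheme exploits.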

  12. Optimal selection on water-supply pipe of building based on analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Wei, Tianyun; Chen, Guiqing

    2017-04-01

    The main problems of pipes used in water-supply systems are analyzed, and the commonly used pipes and their main features are introduced in this paper. The principles that the selection of water-supply pipes should follow are pointed out. The Analytic Hierarchy Process (AHP) with a 9-point scale was applied to optimize the selection of water-supply pipes quantitatively. The optimal water-supply pipes were determined according to the ranking of a comprehensive evaluation index. This can serve as a reference for engineers selecting suitable water-supply pipes.
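
    The AHP step above amounts to deriving priority weights from a 9-scale pairwise comparison matrix and checking the judgments for consistency. A minimal sketch using the row geometric-mean approximation to the principal eigenvector; the criteria and comparison values below are hypothetical, not the study's:

```python
import math

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix via the row
    geometric-mean method, a standard eigenvector approximation."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, weights):
    """Saaty consistency ratio; CR < 0.1 is conventionally acceptable."""
    n = len(M)
    # Estimate lambda_max from M.w (averaged component-wise ratio).
    lam = sum(sum(M[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    return ci / ri

# Hypothetical 9-scale comparisons among three pipe criteria:
# durability vs cost vs hygiene.
M = [[1.0,     3.0,     5.0],
     [1 / 3.0, 1.0,     3.0],
     [1 / 5.0, 1 / 3.0, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w], round(consistency_ratio(M, w), 3))
```

    The resulting weights multiply the criterion scores of each candidate pipe, and the pipes are ranked by the weighted sum (the comprehensive evaluation index).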

  13. Improving Pertuzumab production by gene optimization and proper signal peptide selection.

    PubMed

    Ramezani, Amin; Mahmoudi Maymand, Elham; Yazdanpanah-Samani, Mahsa; Hosseini, Ahmad; Toghraie, Fatemeh Sadat; Ghaderi, Abbas

    2017-07-01

    Use of a proper signal peptide and codon optimization are important factors that must be considered when designing a vector to increase protein expression in Chinese Hamster Ovary (CHO) cells. The aim of the present study was to investigate how to enhance Pertuzumab production through optimization of the heavy- and light-chain coding genes and proper signal peptide selection. First, CHO-K1 cells were transiently transfected with whole-antibody-gene-optimized, variable-regions-optimized and non-optimized constructs, and then five different signal peptides were employed to improve the secretion efficiency of Pertuzumab. Compared to the native antibody gene, a 3.8-fold increase in the Pertuzumab production rate was achieved with optimization of the whole heavy- and light-chain sequences. Although an overall twofold increase in monoclonal antibody production was achieved with the human albumin signal peptide compared to the control signal peptide, this overproduction was not statistically significant. The selected signal peptides had no effect on the binding of Pertuzumab to the ErbB2 antigen. The combined data indicate that the human albumin signal peptide along with whole-antibody sequence optimization can be used to improve Pertuzumab production rates. This sequence was used to generate stably transfected CHO-K1 cells producing Pertuzumab. This result is useful for producing Pertuzumab as a biosimilar drug. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. selectSNP – An R package for selecting SNPs optimal for genetic evaluation

    USDA-ARS?s Scientific Manuscript database

    There has been a huge increase in the number of SNPs in the public repositories. This has made it a challenge to design low and medium density SNP panels, which requires careful selection of available SNPs considering many criteria, such as map position, allelic frequency, possible biological functi...

  15. Combining predictors to achieve optimal trade-offs between selection quality and adverse impact.

    PubMed

    De Corte, Wilfried; Lievens, Filip; Sackett, Paul R

    2007-09-01

    The authors propose a procedure to determine (a) predictor composites that result in a Pareto-optimal trade-off between the often competing goals in personnel selection of quality and adverse impact and (b) the relative importance of the quality and impact objectives that correspond to each of these trade-offs. They also investigated whether the obtained Pareto-optimal composites continue to perform well under variability of the selection parameters that characterize the intended selection decision. The results of this investigation indicate that this is indeed the case. The authors suggest that the procedure be used as one of a number of potential strategies for addressing the quality-adverse impact problem in settings where estimates of the selection parameters (e.g., validity estimates, predictor intercorrelations, subgroup mean differences on the predictors and criteria) are available from either a local validation study or meta-analytic research. (c) 2007 APA.

  16. Selective waste collection optimization in Romania and its impact to urban climate

    NASA Astrophysics Data System (ADS)

    Şercăianu, Mihai; Iacoboaea, Cristina; Petrescu, Florian; Aldea, Mihaela; Luca, Oana; Gaman, Florian; Parlow, Eberhard

    2016-08-01

    According to European Directives, transposed into national legislation, the Member States should have organized separate collection systems at least for paper, metal, plastic, and glass by 2015. In Romania, since 2011 only 12% of collected municipal waste has been recovered, the rest being stored in landfills, although storage is considered the last option in the waste hierarchy. At the same time, only 4% of municipal waste was collected selectively. Surveys have shown that Romanian people do not have selective collection bins close to their residences. The article aims to analyze the current situation in Romania in the field of waste collection and management and to propose a layout of selective collection containers, using geographic information systems tools, for a case study in Romania. Route optimization is performed based on remote sensing technologies and network analyst protocols. By optimizing the selective collection system, greenhouse gas, particle, and dust emissions can be reduced.

  17. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    PubMed

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
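    The PSO search over SVM hyperparameters described above can be sketched with a generic global-best PSO. Here a simple quadratic stands in for the cross-validation error the paper minimizes, and all parameter names, bounds, and coefficient values are illustrative assumptions, not the authors' settings:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO. `f` maps a parameter vector (e.g. SVM
    C and gamma) to a cost; `bounds` is a list of (lo, hi) per dim."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests
    pcost = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = P[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Clamp positions to the search bounds.
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]),
                              bounds[d][1])
            c = f(X[i])
            if c < pcost[i]:
                P[i], pcost[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
    return gbest, gcost

# Stand-in objective: pretend the CV error is minimized at (C, gamma) = (1, 0.5).
cv_error = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2
best, cost = pso_minimize(cv_error, [(0.01, 10.0), (0.001, 1.0)])
```

    In the paper the same loop additionally carries a binary part that switches input features on and off; the continuous part shown here handles the SVM parameters.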

  18. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    PubMed Central

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method. PMID:28781591

  19. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
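    The winning configuration reported above (constriction coefficient with ring topology) can be illustrated in a few lines: the Clerc-Kennedy coefficient that scales the whole velocity update, and a neighborhood-best lookup on a ring. A minimal sketch, not the authors' code:

```python
import math

def constriction(phi1=2.05, phi2=2.05):
    """Clerc-Kennedy constriction coefficient
    chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = phi1 + phi2 > 4;
    the velocity update is v' = chi*(v + phi1*r1*(p - x) + phi2*r2*(g - x))."""
    phi = phi1 + phi2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def ring_best(costs, i, k=1):
    """Index of the best particle in a ring neighborhood of radius k
    around particle i (ring topology: indices wrap around)."""
    n = len(costs)
    neighbors = [(i + d) % n for d in range(-k, k + 1)]
    return min(neighbors, key=lambda j: costs[j])

chi = constriction()                       # ~0.7298, the classic value
best = ring_best([5.0, 1.0, 4.0, 3.0, 2.0], 0)   # particle 1 wins locally
```

    In the localization setting the cost of each particle would be the residual between measured and predicted anchor distances; the ring topology slows information spread, which the survey found to help avoid premature convergence.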

  20. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Bedford, J. L.; Webb, S.

    2007-01-01

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans.
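    Each nested engine above performs a random search with successively narrowing step sizes; that pattern can be sketched generically (the objective, step schedule, and parameters below are illustrative, not the treatment-planning code):

```python
import random

def narrowing_random_search(f, x0, step0=10.0, shrink=0.5,
                            levels=6, tries_per_level=50, seed=0):
    """Random search that shrinks its step size after each level,
    mimicking the successively narrowing search of the nested
    optimization engines: coarse exploration first, fine refinement last."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    step = step0
    for _ in range(levels):
        for _ in range(tries_per_level):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            fc = f(cand)
            if fc < fx:                   # keep only improving moves
                x, fx = cand, fc
        step *= shrink                    # narrow the search window
    return x, fx

# Toy stand-in for a plan-quality objective with optimum at (3, -2).
objective = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
x, fx = narrowing_random_search(objective, [0.0, 0.0])
```

    In the actual planner one such engine runs per variable group (gantry angles, couch angles, segment weights, ...), with the inner engines re-run whenever an outer variable is perturbed.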

  1. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy.

    PubMed

    Bedford, J L; Webb, S

    2007-01-21

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans.

  2. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  3. Intrinsically disordered regions as affinity tuners in protein-DNA interactions.

    PubMed

    Vuzman, Dana; Levy, Yaakov

    2012-01-01

    Intrinsically disordered regions, terminal tails, and flexible linkers are abundant in DNA-binding proteins and play a crucial role by increasing the affinity and specificity of DNA binding. Disordered tails often undergo a disorder-to-order transition during interactions with DNA and improve both the kinetics and thermodynamics of specific DNA binding. The DNA search by proteins that interact nonspecifically with DNA can be supported by disordered tails as well. The disordered tail may increase the overall protein-DNA interface and thus increase the affinity of the protein to the DNA and its sliding propensity while slowing linear diffusion. The exact effect of the disordered tails on the sliding rate depends on the degree of positive charge clustering, as has been shown for homeodomains and p53 transcription factors. The disordered tails, which may be viewed as DNA recognizing subdomains, can facilitate intersegment transfer events that occur via a "monkey bar" mechanism in which the domains bridge two different DNA fragments simultaneously. The "monkey bar" mechanism can be facilitated by internal disordered linkers in multidomain proteins that mediate the cross-talks between the constituent domains and especially their brachiation dynamics and thus their overall capability to search DNA efficiently. The residue sequence of the disordered tails has unique characteristics that were evolutionarily selected to achieve the optimized function that is unique to each protein. Perturbation of the electrostatic characteristics of the disordered tails by post-translational modifications, such as acetylation and phosphorylation, may affect protein affinity to DNA and therefore can serve to regulate DNA recognition. Modifying the disordered protein tails or the flexibility of the inter-domain linkers of multidomain proteins may affect the cross-talk between the constituent domains so as to facilitate the search kinetics of non-specific DNA sequences and increase affinity to

  4. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of the data, and a band/feature selection method needs to be used for selecting an optimal subset of the original data bands. This study examined the efficiency of a genetic algorithm (GA) in band selection for remote sensing classification. A GA-based algorithm for band selection was designed in which a Bhattacharyya distance index indicating separability between the classes of interest is used as the fitness function. A binary string chromosome is designed in which each gene location has a value of 1, representing that the corresponding band is included, or 0, representing that it is excluded. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
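    A binary-chromosome GA of the kind described can be sketched in a few lines. Here a toy fitness stands in for the Bhattacharyya separability index, and the population size, rates, and band weights are illustrative assumptions:

```python
import random

def ga_select(fitness, n_bands, pop_size=30, gens=40, pmut=0.02, seed=3):
    """Binary-chromosome GA for band selection: gene d = 1 means band d
    is included. `fitness` scores a chromosome (a stand-in for the
    Bhattacharyya class-separability index on the selected bands)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bands)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bands)      # one-point crossover
            child = a[:cut] + b[cut:]
            for d in range(n_bands):             # bit-flip mutation
                if rng.random() < pmut:
                    child[d] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: bands 0, 2, 4 are informative, every extra band costs 1.
good = {0, 2, 4}
fit = lambda c: sum(2 for d in good if c[d]) - sum(c)
best = ga_select(fit, 10)
```

    With a real separability index, each fitness evaluation would compute Bhattacharyya distances between class pairs over the selected band subset, which is where nearly all the runtime goes.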

  5. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    PubMed

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
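    The Pareto-front machinery underlying such multi-objective feature selection reduces to a dominance test over objective vectors. A minimal sketch with objectives (classification error, feature cost), both minimized; the points are hypothetical:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (classification error, feature cost) of candidate feature subsets.
pts = [(0.10, 5), (0.20, 3), (0.15, 4), (0.30, 2), (0.25, 4)]
front = pareto_front(pts)   # (0.25, 4) is dominated by (0.20, 3)
```

    The PSO in the paper maintains such a front in an external archive and uses crowding distance to keep it well spread; the two helpers above are the acceptance test at the core of that archive.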

  6. Ant Colony Optimization Based Feature Selection Method for QEEG Data Classification

    PubMed Central

    Ozekes, Serhat; Gultekin, Selahattin; Tarhan, Nevzat

    2014-01-01

    Objective Many applications such as biomedical signals require selecting a subset of the input features in order to represent the whole set of features. A feature selection algorithm has recently been proposed as a new approach for feature subset selection. Methods A feature selection process using ant colony optimization (ACO) for 6-channel pre-treatment electroencephalogram (EEG) data from the theta and delta frequency bands was combined with a back propagation neural network (BPNN) classification method for 147 major depressive disorder (MDD) subjects. Results BPNN classified R subjects with 91.83% overall accuracy and 95.55% subject-detection sensitivity. The area under the ROC curve (AUC) increased from 0.8531 to 0.911 after feature selection. The features selected by the optimization algorithm were Fp1, Fp2, F7, F8, and F3 for the theta frequency band, reducing the feature subset from 12 features to 5. Conclusion The ACO feature selection algorithm improves the classification accuracy of BPNN. Comparing the performance of other feature selection algorithms or classifiers would help to underline the validity and versatility of the designed combination. PMID:25110496

  7. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850
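    The V-shaped transfer function at the core of this method maps a velocity magnitude to a bit-flip probability, in contrast to S-shaped functions that map velocity to the probability of the bit being 1. A minimal sketch using |tanh(v)|, one common member of the V-shaped family (the paper may use a different one):

```python
import math
import random

def v_transfer(v):
    """V-shaped transfer function |tanh(v)|: symmetric in v, zero at
    v = 0, approaching 1 for large |v|. Interpreted as the probability
    of *flipping* the corresponding bit."""
    return abs(math.tanh(v))

def update_bit(bit, v, rng):
    """Binary-PSO bit update under a V-shaped transfer function:
    flip the bit with probability v_transfer(v), else keep it."""
    return 1 - bit if rng.random() < v_transfer(v) else bit

rng = random.Random(0)
kept = update_bit(1, 0.0, rng)    # zero velocity never flips -> stays 1
```

    Because a small velocity means a small flip probability, converged particles stop churning their feature subsets, which is the property the paper exploits to stabilize the selected subsets.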

  8. Multiobjective binary biogeography based optimization for feature selection using gene expression data.

    PubMed

    Li, Xiangtao; Yin, Minghao

    2013-12-01

    Gene expression data play an important role in the development of efficient cancer diagnoses and classification. However, gene expression data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting high discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a multi-objective biogeography based optimization method is proposed to select the small subset of informative gene relevant to the classification. In the proposed algorithm, firstly, the Fisher-Markov selector is used to choose the 60 top gene expression data. Secondly, to make biogeography based optimization suitable for the discrete problem, binary biogeography based optimization, as called BBBO, is proposed based on a binary migration model and a binary mutation model. Then, multi-objective binary biogeography based optimization, as we called MOBBBO, is proposed by integrating the non-dominated sorting method and the crowding distance method into the BBBO framework. Finally, the MOBBBO method is used for gene selection, and support vector machine is used as the classifier with the leave-one-out cross-validation method (LOOCV). In order to show the effective and efficiency of the algorithm, the proposed algorithm is tested on ten gene expression dataset benchmarks. Experimental results demonstrate that the proposed method is better or at least comparable with previous particle swarm optimization (PSO) algorithm and support vector machine (SVM) from literature when considering the quality of the solutions obtained.

  9. Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection.

    PubMed

    Sun, Jun; Fang, Wei; Wu, Xiaojun; Palade, Vasile; Xu, Wenbo

    2012-01-01

    Quantum-behaved particle swarm optimization (QPSO), motivated by concepts from quantum mechanics and particle swarm optimization (PSO), is a probabilistic optimization algorithm belonging to the bare-bones PSO family. Although it has been shown to perform well in finding the optimal solutions for many optimization problems, there has so far been little analysis on how it works in detail. This paper presents a comprehensive analysis of the QPSO algorithm. In the theoretical analysis, we analyze the behavior of a single particle in QPSO in terms of probability measure. Since the particle's behavior is influenced by the contraction-expansion (CE) coefficient, which is the most important parameter of the algorithm, the goal of the theoretical analysis is to find out the upper bound of the CE coefficient, within which the value of the CE coefficient selected can guarantee the convergence or boundedness of the particle's position. In the experimental analysis, the theoretical results are first validated by stochastic simulations for the particle's behavior. Then, based on the derived upper bound of the CE coefficient, we perform empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient, in order to obtain generally good algorithmic performance in real world applications. Finally, a further performance comparison between QPSO and other variants of PSO on the benchmarks is made to show the efficiency of the QPSO algorithm with the proposed parameter control and selection methods.
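    The QPSO position update and the role of the CE coefficient analyzed above can be sketched for a single coordinate (a generic textbook form of the update, not the authors' implementation):

```python
import math
import random

def qpso_position(p, mbest, x, beta, rng):
    """One-coordinate QPSO update: x' = p +/- beta * |mbest - x| * ln(1/u),
    where p is the particle's local attractor (a stochastic blend of its
    personal best and the global best), mbest is the mean of all personal
    bests, u ~ U(0,1), and beta is the contraction-expansion (CE)
    coefficient whose upper bound the paper derives."""
    u = rng.random()
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return p + sign * beta * abs(mbest - x) * math.log(1.0 / u)

rng = random.Random(0)
x_new = qpso_position(p=2.0, mbest=5.0, x=1.0, beta=0.0, rng=rng)  # -> 2.0
```

    With beta = 0 the particle collapses straight onto its attractor; larger beta widens the sampling spread around it, which is why the convergence analysis reduces to bounding beta. A common schedule decreases beta linearly (e.g. from about 1.0 to 0.5) over the run.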

  10. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. Methods: An image dataset including 1600 regions of interest (ROIs) in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions) was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as the features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) A sequential floating forward selection (SFFS) method, (3) A genetic algorithm (GA), and (4) A sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Results: Among these four methods, SFFS has highest efficacy, which takes 3%–5% of computational time as compared to GA approach, and yields the highest performance level with the area under a receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that except using GA, including the new texture features computed from the dilated mass segments improved the AUC
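    The SFFS procedure that performed best in this comparison can be sketched as greedy forward selection with a conditional backward step. This is a simplified version (classic SFFS also tracks the best subset found at each size to prevent cycling), and the scoring function in the example is a hypothetical stand-in for classifier performance:

```python
def sffs(score, features, k):
    """Sequential floating forward selection: repeatedly add the feature
    with the best marginal gain, then conditionally remove the least
    significant feature (never the one just added) while removal
    improves the score. `score` maps a frozenset of features to a value
    to maximize (e.g. cross-validated AUC of an ANN classifier)."""
    S = frozenset()
    while len(S) < k:
        add = max((f for f in features if f not in S),
                  key=lambda f: score(S | {f}))
        S |= {add}
        while len(S) > 2:                    # conditional backward step
            worst = max(S, key=lambda f: score(S - {f}))
            if worst != add and score(S - {worst}) > score(S):
                S -= {worst}
            else:
                break
    return S

# Toy additive score (real use: retrain/evaluate the classifier per call).
w = {'a': 3.0, 'b': 2.0, 'c': 1.0, 'd': 0.5}
score = lambda S: sum(w[f] for f in S)
picked = sffs(score, ['a', 'b', 'c', 'd'], 2)   # -> frozenset({'a', 'b'})
```

    The floating backward step is what lets SFFS escape the nesting effect of plain SFS at a small fraction of a GA's evaluation budget, consistent with the 3%-5% runtime figure reported above.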

  11. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    PubMed

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. An image dataset including 1600 regions of interest (ROIs) in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions) was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as the features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) A sequential floating forward selection (SFFS) method, (3) A genetic algorithm (GA), and (4) A sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Among these four methods, SFFS has highest efficacy, which takes 3%-5% of computational time as compared to GA approach, and yields the highest performance level with the area under a receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that except using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized

  12. Optimization of highly selective 2,4-diaminopyrimidine-5-carboxamide inhibitors of Sky kinase.

    PubMed

    Powell, Noel A; Hoffman, Jennifer K; Ciske, Fred L; Kohrt, Jeffrey T; Baxi, Sangita M; Peng, Yun-Wen; Zhong, Min; Catana, Cornel; Ohren, Jeff; Perrin, Lisa A; Edmunds, Jeremy J

    2013-02-15

    Optimization of the ADME properties of a series of 2,4-diaminopyrimidine-5-carboxamide inhibitors of Sky kinase resulted in the identification of highly selective compounds with properties suitable for use as in vitro and in vivo tools to probe the effects of Sky inhibition. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    ERIC Educational Resources Information Center

    Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.

    2016-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…

  15. Selection, Optimization, and Compensation: An Action-Related Approach to Work and Partnership.

    ERIC Educational Resources Information Center

    Wiese, Bettina S.; Baltes, Paul B.; Freund, Alexandra M.

    2000-01-01

    Data from German professionals (n=206) were used to test selective optimization with compensation (SOC)--goal setting in career and partnership domains and use of means to achieve goals. A positive relationship was found between SOC behaviors and successful life management; it was more predictive for the partnership domain. (Contains 82…

  16. Subjective Career Success and Emotional Well-Being: Longitudinal Predictive Power of Selection, Optimization, and Compensation.

    ERIC Educational Resources Information Center

    Wiese, Bettina S.; Freund, Alexandra M.; Baltes, Paul B.

    2002-01-01

    A 3-year study of 82 young professionals found that work-related well-being was predicted by selection (commitment to personal goals), optimization (application of goal-related skills), and compensation (maintaining goals in the face of loss). The degree of compensation predicted emotional well-being and job satisfaction 3 years later. (Contains…

  17. Optimal cytoreduction with neutral argon plasma energy in selected patients with ovarian and primitive peritoneal cancer.

    PubMed

    Renaud, Marie Claude; Sebastianelli, Alexandra

    2013-01-01

    Epithelial ovarian cancer (EOC) is a deadly disease for which optimal cytoreduction to microscopic disease has shown the best correlation with survival. Electrically neutral argon plasma technology is a novel surgical tool that allows aggressive cytoreduction in selected patients with EOC, primary peritoneal cancer, and tubal cancer. We conducted a prospective feasibility study of the use of neutral argon plasma technology to complete cytoreductive surgery in order to assess its ability to obtain optimal cytoreduction. Six patients had their surgery completed with the neutral argon plasma device. None of the patients would have had optimal surgery unless the device had been available. All patients had cytoreduction to less than 5 mm to 10 mm without additional morbidity. One patient had complete cytoreduction, and two had residual disease of less than 2 mm. Electrically neutral argon plasma technology is useful to maximize cytoreduction and reduce tumour burden in selected cases of EOC.

  18. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    PubMed Central

    Liu, Sheng; Jin, Haiqiang; Mao, Xiaojun; Zhai, Binbin; Zhan, Ye; Feng, Xiaofei

    2013-01-01

    This paper proposes a segmentation-based global optimization method for depth estimation. First, to obtain an accurate matching cost, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Second, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Third, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments, further improving the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches. PMID:23766717

  19. Optimization of a Dibenzodiazepine Hit to a Potent and Selective Allosteric PAK1 Inhibitor

    PubMed Central

    2015-01-01

    The discovery of inhibitors targeting novel allosteric kinase sites is very challenging. Such compounds, however, once identified could offer exquisite levels of selectivity across the kinome. Herein we report our structure-based optimization strategy of a dibenzodiazepine hit 1, discovered in a fragment-based screen, yielding highly potent and selective inhibitors of PAK1 such as 2 and 3. Compound 2 was cocrystallized with PAK1 to confirm binding to an allosteric site and to reveal novel key interactions. Compound 3 modulated PAK1 at the cellular level and due to its selectivity enabled valuable research to interrogate biological functions of the PAK1 kinase. PMID:26191365

  20. Exploring the optimal performances of irreversible single resonance energy selective electron refrigerators

    NASA Astrophysics Data System (ADS)

    Zhou, Junle; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2016-05-01

    Applying finite-time thermodynamics (FTT) and electronic transport theory, the optimal performances of an irreversible single resonance energy selective electron (ESE) refrigerator are analyzed. The effects of heat leakage between the two electron reservoirs on optimal performance are discussed. The influences of system operating parameters on cooling load, coefficient of performance (COP), figure of merit, and ecological function are demonstrated using numerical examples. Comparative performance analyses among the different objective functions show that the performance characteristics at maximum ecological function and maximum figure of merit are of great practical significance. Combining the two optimization objectives of maximum ecological function and maximum figure of merit, more specific optimal ranges of cooling load and COP are obtained. The results can provide some advice for the design of practical electronic machine systems.

  1. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model has changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
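
    The Expected Improvement criterion at the heart of EGO can be sketched compactly. The code below is an illustrative sketch, not the SAGE III implementation: the candidate points and surrogate predictions are invented, and a multi-start variant would keep the several highest-EI candidates per iteration rather than only the single best.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization, given a surrogate's predictive mean and std."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf

# Invented surrogate predictions (mean, std) at four candidate orbits
candidates = [(-1.0, 0.1), (-0.8, 0.8), (-1.1, 0.01), (0.5, 2.0)]
f_best = -1.05                      # best objective value sampled so far
scores = [expected_improvement(m, s, f_best) for m, s in candidates]
best_idx = max(range(len(scores)), key=scores.__getitem__)
```

    Note how the last candidate wins despite its poor predicted mean: its large predictive uncertainty gives it the highest expected improvement, which is how EGO trades off exploration against exploitation.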

  2. An experimental and theoretical investigation of a fuel system tuner for the suppression of combustion driven oscillations

    NASA Astrophysics Data System (ADS)

    Scarborough, David E.

    Manufacturers of commercial, power-generating, gas turbine engines continue to develop combustors that produce lower emissions of nitrogen oxides (NOx) in order to meet the environmental standards of governments around the world. Lean, premixed combustion technology is one technique used to reduce NOx emissions in many current power and energy generating systems. However, lean, premixed combustors are susceptible to thermo-acoustic oscillations, which are pressure and heat-release fluctuations that occur because of a coupling between the combustion process and the natural acoustic modes of the system. These pressure oscillations lead to premature failure of system components, resulting in very costly maintenance and downtime. Therefore, a great deal of work has gone into developing methods to prevent or eliminate these combustion instabilities. This dissertation presents the results of a theoretical and experimental investigation of a novel Fuel System Tuner (FST) used to damp detrimental combustion oscillations in a gas turbine combustor by changing the fuel supply system impedance, which controls the amplitude and phase of the fuel flowrate. When the FST is properly tuned, the heat release oscillations resulting from the fuel-air ratio oscillations damp, rather than drive, the combustor acoustic pressure oscillations. A feasibility study was conducted to prove the validity of the basic idea and to develop some basic guidelines for designing the FST. Acoustic models for the subcomponents of the FST were developed, and these models were experimentally verified using a two-microphone impedance tube. Models useful for designing, analyzing, and predicting the performance of the FST were developed and used to demonstrate the effectiveness of the FST. Experimental tests showed that the FST reduced the acoustic pressure amplitude of an unstable model gas-turbine combustor over a wide range of operating conditions and combustor configurations. Finally, combustor

  3. A general method to select representative models for decision making and optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Shirangi, Mehrdad G.; Durlofsky, Louis J.

    2016-11-01

    The optimization of subsurface flow processes under geological uncertainty technically requires flow simulation to be performed over a large set of geological realizations for each function evaluation at every iteration of the optimizer. Because flow simulation over many permeability realizations (only permeability is considered to be uncertain in this study) may entail excessive computation, simulations are often performed for only a subset of 'representative' realizations. It is however challenging to identify a representative subset that provides flow statistics in close agreement with those from the full set, especially when the decision parameters (e.g., time-varying well pressures, well locations) are unknown a priori, as they are in optimization problems. In this work, we introduce a general framework, based on clustering, for selecting a representative subset of realizations for use in simulations involving 'new' sets of decision parameters. Prior to clustering, each realization is represented by a low-dimensional feature vector that contains a combination of permeability-based and flow-based quantities. Calculation of flow-based features requires the specification of a (base) flow problem and simulation over the full set of realizations. Permeability information is captured concisely through use of principal component analysis. By computing the difference between the flow response for the subset and the full set, we quantify the performance of various realization-selection methods. The impact of different weightings for flow and permeability information in the cluster-based selection procedure is assessed for a range of examples involving different types of decision parameters. These decision parameters are generated either randomly, in a manner that is consistent with the solutions proposed in global stochastic optimization procedures such as GA and PSO, or through perturbation around a base case, consistent with the solutions considered in pattern search
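
    The realization-selection idea — compress each realization to a feature vector, cluster, and keep one representative per cluster — can be illustrated with a toy sketch. The 2-D feature vectors and plain k-means below are invented for illustration; the paper's actual features combine PCA-compressed permeability with flow-based quantities.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means; returns cluster centers and point assignments."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:  # keep the old center if a cluster empties out
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centers, assign

# Toy per-realization feature vectors (two well-separated groups)
feats = [(0.1, 0.2), (0.15, 0.22), (0.9, 0.8), (0.95, 0.85), (0.5, 0.5)]
centers, assign = kmeans(feats, k=2)
# Representative subset: the realization closest to each center (a medoid)
reps = [min(range(len(feats)), key=lambda i: dist2(feats[i], c))
        for c in centers]
```

    Flow simulation would then be run only for the realizations in `reps`, with cluster sizes available as weights if weighted statistics are wanted.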

  4. A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization

    PubMed Central

    Tian, Shulin; Yang, Chenglin; Liu, Cheng

    2016-01-01

    The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally omits some failures during the selection, and this omission can pose serious risks in use because those failures can trigger severe propagation failures. This paper proposes a new failure sample selection method to solve the problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model; we then propose a new failure sample selection method based on the size of the SFPS. Compared with the traditional sampling plan, this method improves the coverage of tested failure samples, increases diagnostic capacity, and decreases the risk of use. PMID:27738424
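
    The core quantity here — the subsequent failure propagation set (SFPS), i.e. every failure reachable from a given one in the propagation graph — is a simple graph traversal. The sketch below uses a hypothetical propagation graph and simply ranks failures by SFPS size; the paper itself searches the graph with ACO rather than this exhaustive ranking.

```python
def sfps(graph, node):
    """Subsequent failure propagation set: all failures reachable from `node`."""
    seen, stack = set(), [node]
    while stack:
        cur = stack.pop()
        for nxt in graph.get(cur, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Hypothetical propagation graph: an edge u -> v means failure u can induce v
graph = {"F1": ["F2", "F3"], "F2": ["F4"], "F3": [], "F4": [], "F5": ["F1"]}
# Prioritize failure samples whose propagation sets are largest
ranked = sorted(graph, key=lambda f: len(sfps(graph, f)), reverse=True)
```

    Sampling the top-ranked failures first covers the most propagation behaviour per injected failure, which is the intuition behind selecting by SFPS size.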

  5. The effect of genomic information on optimal contribution selection in livestock breeding programs.

    PubMed

    Clark, Samuel A; Kinghorn, Brian P; Hickey, John M; van der Werf, Julius H J

    2013-10-30

    Long-term benefits in animal breeding programs require that increases in genetic merit be balanced with the need to maintain diversity (lost due to inbreeding). This can be achieved by using optimal contribution selection. The availability of high-density DNA marker information enables the incorporation of genomic data into optimal contribution selection but this raises the question about how this information affects the balance between genetic merit and diversity. The effect of using genomic information in optimal contribution selection was examined based on simulated and real data on dairy bulls. We compared the genetic merit of selected animals at various levels of co-ancestry restrictions when using estimated breeding values based on parent average, genomic or progeny test information. Furthermore, we estimated the proportion of variation in estimated breeding values that is due to within-family differences. Optimal selection on genomic estimated breeding values increased genetic gain. Genetic merit was further increased using genomic rather than pedigree-based measures of co-ancestry under an inbreeding restriction policy. Using genomic instead of pedigree relationships to restrict inbreeding had a significant effect only when the population consisted of many large full-sib families; with a half-sib family structure, no difference was observed. In real data from dairy bulls, optimal contribution selection based on genomic estimated breeding values allowed for additional improvements in genetic merit at low to moderate inbreeding levels. Genomic estimated breeding values were more accurate and showed more within-family variation than parent average breeding values; for genomic estimated breeding values, 30 to 40% of the variation was due to within-family differences. Finally, there was no difference between constraining inbreeding via pedigree or genomic relationships in the real data. The use of genomic estimated breeding values increased genetic gain in
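
    Optimal contribution selection balances genetic merit c'g against group co-ancestry c'Ac. A minimal way to see the trade-off is a brute-force search over candidate contribution vectors; the breeding values, relationship matrix, and random-search solver below are all invented for illustration (practical implementations use Lagrangian or evolutionary solvers, not this sketch).

```python
import random

def ocs_random_search(ebv, A, theta, n_iter=20000, seed=0):
    """Random search over the contribution simplex: maximize merit c'g
    subject to the group co-ancestry constraint c'Ac <= theta."""
    rng = random.Random(seed)
    n = len(ebv)
    best_c, best_merit = None, float("-inf")
    for _ in range(n_iter):
        w = [rng.expovariate(1.0) for _ in range(n)]  # uniform on the simplex
        s = sum(w)
        c = [x / s for x in w]
        coan = sum(c[i] * A[i][j] * c[j] for i in range(n) for j in range(n))
        merit = sum(ci * gi for ci, gi in zip(c, ebv))
        if coan <= theta and merit > best_merit:
            best_c, best_merit = c, merit
    return best_c, best_merit

# Toy example: candidates 0 and 1 have the highest merit but are full sibs
ebv = [2.0, 1.8, 1.0]
A = [[1.0, 0.5, 0.0],
     [0.5, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
c, merit = ocs_random_search(ebv, A, theta=0.6)
```

    Because giving all contributions to the top candidate violates the co-ancestry cap (c'Ac = 1.0 > 0.6), the constrained optimum spreads contributions across candidates, which is exactly the merit-versus-diversity balance the abstract describes.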

  6. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    SciTech Connect

    Kim, Minsun; Stewart, Robert D.; Phillips, Mark H.

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (Td), and the size and location of tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (Dmean ≤ 45 Gy), lungs (Dmean ≤ 20 Gy), cord (Dmax ≤ 45 Gy), esophagus (Dmax ≤ 63 Gy), and unspecified tissues (D05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D95 of tumor BED, as well as the equivalent uniform dose (EUD) for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of Td (3–100 days), tumor lag-time (Tk = 0–10 days), and the size of tumors on optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D95 were up to 19%, 21%, 20%, and 19% larger than those from conventional prescription, depending on Td and Tk used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating

  7. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization.

    PubMed

    Kim, Minsun; Stewart, Robert D; Phillips, Mark H

    2015-11-01

    To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (Td), and the size and location of tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (Dmean≤45 Gy), lungs (Dmean≤20 Gy), cord (Dmax≤45 Gy), esophagus (Dmax≤63 Gy), and unspecified tissues (D05≤60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D95 of tumor BED, as well as the equivalent uniform dose (EUD) for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of Td (3-100 days), tumor lag-time (Tk=0-10 days), and the size of tumors on optimal fractionation schedule. Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D95 were up to 19%, 21%, 20%, and 19% larger than those from conventional prescription, depending on Td and Tk used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating tumors with Td less than 10 days, there was no significant increase in tumor BED but the treatment course could be
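
    The fractionation trade-off being optimized reduces, in the linear-quadratic (LQ) model, to a one-dimensional search: BED = N·d·(1 + d/(α/β)). The sketch below is illustrative only (single OAR, invented sparing factor and BED cap, no spatial optimization): for each fraction number N it pushes the dose per fraction up to the OAR BED cap, then keeps the N giving the largest tumor BED.

```python
import math

def optimal_fractions(bed_oar_limit, sparing=0.5, ab_tumor=10.0, ab_oar=3.0,
                      n_max=40):
    """Scan N; cap the OAR BED, report (best N, tumor BED) under the LQ model."""
    best = None
    for n in range(1, n_max + 1):
        # OAR dose per fraction x solving n*x*(1 + x/ab_oar) = bed_oar_limit
        x = 0.5 * ab_oar * (math.sqrt(1.0 + 4.0 * bed_oar_limit / (ab_oar * n)) - 1.0)
        d = x / sparing                       # tumor dose per fraction
        bed_tumor = n * d * (1.0 + d / ab_tumor)
        if best is None or bed_tumor > best[1]:
            best = (n, bed_tumor)
    return best
```

    With a well-spared OAR (e.g. `sparing=0.2`) the scan favors few large fractions, while an unspared OAR receiving the full tumor dose (`sparing=1.0`, with the lower OAR α/β) pushes the optimum to the finest fractionation allowed — the same qualitative dependence on α/β and geometry discussed in the abstract.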

  8. General equations for optimal selection of diagnostic image acquisition parameters in clinical X-ray imaging.

    PubMed

    Zheng, Xiaoming

    2017-08-18

    The purpose of this work was to examine the effects of the relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliampere-second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. These general equations should be used to guide clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.
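
    A dose-efficiency index of the kind described — image quality per unit dose under a logistic quality-dose relationship — can be sketched numerically. The logistic parameters and dose grid below are invented for illustration; the paper derives closed-form governing equations for kVp and mAs rather than scanning a grid.

```python
import math

def quality(dose, d0=2.0, w=0.5):
    """Logistic image-quality vs dose curve (illustrative parameters)."""
    return 1.0 / (1.0 + math.exp(-(dose - d0) / w))

def best_dose(doses):
    """Dose maximizing the dose-efficiency index Q(D)/D."""
    return max(doses, key=lambda d: quality(d) / d)

doses = [0.5 + 0.1 * i for i in range(60)]   # 0.5 .. 6.4 (arbitrary units)
opt = best_dose(doses)
```

    Because the logistic curve saturates, quality per unit dose peaks just past the curve's steep region: pushing dose higher buys almost no extra quality, which is the argument for reducing patient dose from current levels.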

  9. Optimizing SNR for indoor visible light communication via selecting communicating LEDs

    NASA Astrophysics Data System (ADS)

    Wang, Lang; Wang, Chunyue; Chi, Xuefen; Zhao, Linlin; Dong, Xiaoli

    2017-03-01

    In this paper, we investigate the layout of LEDs to optimize the SNR by selecting communicating LEDs (C-LEDs) in an indoor visible light communication (VLC) system. Due to the inter-symbol interference (ISI) caused by the different arrival times of different optical rays, the SNR for a user is not optimal if a simple layout is adopted, so it is worthwhile to investigate LED layouts that achieve optimal SNR in indoor VLC systems. For a single user, the LED signal is divided into positive and negative components, which provide the power of the desired signal and the power of the ISI, respectively. We introduce the concept of the valid ratio (VR), the value of the positive component over the negative component, and propose a VR threshold-based LED selection scheme that chooses C-LEDs by their VRs. For downlink broadcast VLC with multiple users, the SNRs of the users differ for a given layout of C-LEDs, and it is difficult to find a layout that guarantees the BER of all users. To solve this problem, we propose an evolutionary algorithm (EA)-based scheme to optimize the SNR. Simulation results show that selecting C-LEDs is an effective method to improve the SNR.
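
    The VR threshold rule itself is a one-liner. The VR values and threshold below are invented for illustration; in the paper, VRs come from the ray-traced channel impulse response seen by each user.

```python
def select_leds(vr, threshold=1.5):
    """Keep LEDs whose valid ratio (desired power / ISI power) exceeds threshold."""
    return [i for i, r in enumerate(vr) if r > threshold]

# Hypothetical valid ratios for 6 ceiling LEDs as seen by one user
vr = [2.4, 0.9, 3.1, 1.2, 1.8, 0.7]
c_leds = select_leds(vr)   # -> [0, 2, 4]
```

    For the multi-user case, a per-user rule like this no longer suffices — each user sees different VRs — which is why the abstract resorts to an evolutionary search over layouts instead.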

  10. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
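
    The weighted binary matrix sampling (WBMS) idea — draw binary sub-models with per-variable inclusion weights, keep the better ones, and re-estimate the weights — can be sketched on a toy objective. Everything below is invented for illustration (the scoring function stands in for cross-validated calibration error on NIR data), and this is a simplified caricature of VISSA, not the authors' code.

```python
import random

def wbms_step(weights, n_models, score, rng):
    """One WBMS step: sample binary sub-models with per-variable inclusion
    weights, keep the better half, and return the kept models' inclusion
    frequencies as the new weights (so the variable space shrinks)."""
    models = [[1 if rng.random() < w else 0 for w in weights]
              for _ in range(n_models)]
    models.sort(key=score)                    # lower score = better sub-model
    elite = models[:n_models // 2]
    return [sum(m[j] for m in elite) / len(elite) for j in range(len(weights))]

def toy_score(model, true_vars=(0, 2)):
    # Invented error: big penalty for omitting an informative variable,
    # small penalty for each superfluous one
    miss = sum(1 for j in true_vars if model[j] == 0)
    extra = sum(model) - sum(model[j] for j in true_vars)
    return 10 * miss + extra

rng = random.Random(1)
weights = [0.5] * 5
for _ in range(20):
    weights = wbms_step(weights, 40, toy_score, rng)
```

    Over iterations the inclusion weights of informative variables drift toward 1 and the rest toward 0, realizing the two rules in the abstract: the space shrinks each step, and each new space outperforms the previous one.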

  11. Isotope selective photoionization of NaK by optimal control: theory and experiment.

    PubMed

    Schäfer-Bung, Boris; Bonacić-Koutecký, Vlasta; Sauer, Franziska; Weber, Stefan M; Wöste, Ludger; Lindinger, Albrecht

    2006-12-07

    We present a joint theoretical and experimental study of the maximization of the isotopomer ratio (23)Na(39)K/(23)Na(41)K using tailored phase-only as well as amplitude- and phase-modulated femtosecond laser fields obtained in the framework of optimal control theory and the closed-loop learning (CLL) technique. Good agreement between theoretically and experimentally optimized pulse shapes is achieved, which allows the optimized processes to be assigned directly to the pulse shapes obtained by the experimental isotopomer-selective CLL approach. By analyzing the dynamics induced by the optimized pulses, we show that a mechanism involving the dephasing of the wave packets between the isotopomers (23)Na(39)K and (23)Na(41)K on the first excited state is responsible for highly isotope-selective ionization. Amplitude- and phase-modulated pulses, moreover, allow the connection to be established between the spectral components of the pulse and the corresponding occupied vibronic states. We also show that the leading features of the theoretically shaped pulses are independent of the initial conditions. Since the underlying processes can be assigned to the individual features of the shaped pulses, we show that optimal control can be used as a tool for analysis.

  12. Space debris selection and optimal guidance for removal in the SSO with low-thrust propulsion

    NASA Astrophysics Data System (ADS)

    Olympio, J. T.; Frouvelle, N.

    2014-06-01

    The current paper deals with the mission design of a generic active space debris removal spacecraft. All debris objects considered are in sun-synchronous orbits. A perturbed Lambert's problem, modelling the transfer between two debris objects, is devised to take into account the J2 perturbation and to quickly evaluate mission scenarios. A robust approach, using techniques of global optimisation, is followed to find the optimal debris sequence and mission strategy. Low-thrust optimisation is then performed to turn the bi-impulse transfers into optimal low-thrust transfers and refine the selected scenarios.

  13. An integrated sandstone acidizing fluid selection and simulation to optimize treatment design

    SciTech Connect

    Sumotarto, U.; Hill, A.D.; Sepehrnoori, K.

    1995-12-31

    An optimized design of a matrix treatment involves fluid selection and acidizing simulations to predict the outcome of the treatment. Many matrix acidizing treatment designs and fluid selections have been successfully accomplished by utilizing expert system technology. However, none of these presents a complete, optimized result (i.e., one that feeds the expert system's output into an acidizing numerical simulator to predict the treatment outcome). In the meantime, several acidizing computer simulation studies have been conducted separately. This paper presents a study which integrates the treatment design, particularly the fluid selection process, and acidizing simulation for sandstone formations. Required parameters for sandstone acidizing such as acid type, concentration, volume, and injection rate/pressure are first selected using an expert system. The output from the expert system is then used as input to an acidizing numerical simulator (UTACID). A new sandstone acidizing reaction model, appropriate for high-temperature environments and anisotropic media, has been implemented in UTACID to enhance the performance of the simulator. The expert system and the simulator have been integrated to provide an optimization tool for sandstone acidizing treatment design and simulation.

  14. Optimal selection of individuals for repeated covariate measurements in follow-up studies.

    PubMed

    Reinikainen, Jaakko; Karvanen, Juha; Tolonen, Hanna

    2016-12-01

    Repeated covariate measurements bring important information on time-varying risk factors in long epidemiological follow-up studies. However, due to budget limitations, it may be possible to carry out the repeated measurements for only a subset of the cohort. We study cost-efficient alternatives to simple random sampling for selecting the individuals to be remeasured. The proposed selection criteria are based on forms of D-optimality. The selection methods are compared in simulation studies and illustrated with data from the East-West study carried out in Finland from 1959 to 1999. The results indicate that cost savings can be achieved if the selection is focused on individuals with a high expected risk of the event and, on the other hand, on those with extreme covariate values in the previous measurements.
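
    D-optimality maximizes det(X'X), the determinant of the information matrix. A greedy sketch for a simple linear model with one covariate shows how the criterion pulls toward extreme covariate values, matching the abstract's finding; the covariate values are invented, and the paper's actual criteria are built on event-history models rather than this toy regression.

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def add_row(m, c):
    """Rank-one information-matrix update for a design row x = (1, c)."""
    x = (1.0, c)
    return [[m[i][j] + x[i] * x[j] for j in range(2)] for i in range(2)]

def d_optimal(covs, k, ridge=1e-6):
    """Greedily pick k individuals maximizing det(X'X) for y = b0 + b1*c."""
    m = [[ridge, 0.0], [0.0, ridge]]   # tiny ridge so the start is nonsingular
    chosen = []
    for _ in range(k):
        i = max((j for j in range(len(covs)) if j not in chosen),
                key=lambda j: det2(add_row(m, covs[j])))
        chosen.append(i)
        m = add_row(m, covs[i])
    return chosen

covs = [-2.0, -0.5, 0.0, 0.4, 1.8]   # previous covariate measurements (toy)
picked = d_optimal(covs, 3)          # -> [0, 4, 3]: the extremes first
```

    The greedy criterion selects the two extreme covariate values before anything in the middle, which is the classical D-optimal behaviour for a straight-line model.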

  15. Optoelectronic optimization of mode selective converter based on liquid crystal on silicon

    NASA Astrophysics Data System (ADS)

    Wang, Yongjiao; Liang, Lei; Yu, Dawei; Fu, Songnian

    2016-03-01

    We carry out a comprehensive optoelectronic optimization of a mode selective converter used for mode division multiplexing, based on liquid crystal on silicon (LCOS) in binary mode. The digital-to-analog converter (DAC) conversion error is investigated quantitatively for the purpose of driving the LCOS in the application of mode selective conversion. Results indicate the DAC must have a resolution of 8 bits in order to achieve a high mode extinction ratio (MER) of 28 dB. On the other hand, both the fast-axis position error of the half-wave plate (HWP) and the rotation angle error of the Faraday rotator (FR) negatively influence the performance of mode selective conversion. However, commercial products provide enough angle-error tolerance for the LCOS-based mode selective converter, taking both insertion loss (IL) and MER into account.

  16. In Vitro Selection of Optimal DNA Substrates for Ligation by a Water-Soluble Carbodiimide

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1994-01-01

    We have used in vitro selection to investigate the sequence requirements for efficient template-directed ligation of oligonucleotides at 0 deg C using a water-soluble carbodiimide as condensing agent. We find that only 2 bp at each side of the ligation junction are needed. We also studied chemical ligation of substrate ensembles that we have previously selected as optimal by RNA ligase or by DNA ligase. As anticipated, we find that substrates selected with DNA ligase ligate efficiently with a chemical ligating agent, and vice versa. Substrates selected using RNA ligase are not ligated by the chemical condensing agent and vice versa. The implications of these results for prebiotic chemistry are discussed.

  18. On the non-stationarity of financial time series: impact on optimal portfolio selection

    NASA Astrophysics Data System (ADS)

    Livan, Giacomo; Inoue, Jun-ichi; Scalas, Enrico

    2012-07-01

    We investigate the possible drawbacks of employing the standard Pearson estimator to measure correlation coefficients between financial stocks in the presence of non-stationary behavior, and we provide empirical evidence against the well-established common knowledge that using longer price time series provides better, more accurate, correlation estimates. Then, we investigate the possible consequences of instabilities in empirical correlation coefficient measurements on optimal portfolio selection. We rely on previously published works which provide a framework allowing us to take into account possible risk underestimations due to the non-optimality of the portfolio weights being used in order to distinguish such non-optimality effects from risk underestimations genuinely due to non-stationarities. We interpret such results in terms of instabilities in some spectral properties of portfolio correlation matrices.
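
    The instability argument is easy to demonstrate numerically: when the underlying dependence is non-stationary, a single long-window Pearson estimate can be badly misleading even though each sub-window estimate is accurate. The synthetic series below are invented for illustration (not the paper's stock data); their true coupling switches sign halfway through.

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(0)
common = [rng.gauss(0, 1) for _ in range(400)]
a = [c + 0.5 * rng.gauss(0, 1) for c in common]
b = [(c if i < 200 else -c) + 0.5 * rng.gauss(0, 1)
     for i, c in enumerate(common)]

rho_full = pearson(a, b)               # long window: the two regimes cancel
rho_first = pearson(a[:200], b[:200])  # strongly positive
rho_second = pearson(a[200:], b[200:]) # strongly negative
```

    The long-window estimate lands near zero and describes neither regime, illustrating why longer price histories do not automatically yield better correlation estimates and why portfolio weights built on them can misstate risk.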

  19. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
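
    The joint formulation can be sketched as a tiny exhaustive search: pick a route for every flow, then pick the cheapest set of polled switches that observes every flow. The topology, candidate paths, and polling costs below are invented for illustration; the paper's ILP handles this jointly and at scale:

```python
from itertools import product

# Hypothetical toy instance: each flow has candidate paths (lists of switches),
# and polling a switch incurs a fixed communication cost.
candidate_paths = {
    "f1": [["s1", "s2"], ["s1", "s3"]],
    "f2": [["s2", "s4"], ["s3", "s4"]],
}
poll_cost = {"s1": 5, "s2": 3, "s3": 2, "s4": 4}

def best_assignment():
    best = (float("inf"), None)
    flows = list(candidate_paths)
    # Enumerate every combination of route choices (the "flow routing" part).
    for choice in product(*(candidate_paths[f] for f in flows)):
        switches = sorted(poll_cost)
        # Every flow must traverse at least one polled switch; find the
        # cheapest covering set by brute force (the "polling switch" part).
        for mask in range(1 << len(switches)):
            polled = {s for i, s in enumerate(switches) if mask >> i & 1}
            if all(polled & set(path) for path in choice):
                cost = sum(poll_cost[s] for s in polled)
                if cost < best[0]:
                    best = (cost, (dict(zip(flows, choice)), sorted(polled)))
    return best

cost, (routes, polled) = best_assignment()
print(cost, routes, polled)
```

    Routing both flows through the cheap switch lets a single polled switch cover them, which is exactly the saving a routing-oblivious switch-based scheme cannot capture.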

  1. Optimal Needle Grasp Selection for Automatic Execution of Suturing Tasks in Robotic Minimally Invasive Surgery

    PubMed Central

    Liu, Taoming; Çavuşoğlu, M. Cenk

    2015-01-01

    This paper presents algorithms for optimal selection of needle grasp for autonomous robotic execution of minimally invasive surgical suturing tasks. To minimize tissue trauma during the suturing motion, the best practices of needle path planning used by surgeons are applied to autonomous robotic surgical suturing tasks. Once an optimal needle trajectory in a well-defined suturing scenario is chosen, another critical issue is the choice of needle grasp for the robotic system, since an inappropriate grasp increases operating time by requiring multiple re-grasps to complete the desired task. The proposed methods use manipulability, dexterity and torque metrics for needle grasp selection. A simulation demonstrates the proposed methods and recommends a variety of grasps, and a realistic demonstration then compares the performance of the manipulator using different grasps. PMID:26413382

  3. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  5. Imaging multicellular specimens with real-time optimized tiling light-sheet selective plane illumination microscopy

    PubMed Central

    Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang

    2016-01-01

    Despite the progress made in selective plane illumination microscopy, high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to respond to the challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying left-right symmetry breaking behaviour of C. elegans embryos. PMID:27004937

  6. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    NASA Astrophysics Data System (ADS)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  7. Maximal area and conformal welding heuristics for optimal slice selection in splenic volume estimation

    NASA Astrophysics Data System (ADS)

    Gutenko, Ievgeniia; Peng, Hao; Gu, Xianfeng; Barish, Mathew; Kaufman, Arie

    2016-03-01

    Accurate estimation of splenic volume is crucial for the determination of disease progression and response to treatment for diseases that result in enlargement of the spleen. However, there is no consensus with respect to the use of single or multiple one-dimensional, or volumetric measurement. Existing methods for human reviewers focus on measurement of cross diameters on a representative axial slice and craniocaudal length of the organ. We propose two heuristics for the selection of the optimal axial plane for splenic volume estimation: the maximal area axial measurement heuristic and the novel conformal welding shape-based heuristic. We evaluate these heuristics on time-variant data derived from both healthy and sick subjects and contrast them to established heuristics. Under certain conditions our heuristics are superior to standard practice volumetric estimation methods. We conclude by providing guidance on selecting the optimal heuristic for splenic volume estimation.

  8. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors.

    PubMed

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-07-03

    The sensor selection problem was investigated for classifying a set of ginsengs with a homemade electronic nose based on metal-oxide sensors and linear discriminant analysis. A total of 315 samples of nine kinds of ginseng were measured using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs, and defined the minimum number of sensors needed to discriminate each sample set with optimal classification performance. The relation between this minimum number of sensors and the number of samples in the sample set was revealed: as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new sensor selection approach was proposed to estimate and compare the effective information capacity of each sensor.
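
    The notion of a "minimum number of sensors" can be sketched by searching sensor subsets from smallest to largest until one separates every pair of classes. The four-sensor response table and separation margin below are invented stand-ins for the paper's 12-sensor measurements:

```python
from itertools import combinations

# Hypothetical mean responses of 4 sensors to 3 ginseng classes
# (values invented for illustration; the real study used 12 sensors).
responses = {
    "ginsengA": (0.9, 0.2, 0.5, 0.1),
    "ginsengB": (0.9, 0.7, 0.5, 0.1),
    "ginsengC": (0.4, 0.2, 0.5, 0.1),
}

def distinguishes(sensor_idx, margin=0.2):
    """True if the chosen sensors separate every pair of classes by `margin`."""
    vecs = list(responses.values())
    for a, b in combinations(vecs, 2):
        # A pair is indistinguishable if no chosen sensor separates it.
        if all(abs(a[i] - b[i]) < margin for i in sensor_idx):
            return False
    return True

def minimal_sensor_set(n_sensors=4):
    # Search subsets in order of size: the first hit is a minimum set,
    # mirroring the paper's minimum number of sensors per sample set.
    for k in range(1, n_sensors + 1):
        for subset in combinations(range(n_sensors), k):
            if distinguishes(subset):
                return subset
    return None

print(minimal_sensor_set())
```

    Here no single sensor separates all three classes, but sensors 0 and 1 together do, so the minimum set has size two.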

  10. Discovery, Optimization, and Characterization of Novel D2 Dopamine Receptor Selective Antagonists

    PubMed Central

    2015-01-01

    The D2 dopamine receptor (D2 DAR) is one of the most validated drug targets for neuropsychiatric and endocrine disorders. However, clinically approved drugs targeting D2 DAR display poor selectivity between the D2 and other receptors, especially the D3 DAR. This lack of selectivity may lead to undesirable side effects. Here we describe the chemical and pharmacological characterization of a novel D2 DAR antagonist series with excellent D2 versus D1, D3, D4, and D5 receptor selectivity. The final probe 65 was obtained through a quantitative high-throughput screening campaign, followed by medicinal chemistry optimization, to yield a selective molecule with good in vitro physical properties, metabolic stability, and in vivo pharmacokinetics. The optimized molecule may be a useful in vivo probe for studying D2 DAR signal modulation and could also serve as a lead compound for the development of D2 DAR-selective druglike molecules for the treatment of multiple neuropsychiatric and endocrine disorders. PMID:24666157

  11. An Approach to Feature Selection Based on Ant Colony Optimization and Rough Set

    NASA Astrophysics Data System (ADS)

    Wu, Junyun; Qiu, Taorong; Wang, Lu; Huang, Haiquan

    Feature selection plays an important role in many fields. This paper proposes a feature selection method that combines rough set theory with the ant colony optimization algorithm. The algorithm uses attribute dependency and attribute importance as the heuristic factors applied to the transition rules. Furthermore, the rough-set quality of classification and the length of the feature subset are used to build the pheromone update strategy. Tests on data sets show that the proposed method is feasible.
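
    The attribute dependency the ants use as a heuristic is a standard rough-set quantity: the fraction of objects whose condition-attribute equivalence class has a unique decision. A minimal sketch, over an invented toy decision table:

```python
# Rough-set attribute dependency gamma(B -> d): fraction of objects that can
# be classified unambiguously using only the condition attributes in B.
def dependency(table, cond_idx, dec_idx):
    blocks = {}
    for row in table:
        key = tuple(row[i] for i in cond_idx)          # equivalence class of B
        blocks.setdefault(key, []).append(row[dec_idx])
    # A block belongs to the positive region iff its decision value is unique.
    positive = sum(len(decs) for decs in blocks.values() if len(set(decs)) == 1)
    return positive / len(table)

# Columns: condition attribute a, condition attribute b, decision d
table = [
    (0, 0, "yes"),
    (0, 1, "yes"),
    (1, 0, "no"),
    (1, 1, "yes"),
    (1, 1, "no"),
]

print(dependency(table, (0,), 2))    # attribute a alone
print(dependency(table, (0, 1), 2))  # attributes a and b together
```

    Adding attribute b raises the dependency from 0.4 to 0.6, which is the kind of gain the transition rule rewards when an ant considers adding a feature.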

  12. Discovery of GSK2656157: An Optimized PERK Inhibitor Selected for Preclinical Development.

    PubMed

    Axten, Jeffrey M; Romeril, Stuart P; Shu, Arthur; Ralph, Jeffrey; Medina, Jesús R; Feng, Yanhong; Li, William Hoi Hong; Grant, Seth W; Heerding, Dirk A; Minthorn, Elisabeth; Mencken, Thomas; Gaul, Nathan; Goetz, Aaron; Stanley, Thomas; Hassell, Annie M; Gampe, Robert T; Atkins, Charity; Kumar, Rakesh

    2013-10-10

    We recently reported the discovery of GSK2606414 (1), a selective first in class inhibitor of protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK), which inhibited PERK activation in cells and demonstrated tumor growth inhibition in a human tumor xenograft in mice. In continuation of our drug discovery program, we applied a strategy to decrease inhibitor lipophilicity as a means to improve physical properties and pharmacokinetics. This report describes our medicinal chemistry optimization culminating in the discovery of the PERK inhibitor GSK2656157 (6), which was selected for advancement to preclinical development.

  13. Selection, optimization, and compensation as strategies of life management: correction to Freund and Baltes (1998)

    PubMed

    Freund; Baltes

    1999-12-01

    Because of a scoring error, the data reported in Freund and Baltes (1998) were reanalyzed. Except for finding a lower positive manifold involving the 3 components of selection, optimization, and compensation (SOC), the outcome of this reanalysis supports the major findings previously reported: Old and very old participants of the Berlin Aging Study reporting SOC-related behaviors also reported higher levels of well-being and aging well. Corrected versions of Tables 3, 6, and 7 are presented.

  14. Ant-cuckoo colony optimization for feature selection in digital mammogram.

    PubMed

    Jona, J B; Nagaveni, N

    2014-01-15

    Digital mammography is the only effective screening method for detecting breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram, but not all of these features are essential for classifying it, so identifying the relevant features is the aim of this work. Feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that the ants walk through paths where the pheromone density is high, which slows the whole process; hence CS is employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used along with the ACO to separate normal from abnormal mammograms. Experiments are conducted on the mini-MIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms, and the results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques.

  15. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis

    PubMed Central

    Li, Qiang; Zhao, Xuehua; Cai, ZhenNao; Tong, Changfei; Liu, Wenbin; Tian, Xin

    2017-01-01

    In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update the current positions of population in the discrete searching space, thus getting the optimal feature subset for the better classification purpose based on KELM. The proposed approach is compared against the original GA and GWO on the two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results have proven the superiority of the proposed method over the other two competitive counterparts. PMID:28246543
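
    The grey wolf update at the heart of IGWO can be sketched in a few lines: each wolf moves toward positions dictated by the three best wolves (alpha, beta, delta) under a decaying control parameter. The GA-seeded initialization and the KELM classification fitness from the paper are replaced here by random initialization and a toy sphere function:

```python
import random

def gwo(fitness, dim, n_wolves=10, iters=50, lo=-5.0, hi=5.0, seed=1):
    """Minimal grey wolf optimizer minimizing `fitness` over a box."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=fitness)
        # Snapshot the three leaders (alpha, beta, delta) before updating.
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2.0 - 2.0 * t / iters  # control parameter decays from 2 to 0
        for w in wolves:
            for i in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    pos += leader[i] - A * abs(C * leader[i] - w[i])
                w[i] = pos / 3.0  # average of the three leader-guided moves
    return min(wolves, key=fitness)

# Toy fitness standing in for the paper's KELM-based classification objective.
sphere = lambda x: sum(v * v for v in x)
best = gwo(sphere, dim=3)
print(best, sphere(best))
```

    As `a` decays, exploration (large |A|) gives way to exploitation around the leaders; the paper's IGWO variant additionally diversifies the initial positions with a genetic algorithm.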

  16. Enhanced selectivity and search speed for method development using one-segment-per-component optimization strategies.

    PubMed

    Tyteca, Eva; Vanderlinden, Kim; Favier, Maxime; Clicq, David; Cabooter, Deirdre; Desmet, Gert

    2014-09-05

    Linear gradient programs are very frequently used in reversed phase liquid chromatography to enhance the selectivity compared to isocratic separations. Multi-linear gradient programs on the other hand are only scarcely used, despite their intrinsically larger separation power. Because the gradient-conformity of the latest generation of instruments has greatly improved, a renewed interest in more complex multi-segment gradient liquid chromatography can be expected in the future, raising the need for better performing gradient design algorithms. We explored the possibilities of a new type of multi-segment gradient optimization algorithm, the so-called "one-segment-per-group-of-components" optimization strategy. In this gradient design strategy, the slope is adjusted after the elution of each individual component of the sample, letting the retention properties of the different analytes auto-guide the course of the gradient profile. Applying this method experimentally to four randomly selected test samples, the separation time could on average be reduced with about 40% compared to the best single linear gradient. Moreover, the newly proposed approach performed equally well or better than the multi-segment optimization mode of a commercial software package. Carrying out an extensive in silico study, the experimentally observed advantage could also be generalized over a statistically significant amount of different 10 and 20 component samples. In addition, the newly proposed gradient optimization approach enables much faster searches than the traditional multi-step gradient design methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Successful aging at work: an applied study of selection, optimization, and compensation through impression management.

    PubMed

    Abraham, J D; Hansson, R O

    1995-03-01

    Although many abilities basic to human performance appear to decrease with age, research has shown that job performance does not generally show comparable declines. Baltes and Baltes (1990) have proposed a model of successful aging involving Selection, Optimization, and Compensation (SOC), that may help explain how individuals maintain important competencies despite age-related losses. In the present study, involving a total of 224 working adults ranging in age from 40 to 69 years, occupational measures of Selection, Optimization, and Compensation through impression management (Compensation-IM) were developed. The three measures were factorially distinct and reliable (Cronbach's alpha > .80). Moderated regression analyses indicated that: (1) the relationship between Selection and self-reported ability/performance maintenance increased with age (p ≤ .05); and (2) the relationship between both Optimization and Compensation-IM and goal attainment (i.e., importance-weighted ability/performance maintenance) increased with age (p ≤ .05). Results suggest that the SOC model of successful aging may be useful in explaining how older workers can maintain important job competencies. Correlational evidence also suggests, however, that characteristics of the job, workplace, and individual may mediate the initiation and effectiveness of SOC behaviors.

  18. Optimization of Type Ia Supernovae Selection, Photometric Typing, and Cosmology Constraints

    NASA Astrophysics Data System (ADS)

    Gjergo, Eda; Duggan, Jefferson; Cunningham, John; Kuhlmann, Steve; Biswas, Rahul; Kovacs, Eve

    2012-03-01

    We present results of an optimization study of selection criteria and photometric identification of Type Ia supernovae. The optimization study is the first to include detailed constraints on cosmology, including a time-dependent component of accelerated expansion. The study is performed on a simulated sample of Type Ia and core collapse supernovae from the Dark Energy Survey. In the next decade the number of detected Type Ia supernovae will increase dramatically (Bernstein et al. 2011, Abel et al. 2009), surpassing the resources available for spectroscopic confirmation of each supernova. This has produced an increased interest in the photometric identification of Type Ia supernovae. In order to improve the constraints on the accelerated expansion of the universe, discovered with Type Ia supernovae in the 1990s (Riess et al. 1998, Perlmutter et al. 1999), photometric typing of SN must be very robust. In this study we compare the template-based PSNID algorithm (Sako et al. 2010) with two Type Ia models, MLCS2k2 (Riess et al. 2009) and SALT2 (Guy et al. 2007). We allow the pre-selection cuts, based on signal-to-noise ratios, to vary for each model. The optimal model plus pre-selection cuts is determined from the best cosmology constraint.

  19. Optimal landing site selection based on safety index during planetary descent

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Ge, Dantong; Gao, Ai

    2017-03-01

    Landing safety is the primary concern in planetary exploration missions. With the development of precision landing technology, future missions require vehicles to land at sites of great scientific interest that are usually surrounded by rocks and craters. In order to perform a safe landing, the vehicle should be capable of detecting hazards, estimating its fuel consumption as well as touchdown performance, and locating a safe spot to land. The landing site selection process can be treated as an optimization problem which, however, cannot be solved efficiently by traditional optimization methods due to its complexity. Hence, the paper proposes a synthetic landing-area assessment criterion, the safety index, as a solution to the problem, which selects the best landing site by assessing terrain safety, fuel consumption and touchdown performance during descent. The computational effort is reduced by narrowing the selection scope, and the optimal landing site is found through a quick one-dimensional search. A typical example based on the Mars Science Laboratory mission is simulated to demonstrate the capability of the method, and the proposed strategy is shown to pick out a safe landing site effectively. The safety index can be applied in various planetary descent phases and provides a reference for future mission designs.
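
    A weighted-score criterion followed by a one-dimensional search over candidates can be sketched as follows. The candidate sites, their normalized scores, and the weights are invented for illustration; they are not the paper's actual index definition:

```python
# Hypothetical candidate landing sites with normalized scores in [0, 1]
# (higher is better) for the three assessment terms named in the abstract.
sites = {
    "siteA": {"terrain": 0.9, "fuel": 0.4, "touchdown": 0.7},
    "siteB": {"terrain": 0.6, "fuel": 0.9, "touchdown": 0.8},
    "siteC": {"terrain": 0.3, "fuel": 1.0, "touchdown": 0.9},
}
weights = {"terrain": 0.5, "fuel": 0.2, "touchdown": 0.3}  # illustrative weights

def safety_index(scores):
    """Weighted combination of terrain safety, fuel cost, and touchdown scores."""
    return sum(weights[k] * scores[k] for k in weights)

# One-dimensional search: evaluate the index per candidate and take the best.
best = max(sites, key=lambda s: safety_index(sites[s]))
print(best, round(safety_index(sites[best]), 2))
```

    With terrain safety weighted most heavily, the rock-free but fuel-costly site wins; shifting the weights toward fuel would move the optimum, which is why the weighting itself is a mission-design choice.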

  20. Selecting the optimal healthcare centers with a modified P-median model: a visual analytic perspective.

    PubMed

    Jia, Tao; Tao, Hongbing; Qin, Kun; Wang, Yulong; Liu, Chengkun; Gao, Qili

    2014-10-22

    In a conventional P-median model, demand points are simply assigned to the closest supplying facilities, but this method exhibits evident limitations in real cases. This paper proposes a modified P-median model in which exact and approximate strategies are used: the first strategy enumerates all possible combinations of P facilities, and the second adopts simulated annealing to allocate resources under a capacity constraint and a spatial compactness constraint. These strategies allow us to choose optimal locations by applying visual analytics, which is rarely employed in location-allocation planning. The model is applied to a case study in Henan Province, China, where three optimal healthcare centers are selected from candidate cities. First, the weighting factor in the spatial compactness constraint is visually evaluated to obtain a plausible spatial pattern. Second, three optimal healthcare centers, namely Zhengzhou, Xinxiang, and Nanyang, are identified in a hybrid transportation network by performing visual analytics. Third, alternative healthcare centers are obtained in a road network and compared with the above solution to understand the impact of transportation network type. The optimal healthcare centers are visually detected by employing an improved P-median model that considers both geographic accessibility and service quality. The optimal solutions are obtained in two transportation networks, which suggests that high-speed railways and highways each play a significant role.
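
    The exact strategy (enumerating all combinations of P facilities) can be sketched on a toy instance. The cities, demand weights, and distances below are invented; the paper's modified model additionally imposes capacity and spatial compactness constraints:

```python
from itertools import combinations

# Hypothetical demand weights and pairwise distances between candidate cities.
demand = {"c1": 10, "c2": 20, "c3": 15, "c4": 5}
dist = {
    ("c1", "c2"): 4, ("c1", "c3"): 7, ("c1", "c4"): 9,
    ("c2", "c3"): 3, ("c2", "c4"): 6, ("c3", "c4"): 2,
}

def d(a, b):
    """Symmetric lookup; a city serving itself has distance zero."""
    return 0 if a == b else dist.get((a, b), dist.get((b, a)))

def p_median(p):
    cities = sorted(demand)
    best = (float("inf"), None)
    # Exact strategy: enumerate every combination of p facility locations.
    for facilities in combinations(cities, p):
        # Classic P-median: each demand point is served by its nearest facility.
        cost = sum(w * min(d(c, f) for f in facilities)
                   for c, w in demand.items())
        if cost < best[0]:
            best = (cost, facilities)
    return best

print(p_median(2))
```

    For realistic numbers of candidates the enumeration explodes combinatorially, which is why the paper pairs it with a simulated-annealing approximation.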

  1. A Topography Analysis Incorporated Optimization Method for the Selection and Placement of Best Management Practices

    PubMed Central

    Shen, Zhenyao; Chen, Lei; Xu, Liang

    2013-01-01

    Best Management Practices (BMPs) are one of the most effective methods to control nonpoint source (NPS) pollution at a watershed scale. In this paper, the use of a topography analysis incorporated optimization method (TAIOM) was proposed, which integrates topography analysis with cost-effective optimization. The surface status, slope and the type of land use were evaluated as inputs for the optimization engine. A genetic algorithm program was coded to obtain the final optimization. The TAIOM was validated in conjunction with the Soil and Water Assessment Tool (SWAT) in the Yulin watershed in Southwestern China. The results showed that the TAIOM was more cost-effective than traditional optimization methods. The distribution of selected BMPs throughout landscapes comprising relatively flat plains and gentle slopes, suggests the need for a more operationally effective scheme, such as the TAIOM, to determine the practicability of BMPs before widespread adoption. The TAIOM developed in this study can easily be extended to other watersheds to help decision makers control NPS pollution. PMID:23349917

  2. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    PubMed

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful for design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

  3. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    SciTech Connect

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram; Baumgartner, Gerald; Ramanujam, J.; Sadayappan, Ponnuswamy

    2012-03-01

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  5. A topography analysis incorporated optimization method for the selection and placement of best management practices.

    PubMed

    Shen, Zhenyao; Chen, Lei; Xu, Liang

    2013-01-01

    Best Management Practices (BMPs) are one of the most effective methods to control nonpoint source (NPS) pollution at a watershed scale. In this paper, the use of a topography analysis incorporated optimization method (TAIOM) was proposed, which integrates topography analysis with cost-effective optimization. The surface status, slope and the type of land use were evaluated as inputs for the optimization engine. A genetic algorithm program was coded to obtain the final optimization. The TAIOM was validated in conjunction with the Soil and Water Assessment Tool (SWAT) in the Yulin watershed in Southwestern China. The results showed that the TAIOM was more cost-effective than traditional optimization methods. The distribution of selected BMPs throughout landscapes comprising relatively flat plains and gentle slopes, suggests the need for a more operationally effective scheme, such as the TAIOM, to determine the practicability of BMPs before widespread adoption. The TAIOM developed in this study can easily be extended to other watersheds to help decision makers control NPS pollution.
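    As a rough illustration of a genetic-algorithm optimization engine of this kind (the BMP options, costs and reduction figures below are invented, and the operators are generic, not the TAIOM's), one can search for the per-field BMP assignment that maximizes pollutant reduction within a budget:

```python
import random

def ga_bmp_selection(fields, budget, pop_size=30, generations=60, rng=None):
    """Toy genetic algorithm for cost-effective BMP placement: each gene picks
    one BMP option (cost, pollutant reduction) for a field; fitness is total
    reduction, with over-budget solutions penalized to zero."""
    rng = rng or random.Random(1)
    n = len(fields)

    def fitness(chrom):
        cost = sum(fields[i][g][0] for i, g in enumerate(chrom))
        red = sum(fields[i][g][1] for i, g in enumerate(chrom))
        return red if cost <= budget else 0.0

    pop = [[rng.randrange(len(f)) for f in fields] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # mutation: re-pick one BMP
                i = rng.randrange(n)
                child[i] = rng.randrange(len(fields[i]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

The TAIOM additionally feeds topography (surface status, slope, land use) into the fitness evaluation, which this sketch omits.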

  6. A multi-objective optimization tool for the selection and placement of BMPs for pesticide control

    NASA Astrophysics Data System (ADS)

    Maringanti, C.; Chaubey, I.; Arabi, M.; Engel, B.

    2008-07-01

    Pesticides (particularly atrazine used in corn fields) are the foremost source of water contamination in many of the water bodies in the Midwestern corn belt, exceeding the 3 ppb MCL established by the U.S. EPA for drinking water. Best management practices (BMPs), such as buffer strips and land management practices, have been proven to effectively reduce pesticide pollution loads from agricultural areas. However, selecting and placing BMPs in watersheds to achieve an ecologically effective and economically feasible solution is a daunting task. BMP placement decisions under such complex conditions require a multi-objective optimization algorithm that searches for the best possible solution satisfying the given watershed management objectives. Genetic algorithms (GA) have been the most popular optimization algorithms for the BMP selection and placement problem. Most optimization models have also had a dynamic linkage with the water quality model, which increased the computation time considerably, restricting their application to field-scale or relatively small (11- or 14-digit HUC) watersheds. Moreover, most previous works have considered the two objectives individually during the optimization process by introducing a constraint on the other objective, thereby decreasing the degrees of freedom in finding the solution. In this study, the optimization for atrazine reduction is performed by considering the two objectives simultaneously using a multi-objective genetic algorithm (NSGA-II). The limitation of the dynamic linkage with a distributed-parameter watershed model was overcome through the utilization of a BMP tool, a database that stores the pollution reduction and cost information of the different BMPs under consideration. The model was used for the selection and placement of BMPs in the Wildcat Creek Watershed (located in Indiana) for atrazine reduction.
The most ecologically effective solution from the model had an annual atrazine concentration reduction
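    Considering cost and pollutant reduction simultaneously, as NSGA-II does, amounts to searching for non-dominated solutions rather than constraining one objective. A minimal sketch of that dominance test over (cost, reduction) pairs, as might be drawn from a BMP-tool-style lookup table (values below are illustrative only):

```python
def pareto_front(options):
    """Return the non-dominated (cost, reduction) options: an option is kept
    only if no other option is at least as cheap AND achieves at least as
    large a pollutant reduction."""
    front = []
    for cost, red in options:
        dominated = any(c <= cost and r >= red and (c, r) != (cost, red)
                        for c, r in options)
        if not dominated:
            front.append((cost, red))
    return sorted(front)
```

NSGA-II builds on repeated non-dominated sorting of this kind, plus crowding-distance selection, to evolve a whole front of trade-off solutions.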

  7. Near optimal energy selective x-ray imaging system performance with simple detectors

    SciTech Connect

    Alvarez, Robert E.

    2010-02-15

    Purpose: This article describes a method to achieve near optimal performance with low energy resolution detectors. Tapiovaara and Wagner [Phys. Med. Biol. 30, 519-529 (1985)] showed that an energy selective x-ray system using a broad spectrum source can produce images with a larger signal to noise ratio (SNR) than conventional systems using energy integrating or photon counting detectors. They showed that there is an upper limit to the SNR and that it can be achieved by measuring full spectrum information and then using an optimal energy dependent weighting. Methods: A performance measure is derived by applying statistical detection theory to an abstract vector space of the line integrals of the basis set coefficients of the two function approximation to the x-ray attenuation coefficient. The approach produces optimal results that utilize all the available energy dependent data. The method can be used with any energy selective detector and is applied not only to detectors using pulse height analysis (PHA) but also to a detector that simultaneously measures the total photon number and integrated energy, as discussed by Roessl et al. [Med. Phys. 34, 959-966 (2007)]. A generalization of this detector that improves the performance is introduced. A method is described to compute images with the optimal SNR using projections in a "whitened" vector space transformed so the noise is uncorrelated and has unit variance in both coordinates. Material canceled images with optimal SNR can also be computed by projections in this space. Results: The performance measure is validated by showing that it provides the Tapiovaara-Wagner optimal results for a detector with full energy information and also a conventional detector. The performance with different types of detectors is compared to the ideal SNR as a function of x-ray tube voltage and subject thickness. A detector that combines two bin PHA with a simultaneous measurement of integrated photon energy provides near ideal
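    The whitening step can be sketched generically (this is a standard construction, not the article's exact formulation): build a matrix W from the eigendecomposition of the noise covariance so that transformed measurements have uncorrelated, unit-variance noise; the detection SNR for a known signal is then simply the length of the whitened signal vector.

```python
import numpy as np

def whitening_matrix(cov):
    """Return W such that measurements y -> W @ y have identity covariance
    (symmetric whitening via the eigendecomposition of cov)."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def snr_in_whitened_space(signal, cov):
    """SNR of a known signal vector under Gaussian noise with covariance cov:
    after whitening, the optimal detector projects onto W @ signal, and the
    SNR equals the Euclidean norm of that whitened signal."""
    W = whitening_matrix(np.asarray(cov, float))
    s = W @ np.asarray(signal, float)
    return float(np.sqrt(s @ s))
```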

  8. An enhancement of binary particle swarm optimization for gene selection in classifying cancer classes

    PubMed Central

    2013-01-01

    Background Gene expression data can be an important aid in developing effective cancer diagnosis and classification platforms. Recently, many researchers have analyzed gene expression data using diverse computational intelligence methods to select a small subset of informative genes from the data for cancer classification. Many computational methods face difficulties in selecting small subsets because of the small number of samples compared to the huge number of genes (high dimensionality), irrelevant genes, and noisy genes. Methods We propose an enhanced binary particle swarm optimization to select small subsets of informative genes for cancer classification. Particle speed, a rule, and a modified sigmoid function are introduced in the proposed method to increase the probability of the bits in a particle's position being zero. The method was empirically applied to a suite of ten well-known benchmark gene expression data sets. Results The performance of the proposed method proved superior to previous related works, including the conventional version of binary particle swarm optimization (BPSO), in terms of classification accuracy and the number of selected genes. The proposed method also requires less computational time than BPSO. PMID:23617960
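    A single update of such a binary PSO might look like the following sketch. The exact form of the paper's modified sigmoid and rule is not reproduced here; the `shrink` divisor below is an assumed stand-in for a modification that lowers the probability of a bit being 1, biasing particles toward fewer selected genes.

```python
import math
import random

def modified_sigmoid(v, shrink=2.0):
    """Squash a velocity into the probability of setting a bit to 1. Dividing
    v by `shrink` (an assumed form of modification) flattens the curve and
    keeps that probability closer to 0.5 than a plain sigmoid would."""
    return 1.0 / (1.0 + math.exp(-v / shrink))

def bpso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
              rng=random):
    """One binary-PSO update: velocities move toward personal and global
    bests; bits are resampled with probability given by the sigmoid."""
    new_v, new_x = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        v = w * v + c1 * rng.random() * (pb - x) + c2 * rng.random() * (gb - x)
        new_v.append(v)
        new_x.append(1 if rng.random() < modified_sigmoid(v) else 0)
    return new_x, new_v
```

In gene selection, each bit marks whether a gene enters the classifier; fitness would combine classification accuracy with the number of selected genes.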

  9. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites

    PubMed Central

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-01-01

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection. PMID:28772485
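    The EI-based reduction can be sketched generically (the sensitivity matrix below is hypothetical, not from the paper's FEA models): compute each candidate measurement's effective-independence value as its leverage from the thin SVD, then iteratively discard the least independent candidate.

```python
import numpy as np

def effective_independence_ranking(S, keep):
    """Iteratively drop the candidate measurement (row of the sensitivity
    matrix S, columns = parameters of interest) with the smallest effective
    independence value until `keep` rows remain. The EI value of a row is its
    leverage: the corresponding diagonal entry of the projection onto the row
    space, computed here as the squared row norms of the thin-SVD factor U."""
    rows = list(range(S.shape[0]))
    while len(rows) > keep:
        A = S[rows]
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        ei = np.sum(U * U, axis=1)   # diag of A (A^T A)^(-1) A^T
        rows.pop(int(np.argmin(ei)))
    return rows
```

A duplicated row contributes no independent information, so it receives a low EI value and is eliminated first, which is exactly the redundancy removal the abstract describes.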

  10. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites.

    PubMed

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-02-04

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection.

  11. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from among portfolios containing large numbers of securities. The past records of each security alone do not guarantee future returns. As many uncertain factors directly or indirectly influence the stock market, and some newer stock markets do not have enough historical data, experts' expectations and experience must be combined with past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO), a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems, is used to solve the portfolio selection problem. Data from the BSE is used for illustration.
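    The semi-absolute-deviation risk measure underlying the model is simple to compute; the sketch below shows it for crisp scenario returns, leaving out the fuzzy random layer and the λ vector of the paper.

```python
def mean_semi_absolute_deviation(returns, weights):
    """Mean semi-absolute deviation of a portfolio: the average shortfall of
    the portfolio return below its own mean across scenarios.
    returns[t][j] = return of security j in scenario t; weights[j] = portfolio
    weight of security j."""
    port = [sum(w * r for w, r in zip(weights, row)) for row in returns]
    mean = sum(port) / len(port)
    return sum(max(0.0, mean - p) for p in port) / len(port)
```

Because this risk measure is piecewise linear in the weights, minimizing it subject to linear constraints is a linear program, which is the tractability advantage the abstract notes over variance-based (quadratic) formulations.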

  12. Temporal artifact minimization in sonoelastography through optimal selection of imaging parameters.

    PubMed

    Torres, Gabriela; Chau, Gustavo R; Parker, Kevin J; Castaneda, Benjamin; Lavarello, Roberto J

    2016-07-01

    Sonoelastography is an ultrasonic technique that uses Kasai's autocorrelation algorithm to generate qualitative images of tissue elasticity under external mechanical vibration. In the absence of synchronization between the mechanical vibration device and the ultrasound system, the random initial phase and finite ensemble length of the data packets produce temporal artifacts in the sonoelastography frames and, consequently, degraded image quality. In this work, a rule for the optimal selection of acquisition parameters (i.e., pulse repetition frequency, vibration frequency, and ensemble length) is derived analytically in order to minimize these artifacts, thereby eliminating the need for complex device synchronization. The proposed rule was verified through experiments with heterogeneous phantoms, where the use of optimally selected parameters increased the average contrast-to-noise ratio (CNR) by more than 200% and reduced the CNR standard deviation fourfold compared to arbitrarily selected imaging parameters. The results suggest that this rule for selecting acquisition parameters is an important tool for producing high-quality sonoelastography images.

  13. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    PubMed

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for the support vector machine (SVM), which involves selecting kernel and margin parameter values, is usually time-consuming and greatly affects both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM) to improve the training efficiency of the SVM model. An experiment classifying AVIRIS data of the Indian Pines site, USA, was then performed to test the novel CSSVM against a traditional SVM classifier tuned by the common grid-search cross-validation method (GSSVM). Evaluation indexes, including SVM model training time, overall classification accuracy (OA) and the Kappa index, were analyzed quantitatively for both CSSVM and GSSVM. The OA of CSSVM on the test samples and on the whole image is 85.1% and 81.58% respectively, differing from that of GSSVM by less than 0.08%; the Kappa indexes reach 0.8213 and 0.7728, differing from those of GSSVM by less than 0.001; while the model training time of CSSVM is between 1/6 and 1/10 of that of GSSVM. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
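    A clonal selection loop for hyperparameter search can be sketched as follows. This is a generic CLONALG-style scheme, not the paper's algorithm, and it uses a stand-in fitness function over (log C, log gamma), since running an actual SVM cross-validation is beyond the scope of a sketch: in practice, `fitness` would return cross-validated accuracy.

```python
import random

def clone_and_mutate(pop, fitness, n_clones=3, step=0.5, rng=random):
    """One clonal-selection generation over real-valued antibodies (here,
    log2(C) and log2(gamma) for an SVM): better-ranked antibodies receive
    more clones, clones are mutated with rank-dependent intensity, and the
    best individuals survive."""
    ranked = sorted(pop, key=fitness, reverse=True)
    clones = []
    for rank, ab in enumerate(ranked):
        for _ in range(max(1, n_clones - rank)):
            clones.append(tuple(x + rng.gauss(0, step / (rank + 1))
                                for x in ab))
    return sorted(set(pop) | set(clones), key=fitness, reverse=True)[:len(pop)]

def search(fitness, generations=30, rng=None):
    """Evolve a small antibody population and return the best parameter pair."""
    rng = rng or random.Random(0)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(6)]
    for _ in range(generations):
        pop = clone_and_mutate(pop, fitness, rng=rng)
    return max(pop, key=fitness)
```

Compared with an exhaustive grid search, far fewer fitness (cross-validation) evaluations are needed, which is consistent with the reported 1/6 to 1/10 training-time ratio.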

  14. On Sparse representation for Optimal Individualized Treatment Selection with Penalized Outcome Weighted Learning

    PubMed Central

    Song, Rui; Kosorok, Michael; Zeng, Donglin; Zhao, Yingqi; Laber, Eric; Yuan, Ming

    2015-01-01

    As a new strategy for treatment which takes individual heterogeneity into consideration, personalized medicine is of growing interest. Discovering individualized treatment rules (ITRs) for patients who have heterogeneous responses to treatment is one of the important areas in developing personalized medicine. As more and more information per individual is being collected in clinical studies and not all of the information is relevant for treatment discovery, variable selection becomes increasingly important in discovering individualized treatment rules. In this article, we develop a variable selection method based on penalized outcome weighted learning through which an optimal treatment rule is considered as a classification problem where each subject is weighted proportional to his or her clinical outcome. We show that the resulting estimator of the treatment rule is consistent and establish variable selection consistency and the asymptotic distribution of the estimators. The performance of the proposed approach is demonstrated via simulation studies and an analysis of chronic depression data. PMID:25883393
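    The core reweighting idea, treating treatment selection as a classification problem with each subject weighted by outcome, can be illustrated with a one-covariate threshold rule. This is a deliberately simplified stand-in for the paper's penalized estimator: the propensity `pi` is assumed known and constant, and the rule class is just sign(x - t).

```python
def owl_threshold_rule(data, pi=0.5):
    """Outcome-weighted learning on one covariate: pick a threshold t and a
    sign so that recommending treatment A=+1 when x > t (or x <= t) maximizes
    the weighted agreement sum(R_i / pi for subjects whose observed treatment
    matches the rule). data: list of (x, A, R) with A in {-1, +1} and R a
    positive clinical outcome (larger is better)."""
    xs = sorted({x for x, _, _ in data})
    thresholds = [min(xs) - 1] + xs
    best = None
    for t in thresholds:
        for sign in (+1, -1):
            value = sum(r / pi for x, a, r in data
                        if a == (sign if x > t else -sign))
            if best is None or value > best[0]:
                best = (value, t, sign)
    return best  # (weighted value, threshold, sign)
```

Subjects with large outcomes dominate the weighted sum, so the rule learns to reproduce the treatment choices of those who fared best, which is the outcome-weighted classification view described in the abstract; the paper adds a penalty for variable selection on top of this.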

  15. Selection of optimal artificial boundary condition (ABC) frequencies for structural damage identification

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Lu, Yong

    2016-07-01

    In this paper, the sensitivities of artificial boundary condition (ABC) frequencies to damage are investigated, and the optimal sensors are selected to provide reliable structural damage identification. Sensitivity expressions are derived for one-pin and two-pin ABC frequencies, i.e., the natural frequencies of a structure with one or two additional constraints, respectively, applied to its original boundary conditions. Based on these expressions, the contributions of the underlying mode shapes to the ABC frequencies can be calculated and used to select the more sensitive ABC frequencies. Selection criteria are then defined for different conditions, and their performance in structural damage identification is examined with numerical studies. Conclusions are drawn from the findings.

  16. Selection for carcass and feedlot traits considering alternative slaughter end points and optimized management.

    PubMed

    Wilton, J W; Goddard, M E

    1996-01-01

    Profit was defined as a function of the genotype of animals and variables controlled by management. Alternative parameterizations of management variables were examined to compare the effect of controlling age at slaughter, weight at slaughter, or fat depth at slaughter. The various parameterizations are shown to result in equivalent economic weights for genetic variables, provided management variables are optimized for the current genotype. The implication is that economic weights and selection indexes can be conveniently calculated for age constant end points even though commercial production may involve weight or backfat depth constant slaughter points. An example of selection for profit in the feedlot phase of beef production is presented. Three genotype-management combinations were considered. Economic weights and subsequent selection index weights were shown to depend on both average genotypic means and management (feeding and marketing program) factors.

  17. [Selection of background electrolyte in capillary zone electrophoresis by triangle and tetrahedron optimization methods].

    PubMed

    Sun, Guoxiang; Song, Wenjing; Lin, Ting

    2008-03-01

    Triangle and tetrahedron optimization methods were developed for the selection of the background electrolyte (BGE) in capillary zone electrophoresis (CZE). The chromatographic fingerprint index F and the chromatographic fingerprint relative index F(r) were used as the objective functions for the evaluation, and the aqueous extract of Saussurea involucrata was used as the sample. The BGE was composed of borax, boric acid, dibasic sodium phosphate and sodium dihydrogen phosphate solutions at different concentrations using the triangle and tetrahedron optimization methods. Re-optimization was carried out by adding an organic modifier to the BGE and adjusting the pH value. In the triangle method, when 50 mmol/L borax-150 mmol/L sodium dihydrogen phosphate (containing 3% acetonitrile) (1:1, v/v) was used as the BGE, the separation was considered satisfactory. In the tetrahedron method, the best BGE was 50 mmol/L borax-150 mmol/L sodium dihydrogen phosphate-200 mmol/L boric acid (1:1:2, v/v/v; pH adjusted to 8.55 with 0.1 mol/L sodium hydroxide). There were 28 peaks and 25 peaks under the two conditions, respectively. The results showed that these methods can be applied to the selection of the BGE in CZE of aqueous or ethanolic extracts of traditional Chinese medicines.

  18. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    PubMed

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons using one specific indicator (a parameter describing safety or unsafety) can end up with totally different rankings of the compared units, making it complicated for a decision maker to determine the "real best performers". The need for a composite road safety index has become dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different models based on DEA and TOPSIS to present the PROMETHEE-RS model for selecting the optimal composite index. The selection is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal index. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
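    One of the composite-index methods compared above, TOPSIS, has a compact textbook form (the sketch below is generic, not the paper's exact setup): normalize and weight the decision matrix, then score each alternative by its relative closeness to the ideal solution.

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS composite index. matrix[i][j] = value of criterion j for
    alternative i; benefit[j] is True if larger values of criterion j are
    better. Returns one closeness score in [0, 1] per alternative."""
    ncols = len(weights)
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    V = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_pos = math.sqrt(sum((v - i) ** 2 for v, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((v - w) ** 2 for v, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

In the paper's setting, each police department would be a row and the safety indicators the columns; PROMETHEE-RS then compares the rankings such indexes produce.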

  19. Memory control beliefs and everyday forgetfulness in adulthood: the effects of selection, optimization, and compensation strategies.

    PubMed

    Scheibner, Gunnar Benjamin; Leathem, Janet

    2012-01-01

    Controlling for age, gender, education, and self-rated health, the present study used regression analyses to examine the relationships between memory control beliefs and self-reported forgetfulness in the context of the meta-theory of Selective Optimization with Compensation (SOC). Findings from this online survey (N = 409) indicate that, among adult New Zealanders, a higher sense of memory control accounts for a 22.7% reduction in self-reported forgetfulness. Similarly, optimization was found to account for a 5% reduction in forgetfulness while the strategies of selection and compensation were not related to self-reports of forgetfulness. Optimization partially mediated the beneficial effects that some memory beliefs (e.g., believing that memory decline is inevitable and believing in the potential for memory improvement) have on forgetfulness. It was concluded that memory control beliefs are important predictors of self-reported forgetfulness while the support for the SOC model in the context of memory controllability and everyday forgetfulness is limited.

  20. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589

  1. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design.

    PubMed

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-07-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions.

  2. Selection of Representative Models for Decision Making and Optimization under Geological Uncertainty

    NASA Astrophysics Data System (ADS)

    Gharib Shirangi, M.; Durlofsky, L. J.

    2016-12-01

    In subsurface flow applications such as aquifer management or oil/gas production, decision making and optimization under uncertainty require flow simulation to be performed over a large set of geological models. Because computational cost scales directly with the number of models employed, it is preferable to use as few models as possible. It is however challenging to identify a representative subset that provides flow statistics in close agreement with those from the full set, especially when the decision parameters (e.g., time-varying well settings, well locations) are unknown a priori. In this talk, we introduce a new approach, based on clustering, for the selection of representative sets of models. Prior to clustering, each realization is represented by a low-dimensional feature vector that contains a combination of permeability-based and flow-based quantities. The use of both full-physics and proxy-based flow information is considered for calculation of flow-based features. Permeability information is captured in reduced form via principal component analysis. We investigate the impact of different weightings for flow and permeability information in the clustering. New well configurations and settings, of the types encountered during computational optimization, are considered. If we designate f_sub as the expected "flow response vector" computed with the selected subset of models, and f_full as the analogous vector computed for the full set of models, the goal is to find the weights in the clustering that lead to models that provide an f_sub that is closest to f_full (for a given number of selected models). We present detailed assessments that show flow-based clustering is preferable for problems involving new well settings (e.g., time-varying well pressures), while both permeability-based and flow-based clustering provide similar results for (new) random multiwell configurations.
The various procedures are then applied to select small subsets of models for use in
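    The clustering-based selection can be sketched generically (basic k-means on hypothetical feature vectors, keeping the model nearest each centroid as the representative; the abstract's weighted flow/permeability features are not reproduced here):

```python
import random

def select_representatives(features, k, iters=20, rng=None):
    """Cluster model feature vectors with a basic k-means, then return the
    index of the model nearest each final centroid, i.e., one representative
    realization per cluster."""
    rng = rng or random.Random(0)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = [list(f) for f in rng.sample(features, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for i, f in enumerate(features):
            clusters[min(range(k), key=lambda c: dist(f, centroids[c]))].append(i)
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = [sum(features[i][d] for i in members) / len(members)
                                for d in range(len(features[0]))]
    return sorted(min(members, key=lambda i: dist(features[i], centroids[c]))
                  for c, members in enumerate(clusters) if members)
```

Flow statistics would then be computed over only these representatives, with the clustering weights tuned so that the subset's expected flow response stays close to that of the full ensemble.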

  3. Selection of optimal measures of growth and reproduction for the sublethal Leptocheirus plumulosus sediment bioassay

    SciTech Connect

    Gray, B.R.; Wright, R.B.; Duke, B.M.; Farrar, J.D.; Emery, V.L. Jr.; Brandon, D.L.; Moore, D.W.

    1998-11-01

    This article describes the selection process used to identify optimal measures of growth and reproduction for the proposed 28-d sublethal sediment bioassay with the estuarine amphipod Leptocheirus plumulosus. The authors used four criteria (relevance of each measure to its respective endpoint, signal-to-noise ratio, redundancy relative to other measures of the same endpoint, and cost) to evaluate nine growth and seven reproductive measures. Optimal endpoint measures were identified as those receiving relatively high scores for all or most criteria. Measures of growth scored similarly on all criteria, except for cost. The cost of the pooled (female plus male) growth measures was substantially lower than the cost of the female and male growth measures because the latter required more labor (by approx. 25 min per replicate). Pooled dry weight was identified as the optimal growth measure over pooled length because the latter required additional labor and nonstandard software and equipment. Embryo and neonate measures of reproduction exhibited wide differences in labor costs but yielded similar scores for other criteria. In contrast, brooding measures of reproduction scored relatively low on endpoint relevance, signal-to-noise ratio, and redundancy criteria. The authors recommend neonates/survivor as the optimal measure of L. plumulosus reproduction because it exhibited high endpoint relevance and signal-to-noise ratios, was redundant to other reproductive measures, and required minimal time.

  4. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity, two properties that interact, since larger porosity causes lower mechanical properties. This paper seeks the maximum-stiffness architecture under a constraint on volume fraction using a topology optimization approach; that is, maximum porosity can be achieved with predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7% were fabricated by the Selective Laser Melting (SLM) process and evaluated in compression tests. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were about 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process should be useful for developing titanium implant materials that balance porosity and mechanical stiffness.

  5. Selection of optimal recording sites for limited lead body surface potential mapping: A sequential selection based approach

    PubMed Central

    Finlay, Dewar D; Nugent, Chris D; Donnelly, Mark P; Lux, Robert L; McCullagh, Paul J; Black, Norman D

    2006-01-01

    Background In this study we propose the development of a new algorithm for selecting optimal recording sites for limited lead body surface potential mapping. The proposed algorithm differs from previously reported methods in that it is based upon a simple and intuitive data driven technique that does not make any presumptions about deterministic characteristics of the data. It uses a forward selection based search technique to find the best combination of electrocardiographic leads. Methods The study was conducted using a dataset consisting of body surface potential maps (BSPM) recorded from 116 subjects which included 59 normals and 57 subjects exhibiting evidence of old Myocardial Infarction (MI). The performance of the algorithm was evaluated using spatial RMS voltage error and correlation coefficient to compare original and reconstructed map frames. Results In all, three configurations of the algorithm were evaluated and it was concluded that there was little difference in the performance of the various configurations. In addition to observing the performance of the selection algorithm, several lead subsets of 32 electrodes as chosen by the various configurations of the algorithm were evaluated. The rationale for choosing this number of recording sites was to allow comparison with a previous study that used a different algorithm, where 32 leads were deemed to provide an acceptable level of reconstruction performance. Conclusion It was observed that although the lead configurations suggested in this study were not identical to that suggested in the previous work, the systems did bear similar characteristics in that recording sites were chosen with greatest density in the precordial region. PMID:16503972
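    The greedy forward-selection strategy described above can be sketched generically. The zone-coverage score and electrode-site names below are hypothetical stand-ins for the paper's RMS reconstruction-error criterion, illustrating only the "add the best remaining candidate" loop:

```python
def forward_select(candidates, k, score):
    """Greedy forward selection: repeatedly add the candidate that most
    improves the score of the currently selected set."""
    selected = []
    pool = list(candidates)
    while len(selected) < k and pool:
        best = max(pool, key=lambda c: score(selected + [c]))
        selected.append(best)
        pool.remove(best)
    return selected

# Toy score: coverage of torso "zones" by the chosen electrode sites
# (hypothetical data, not the BSPM dataset from the paper):
zones = {"V1": {1, 2}, "V4": {2, 3}, "left_lat": {4}, "back": {5, 6}}

def coverage(sites):
    return len(set().union(*(zones[s] for s in sites)))

picked = forward_select(zones, 2, coverage)
```

In the real algorithm the score would be the reduction in spatial RMS voltage error between original and reconstructed map frames; the loop structure is the same.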

  6. Particle swarm optimization for feature selection in classification: a multi-objective approach.

    PubMed

    Xue, Bing; Zhang, Mengjie; Browne, Will N

    2013-12-01

    Classification problems often have a large number of features in the data sets, but not all of them are useful for classification. Irrelevant and redundant features may even reduce the performance. Feature selection aims to choose a small number of relevant features to achieve similar or even better classification performance than using all features. It has two main conflicting objectives of maximizing the classification performance and minimizing the number of features. However, most existing feature selection algorithms treat the task as a single objective problem. This paper presents the first study on multi-objective particle swarm optimization (PSO) for feature selection. The task is to generate a Pareto front of nondominated solutions (feature subsets). We investigate two PSO-based multi-objective feature selection algorithms. The first algorithm introduces the idea of nondominated sorting into PSO to address feature selection problems. The second algorithm applies the ideas of crowding, mutation, and dominance to PSO to search for the Pareto front solutions. The two multi-objective algorithms are compared with two conventional feature selection methods, a single objective feature selection method, a two-stage feature selection algorithm, and three well-known evolutionary multi-objective algorithms on 12 benchmark data sets. The experimental results show that the two PSO-based multi-objective algorithms can automatically evolve a set of nondominated solutions. The first algorithm outperforms the two conventional methods, the single objective method, and the two-stage algorithm. It achieves comparable results with the existing three well-known multi-objective algorithms in most cases. The second algorithm achieves better results than the first algorithm and all other methods mentioned previously.
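    The Pareto front of nondominated feature subsets at the heart of the multi-objective formulation can be sketched in a few lines; the (classification error, feature count) pairs below are hypothetical, with both objectives minimized:

```python
def dominates(a, b):
    """True if solution a dominates b (both objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the nondominated subset of (error, n_features) pairs."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Hypothetical (classification error, number of features) pairs:
candidates = [(0.10, 8), (0.12, 5), (0.10, 6), (0.20, 3), (0.09, 12)]
front = pareto_front(candidates)
# (0.10, 8) is dominated by (0.10, 6); the rest trade error for subset size.
```

The PSO machinery then searches for subsets whose objective pairs push this front toward the origin.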

  7. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Support vector machines (SVMs) have recently shown excellent performance in classification and prediction and are widely used in disease diagnosis and medical assistance. However, an SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order of explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that classification accuracy of more than 95% can be achieved after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
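    The RFE loop itself is simple: repeatedly drop the feature with the smallest squared weight from the current fit. The sketch below uses a fixed toy weight dictionary as a stand-in for retraining an SVM each round (the feature names are invented, not the actual Dermatology attributes):

```python
def svm_rfe_ranking(features, weight_fn):
    """Generic RFE loop: repeatedly eliminate the feature with the smallest
    squared weight as assigned by weight_fn on the surviving feature set.
    Returns features in elimination order (least important first)."""
    remaining = list(features)
    eliminated = []
    while remaining:
        weights = weight_fn(remaining)
        worst = min(remaining, key=lambda f: weights[f] ** 2)
        remaining.remove(worst)
        eliminated.append(worst)
    return eliminated

# Toy stand-in for an SVM fit: fixed weights instead of retraining each round.
toy_weights = {"erythema": 0.9, "scaling": 0.7, "itching": 0.1, "age": -0.3}
ranking = svm_rfe_ranking(toy_weights, lambda feats: toy_weights)
```

In the real method, weight_fn would refit the SVM on the surviving features at every iteration, so the ranking can change as correlated features are removed.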

  8. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

    In this paper, different approaches are considered for calculating the cosine factor, which is used in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using instantaneous and time-average approaches. For the instantaneous method, different design days and design hours are selected. For the time-average method, daily, monthly, seasonal, and yearly time-averaged cosine factors are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization; consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, so it is more reliable to select one of the recommended time-average methods to optimize the field layout. The optimum annual weighted efficiencies for the small, medium, and large heliostat fields, containing 350, 1460, and 3450 mirrors, are 66.14%, 60.87%, and 54.04%, respectively.
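    The distinction between an instantaneous and a time-averaged cosine factor can be illustrated with a simplified sketch. This is generic heliostat geometry, not the Campo code: the cosine factor is the cosine of half the angle between the sun direction and the direction to the receiver (the mirror normal bisects the two), and the sun positions and tower direction are hypothetical:

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cosine_factor(sun, tower):
    """Cosine factor of a heliostat: cosine of half the angle between the
    unit vector to the sun and the unit vector to the receiver."""
    dot = sum(s * t for s, t in zip(sun, tower))
    return math.cos(math.acos(dot) / 2)

# Hypothetical sun directions over a design day (east -> overhead -> west):
sun_positions = [unit(v) for v in [(1, 0, 0.4), (0.5, 0, 1), (0, 0, 1),
                                   (-0.5, 0, 1), (-1, 0, 0.4)]]
to_tower = unit((0.3, 0, 1))   # direction from this heliostat to the receiver

instantaneous = cosine_factor(sun_positions[2], to_tower)   # solar noon only
time_averaged = sum(cosine_factor(s, to_tower) for s in sun_positions) / 5
```

A layout optimized against the noon-only value over-weights one sun position; the averaged value is what the abstract's time-average methods estimate over longer periods.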

  9. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, support vector machine (SVM) has excellent performance on classification and prediction and is widely used on disease diagnosis or medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for Dermatology and Zoo databases. Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; and the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, Taguchi method was jointly combined with SVM classifier in order to optimize parameters C and γ to increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for Dermatology and Zoo databases.

  10. Optimal feature selection in the classification of synchronous fluorescence of petroleum oils

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khalid J.; Eastwood, DeLyle

    1996-03-01

    Pattern classification of UV-visible synchronous fluorescence spectra of petroleum oils is performed using a composite system developed by the authors. The system consists of three phases: feature extraction, feature selection, and pattern classification. Each of these phases is briefly reviewed, with particular focus on the feature selection method. Without assuming any particular classification algorithm, the method extracts as much information (as many features) from the spectra as conveniently possible and then applies the proposed successive feature elimination process to remove redundant features. From the remaining features, a significantly smaller, yet optimal, feature subset is selected that enhances the recognition performance of the classifier. The successive feature elimination process and the optimal feature selection method are formally described and successfully applied to the classification of UV-visible synchronous fluorescence spectra. The features selected by the algorithm are used to classify twenty different sets of petroleum oils (the design set). A proximity index classifier using the Mahalanobis distance as the proximity criterion is built on the smaller feature subset. The system was trained on the design set, on which recognition performance was 100%. Recognition performance on the testing set was over 93%, with 28 of 30 samples in six classes identified correctly. This performance is very encouraging. In addition, the method is computationally inexpensive and is equally useful for large data sets, as it always partitions the problem into a set of two-class problems. The method further reduces the burden of careful feature determination that a system designer usually faces during the initial design phase of a pattern classifier.
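    A minimal sketch of a Mahalanobis-distance proximity classifier of the kind described above: each class is summarized by its mean and covariance, and a sample is assigned to the nearest class mean in Mahalanobis distance. The two "oil classes" and their 2-D feature values are hypothetical, and the 2x2 covariance inverse is written out explicitly:

```python
def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def cov2(vs, mu):
    """2x2 sample covariance of 2-D feature vectors."""
    n = len(vs)
    sxx = sum((v[0] - mu[0]) ** 2 for v in vs) / (n - 1)
    syy = sum((v[1] - mu[1]) ** 2 for v in vs) / (n - 1)
    sxy = sum((v[0] - mu[0]) * (v[1] - mu[1]) for v in vs) / (n - 1)
    return [[sxx, sxy], [sxy, syy]]

def mahalanobis2(x, mu, c):
    """Squared Mahalanobis distance in 2-D, with an explicit 2x2 inverse."""
    det = c[0][0] * c[1][1] - c[0][1] ** 2
    inv = [[c[1][1] / det, -c[0][1] / det], [-c[0][1] / det, c[0][0] / det]]
    d = [x[0] - mu[0], x[1] - mu[1]]
    return (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
            + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))

def classify(x, classes):
    """Assign x to the class whose mean is nearest in Mahalanobis distance."""
    return min(classes, key=lambda k: mahalanobis2(x, *classes[k]))

# Two hypothetical spectral-feature classes (illustrative, not the paper's data):
a = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8), (1.1, 2.2)]
b = [(3.0, 0.5), (3.2, 0.4), (2.9, 0.7), (3.1, 0.6)]
classes = {"oil_A": (mean(a), cov2(a, mean(a))),
           "oil_B": (mean(b), cov2(b, mean(b)))}
```

Unlike plain Euclidean distance, the Mahalanobis form accounts for the spread and correlation of each class's features.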

  11. Optimization of selection of chain amine scrubbers for CO2 capture.

    PubMed

    Al-Marri, Mohammed J; Khader, Mahmoud M; Giannelis, Emmanuel P; Shibl, Mohamed F

    2014-12-01

    In order to optimize the selection of a suitable amine molecule for CO2 scrubbers, a series of ab initio calculations were performed at the B3LYP/6-31+G(d,p) level of theory. Diethylenetriamine was used as a simple chain amine. Methyl and hydroxyl groups served as examples of electron donors, and electron withdrawing groups like trifluoromethyl and nitro substituents were also evaluated. Interaction distances and binding energies were employed as comparison operators. Moreover, natural bond orbital (NBO) analysis, namely the second order perturbation approach, was applied to determine whether the amine-CO2 interaction is chemical or physical. Different sizes of substituents affect the capture ability of diethylenetriamine. For instance, trifluoromethyl shields the nitrogen atom to which it attaches from the interaction with CO2. The results presented here provide a means of optimizing the choice of amine molecules for developing new amine scrubbers.

  12. Optimization of electron transfer dissociation via informed selection of reagents and operating parameters.

    PubMed

    Compton, Philip D; Strukl, Joseph V; Bai, Dina L; Shabanowitz, Jeffrey; Hunt, Donald F

    2012-02-07

    Electron transfer dissociation (ETD) has improved the mass spectrometric analysis of proteins and peptides with labile post-translational modifications and larger intact masses. Here, the parameters governing the reaction rate of ETD are examined experimentally. Currently, due to reagent injection and isolation events as well as longer reaction times, ETD spectra require significantly more time to acquire than collision-induced dissociation (CID) spectra (>100 ms), resulting in a trade-off in the dynamic range of tandem MS analyses when ETD-based methods are compared to CID-based methods. Through fine adjustment of reaction parameters and the selection of reagents with optimal characteristics, we demonstrate a drastic reduction in the time taken per ETD event. In fact, ETD can be performed with optimal efficiency in nearly the same time as CID at low precursor charge state (z = +3) and becomes faster at higher charge state (z > +3).

  13. Pretreatment of wastewater: optimal coagulant selection using Partial Order Scaling Analysis (POSA).

    PubMed

    Tzfati, Eran; Sein, Maya; Rubinov, Angelika; Raveh, Adi; Bick, Amos

    2011-06-15

    The jar test is a well-known tool for selecting chemicals for physical-chemical wastewater treatment. Jar-test results show treatment efficiency in terms of suspended matter and organic matter removal. In spite of having all these results, however, coagulant selection is not an easy task, because a coagulant can remove suspended solids efficiently while at the same time increasing conductivity. This makes the final selection of coagulants very dependent on the relative importance assigned to each measured parameter. In this paper, the use of Partial Order Scaling Analysis (POSA) and multi-criteria decision analysis is proposed to support the selection of the coagulant and its concentration in a sequencing batch reactor (SBR). Starting from the parameters fixed by the jar-test results, these techniques allow the parameters to be weighted according to the judgments of wastewater experts and priorities to be established among coagulants. An evaluation of two commonly used coagulation/flocculation aids (alum and ferric chloride) was conducted; based on the jar tests and the POSA model, ferric chloride (100 ppm) was the best choice. The results obtained show that POSA and multi-criteria techniques are useful tools for selecting the optimal chemicals for physical-chemical treatment. Copyright © 2011 Elsevier B.V. All rights reserved.
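    POSA itself builds partial orders, but the role of expert weights in ranking coagulants can be illustrated with a simple weighted-sum sketch. This is a simplification of the paper's method, and the criterion scores and weights below are hypothetical, normalized so that higher is better:

```python
def weighted_score(scores, weights):
    """Weighted sum of normalized criterion scores (higher is better)."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical jar-test scores normalized to [0, 1] (not the paper's data):
weights = {"tss_removal": 0.4, "cod_removal": 0.3,
           "conductivity": 0.2, "cost": 0.1}
coagulants = {
    "Alum 100 ppm": {"tss_removal": 0.80, "cod_removal": 0.70,
                     "conductivity": 0.50, "cost": 0.90},
    "Ferric Chloride 100 ppm": {"tss_removal": 0.90, "cod_removal": 0.80,
                                "conductivity": 0.70, "cost": 0.70},
}
best = max(coagulants, key=lambda k: weighted_score(coagulants[k], weights))
```

The point of the multi-criteria machinery is exactly that `best` changes when the expert weights change, which a raw jar-test table cannot express on its own.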

  14. Impact of cultivar selection and process optimization on ethanol yield from different varieties of sugarcane

    PubMed Central

    2014-01-01

    Background The development of ‘energycane’ varieties of sugarcane is underway, targeting the use of both sugar juice and bagasse for ethanol production. The current study evaluated a selection of such ‘energycane’ cultivars for the combined ethanol yields from juice and bagasse, by optimizing dilute acid pretreatment of the bagasse for sugar yields. Method A central composite design under response surface methodology was used to investigate the effects of dilute acid pretreatment parameters, followed by enzymatic hydrolysis, on the combined sugar yield of bagasse samples. The pressed slurry generated from the optimum pretreatment conditions (maximum combined sugar yield) was used as the substrate during batch and fed-batch simultaneous saccharification and fermentation (SSF) processes at different solid loadings and enzyme dosages, aiming to reach an ethanol concentration of at least 40 g/L. Results Significant variations were observed in sugar yields (xylose, glucose, and combined sugar yield) from pretreatment-hydrolysis of bagasse from different cultivars of sugarcane. Up to 33% difference in combined sugar yield between the best-performing varieties and industrial bagasse was observed at optimal pretreatment-hydrolysis conditions. Significant improvement in overall ethanol yield after SSF of the pretreated bagasse was also observed for the best-performing varieties (84.5 to 85.6%) compared to industrial bagasse (74.5%). The ethanol concentration showed an inverse correlation with lignin content and the ratio of xylose to arabinose, but a positive correlation with glucose yield from pretreatment-hydrolysis. The overall assessment of the cultivars showed greater improvement in the final ethanol concentration (26.9 to 33.9%) and combined ethanol yields per hectare (83 to 94%) for the best-performing varieties with respect to industrial sugarcane. Conclusions These results suggest that the selection of sugarcane variety, combined with process optimization, can substantially improve ethanol yields.

  15. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
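    The Akaike Information Criterion used above for model selection can be sketched in a few lines. The least-squares form of AIC for Gaussian errors is assumed, and the residual sums of squares, sample size, and parameter counts below are hypothetical:

```python
import math

def aic_least_squares(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors:
    AIC = n * ln(RSS / n) + 2k, where k is the number of fitted parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical fits of two candidate models to the same n = 50 data points:
aic_simple = aic_least_squares(rss=12.0, n=50, k=3)
aic_complex = aic_least_squares(rss=11.5, n=50, k=8)
# The small RSS improvement does not justify five extra parameters:
best = "simple" if aic_simple < aic_complex else "complex"
```

The criterion formalizes the trade-off in the abstract: a model is preferred only when its improved fit outweighs the penalty for additional parameters.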

  16. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 

  17. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. 

  18. A reliable computational workflow for the selection of optimal screening libraries.

    PubMed

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. The success of such screening campaigns critically depends on the quality of the screened library. Many libraries are currently available from different vendors, yet the selection of the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented, and a new blood-brain barrier permeation model was developed and validated (85 and 74% success rates for the training set and test set, respectively). Diversity and similarity descriptors that demonstrated the best performance in terms of their ability to select either diverse or focused sets of compounds from three databases (DrugBank, CMC and ChEMBL) were identified and used for diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library, providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries; the current workflow was implemented using the Pipeline Pilot software.

  19. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS) that refers to the use of genome-wide information for predicting breeding values of selection candidates need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing this MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim, and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. MET defined with OptiMET was on average more efficient than random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit MET and the phenotyping tools that are currently developed.
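    The idea of choosing trial environments to estimate a genetic parameter with minimal error can be illustrated with a toy information-maximization sketch. This is a stand-in for the OptiMET criterion, not the criterion itself: a single linear covariate is assumed, and the candidate environments are hypothetical mean temperatures:

```python
from itertools import combinations

def info(xs):
    """Fisher information on the slope of a simple linear model y = a + b*x,
    proportional to the spread of the environmental covariate x."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_subset(envs, k):
    """Exhaustively pick the k environments that maximize information
    on the genetic parameter (a toy stand-in for OptiMET)."""
    return max(combinations(envs, k), key=info)

# Hypothetical mean temperatures of candidate trial environments:
envs = [12.0, 13.5, 14.0, 15.5, 18.0, 21.0]
chosen = best_subset(envs, 3)
```

The sketch captures the abstract's main point: a well-chosen small MET can constrain the parameters better than a larger but poorly spread one.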

  20. Optimal selection of space transportation fleet to meet multi-mission space program needs

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Montoya, Alex J.

    1989-01-01

    A space program that spans several decades will comprise a collection of missions such as a low earth orbital space station, a polar platform, a geosynchronous space station, a lunar base, a Mars astronaut mission, and a Mars base. The optimal selection of a fleet of several recoverable and expendable launch vehicles, upper stages, and interplanetary spacecraft necessary to logistically establish and support these space missions can be examined by means of a linear integer programming optimization model. Such a selection must be made because the economies of scale which come from producing large quantities of a few standard vehicle types, rather than many, will be needed to provide learning curve effects to reduce the overall cost of space transportation if these future missions are to be affordable. Optimization model inputs come from data and from vehicle designs. Each launch vehicle currently in existence has a launch history, giving rise to statistical estimates of launch reliability. For future, not-yet-developed launch vehicles, theoretical reliabilities corresponding to the maturity of the launch vehicles' technology and the degree of design redundancy must be estimated. Each such launch vehicle also has a certain historical or estimated development cost, tooling cost, and variable cost. The cost of a launch used in this paper includes the variable cost plus an amortized portion of the fixed and development costs. The integer linear programming model has several constraint equations based on assumptions of mission mass requirements, volume requirements, and the number of astronauts needed. The model minimizes launch vehicle logistic support cost and selects the most desirable launch vehicle fleet.
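    The fleet-selection problem above can be sketched with a toy brute-force search standing in for the integer linear program; the vehicle costs, payload capacities, and mission totals below are invented for illustration:

```python
from itertools import product

# Hypothetical vehicles: (cost per launch, payload mass in t, payload volume in m^3)
vehicles = {"heavy": (120, 40, 300), "medium": (60, 18, 120), "small": (25, 6, 40)}
need_mass, need_volume = 100, 700   # illustrative mission totals

def cheapest_fleet(max_launches=10):
    """Exhaustively search launch counts for the cheapest fleet meeting the
    mass and volume constraints (a stand-in for the paper's integer LP)."""
    best, best_cost = None, float("inf")
    names = list(vehicles)
    for counts in product(range(max_launches + 1), repeat=len(names)):
        mass = sum(c * vehicles[n][1] for c, n in zip(counts, names))
        vol = sum(c * vehicles[n][2] for c, n in zip(counts, names))
        cost = sum(c * vehicles[n][0] for c, n in zip(counts, names))
        if mass >= need_mass and vol >= need_volume and cost < best_cost:
            best, best_cost = dict(zip(names, counts)), cost
    return best, best_cost
```

A real model of this kind would use an ILP solver and add crew, reliability, and amortized development-cost terms, but the structure (integer launch counts, linear constraints, minimized cost) is the same.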

  1. Extracting fetal heart beats from maternal abdominal recordings: selection of the optimal principal components.

    PubMed

    Di Maria, Costanzo; Liu, Chengyu; Zheng, Dingchang; Murray, Alan; Langley, Philip

    2014-08-01

    This study presents a systematic comparison of different approaches to the automated selection of the principal components (PCs) which optimise the detection of maternal and fetal heart beats from non-invasive maternal abdominal recordings. A public database of 75 4-channel non-invasive maternal abdominal recordings was used for training the algorithm. Four methods were developed and assessed to determine the optimal PC: (1) power spectral distribution, (2) root mean square, (3) sample entropy, and (4) QRS template. The sensitivity of the algorithm's performance to large-amplitude noise removal (by wavelet de-noising) and to maternal beat cancellation methods was also assessed. The accuracy of maternal and fetal beat detection was assessed against reference annotations and quantified using the detection accuracy score F1 [2*PPV*Se / (PPV + Se)], sensitivity (Se), and positive predictive value (PPV). The best-performing implementation was assessed on a test dataset of 100 recordings, and the agreement between the computed and the reference fetal heart rate (fHR) and fetal RR (fRR) time series was quantified. The best performance for detecting maternal beats (F1 99.3%, Se 99.0%, PPV 99.7%) was obtained when using the QRS template method to select the optimal maternal PC and applying wavelet de-noising. The best performance for detecting fetal beats (F1 89.8%, Se 89.3%, PPV 90.5%) was obtained when the optimal fetal PC was selected using the sample entropy method and a fixed-length time window was used for the cancellation of the maternal beats. The performance on the test dataset was 142.7 beats^2/min^2 for fHR and 19.9 ms for fRR, ranking 14th and 17th, respectively (out of 29), when compared to the other algorithms presented at the PhysioNet Challenge 2013.
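    The detection accuracy score defined above follows directly from the beat counts; the true-positive, false-negative, and false-positive counts below are hypothetical, chosen only to land near the fetal-detection figures quoted:

```python
def detection_scores(tp, fn, fp):
    """Sensitivity, positive predictive value, and the harmonic-mean score
    F1 = 2*PPV*Se / (PPV + Se) used to grade beat detection."""
    se = tp / (tp + fn)
    ppv = tp / (tp + fp)
    f1 = 2 * ppv * se / (ppv + se)
    return se, ppv, f1

# Hypothetical beat counts (not the paper's data):
se, ppv, f1 = detection_scores(tp=893, fn=107, fp=94)
```

Because F1 is the harmonic mean of Se and PPV, it penalizes a detector that trades many false positives for a marginally higher sensitivity.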

  2. Selection and optimization of mooring cables on floating platform for special purposes

    NASA Astrophysics Data System (ADS)

    Ma, Guang-ying; Yao, Yun-long; Zhao, Chen-yao

    2017-08-01

    This paper studies a new type of assembled marine floating platform for special purposes, focusing on the selection and optimization of its mooring cables. Using the ANSYS AQWA software, a hydrodynamic model of the platform was established to calculate the time-history response of the platform motion under complex environmental conditions including wind, waves, current, and mooring loads. On this basis, the motion response and cable tension were calculated for different cable mooring states under the designed environmental load. Finally, the best mooring scheme meeting the cable strength requirements was proposed, which can effectively lower the motion amplitude of the platform.

  3. Screening and selection of synthetic peptides for a novel and optimized endotoxin detection method.

    PubMed

    Mujika, M; Zuzuarregui, A; Sánchez-Gómez, S; Martínez de Tejada, G; Arana, S; Pérez-Lorenzo, E

    2014-09-30

    The current validated endotoxin detection methods, in spite of being highly sensitive, present several drawbacks in terms of reproducibility, handling and cost. Therefore novel approaches are being carried out in the scientific community to overcome these difficulties. Remarkable efforts are focused on the development of endotoxin-specific biosensors. The key feature of these solutions relies on the proper definition of the capture protocol, especially of the bio-receptor or ligand. The aim of the presented work is the screening and selection of a synthetic peptide specifically designed for LPS detection, as well as the optimization of a procedure for its immobilization onto gold substrates for further application to biosensors.

  4. Optimizing Network-Coded Cooperative Communications via Joint Session Grouping and Relay Node Selection

    DTIC Science & Technology

    2011-01-01

    Optimizing Network-Coded Cooperative Communications via Joint Session Grouping and Relay Node Selection Sushant Sharma Yi Shi Y. Thomas Hou Hanif...that for a single relay node, we can group as many sessions as we want. But, in a recent study [20], Sharma et al. showed that there exists a so-called...destination wireless network. In [20], Sharma et al. considered NC-CC with only one relay node. Their analysis showed that NC is not always good for CC, and

  5. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    NASA Astrophysics Data System (ADS)

    Kul'ka, Jozef; Mantič, Martin; Kopas, Melichar; Faltinová, Eva; Kachman, Daniel

    2017-02-01

    The article presents a heuristic optimization approach to selecting a suitable transport connection within a city public transport system. The methodology was applied to a part of the public transport network in Košice, the second largest city in the Slovak Republic, whose public transport network forms a complex transport system consisting of three different transport modes: bus, tram, and trolley-bus. The solution focused on examining the individual transport services and their interconnection at relevant interchange points.

  6. Optimization of 1,2,5-Thiadiazole Carbamates as Potent and Selective ABHD6 Inhibitors

    PubMed Central

    Patel, Jayendra Z.; Nevalainen, Tapio J.; Savinainen, Juha R.; Adams, Yahaya; Laitinen, Tuomo; Runyon, Robert S.; Vaara, Miia; Ahenkorah, Stephen; Kaczor, Agnieszka A.; Navia-Paldanius, Dina; Gynther, Mikko; Aaltonen, Niina; Joharapurkar, Amit A.; Jain, Mukul R.; Haka, Abigail S.; Maxfield, Frederick R.; Laitinen, Jarmo T.; Parkkari, Teija

    2015-01-01

    At present, inhibitors of α/β-hydrolase domain 6 (ABHD6) are viewed as a promising approach to treat inflammation and metabolic disorders. This article describes the optimization of 1,2,5-thiadiazole carbamates as ABHD6 inhibitors. Altogether, 34 compounds were synthesized and their inhibitory activity was tested using lysates of HEK293 cells transiently expressing human ABHD6 (hABHD6). Among the compound series, 4-morpholino-1,2,5-thiadiazol-3-yl cyclooctyl(methyl)carbamate (JZP-430, 55) potently and irreversibly inhibited hABHD6 (IC50 44 nM) and showed good selectivity (∼230 fold) over fatty acid amide hydrolase (FAAH) and lysosomal acid lipase (LAL), the main off-targets of related compounds. Additionally, activity-based protein profiling (ABPP) indicated that compound 55 (JZP-430) displayed good selectivity among the serine hydrolases of mouse brain membrane proteome. PMID:25504894

  7. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
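    The collect-then-generate-selector scheme described in this record can be sketched, in miniature, as a runtime dispatcher; the two implementations, the input regimes, and the nearest-size dispatch rule below are illustrative assumptions, not the patented mechanism:

```python
import timeit

def build_selector(implementations, sample_inputs):
    """Time every implementation on a sample input from each input regime,
    record the fastest per input size, and return a dispatcher that calls
    the implementation benchmarked on the nearest size."""
    best_for_size = {}
    for make_input in sample_inputs:
        arg = make_input()
        timings = {name: timeit.timeit(lambda f=f: f(arg), number=50)
                   for name, f in implementations.items()}
        best_for_size[len(arg)] = min(timings, key=timings.get)

    def dispatch(arg):
        # choose the implementation benchmarked on the nearest input size
        size = min(best_for_size, key=lambda s: abs(s - len(arg)))
        return implementations[best_for_size[size]](arg)
    return dispatch, best_for_size

# Two hypothetical sum-of-squares implementations and two input regimes
impls = {"gen": lambda xs: sum(x * x for x in xs),
         "map": lambda xs: sum(map(lambda x: x * x, xs))}
dispatch, table = build_selector(impls, [lambda: list(range(10)),
                                         lambda: list(range(10_000))])
```

Since both implementations compute the same result, the dispatcher's answer is correct regardless of which one the benchmark favors.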

  8. Optimal channel selection for analysis of EEG-sleep patterns of neonates.

    PubMed

    Piryatinska, Alexandra; Woyczynski, Wojbor A; Scher, Mark S; Loparo, Kenneth A

    2012-04-01

    This paper extends our previous work on automated detection and classification of neonate EEG sleep stages. In [19] we adapted and integrated a range of computational, mathematical and statistical tools for the analysis of neonatal electroencephalogram (EEG) sleep recordings with the aim of facilitating the assessment of neonatal brain maturation and dismaturity by studying the structure and temporal patterns of their sleep. That work relied on algorithms using a single channel of EEG. The present paper builds on our previous work by incorporating a larger selection of EEG channels that capture both the spatial distribution and temporal patterns of EEG during sleep. Using a multivariate analysis approach, we obtain the "optimal" selection of the EEG channels and characteristics that are most suitable for EEG sleep state separation. Copyright © 2011. Published by Elsevier Ireland Ltd.

  9. Optimal Intermittence in Search Strategies under Speed-Selective Target Detection

    NASA Astrophysics Data System (ADS)

    Campos, Daniel; Méndez, Vicenç; Bartumeus, Frederic

    2012-01-01

    Random search theory has been previously explored for both continuous and intermittent scanning modes with full target detection capacity. Here we present a new class of random search problems in which a single searcher performs flights of random velocities, the detection probability when it passes over a target location being conditioned to the searcher speed. As a result, target detection involves an N-passage process for which the mean search time is here analytically obtained through a renewal approximation. We apply the idea of speed-selective detection to random animal foraging since a fast movement is known to significantly degrade perception abilities in many animals. We show that speed-selective detection naturally introduces an optimal level of behavioral intermittence in order to solve the compromise between fast relocations and target detection capability.

  10. Burnout and job performance: the moderating role of selection, optimization, and compensation strategies.

    PubMed

    Demerouti, Evangelia; Bakker, Arnold B; Leiter, Michael

    2014-01-01

    The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance.

  11. Optimization of the excitation light sheet in selective plane illumination microscopy

    PubMed Central

    Gao, Liang

    2015-01-01

    Selective plane illumination microscopy (SPIM) allows rapid 3D live fluorescence imaging on biological specimens with high 3D spatial resolution, good optical sectioning capability and minimal photobleaching and phototoxic effect. SPIM gains its advantage by confining the excitation light near the detection focal plane, and its performance is determined by the ability to create a thin, large and uniform excitation light sheet. Several methods have been developed to create such an excitation light sheet for SPIM. However, each method has its own strengths and weaknesses, and tradeoffs must be made among different aspects in SPIM imaging. In this work, we present a strategy to select the excitation light sheet among the latest SPIM techniques, and to optimize its geometry based on spatial resolution, field of view, optical sectioning capability, and the sample to be imaged. Besides the light sheets discussed in this work, the proposed strategy is also applicable to estimate the SPIM performance using other excitation light sheets. PMID:25798312

  12. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for detection of land and water features due to the information contained in reflectance signatures such as the bi-directional reflectance distribution function or BRDF. The reflectance signature directly shows the relative absorption and backscattering features of targets. These features can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. In real-time detection applications, processing of hyperspectral data can be an important tool, and optimal band selection is thus important in order to select the essential bands using the absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of the contrast (using only one band out of all possible combinations available within a hyperspectral image) is generalized in order to consider all possible combinations of wavelength-dependent contrasts in hyperspectral images. The inflection (defined here as an approximation of the second derivative) is also used to enhance the variations in the reflectance spectra, as well as in the contrast spectra, in order to assist in optimal band selection. The results of the selection in terms of target detection (false alarms and missed detections) are also compared with a previous feature-detection method, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel. The sensor is mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore
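    A minimal sketch of contrast-driven band selection, assuming a simple two-band contrast C = (Ri - Rj)/(Ri + Rj) and an exhaustive search over band pairs; the paper's generalized multi-band contrast and inflection terms are not reproduced here:

```python
import itertools

def best_band_pair(target_spectra, background_spectra):
    """Score every band pair by the separation between the mean target and
    background two-band contrasts C = (Ri - Rj)/(Ri + Rj); return the pair
    giving the largest separation (i.e., the easiest detection)."""
    def mean_spectrum(spectra):
        n = len(spectra)
        return [sum(px[b] for px in spectra) / n
                for b in range(len(spectra[0]))]

    t = mean_spectrum(target_spectra)
    b = mean_spectrum(background_spectra)

    def contrast(s, i, j):
        return (s[i] - s[j]) / (s[i] + s[j])

    pairs = itertools.combinations(range(len(t)), 2)
    return max(pairs, key=lambda ij: abs(contrast(t, *ij) - contrast(b, *ij)))

# Synthetic example: band 2 absorbs strongly in the target but not the background
target = [[0.45, 0.44, 0.05, 0.46, 0.45], [0.43, 0.46, 0.06, 0.44, 0.47]]
background = [[0.45, 0.44, 0.45, 0.46, 0.45], [0.43, 0.46, 0.44, 0.44, 0.47]]
pair = best_band_pair(target, background)
```

The selected pair straddles the absorption feature, which is exactly the behavior band selection exploits for weathered-oil signatures.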

  13. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has led to the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used in order to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi-square feature selection methods.
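    The ACO selection loop can be sketched with a toy binary variant; the pheromone-to-probability mapping, the update rule, and the stand-in fitness function below are illustrative assumptions (in the paper, subsets are scored by classifier accuracy on web page data):

```python
import random

def aco_feature_select(n_features, fitness, n_ants=20, n_iters=30,
                       rho=0.1, seed=0):
    """Toy binary ant-colony feature selection: each ant includes feature j
    with probability tau[j]/(tau[j]+1); pheromone evaporates each iteration
    and is deposited on the best subset found so far."""
    rng = random.Random(seed)
    tau = [1.0] * n_features                  # pheromone level per feature
    best_subset, best_fit = [], float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            subset = [j for j in range(n_features)
                      if rng.random() < tau[j] / (tau[j] + 1.0)]
            f = fitness(subset)
            if f > best_fit:
                best_subset, best_fit = subset, f
        tau = [(1.0 - rho) * t for t in tau]  # evaporation
        for j in best_subset:                 # reinforcement
            tau[j] += 2.0 * rho
    return best_subset, best_fit

# Stand-in fitness: features 1 and 3 are informative, extras carry a small cost
useful = {1, 3}
fitness = lambda s: len(useful & set(s)) - 0.1 * len(set(s) - useful)
subset, fit = aco_feature_select(6, fitness)
```

Evaporation lets the colony forget early, poor choices, while reinforcement concentrates the search on features that keep appearing in good subsets.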

  14. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780

  15. An Ant Colony Optimization Based Feature Selection for Web Page Classification

    PubMed Central

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods. PMID:25136678

  16. Selective mapping: a strategy for optimizing the construction of high-density linkage maps.

    PubMed Central

    Vision, T J; Brown, D G; Shmoys, D B; Durrett, R T; Tanksley, S D

    2000-01-01

    Historically, linkage mapping populations have consisted of large, randomly selected samples of progeny from a given pedigree or cell lines from a panel of radiation hybrids. We demonstrate that, to construct a map with high genome-wide marker density, it is neither necessary nor desirable to genotype all markers in every individual of a large mapping population. Instead, a reduced sample of individuals bearing complementary recombinational or radiation-induced breakpoints may be selected for genotyping subsequent markers from a large, but sparsely genotyped, mapping population. Choosing such a sample can be reduced to a discrete stochastic optimization problem for which the goal is a sample with breakpoints spaced evenly throughout the genome. We have developed several different methods for selecting such samples and have evaluated their performance on simulated and actual mapping populations, including the Lister and Dean Arabidopsis thaliana recombinant inbred population and the GeneBridge 4 human radiation hybrid panel. Our methods quickly and consistently find much-reduced samples with map resolution approaching that of the larger populations from which they are derived. This approach, which we have termed selective mapping, can facilitate the production of high-quality, high-density genome-wide linkage maps. PMID:10790413
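    The goal of a sample whose breakpoints are spaced evenly throughout the genome admits a simple greedy sketch: repeatedly add the individual whose breakpoints most shrink the largest remaining gap. This is one plausible heuristic, not necessarily among the selection methods the authors evaluated:

```python
def select_mapping_sample(individuals, genome_length, sample_size):
    """Greedy sketch of selective mapping: repeatedly add the individual
    whose breakpoints most reduce the largest gap between consecutive
    breakpoints pooled across the selected sample."""
    def max_gap(points):
        pts = sorted(set([0.0, genome_length] + points))
        return max(b - a for a, b in zip(pts, pts[1:]))

    selected, pooled = [], []
    remaining = list(range(len(individuals)))
    for _ in range(min(sample_size, len(remaining))):
        best = min(remaining, key=lambda k: max_gap(pooled + individuals[k]))
        selected.append(best)
        pooled += individuals[best]
        remaining.remove(best)
    return selected

# Toy example: a 100-unit genome and four genotyped individuals,
# each represented by the positions of its recombination breakpoints
inds = [[50.0], [10.0, 90.0], [48.0, 52.0], [25.0, 75.0]]
sample = select_mapping_sample(inds, 100.0, 2)
```

The greedy choice favors individuals whose breakpoints complement those already pooled, mirroring the complementary-breakpoint idea in the abstract.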

  17. Optimal sequence selection in proteins of known structure by simulated evolution.

    PubMed Central

    Hellinga, H W; Richards, F M

    1994-01-01

    Rational design of protein structure requires the identification of optimal sequences to carry out a particular function within a given backbone structure. A general solution to this problem requires that a potential function describing the energy of the system as a function of its atomic coordinates be minimized simultaneously over all available sequences and their three-dimensional atomic configurations. Here we present a method that explicitly minimizes a semiempirical potential function simultaneously in these two spaces, using a simulated annealing approach. The method takes the fixed three-dimensional coordinates of a protein backbone and stochastically generates possible sequences through the introduction of random mutations. The corresponding three-dimensional coordinates are constructed for each sequence by "redecorating" the backbone coordinates of the original structure with the corresponding side chains. These are then allowed to vary in their structure by random rotations around free torsional angles to generate a stochastic walk in configurational space. We have named this method protein simulated evolution, because, in loose analogy with natural selection, it randomly selects for allowed solutions in the sequence of a protein subject to the "selective pressure" of a potential function. Energies predicted by this method for sequences of a small group of residues in the hydrophobic core of the phage lambda cI repressor correlate well with experimentally determined biological activities. This "genetic selection by computer" approach has potential applications in protein engineering, rational protein design, and structure-based drug discovery. PMID:8016069
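    The stochastic walk with selective pressure described here is, at its core, a Metropolis-style simulated annealing loop. A generic sketch, in which the toy alphabet, energy function, and mutation operator stand in for the paper's semiempirical potential and side-chain torsion moves:

```python
import math
import random

def simulated_evolution(energy, mutate, seq0, t0=1.0, cooling=0.995,
                        steps=5000, seed=1):
    """Metropolis/simulated-annealing loop over sequences: a random mutation
    is accepted if it lowers the energy, or with probability exp(-dE/T)
    otherwise, while the temperature T is gradually cooled."""
    rng = random.Random(seed)
    seq, e = list(seq0), energy(seq0)
    t = t0
    for _ in range(steps):
        cand = mutate(seq, rng)
        de = energy(cand) - e
        if de <= 0 or rng.random() < math.exp(-de / t):
            seq, e = cand, e + de
        t *= cooling
    return "".join(seq), e

# Toy stand-ins: the "energy" counts deviations from an all-leucine core
alphabet = "ALVIF"
target = "LLLLLLLL"
energy = lambda s: sum(a != b for a, b in zip(s, target))

def mutate(seq, rng):
    cand = list(seq)
    cand[rng.randrange(len(cand))] = rng.choice(alphabet)
    return cand

best, e = simulated_evolution(energy, mutate, "AFVIAFVI")
```

Early on, the high temperature lets the walk accept unfavorable mutations and escape local minima; as T cools, the selective pressure of the potential dominates.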

  18. Adaptive Optimal Control Using Frequency Selective Information of the System Uncertainty With Application to Unmanned Aircraft.

    PubMed

    Maity, Arnab; Hocht, Leonhard; Heise, Christian; Holzapfel, Florian

    2016-11-28

    A new efficient adaptive optimal control approach is presented in this paper, based on the indirect model reference adaptive control (MRAC) architecture, for improvement of the adaptation and tracking performance of an uncertain system. The system accounts for both matched and unmatched unknown uncertainties, which can act as plant failures as well as input effectiveness failures or damage. For adaptation of the unknown parameters of these uncertainties, a frequency selective learning approach is used. Its idea is to compute a filtered expression of the system uncertainty using multiple filters based on online instantaneous information, which is used for augmentation of the update law. It is capable of adjusting to a sudden change in system dynamics without depending on high adaptation gains, and can satisfy exponential parameter error convergence under certain conditions in the presence of structured matched and unmatched uncertainties as well. Additionally, the controller of the MRAC system is designed using a new optimal control method: a new linear quadratic regulator-based optimal control formulation for both output regulation and command tracking problems, which provides a closed-form control solution. The proposed overall approach is applied to the control of the lateral dynamics of an unmanned aircraft to show its effectiveness.

  19. An optimal setup planning selection approach in a complex product machining process

    NASA Astrophysics Data System (ADS)

    Zhu, Fang

    2011-10-01

    Setup planning has a very important influence on product quality in a Complex Product Machining Process (CPMP). Part production in a CPMP involves multiple setup plans, which leads to variation propagation and to extreme complexity in final product quality. Current approaches to setup planning in a CPMP are experience-based, adopting a higher machining process cost to ensure final product quality, and most are limited to a single machining process. This work attempts to solve those challenging problems and aims to develop a method to obtain an optimal setup plan in a CPMP, one which satisfies the quality specifications and minimizes the expected value of the sum of machining costs. To this end, a machining process model is first established to describe the variation propagation effect of a setup plan throughout all stages in a CPMP, and then a quantitative setup plan evaluation method driven by cost constraints is proposed to clarify what constitutes optimality of setup plans. Based on the above procedures, an optimal setup plan is obtained through a dynamic programming solver. Finally, a case study is provided to illustrate the validity and significance of the proposed setup planning selection method.

  20. Pareto archived dynamically dimensioned search with hypervolume-based selection for multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Masoud; Tolson, Bryan

    2013-12-01

    Pareto archived dynamically dimensioned search (PA-DDS) is a parsimonious multi-objective optimization algorithm with only one parameter to diminish the user's effort for fine-tuning algorithm parameters. This study demonstrates that hypervolume contribution (HVC) is a very effective selection metric for PA-DDS and Monte Carlo sampling-based HVC is very effective for higher dimensional problems (five objectives in this study). PA-DDS with HVC performs comparably to algorithms commonly applied to water resources problems (ɛ-NSGAII and AMALGAM under recommended parameter values). Comparisons on the CEC09 competition show that with sufficient computational budget, PA-DDS with HVC performs comparably to 13 benchmark algorithms and shows improved relative performance as the number of objectives increases. Lastly, it is empirically demonstrated that the total optimization runtime of PA-DDS with HVC is dominated (90% or higher) by solution evaluation runtime whenever evaluation exceeds 10 seconds/solution. Therefore, optimization algorithm runtime associated with the unbounded archive of PA-DDS is negligible in solving computationally intensive problems.
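    For two objectives, the hypervolume contribution used for selection has a simple closed form once the front is sorted (the Monte Carlo estimator mentioned above is only needed for higher-dimensional problems). A sketch for a minimization front, with an illustrative reference point:

```python
def hv_contributions(front, ref):
    """Exclusive hypervolume contribution of each point on a 2-D Pareto
    front (both objectives minimized), relative to reference point `ref`.
    Assumes the points are mutually non-dominated."""
    pts = sorted(front)               # ascending f1 implies descending f2
    contrib = {}
    for i, (x, y) in enumerate(pts):
        x_next = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        y_prev = pts[i - 1][1] if i > 0 else ref[1]
        # rectangle dominated only by this point
        contrib[(x, y)] = (x_next - x) * (y_prev - y)
    return contrib

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
c = hv_contributions(front, ref=(5.0, 5.0))
```

Selecting by largest HVC keeps the points whose removal would shrink the dominated hypervolume the most, which is what makes HVC an effective archive-selection metric.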

  1. Exemplar-Based Policy with Selectable Strategies and its Optimization Using GA

    NASA Astrophysics Data System (ADS)

    Ikeda, Kokolo; Kobayashi, Shigenobu; Kita, Hajime

    As an approach to dynamic control problems and decision making problems, usually formulated as Markov Decision Processes (MDPs), we focus on direct policy search (DPS), where a policy is represented by a model with parameters, and the parameters are optimized so as to maximize the evaluation function obtained by applying the parameterized policy to the problem. In this paper, a novel framework for DPS, exemplar-based policy optimization using a genetic algorithm (EBP-GA), is presented and analyzed. In this approach, the policy is composed of a set of virtual exemplars and a case-based action selector, and the set of exemplars is selected and evolved by a genetic algorithm. Here, an exemplar is a piece of real or virtual, free-style and suggestive information, such as ``take the action A at the state S'' or ``the state S1 is better to attain than S2''. One advantage of EBP-GA is the generalization and localization ability of its policy expression, based on case-based reasoning methods. Another advantage is that both the introduction of prior knowledge and the extraction of knowledge after optimization are relatively straightforward. These advantages are confirmed through the proposal of two new policy expressions, experiments on two different problems, and their analysis.

  2. Design-Optimization and Material Selection for a Proximal Radius Fracture-Fixation Implant

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Xie, X.; Arakere, G.; Grujicic, A.; Wagner, D. W.; Vallejo, A.

    2010-11-01

    The problem of optimal size, shape, and placement of a proximal radius-fracture fixation-plate is addressed computationally using a combined finite-element/design-optimization procedure. To expand the set of physiological loading conditions experienced by the implant during normal everyday activities of the patient, beyond those typically covered by pre-clinical implant-evaluation testing procedures, the case of a wheelchair push exertion is considered. Toward that end, a musculoskeletal multi-body inverse-dynamics analysis is carried out for a human propelling a wheelchair. The results obtained are used as input to a finite-element structural analysis for evaluation of the maximum stress and fatigue life of the parametrically defined implant design. While optimizing the design of the radius-fracture fixation-plate, realistic functional requirements pertaining to the attainment of the required device safety factor and longevity/lifecycle were considered. It is argued that the type of analyses employed in the present work should be: (a) used to complement the standard experimental pre-clinical implant-evaluation tests (tests which normally include a limited number of daily-living physiological loading conditions and which rely on single pass/fail outcomes/decisions with respect to a set of lower-bound implant-performance criteria) and (b) integrated early in the implant design and material/manufacturing-route selection process.

  3. Optimizing the StackSlide setup and data selection for continuous-gravitational-wave searches in realistic detector data

    NASA Astrophysics Data System (ADS)

    Shaltev, M.

    2016-02-01

    The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.

  4. Optimality and stability of symmetric evolutionary games with applications in genetic selection.

    PubMed

    Huang, Yuanyuan; Hao, Yiping; Wang, Min; Zhou, Wen; Wu, Zhijun

    2015-06-01

    Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model for example the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first and second order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We also derive more efficient computational methods for the evaluation of the conditions than conventional approaches. We demonstrate how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games including some in the study of genetic disorders.

  5. Cancer microarray data feature selection using multi-objective binary particle swarm optimization algorithm.

    PubMed

    Annavarapu, Chandra Sekhara Rao; Dara, Suresh; Banka, Haider

    2016-01-01

    Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gene expression data. Due to its high dimensionality, a fast heuristic-based pre-processing technique is employed to remove some of the crude domain features from the initial feature set. Since these pre-processed and reduced features are still high dimensional, the proposed MOBPSO algorithm is used for finding further feature subsets. The objective functions are suitably modeled by optimizing two conflicting objectives, i.e., the cardinality of the feature subsets and the distinctive capability of those selected subsets. As these two objective functions are conflicting in nature, they are well suited to multi-objective modeling. The experiments are carried out on benchmark gene expression datasets, i.e., Colon, Lymphoma, and Leukaemia, available in the literature. The performance of the selected feature subsets is assessed by their classification accuracy, validated using 10-fold cross-validation. A detailed comparative study is also made to show the competitiveness of the proposed algorithm.
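    With two conflicting objectives, candidate feature subsets can only be ranked by Pareto dominance. A minimal sketch of the dominance filter, using hypothetical (subset size, error rate) pairs rather than the paper's data:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Keep only the solutions no other solution dominates (the Pareto front)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (feature-subset size, classification error) pairs
sols = [(10, 0.08), (25, 0.05), (40, 0.05), (12, 0.12)]
front = non_dominated(sols)
```

Here (40, 0.05) is dropped because (25, 0.05) matches its error with fewer features, and (12, 0.12) is dropped because (10, 0.08) beats it on both objectives.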

  6. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compound strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, and the weighted subregions' matching scores then generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  7. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce in detail the process of feature extraction and representation based on the scale invariant feature transform (SIFT). Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compound strategy combining OPDF and MPDF to further select the optimal subfeatures. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to learn each subregion's weight, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.
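    The fusion step amounts to a weighted combination of per-subregion matching scores. A minimal sketch under stated assumptions: in the paper the weights come from a particle swarm optimization stage, whereas here they are simply given, and `threshold` is an illustrative value, not the paper's operating point.

```python
def fused_score(subregion_scores, weights):
    """Combine per-subregion matching scores into one decision score.
    Weights are normalized so the fused score stays in the scores' range."""
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, subregion_scores)) / total

def decide(subregion_scores, weights, threshold=0.5):
    """Accept the identity match when the fused score reaches the threshold."""
    return fused_score(subregion_scores, weights) >= threshold
```

    For example, `fused_score([1.0, 0.0], [3, 1])` weights a perfectly matching subregion three times as heavily as a non-matching one and yields 0.75.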

  8. Analysis Methodology for Optimal Selection of Ground Station Site in Space Missions

    NASA Astrophysics Data System (ADS)

    Nieves-Chinchilla, J.; Farjas, M.; Martínez, R.

    2013-12-01

    Optimization of ground station sites is especially important in complex missions that include several small satellites (clusters or constellations), such as the QB50 project, where one ground station would be able to track several space vehicles, even simultaneously. In this regard, the design of the communication system has to carefully take into account the ground station site and the relevant signal phenomena, which depend on the frequency band. When proposing the optimal location of the ground station, these aspects become even more relevant to establishing a trusted communication link when the ground segment is sited in urban areas and/or low orbits are selected for the space segment. In addition, updated cartography with high-resolution data of the location and its surroundings helps to develop siting recommendations for tracking space vehicles and hence to improve effectiveness. The objectives of this analysis methodology are: completion of the cartographic information; modelling of the obstacles that hinder communication between the ground and space segments; and representation, in the generated 3D scene, of the degree of signal/noise impairment caused by the phenomena that interfere with communication. The integration of new geographic data capture technologies, such as 3D laser scanning, shows that further optimization of the antenna elevation mask, at its AOS and LOS azimuths along the visible horizon, maximizes visibility time with space vehicles. Furthermore, from the captured three-dimensional point cloud, specific information is selected and, using 3D modeling techniques, the 3D scene of the antenna location site and its surroundings is generated. The resulting 3D model reveals nearby obstacles related to the cartographic conditions, such as mountain formations and buildings, as well as any additional obstacles that interfere with the operational quality of the antenna (other antennas and electronic devices that emit or receive in the same bandwidth).

  9. Optimal energy window selection of a CZT-based small-animal SPECT for quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Park, Su-Jin; Yu, A. Ram; Choi, Yun Young; Kim, Kyeong Min; Kim, Hee-Joung

    2015-05-01

    Cadmium zinc telluride (CZT)-based small-animal single-photon emission computed tomography (SPECT) has desirable characteristics such as superior energy resolution, but data acquisition for SPECT imaging has typically been performed with a conventional energy window. The aim of this study was to determine the optimal energy window settings for technetium-99m (99mTc) and thallium-201 (201Tl), the most commonly used isotopes in SPECT imaging, for quantitative accuracy with CZT-based small-animal SPECT. We experimentally investigated quantitative measurements of primary count rate, contrast-to-noise ratio (CNR), and scatter fraction (SF) across various energy window settings using Triumph X-SPECT. Two types of energy window settings were considered: an on-peak window and an off-peak window. In the on-peak setting, the energy window was centered on the photopeak. In the off-peak setting, the ratio of the energy difference between the photopeak and the lower threshold to that between the photopeak and the higher threshold varied from 4:6 to 3:7. In addition, the energy-window width varied from 5% to 20% for 99mTc and from 10% to 30% for 201Tl. The results of this study enabled us to determine the optimal energy window for each isotope in terms of primary count rate, CNR, and SF: we selected the window that increases the primary count rate and CNR while decreasing SF. For 99mTc SPECT imaging, the energy window of 138-145 keV with a 5% width and off-peak ratio of 3:7 was determined to be optimal. For 201Tl SPECT imaging, the energy window of 64-85 keV with a 30% width and off-peak ratio of 3:7 was selected as optimal. Our results demonstrated that the energy window should be chosen carefully, based on quantitative measurements, in order to take advantage of the desirable characteristics of CZT-based small-animal SPECT. These results provided valuable reference information for the establishment of new protocol for CZT
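    The selection rule (higher primary count rate and CNR are better, lower SF is better) can be expressed as a simple combined ranking over candidate windows. This is a hypothetical sketch of such a rule, not the authors' procedure, and the candidate tuples in the usage below are illustrative, not measured values.

```python
def select_window(candidates):
    """Pick the energy window with the best combined rank.
    Each candidate is (name, primary_count_rate, cnr, sf); primary counts
    and CNR are ranked descending, scatter fraction ascending."""
    def rank(values, reverse):
        order = sorted(values, reverse=reverse)
        return {v: i for i, v in enumerate(order)}
    primaries = rank([c[1] for c in candidates], reverse=True)
    cnrs = rank([c[2] for c in candidates], reverse=True)
    sfs = rank([c[3] for c in candidates], reverse=False)
    return min(candidates,
               key=lambda c: primaries[c[1]] + cnrs[c[2]] + sfs[c[3]])
```

    A window that is merely second-best on primary counts but best on both CNR and SF wins over a window that maximizes counts at the cost of heavy scatter.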

  10. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    SciTech Connect

    Stassi, D.; Ma, H.; Schmidt, T. G.; Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D.

    2016-01-15

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three
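    The phase-selection logic, aggregating a per-slice image-quality score across slices while skipping slices that the consistency metric rejects, can be sketched as follows. This is a simplified stand-in: the paper's per-slice score combines vessel circularity and edge strength, whereas here it is an opaque scalar, and the function assumes at least one slice is flagged as containing a vessel.

```python
def best_phase(iq, vessel_present):
    """Pick the reconstruction phase with the highest mean image-quality
    score. iq[phase] is a list of per-slice IQ scores for that phase;
    vessel_present[s] flags whether slice s contains a through-plane
    vessel (slices without one are ignored in the aggregate)."""
    def aggregate(scores):
        kept = [s for s, keep in zip(scores, vessel_present) if keep]
        return sum(kept) / len(kept)
    return max(iq, key=lambda phase: aggregate(iq[phase]))
```

    Because each phase is scored independently, the routine works on any sampling of phases, matching the paper's point that no interphase processing is required.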

  11. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    PubMed

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-09-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
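    Extracting the Pareto front of optimal trade-offs from a set of per-metric fitting errors is mechanically straightforward. A minimal sketch, treating all objectives as errors to minimize; the three-metric error vectors in the usage are illustrative, not the paper's data.

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors
    (every objective is an error to be minimized)."""
    def dominated(p, q):
        # q dominates p: q is no worse everywhere and strictly better somewhere
        return all(b <= a for a, b in zip(p, q)) and any(b < a for a, b in zip(p, q))
    return [p for p in points
            if not any(dominated(p, q) for q in points if q != p)]
```

    Comparing the fronts produced by each candidate motility model, each parameterized independently, is what allows the models to be ranked against several metrics simultaneously.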

  12. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection

    PubMed Central

    Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-01-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal

  13. Optimized diffusion of buck semen for saving genetic variability in selected dairy goat populations

    PubMed Central

    2011-01-01

    Background: Current research on quantitative genetics has provided efficient guidelines for the sustainable management of selected populations: genetic gain is maximized while the loss of genetic diversity is maintained at a reasonable rate. However, actual selection schemes are complex, especially for large domestic species, and they have to take into account many operational constraints. This paper deals with the actual selection of dairy goats, where the challenge is to optimize diffusion of buck semen in the field. Three objectives are considered simultaneously: i) natural service buck replacement (NSR); ii) goat replacement (GR); iii) semen distribution of young bucks to be progeny-tested. An appropriate optimization method is developed, which involves five analytical steps. Solutions are obtained by simulated annealing and the corresponding algorithms are presented in detail. Results: The whole procedure was tested on two French goat populations (Alpine and Saanen breeds) and the results presented in the abstract were based on the average of the two breeds. The procedure induced an immediate acceleration of genetic gain in comparison with the current annual genetic gain (0.15 genetic standard deviation unit), as shown by two facts. First, the genetic level of replacement natural service (NS) bucks was predicted, 1.5 years ahead at the moment of reproduction, to be equivalent to that of the progeny-tested bucks in service, born from the current breeding scheme. Second, the genetic level of replacement goats was much higher than that of their dams (0.86 unit), which represented 6 years of selection, although dams were only 3 years older than their replacement daughters. This improved genetic gain could be achieved while decreasing inbreeding coefficients substantially. The inbreeding coefficient (%) of NS bucks was lower than that of the progeny-tested bucks (-0.17). Goats were also less inbred than their dams (-0.67). Conclusions: It was possible to account for
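    The simulated annealing core used to obtain solutions can be sketched generically. This is a toy acceptance loop, not the paper's five-step procedure; the cost function, cooling schedule, and integer example in the usage are all illustrative assumptions.

```python
import math
import random

def simulated_annealing(cost, neighbour, start, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-delta/T), and cool T
    geometrically. Returns the best state encountered."""
    rng = random.Random(seed)
    state, best = start, start
    t = t0
    for _ in range(steps):
        cand = neighbour(state, rng)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= cooling
    return best
```

    In the breeding-scheme setting, `state` would encode an assignment of semen doses and replacements, and `cost` would penalize violated operational constraints and lost genetic gain; here a one-dimensional integer search suffices to show the mechanics.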

  14. MicroRNAs: Non-coding fine tuners of receptor tyrosine kinase signalling in cancer.

    PubMed

    Donzelli, Sara; Cioce, Mario; Muti, Paola; Strano, Sabrina; Yarden, Yosef; Blandino, Giovanni

    2016-02-01

    Emerging evidence points to a crucial role for non-coding RNAs in modulating homeostatic signaling under physiological and pathological conditions. MicroRNAs, the best-characterized non-coding RNAs to date, can exquisitely integrate spatial and temporal signals in complex networks, thereby conferring specificity and sensitivity on tissue responses to changes in the microenvironment. MicroRNAs appear as preferential partners for Receptor Tyrosine Kinases (RTKs) in mediating signaling under stress conditions. Stress signaling can be especially relevant to disease. Here we focus on the ability of microRNAs to mediate RTK signaling in cancer, by acting as both tumor suppressors and oncogenes. We will provide a few general examples of microRNAs modulating specific tumorigenic functions downstream of RTK signaling and integrating oncogenic signals from multiple RTKs. A special focus will be devoted to epidermal growth factor receptor (EGFR) signaling, a system offering relatively rich information. We will explore the role of selected microRNAs as bidirectional modulators of EGFR functions in cancer cells. In addition, we will present the emerging evidence for microRNAs being specifically modulated by oncogenic EGFR mutants, and we will discuss how this impinges on EGFRmut-driven chemoresistance, which fits into tumor-heterogeneity-driven cancer progression. Finally, we discuss how other non-coding RNA species are emerging as important modulators of cancer progression and why the scenario depicted herein is destined to become increasingly complex in the future.

  15. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field of view imaging, and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms; two which are gradient-based, concurrent methods using first- and second-order derivatives, respectively; and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach as computational speed, experimental robustness, and image quality is key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  16. Near-optimal experimental design for model selection in systems biology

    PubMed Central

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M.

    2013-01-01

    Motivation: Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. Results: We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Availability: Toolbox ‘NearOED’ available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org). Contact: busettoa@inf.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23900189
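    The near-optimality guarantee cited above rests on greedy selection for a monotone submodular objective, which is within a (1 - 1/e) factor of the optimal design. Below is a generic sketch of that greedy loop; the coverage-style gain and the readout/time-point names are toy stand-ins for the paper's information measure, not its actual criterion.

```python
def greedy_design(candidates, gain, k):
    """Greedily assemble a k-experiment design by repeatedly adding the
    candidate with the largest marginal gain. For a monotone submodular
    gain, this is provably within (1 - 1/e) of the optimal design."""
    chosen, remaining = [], list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy stand-in for the information measure: each candidate readout/time-point
# pair discriminates between some set of rival model pairs, and a design's
# gain is how many pairs it can tell apart in total.
DISCRIMINATES = {
    "readout_A_t1": {1, 2},
    "readout_A_t2": {2, 3},
    "readout_B_t1": {4},
    "readout_B_t2": {1, 2, 3},
}

def coverage_gain(design):
    covered = set()
    for c in design:
        covered |= DISCRIMINATES[c]
    return len(covered)
```

    With a budget of two experiments, the greedy loop first takes the broadly informative readout and then the one covering the remaining model pair, rather than two overlapping readouts.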

  17. A strategy that iteratively retains informative variables for selecting optimal variable subset in multivariate calibration.

    PubMed

    Yun, Yong-Huan; Wang, Wei-Ting; Tan, Min-Li; Liang, Yi-Zeng; Li, Hong-Dong; Cao, Dong-Sheng; Lu, Hong-Mei; Xu, Qing-Song

    2014-01-07

    With today's high-dimensional datasets, creating effective methods that can select an optimal variable subset is a great challenge. In this study, a strategy that considers the possible interaction effects among variables through random combinations is proposed, called iteratively retaining informative variables (IRIV). The variables are classified into four categories: strongly informative, weakly informative, uninformative, and interfering. On this basis, IRIV retains both the strongly and weakly informative variables in every iterative round until no uninformative or interfering variables remain. Three datasets were employed to investigate the performance of IRIV coupled with partial least squares (PLS). The results show that IRIV is a good alternative variable selection strategy when compared with three outstanding and frequently used variable selection methods: genetic algorithm-PLS, Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), and competitive adaptive reweighted sampling (CARS). The MATLAB source code of IRIV can be freely downloaded for academic research at http://code.google.com/p/multivariate-calibration/downloads/list. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Optimization methods for selecting founder individuals for captive breeding or reintroduction of endangered species.

    PubMed

    Miller, Webb; Wright, Stephen J; Zhang, Yu; Schuster, Stephan C; Hayes, Vanessa M

    2010-01-01

    Methods from genetics and genomics can be employed to help save endangered species. One potential use is to provide a rational strategy for selecting a population of founders for a captive breeding program. The hope is to capture most of the available genetic diversity that remains in the wild population, to provide a safe haven where representatives of the species can be bred, and eventually to release the progeny back into the wild. However, the founders are often selected based on a random-sampling strategy whose validity is based on unrealistic assumptions. Here we outline an approach that starts by using cutting-edge genome sequencing and genotyping technologies to objectively assess the available genetic diversity. We show how combinatorial optimization methods can be applied to these data to guide the selection of the founder population. In particular, we develop a mixed-integer linear programming technique that identifies a set of animals whose genetic profile is as close as possible to specified abundances of alleles (i.e., genetic variants), subject to constraints on the number of founders and their genders and ages.
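    As an illustration of the combinatorial core only (not the paper's mixed-integer linear program, which additionally handles gender and age constraints), a brute-force version for a tiny candidate pool: each founder is a vector of allele counts, and we pick the fixed-size subset whose summed counts best match the target abundances.

```python
from itertools import combinations

def select_founders(genotypes, target, k):
    """Exhaustively pick k founders whose summed allele counts are closest
    (L1 distance) to the target allele abundances. A stand-in for the
    MILP formulation, practical only for small candidate pools."""
    def distance(group):
        totals = [sum(g[i] for g in group) for i in range(len(target))]
        return sum(abs(t - w) for t, w in zip(totals, target))
    return min(combinations(genotypes, k), key=distance)
```

    The MILP scales this idea to realistic pool sizes by encoding each animal as a binary decision variable with linear constraints, instead of enumerating all subsets.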

  19. Online Stimulus Optimization Rapidly Reveals Multidimensional Selectivity in Auditory Cortical Neurons

    PubMed Central

    Hancock, Kenneth E.; Sen, Kamal

    2014-01-01

    Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. By separately evaluating the amount of leverage each sound feature exerted on the overall firing rate, these findings reveal interdependencies between stimulus features as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches. PMID:24990917

  20. Identification and Optimization of the First Highly Selective GLUT1 Inhibitor BAY‐876

    PubMed Central

    Siebeneicher, Holger; Cleve, Arwed; Rehwinkel, Hartmut; Neuhaus, Roland; Heisler, Iring; Müller, Thomas; Bauser, Marcus

    2016-01-01

    Despite the long‐known fact that the facilitative glucose transporter GLUT1 is one of the key players safeguarding the increase in glucose consumption of many tumor entities even under conditions of normal oxygen supply (known as the Warburg effect), only a few endeavors have been undertaken to find a GLUT1‐selective small‐molecule inhibitor. Because other transporters of the GLUT1 family are involved in crucial processes, these transporters should not be addressed by such an inhibitor. A high‐throughput screen against a library of ∼3 million compounds was performed to find a small molecule with this challenging potency and selectivity profile. The N‐(1H‐pyrazol‐4‐yl)quinoline‐4‐carboxamides were identified as an excellent starting point for further compound optimization. After extensive structure–activity relationship explorations, single‐digit nanomolar inhibitors with a selectivity factor of >100 against GLUT2, GLUT3, and GLUT4 were obtained. The most promising compound, BAY‐876 [N⁴‐[1‐(4‐cyanobenzyl)‐5‐methyl‐3‐(trifluoromethyl)‐1H‐pyrazol‐4‐yl]‐7‐fluoroquinoline‐2,4‐dicarboxamide], showed good metabolic stability in vitro and high oral bioavailability in vivo. PMID:27552707

  1. Optimal feature selection for automated classification of FDG-PET in patients with suspected dementia

    NASA Astrophysics Data System (ADS)

    Serag, Ahmed; Wenzel, Fabian; Thiele, Frank; Buchert, Ralph; Young, Stewart

    2009-02-01

    FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as Alzheimer's disease (AD), Lewy body dementia (LBD), and Frontotemporal dementia (FTD), have been shown to induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with suspected dementia is not straightforward, since patients are imaged at different stages of progression of neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal subset of features for automated classification, in order to improve the classification accuracy of FDG-PET images in patients with suspected dementia. A novel feature selection method is proposed, and its performance is compared to existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection methods. It is demonstrated to provide high accuracy in classifying FDG-PET brain images of normal controls and dementia patients, comparable with alternative approaches, while producing a compact set of selected features.

  2. Resonance Raman enhancement optimization in the visible range by selecting different excitation wavelengths.

    PubMed

    Wang, Zhong; Li, Yuee

    2015-09-01

    Resonance enhancement of Raman spectroscopy (RS) has been used to significantly improve the sensitivity and selectivity of detection for specific components in complicated environments. Resonance RS gives more insight into biochemical structure and reactivity. In this field, selecting a proper excitation wavelength to achieve optimal resonance enhancement is vital for the study of an individual chemical/biological ingredient with a particular absorption characteristic. Raman spectra of three azo derivatives with absorption spectra in the visible range are studied under the same experimental conditions at 488, 532, and 633 nm excitations. General rules for the visible range are derived by analyzing the resonance Raman (RR) spectra of the samples. The long-wavelength edge of the absorption spectrum is a better choice for intense enhancement and the integrity of the Raman signal. The obtained results are valuable for applying RR to the selective detection of biochemical constituents whose electronic transitions take place at energies corresponding to the visible spectrum, which is much friendlier to biological samples than ultraviolet excitation.

  3. Online stimulus optimization rapidly reveals multidimensional selectivity in auditory cortical neurons.

    PubMed

    Chambers, Anna R; Hancock, Kenneth E; Sen, Kamal; Polley, Daniel B

    2014-07-02

    Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. By separately evaluating the amount of leverage each sound feature exerted on the overall firing rate, these findings reveal interdependencies between stimulus features as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches.

  4. Optimization of spectrally selective Si/SiO2 based filters for thermophotovoltaic devices

    NASA Astrophysics Data System (ADS)

    Khosroshahi, Farhad Kazemi; Ertürk, Hakan; Pınar Mengüç, M.

    2017-08-01

    Design of a spectrally selective filter based on one-dimensional Si/SiO2 layers is considered for improved performance of thermo-photovoltaic devices. Spectrally selective filters transmit only the convertible radiation from the emitter as non-convertible radiation leads to a reduction in cell efficiency due to heating. The presented Si/SiO2 based filter concept reflects the major part of the undesired range back to the emitter to minimize energy required for the process and it is adaptable to different types of cells and emitters with different temperatures since its cut-off wavelength can be tuned. While this study mainly focuses on InGaSb based thermo-photovoltaic cell, Si, GaSb, and Ga0.78In0.22As0.19Sb0.81 based cells are also examined. Transmittance of the structure is predicted by rigorous coupled wave approach. Genetic algorithm, which is a global optimization method, is used to find the best possible filter structure by considering the overall efficiency as an objective function that is maximized. The simulations show that significant enhancement in the overall system and device efficiency is possible by using such filters with TPV devices. The methodology described in this paper allows for an improved filter design procedure for selected applications.
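    A generic genetic-algorithm loop of the kind used for the filter search might look like this. It is a sketch with made-up operator settings (tournament size, mutation scale), maximizing a stand-in objective rather than the rigorous coupled-wave efficiency model; a real filter optimization would replace `fitness` with the simulated overall efficiency of a candidate layer stack.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded genetic algorithm: 3-way tournament selection,
    uniform crossover, and Gaussian mutation clipped to the bounds.
    Maximizes `fitness`; returns the best candidate ever seen."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)

    def clip(x, a, b):
        return max(a, min(b, x))

    pop = [[rng.uniform(a, b) for a, b in bounds] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            return max(rng.sample(pop, 3), key=fitness)
        children = []
        while len(children) < pop_size:
            p, q = tournament(), tournament()
            # uniform crossover, then mutate one randomly chosen gene
            child = [pi if rng.random() < 0.5 else qi for pi, qi in zip(p, q)]
            j = rng.randrange(len(child))
            child[j] = clip(child[j] + rng.gauss(0.0, 0.1 * (hi[j] - lo[j])),
                            lo[j], hi[j])
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)
    return best
```

    In the paper's setting each gene would be a Si or SiO2 layer thickness and the bounds would reflect fabricable layer dimensions.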

  5. Identification and Optimization of the First Highly Selective GLUT1 Inhibitor BAY-876.

    PubMed

    Siebeneicher, Holger; Cleve, Arwed; Rehwinkel, Hartmut; Neuhaus, Roland; Heisler, Iring; Müller, Thomas; Bauser, Marcus; Buchmann, Bernd

    2016-10-19

    Despite the long-known fact that the facilitative glucose transporter GLUT1 is one of the key players safeguarding the increase in glucose consumption of many tumor entities even under conditions of normal oxygen supply (known as the Warburg effect), only a few endeavors have been undertaken to find a GLUT1-selective small-molecule inhibitor. Because other transporters of the GLUT1 family are involved in crucial processes, these transporters should not be addressed by such an inhibitor. A high-throughput screen against a library of ∼3 million compounds was performed to find a small molecule with this challenging potency and selectivity profile. The N-(1H-pyrazol-4-yl)quinoline-4-carboxamides were identified as an excellent starting point for further compound optimization. After extensive structure-activity relationship explorations, single-digit nanomolar inhibitors with a selectivity factor of >100 against GLUT2, GLUT3, and GLUT4 were obtained. The most promising compound, BAY-876 [N4-[1-(4-cyanobenzyl)-5-methyl-3-(trifluoromethyl)-1H-pyrazol-4-yl]-7-fluoroquinoline-2,4-dicarboxamide], showed good metabolic stability in vitro and high oral bioavailability in vivo. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  6. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field of view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third belonging to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking the challenging experimental setup into consideration, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  7. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for selecting individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.
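
For reference, the three benchmark functions named above can be written compactly. These are the standard textbook definitions, each with a known global minimum (the origin for Griewank and Rastrigin, the all-ones point for Rosenbrock):

```python
import math

def griewank(x):
    """Griewank function: many regularly spaced local minima."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def rastrigin(x):
    """Rastrigin function: highly multimodal, minimum 0 at the origin."""
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def rosenbrock(x):
    """Rosenbrock function: a narrow curved valley, minimum 0 at all ones."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Any candidate selection strategy can then be compared by the best function value it reaches for a fixed budget of evaluations.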

  8. X-ray backscatter imaging for radiography by selective detection and snapshot: Evolution, development, and optimization

    NASA Astrophysics Data System (ADS)

    Shedlock, Daniel

    Compton backscatter imaging (CBI) is a single-sided imaging technique that uses the penetrating power of radiation and unique interaction properties of radiation with matter to image subsurface features. CBI has a variety of applications that include non-destructive interrogation, medical imaging, security and military applications. Radiography by selective detection (RSD), lateral migration radiography (LMR) and shadow aperture backscatter radiography (SABR) are different CBI techniques that are being optimized and developed. Radiography by selective detection (RSD) is a pencil beam Compton backscatter imaging technique that falls between highly collimated and uncollimated techniques. Radiography by selective detection uses a combination of single- and multiple-scatter photons from a projected area below a collimation plane to generate an image. As a result, the image has a combination of first- and multiple-scatter components. RSD techniques offer greater subsurface resolution than uncollimated techniques, at speeds at least an order of magnitude faster than highly collimated techniques. RSD scanning systems have evolved from a prototype into near market-ready scanning devices for use in a variety of single-sided imaging applications. The design has changed to incorporate state-of-the-art detectors and electronics optimized for backscatter imaging with an emphasis on versatility, efficiency and speed. The RSD system has become more stable, about 4 times faster, and 60% lighter while maintaining or improving image quality and contrast over the past 3 years. A new snapshot backscatter radiography (SBR) CBI technique, shadow aperture backscatter radiography (SABR), has been developed from concept and proof-of-principle to a functional laboratory prototype. SABR radiography uses digital detection media and shaded aperture configurations to generate near-surface Compton backscatter images without scanning, similar to how transmission radiographs are taken. 

  9. [Study on optimal selection of structure of vaneless centrifugal blood pump with constraints on blood perfusion and on blood damage indexes].

    PubMed

    Hu, Zhaoyan; Pan, Youlian; Chen, Zhenglong; Zhang, Tianyi; Lu, Lijun

    2012-12-01

    This paper studies the optimal selection of the structure of a vaneless centrifugal blood pump. The optimization objective is determined according to the requirements of clinical use. Candidate schemes are worked out based on the structural features of the vaneless centrifugal blood pump, and the optimal structure is selected from these schemes under constraints on blood perfusion and blood damage indexes. With this optimal selection method, one can effectively find the optimum structural scheme among the candidates. The results of numerical simulation of the optimal blood pump showed that the method of constraining blood perfusion and blood damage meets the requirements for selecting optimal blood pumps.

  10. Research on the optimal selection method of image complexity assessment model index parameter

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Duan, Jin; Qian, Xiaofei; Xiao, Bo

    2015-10-01

    Target recognition is widely used in the national economy, space technology, national defense, and other fields. There is a great difference between the difficulty of target recognition and that of target extraction. Image complexity evaluates the difficulty of extracting a target from its background, and it can serve as a prior evaluation index of a target recognition algorithm's effectiveness. From the perspective of measuring target and background characteristics, this paper describes image complexity metric parameters through quantitative, precise mathematical relationships. To address collinearity between the measurement parameters, the image complexity metric parameters are clustered with the gray correlation method. This enables the extraction and selection of metric parameters, improves the reliability and validity of the image complexity description and representation, and optimizes the image complexity assessment model. Experimental results demonstrate that when gray system theory is applied to image complexity analysis, the image complexity of target characteristics can be measured more accurately and effectively.
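
A minimal sketch of the gray correlation step might look like the following, using Deng's classical grey relational coefficient and grade. The distinguishing coefficient `rho = 0.5` and the flat-list data layout are assumptions, not details from the paper:

```python
def grey_relational_coeffs(reference, series, rho=0.5):
    """Grey relational coefficients of one metric series against a
    reference series (Deng's formulation; rho is the distinguishing
    coefficient, conventionally 0.5)."""
    deltas = [abs(r - s) for r, s in zip(reference, series)]
    dmax = max(deltas)
    if dmax == 0:
        return [1.0] * len(deltas)   # identical series: perfect relation
    dmin = min(deltas)
    return [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]

def grey_relational_grade(reference, series, rho=0.5):
    """Average coefficient: overall closeness of `series` to `reference`."""
    coeffs = grey_relational_coeffs(reference, series, rho)
    return sum(coeffs) / len(coeffs)
```

Metric parameters whose grades against one another are high could then be grouped together, keeping one representative per cluster to reduce collinearity.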

  11. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize skewness subject to a predefined maximum risk tolerance and a minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables, and their vagueness is transformed into fuzzy variables similar to trapezoidal numbers. The resulting fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership functions and probability density functions are obtained through fuzzy random simulation of past data.
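
The three statistics in the model's name can be computed from equally weighted return scenarios as below. This is a deterministic sketch only (the paper works with fuzzy random returns), and the scenario-matrix layout is an assumption:

```python
def portfolio_metrics(weights, scenarios):
    """Mean, semivariance (below-mean downside risk), and skewness of
    portfolio returns over equally likely scenarios.
    `scenarios` rows are scenarios, columns are asset returns."""
    rets = [sum(w * r for w, r in zip(weights, row)) for row in scenarios]
    n = len(rets)
    mean = sum(rets) / n
    semivar = sum(min(0.0, r - mean) ** 2 for r in rets) / n
    var = sum((r - mean) ** 2 for r in rets) / n
    skew = 0.0 if var == 0 else (
        sum((r - mean) ** 3 for r in rets) / (n * var ** 1.5))
    return mean, semivar, skew

# Equal-weight two-asset example over three made-up scenarios.
m, sv, sk = portfolio_metrics([0.5, 0.5],
                              [[0.02, 0.00], [-0.01, 0.01], [0.03, -0.02]])
```

In a crisp version of the model, an optimizer would maximize `skew` subject to lower bounds on `mean` and upper bounds on `semivar`.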

  12. Field trials for corrosion inhibitor selection and optimization, using a new generation of electrical resistance probes

    SciTech Connect

    Ridd, B.; Blakset, T.J.; Queen, D.

    1998-12-31

    Even with today's availability of corrosion resistant alloys, carbon steels protected by corrosion inhibitors still dominate the material selection for pipework in oil and gas production. Even though laboratory screening tests of corrosion inhibitor performance provide valuable data, the real performance of the chemical can only be studied through field trials, which provide the ultimate test for evaluating the effectiveness of an inhibitor under actual operating conditions. A new generation of electrical resistance probe has been developed, allowing highly sensitive and immediate response to changes in corrosion rates in the internal environment of production pipework. Because of its high sensitivity, the probe responds to small changes in the corrosion rate, providing the corrosion engineer with a highly effective method of optimizing the use of inhibitor chemicals, resulting in confidence in corrosion control and minimal detrimental environmental effects.

  13. PROBEmer: a web-based software tool for selecting optimal DNA oligos

    PubMed Central

    Emrich, Scott J.; Lowe, Mary; Delcher, Arthur L.

    2003-01-01

    PROBEmer (http://probemer.cs.loyola.edu) is a web-based software tool that enables a researcher to select optimal oligos for PCR applications and multiplex detection platforms including oligonucleotide microarrays and bead-based arrays. Given two groups of nucleic-acid sequences, a target group and a non-target group, the software identifies oligo sequences that occur in members of the target group, but not in the non-target group. To help predict potential cross hybridization, PROBEmer computes all near neighbors in the non-target group and displays their alignments. The software has been used to obtain genus-specific prokaryotic probes based on the 16S rRNA gene, gene-specific probes for expression analyses and PCR primers. In this paper, we describe how to use PROBEmer, the computational methods it employs, and experimental results for oligos identified by this software tool. PMID:12824409
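
The core set operation described above (oligos present in every target sequence but in no non-target sequence) can be sketched with plain k-mer sets. This is not PROBEmer's actual implementation, which additionally scores near-neighbor alignments and hybridization properties:

```python
def kmers(seq, k):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_probes(targets, non_targets, k):
    """k-mers shared by every target sequence but absent from all
    non-target sequences (a crude specificity filter only)."""
    shared = set.intersection(*(kmers(s, k) for s in targets))
    excluded = set().union(*(kmers(s, k) for s in non_targets))
    return shared - excluded

probes = candidate_probes(["ACGTACGT", "TTACGTAA"], ["GGGACGGG"], k=4)
```

Real probe design would go on to rank the surviving candidates by melting temperature and by alignment score against the closest non-target near neighbors.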

  14. Compound feature selection and parameter optimization of ELM for fault diagnosis of rolling element bearings.

    PubMed

    Luo, Meng; Li, Chaoshun; Zhang, Xiaoyuan; Li, Ruhai; An, Xueli

    2016-11-01

    This paper proposes a hybrid system named HGSA-ELM for fault diagnosis of rolling element bearings, in which a real-valued gravitational search algorithm (RGSA) is employed to optimize the input weights and bias of an extreme learning machine (ELM), and a binary-valued GSA (BGSA) is used to select important features from a compound feature set. Three types of fault features, namely time- and frequency-domain features, energy features, and singular value features, are extracted to compose the compound feature set by applying ensemble empirical mode decomposition (EEMD). For fault diagnosis of a typical rolling element bearing system with 56 working conditions, comparative experiments were designed to evaluate the proposed method. The results show that HGSA-ELM achieves significantly higher classification accuracy than its original version and methods reported in the literature.
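
The ELM training step at the heart of HGSA-ELM can be sketched as follows. Here the hidden weights are simply drawn from a fixed seed, whereas the paper's RGSA would search over them, and the XOR-style data is a made-up stand-in for bearing features:

```python
import numpy as np

def elm_fit(X, y, n_hidden=20, seed=0):
    """Minimal extreme learning machine: random input weights and bias,
    sigmoid hidden layer, output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return lambda Xn: (1.0 / (1.0 + np.exp(-(Xn @ W + b)))) @ beta

# XOR-like toy problem: four samples, hidden layer wide enough to interpolate.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
predict = elm_fit(X, y)
```

Because only `beta` is fitted in closed form, training is fast, which is what makes wrapping it inside a population-based search such as GSA affordable.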

  15. Selection, optimization, and compensation as strategies of life management: correlations with subjective indicators of successful aging.

    PubMed

    Freund, A M; Baltes, P B

    1998-12-01

    The usefulness of self-reported processes of selection, optimization, and compensation (SOC) for predicting, at a correlational level, subjective indicators of successful aging was examined. The sample of Berlin residents was a subset of the participants of the Berlin Aging Study. Three domains (marked by 6 variables) served as outcome measures of successful aging: subjective well-being, positive emotions, and absence of feelings of loneliness. Results confirm the central hypothesis of the SOC model: people who reported using SOC-related life-management behaviors (which were unrelated in content to the outcome measures) had higher scores on the 3 indicators of successful aging. The relationships obtained were robust even after controlling for other measures of successful mastery such as personal life investment, neuroticism, extraversion, openness, control beliefs, intelligence, subjective health, and age.

  16. Adapting to aging losses: do resources facilitate strategies of selection, compensation, and optimization in everyday functioning?

    PubMed

    Lang, Frieder R; Rieckmann, Nina; Baltes, Margret M

    2002-11-01

    Previous cross-sectional research has shown that older people who are rich in sensorimotor-cognitive and social-personality resources are better functioning in everyday life and exhibit fewer negative age differences than resource-poor adults. Longitudinal data from the Berlin Aging Study were used to examine these findings across a 4-year time interval and to compare cross-sectional indicators of adaptive everyday functioning among survivors and nonsurvivors. Apart from their higher survival rate, resource-rich older people (a) invest more social time with their family members, (b) reduce the diversity of activities within the most salient leisure domain, (c) sleep more often and longer during daytime, and (d) increase the variability of time investments across activities after 4 years. Overall, findings suggest a greater use of selection, compensation, and optimization strategies in everyday functioning among resource-rich older adults as compared with resource-poor older adults.

  17. Testability requirement uncertainty analysis in the sensor selection and optimization model for PHM

    NASA Astrophysics Data System (ADS)

    Yang, S. M.; Qiu, J.; Liu, G. J.; Yang, P.; Zhang, Y.

    2012-05-01

    Prognostics and health management (PHM) has become an important means of guaranteeing the reliability and safety of complex systems. Design for testability (DFT), developed concurrently with system design, is considered a fundamental way to improve PHM performance, and sensor selection and optimization (SSO) is one of its important parts. To address the problem that testability requirement analysis in existing SSO models does not take test uncertainty in actual scenarios into account, fault detection uncertainty is first analyzed qualitatively from the perspectives of fault attributes, sensor attributes, and fault-sensor matching attributes. A quantitative uncertainty analysis is then given, which assigns a rational confidence level to fault size. A case study of an electromechanical servo-controlled system demonstrates the proposed methodology, and application results show that the proposed approach is reasonable and feasible.

  18. Spatial filter and feature selection optimization based on EA for multi-channel EEG.

    PubMed

    Wang, Yubo; Mohanarangam, Krithikaa; Mallipeddi, Rammohan; Veluvolu, K C

    2015-01-01

    The EEG signals employed for BCI systems are generally band-limited. The band-limited multiple Fourier linear combiner (BMFLC) with Kalman filter was developed to obtain amplitude estimates of the EEG signal in a pre-fixed frequency band in real time. However, the high dimensionality of the feature vector caused by applying BMFLC to multi-channel EEG-based BCI deteriorates the performance of the classifier. In this work, we apply an evolutionary algorithm (EA) to tackle this problem. The real-valued EA encodes both the spatial filter and the feature selection into its solution and optimizes it with respect to the classification error. Three BMFLC-based BCI configurations are proposed. Our results show that BMFLC-KF with covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.

  19. Optimal pig donor selection in islet xenotransplantation: current status and future perspectives.

    PubMed

    Zhu, Hai-tao; Yu, Liang; Lyu, Yi; Wang, Bo

    2014-08-01

    Islet transplantation is an attractive treatment for type 1 diabetes mellitus. Xenotransplantation, using the pig as a donor, offers the possibility of an unlimited supply of islet grafts. Published studies have demonstrated that pig islets can function in diabetic primates for a long time (>6 months). However, pig-islet xenotransplantation must still overcome the selection of an optimal pig donor, so as to obtain an adequate supply of high-quality islets, to reduce the xeno-antigenicity of the islets and prolong xenograft survival, and to translate experimental findings into clinical application. This review discusses the suitable pig donor for islet xenotransplantation in terms of pig age, strain, islet structure/function, and genetic modification.

  20. SU-E-T-606: Optimal Emission Angle Selection in Rotating Shield Brachytherapy.

    PubMed

    Liu, Y; Flynn, R; Yang, W; Kim, Y; Wu, X

    2012-06-01

    In this work a general method is presented that enables clinicians to rapidly select rotating shield brachytherapy (RSBT) emission angles based on the patient-specific tradeoff between delivery time and tumor dose conformity. Cervical cancer cases are used as examples. Anchor plans with high dose conformity but infeasible delivery times are generated with a fine emission angle using simulated annealing. The RSBT emission angle selector determines the optimal emission angle for each case by efficiently solving a globally optimal quadratic programming problem that closely reproduces the angular distribution of beam intensities from the anchor plan. Pareto plots of the dosimetric plan quality metrics, such as D90 versus delivery time, are generated for clinicians. In this work two cervical cancer cases were considered for verification. The RSBT system was assumed to be a Xoft Axxent electronic BT (eBT) source with a 0.2 mm tungsten shield. The intent for each treatment plan was to maximize tumor D90 while respecting the GEC-ESTRO recommended constraints on the D2cc values of OARs. Generating anchor plans with simulated annealing takes 10-20 min, while emission angle selection finishes within seconds. The shield sequencing algorithm also ensures the balance between D90 and delivery time. In one case, D90 reached 98.3 Gy10 with an emission angle of 202.5 degrees and 8.64 min delivery, while the conventional intracavitary plan had a D90 of 65 Gy10 with 2.86 min delivery. In another case, RSBT with an emission angle of 67.5 degrees produced a D90 of 108.7 Gy10 in 44 min, while the conventional plan achieved a D90 of 48.9 Gy10 in 2.2 min. The RSBT emission angle selection algorithm enables users to rapidly determine the best emission angle for a given cervical cancer case by selecting the most appropriate balance of D90 and delivery time.
RSBT may be a less invasive alternative to intracavitary and supplementary interstitial BT for the treatment of cervical cancer tumors.

  1. Using natural selection and optimization for smarter vegetation models - challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Franklin, Oskar; Han, Wang; Dieckmann, Ulf; Cramer, Wolfgang; Brännström, Åke; Pietsch, Stephan; Rovenskaya, Elena; Prentice, Iain Colin

    2017-04-01

    Dynamic global vegetation models (DGVMs) are now indispensable for understanding the biosphere and for estimating the capacity of ecosystems to provide services. The models are continuously developed to include an increasing number of processes and to utilize the growing amounts of observed data becoming available. However, while the versatility of the models is increasing as new processes and variables are added, their accuracy suffers from the accumulation of uncertainty, especially in the absence of overarching principles controlling their concerted behaviour. We have initiated a collaborative working group to address this problem based on a 'missing law': adaptation and optimization principles rooted in natural selection. Even though this 'missing law' constrains relationships between traits, and therefore can vastly reduce the number of uncertain parameters in ecosystem models, it has rarely been applied to DGVMs. Our recent research has shown that optimization- and trait-based models of gross primary production can be both much simpler and more accurate than current models based on fixed functional types, and that observed plant carbon allocations and distributions of plant functional traits are predictable with eco-evolutionary models. While there are also many other examples of the usefulness of these and other theoretical principles, it is not always straightforward to make them operational in predictive models. In particular, on longer time scales, the representation of functional diversity and the dynamical interactions among individuals and species presents a formidable challenge. Here we present recent ideas on the use of adaptation and optimization principles in vegetation models, including examples of promising developments, but also limitations of the principles and some key challenges.

  2. Markov Chain Model-Based Optimal Cluster Heads Selection for Wireless Sensor Networks.

    PubMed

    Ahmed, Gulnaz; Zou, Jianhua; Zhao, Xi; Sadiq Fareed, Mian Muhammad

    2017-02-23

    A longer network lifetime for Wireless Sensor Networks (WSNs) is a goal directly related to energy consumption. The energy consumption issue becomes more challenging when the energy load is not properly distributed in the sensing area. A hierarchical clustering architecture is the best choice for these kinds of issues. In this paper, we introduce a novel clustering protocol called Markov chain model-based optimal cluster heads (MOCHs) selection for WSNs. In our proposed model, we introduce a simple strategy for selecting the optimal number of cluster heads to overcome the problem of uneven energy distribution in the network. The attractiveness of our model is that the base station controls the number of cluster heads, while the cluster heads control the cluster members in each cluster in such a restricted manner that a uniform and even load is ensured in each cluster. We perform an extensive range of simulations using five quality measures, namely the lifetime of the network, the stable and unstable regions of the network lifetime, the throughput of the network, the number of cluster heads in the network, and the transmission time of the network. We compare MOCHs against Sleep-awake Energy Efficient Distributed (SEED) clustering, Artificial Bee Colony (ABC), Zone Based Routing (ZBR), and Centralized Energy Efficient Clustering (CEEC) using these quality metrics and found that the lifetime of the proposed model is almost 1095, 2630, 3599, and 2045 rounds (time steps) greater than SEED, ABC, ZBR, and CEEC, respectively. The obtained results demonstrate that MOCHs outperforms SEED, ABC, ZBR, and CEEC in terms of energy efficiency and network throughput.
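
A drastically simplified stand-in for cluster-head election (not the paper's Markov chain model) illustrates the even-load idea: electing the k highest-residual-energy nodes each round rotates the expensive head role across the network. The energy costs below are arbitrary:

```python
def elect_cluster_heads(energy, k):
    """Pick the k nodes with the most residual energy as cluster heads
    (a simple stand-in for the paper's Markov-chain election)."""
    ranked = sorted(range(len(energy)), key=lambda i: energy[i], reverse=True)
    return ranked[:k]

def simulate_rounds(energy, k, rounds, head_cost=5.0, member_cost=1.0):
    """Each round, heads pay a higher energy cost; electing by residual
    energy keeps the load roughly even across the network."""
    energy = list(energy)
    for _ in range(rounds):
        heads = set(elect_cluster_heads(energy, k))
        for i in range(len(energy)):
            energy[i] -= head_cost if i in heads else member_cost
    return energy
```

With identical starting energies the head role rotates deterministically, so after a full cycle every node has paid the same total cost, which is the "uniform and even load" property the protocol aims for.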

  3. Markov Chain Model-Based Optimal Cluster Heads Selection for Wireless Sensor Networks

    PubMed Central

    Ahmed, Gulnaz; Zou, Jianhua; Zhao, Xi; Sadiq Fareed, Mian Muhammad

    2017-01-01

    A longer network lifetime for Wireless Sensor Networks (WSNs) is a goal directly related to energy consumption. The energy consumption issue becomes more challenging when the energy load is not properly distributed in the sensing area. A hierarchical clustering architecture is the best choice for these kinds of issues. In this paper, we introduce a novel clustering protocol called Markov chain model-based optimal cluster heads (MOCHs) selection for WSNs. In our proposed model, we introduce a simple strategy for selecting the optimal number of cluster heads to overcome the problem of uneven energy distribution in the network. The attractiveness of our model is that the base station controls the number of cluster heads, while the cluster heads control the cluster members in each cluster in such a restricted manner that a uniform and even load is ensured in each cluster. We perform an extensive range of simulations using five quality measures, namely the lifetime of the network, the stable and unstable regions of the network lifetime, the throughput of the network, the number of cluster heads in the network, and the transmission time of the network. We compare MOCHs against Sleep-awake Energy Efficient Distributed (SEED) clustering, Artificial Bee Colony (ABC), Zone Based Routing (ZBR), and Centralized Energy Efficient Clustering (CEEC) using these quality metrics and found that the lifetime of the proposed model is almost 1095, 2630, 3599, and 2045 rounds (time steps) greater than SEED, ABC, ZBR, and CEEC, respectively. The obtained results demonstrate that MOCHs outperforms SEED, ABC, ZBR, and CEEC in terms of energy efficiency and network throughput. PMID:28241492

  4. A Linked Simulation-Optimization (LSO) Model for Conjunctive Irrigation Management using Clonal Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Islam, Sirajul; Talukdar, Bipul

    2016-09-01

    A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were considered for reducing the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel area-reduction approach in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. To evaluate the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, the CSA was found to consume less computational time than the GA while converging to the optimal solution, owing to the modifications incorporated into it.
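
A bare-bones clonal selection loop, in the spirit of CSA but with all parameters invented for illustration, might look like this (minimizing a simple sphere function rather than the groundwater simulation objective):

```python
import random

def clonal_selection_minimize(f, dim, iters=300, pop=20, clones=5, seed=1):
    """Minimal CSA sketch: clone the best antibodies, hypermutate the
    clones, keep the best survivors, and replace the worst few at random."""
    rng = random.Random(seed)
    cells = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        cells.sort(key=f)
        pool = []
        for rank, cell in enumerate(cells[:pop // 2]):
            # hypermutation: step size grows with rank (worse affinity)
            # and shrinks as the parent's cost approaches zero
            sigma = (rank + 1) / pop * max(1e-6, f(cell)) ** 0.5
            for _ in range(clones):
                pool.append([g + rng.gauss(0, sigma) for g in cell])
        # elitist selection plus a little random replacement (metadynamics)
        cells = sorted(cells + pool, key=f)[:pop - 2]
        cells += [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(2)]
    return min(cells, key=f)

sphere = lambda x: sum(g * g for g in x)
best = clonal_selection_minimize(sphere, dim=2)
```

In an LSO setting, each call to `f` would run the groundwater flow simulation, which is why the paper's modifications target the number of function evaluations.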

  5. Charge optimization increases the potency and selectivity of a chorismate mutase inhibitor.

    PubMed

    Mandal, Ajay; Hilvert, Donald

    2003-05-14

    The highest affinity inhibitor for chorismate mutases, a conformationally constrained oxabicyclic dicarboxylate transition state analogue, was modified as suggested by computational charge optimization methods. As predicted, replacement of the C10 carboxylate in this molecule with a nitro group yields an even more potent inhibitor of a chorismate mutase from Bacillus subtilis (BsCM), but the magnitude of the improvement (roughly 3-fold, corresponding to a ΔΔG of -0.7 kcal/mol) is substantially lower than the gain of 2-3 kcal/mol binding free energy anticipated for the reduced desolvation penalty upon binding. Experiments with a truncated version of the enzyme show that the flexible C terminus, which was only partially resolved in the crystal structure and hence omitted from the calculations, provides favorable interactions with the C10 group that partially compensate for its desolvation. Although truncation diminishes the affinity of the enzyme for both inhibitors, the nitro derivative binds 1.7 kcal/mol more tightly than the dicarboxylate, in reasonable agreement with the calculations. Significantly, substitution of the C10 carboxylate with a nitro group also enhances the selectivity of inhibition of BsCM relative to a chorismate mutase from Escherichia coli (EcCM), which has a completely different fold and binding pocket, by 10-fold. These results experimentally verify the utility of charge optimization methods for improving interactions between proteins and low-molecular weight ligands.

  6. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of an ICA-based brain-computer interface (BCI) system. In this study, we proposed a new algorithmic framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate the artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality, and classification performance of ICA-based MIBCI. The study revealed that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system.

  7. Optimization-based Dielectric Metasurfaces for Angle-Selective Multifunctional Beam Deflection.

    PubMed

    Cheng, Jierong; Inampudi, Sandeep; Mosallaei, Hossein

    2017-09-25

    Synthesization of multiple functionalities on a flat metasurface platform offers a promising approach to achieving integrated photonic devices with a minimized footprint. Metasurfaces capable of diverse wavefront shaping according to wavelength and polarization have been demonstrated. Here we propose a class of angle-selective metasurfaces over which beams are reflected following different, independent phase gradients depending on the beam direction. Such a powerful feature is achieved by leveraging local phase modulation and non-local lattice diffraction via inverse scattered-field and geometry optimization in a monolayer dielectric grating, whereas most previous designs utilize local phase modulation only and operate optimally for a specific angle. A beam combiner/splitter and independent multibeam deflections with up to 4 incident angles are numerically demonstrated at a wavelength of 700 nm. The deflection efficiency is around 45% due to material loss and the compromise of multi-angle responses. The flexibility of the approach is further validated by additional designs of angle-switchable metagratings as splitter/reflector and transparent/opaque mirror. The proposed designs hold great potential for increasing the information density of compact optical components through the angular degree of freedom.

  8. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
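
The role of the weighting matrices can be illustrated on the scalar case, where the algebraic Riccati equation has a closed-form root. This toy example (the system and weights are invented) shows how q and r shape the optimal gain; it is not the structural formulation derived in the paper.

```python
import math

def scalar_lqr(a, b, q, r):
    """Closed-form LQR for the scalar plant x' = a*x + b*u with cost
    J = integral(q*x**2 + r*u**2) dt. The algebraic Riccati equation
    2*a*p - (b*p)**2 / r + q = 0 has the positive root computed below."""
    p = (r / b**2) * (a + math.sqrt(a**2 + b**2 * q / r))
    k = b * p / r  # optimal state feedback u = -k*x
    return p, k

p, k = scalar_lqr(a=0.0, b=1.0, q=4.0, r=1.0)
print(p, k)  # 2.0 2.0  -> closed loop x' = -2x is stable
```

Raising q (penalizing state error) increases the gain; raising r (penalizing control effort) lowers it, which is the trade the weighting-matrix selection in the abstract constrains to physically realizable stiffness/damping values.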

  9. Dynamic optimization approach for integrated supplier selection and tracking control of single product inventory system with product discount

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Heru Tjahjana, R.

    2017-01-01

    In this paper, we propose a mathematical model in the form of dynamic/multi-stage optimization to solve an integrated supplier selection and tracking control problem for a single-product inventory system with product discounts. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve this optimization, determining the optimal supplier and the optimal product volume to purchase from that supplier in each time period so that the inventory level tracks a reference trajectory given by the decision maker at minimal total cost. A numerical experiment is given to evaluate the proposed model. In the results, the optimal supplier was determined for each time period and the inventory level tracked the given reference well.
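
The multi-stage recursion described above can be sketched as a memoized dynamic program over (period, inventory level). The supplier names, discount schedule, demands, and tracking penalty below are hypothetical stand-ins for the paper's cost terms.

```python
from functools import lru_cache

def plan(demand, ref, suppliers, max_inv=10):
    """Pick a supplier and order quantity each period so the inventory
    level tracks `ref` at minimal cost (purchase cost plus a tracking
    penalty). All data here are hypothetical illustrations."""
    @lru_cache(maxsize=None)
    def best(t, inv):
        if t == len(demand):
            return 0.0, ()
        options = []
        for name, price in suppliers.items():
            for qty in range(max_inv + 1):
                nxt = inv + qty - demand[t]          # next inventory level
                if 0 <= nxt <= max_inv:
                    stage = price(qty) + abs(nxt - ref[t])  # stage cost
                    future, path = best(t + 1, nxt)
                    options.append((stage + future, ((name, qty),) + path))
        return min(options)
    return best(0, 0)

def discounted(qty):
    # piecewise-linear price: unit price 3.0, 10% discount beyond 5 units
    return 3.0 * min(qty, 5) + 2.7 * max(qty - 5, 0)

cost, orders = plan(demand=[4, 3, 5], ref=[2, 2, 2],
                    suppliers={"S1": discounted, "S2": lambda q: 3.2 * q})
print(cost, orders)
```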

  10. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, the proposed model decides the optimal supplier and calculates the optimal product volume to purchase from that supplier so that the inventory level lies as close as possible to the reference point at minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. In the results, the proposed model generated the optimal supplier for each time period, and the inventory level tracked the reference point well.

  11. Use of a Noise Optimized Monoenergetic Algorithm for Patient-Size Independent Selection of an Optimal Energy Level During Dual-Energy CT of the Pancreas.

    PubMed

    Bellini, Davide; Gupta, Sonia; Ramirez-Giraldo, Juan Carlos; Fu, Wanyi; Stinnett, Sandra S; Patel, Bhavik; Mileto, Achille; Marin, Daniele

    2017-01-01

    To investigate the impact of a second-generation noise-optimized monoenergetic algorithm on selection of the optimal energy level, image quality, and the effect of patient body habitus in dual-energy multidetector computed tomography of the pancreas, fifty-nine patients (38 men, 21 women) underwent dual-energy multidetector computed tomography (80/Sn140 kV) in the pancreatic parenchymal phase. Image data sets, at energy levels ranging from 40 to 80 keV (in 5-keV increments), were reconstructed using the first- and second-generation noise-optimized monoenergetic algorithms. Noise, pancreatic contrast-to-noise ratio (CNRpancreas), and CNR with a noise constraint (CNRNC) were calculated and compared among the different reconstructed data sets. Qualitative assessment of image quality was performed by 3 readers. For all energy levels below 70 keV, noise was significantly lower (P ≤ 0.05) and CNRpancreas significantly higher (P < 0.001) with the second-generation monoenergetic algorithm. Furthermore, the second-generation algorithm was less susceptible to variability related to patient body habitus in the selection of the optimal energy level. The maximal CNRpancreas occurred at 40 keV in 98% (58 of 59) of patients with the second-generation monoenergetic algorithm. However, the CNRNC and readers' image quality scores showed that, even with a second-generation monoenergetic algorithm, higher reconstructed energy levels (60-65 keV) represented the optimal energy level. The second-generation noise-optimized monoenergetic algorithm can improve the image quality of lower-energy monoenergetic images of the pancreas while decreasing the variability related to patient body habitus in selection of the optimal energy level.
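
The CNRpancreas figure of merit compared across energy levels can be computed from two regions of interest, taking noise as the background standard deviation. The HU values below are invented for illustration.

```python
import numpy as np

def cnr(roi_organ, roi_background):
    """Contrast-to-noise ratio between two regions of interest (in HU),
    with noise taken as the background standard deviation."""
    noise = roi_background.std(ddof=1)
    return (roi_organ.mean() - roi_background.mean()) / noise

# invented HU samples for a pancreas ROI and a background (fat) ROI
pancreas = np.array([110.0, 112.0, 108.0, 110.0])
fat = np.array([-90.0, -95.0, -85.0, -90.0])
print(round(cnr(pancreas, fat), 1))  # 49.0
```

Lower monoenergetic levels raise both iodine contrast and noise; the CNRNC variant in the study additionally caps the acceptable noise, which is why its optimum lands at higher keV than the raw CNR optimum.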

  12. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend toward automation in modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications using Artificial Intelligence techniques. Generally, the selection of the appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert based on their knowledge or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks MATLAB Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for selection of the appropriate cutting tool and optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.
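
As a minimal sketch of knowledge-based tool screening, a weighted-sum score over material removal rate, tool life, and tool cost can rank a small catalogue. The catalogue entries and weights are hypothetical, and this simple scoring stands in for (it does not reproduce) the paper's neural-network/fuzzy/genetic-algorithm machinery.

```python
def rank_tools(tools, weights):
    """Rank tools by a weighted sum of normalized criteria: material
    removal rate (benefit), tool life (benefit), tool cost (penalty).
    Hypothetical screening heuristic, not the paper's method."""
    max_mrr = max(t["mrr"] for t in tools.values())
    max_life = max(t["life"] for t in tools.values())
    min_cost = min(t["cost"] for t in tools.values())

    def score(t):
        return (weights[0] * t["mrr"] / max_mrr
                + weights[1] * t["life"] / max_life
                + weights[2] * min_cost / t["cost"])  # cheaper is better

    return sorted(tools, key=lambda k: score(tools[k]), reverse=True)

# invented catalogue: MRR in cm^3/min, life in min, cost in currency units
catalogue = {
    "carbide-A": {"mrr": 120.0, "life": 40.0, "cost": 25.0},
    "carbide-B": {"mrr": 150.0, "life": 25.0, "cost": 30.0},
    "ceramic-C": {"mrr": 200.0, "life": 15.0, "cost": 45.0},
}
ranking = rank_tools(catalogue, weights=(0.5, 0.3, 0.2))
print(ranking)  # ['carbide-A', 'carbide-B', 'ceramic-C']
```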

  13. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    The artificial immune system is a recently introduced intelligence method inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems characterized by multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is proposed first. The recombination operator generates promising candidate solutions to enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. Comparisons with the classic CSA, CSA with the recombination operator (RCSA), and CSA with recombination and the modified hypermutation operator (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
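
The clone-and-hypermutate loop at the heart of a CSA can be sketched as follows, with a rank-proportional mutation step standing in for the paper's modified hypermutation operator; the population sizes and the sphere test function are illustrative only.

```python
import random

def csa_minimize(f, dim, bounds, pop=20, clones=5, gens=100, seed=1):
    """Minimal clonal selection sketch: clone the best antibodies and
    apply Gaussian hypermutation whose step grows with rank, so better
    solutions are perturbed less (the classic CSA intuition)."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)                       # best (lowest f) first
        offspring = []
        for rank, ab in enumerate(P[:pop // 2]):
            step = (hi - lo) * 0.1 * (rank + 1) / pop  # rank-proportional
            for _ in range(clones):
                offspring.append([x + rng.gauss(0, step) for x in ab])
        P = sorted(P + offspring, key=f)[:pop]  # elitist reselection
    return P[0], f(P[0])

# sphere function: global minimum 0 at the origin
best, val = csa_minimize(lambda x: sum(v * v for v in x), dim=3,
                         bounds=(-5.0, 5.0))
print(val)
```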

  14. An Optimization Model for the Selection of Bus-Only Lanes in a City

    PubMed Central

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers’ route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model. PMID:26214001
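
The upper-level selection step can be illustrated with a toy genetic algorithm choosing a subset of candidate bus lines under a budget. The costs, benefits, and budget are invented, and a scalar benefit stands in for the travel-time objective that the paper evaluates with its lower-level assignment model.

```python
import random

def ga_select(costs, benefits, budget, pop=30, gens=60, seed=2):
    """Toy upper-level GA: a binary chromosome marks which candidate
    bus lines are selected; infeasible (over-budget) sets are rejected."""
    rng = random.Random(seed)
    n = len(costs)

    def fitness(ch):
        c = sum(ci for ci, g in zip(costs, ch) if g)
        b = sum(bi for bi, g in zip(benefits, ch) if g)
        return b if c <= budget else -1.0   # budget constraint as penalty

    P = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]               # keep the best half
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]       # one-point crossover
            child[rng.randrange(n)] ^= 1    # bit-flip mutation
            children.append(child)
        P = elite + children
    best = max(P, key=fitness)
    return best, fitness(best)

best, fit = ga_select(costs=[4, 3, 5, 2, 6], benefits=[7, 4, 8, 3, 9],
                      budget=10)
print(best, fit)
```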

  15. Combining heterogeneous features for face detection using multiscale feature selection with binary particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Pan, Hong; Xia, Si-Yu; Jin, Li-Zuo; Xia, Liang-Zheng

    2011-12-01

    We propose a fast multiscale face detector that boosts a set of SVM-based hierarchy classifiers constructed with two heterogeneous features, i.e., Multi-block Local Binary Patterns (MB-LBP) and Speeded Up Robust Features (SURF), at different image resolutions. In this hierarchical architecture, simple and fast classifiers using efficient MB-LBP descriptors remove large parts of the background in the low and intermediate scale layers, so that only a small percentage of background patches look similar to faces and require a more accurate but slower classifier that uses the distinctive SURF descriptor to avoid false classifications at the finest scale. By propagating only those patterns that are not classified as background, we can quickly decrease the amount of data that needs to be processed. To lessen the training burden of the hierarchy classifier, in each scale layer a feature selection scheme using Binary Particle Swarm Optimization (BPSO) searches the entire feature space and filters out the minimum number of discriminative features that give the highest classification rate on a validation set; these selected distinctive features are then fed into the SVM classifier. We compared the detection performance of the proposed face detector with other state-of-the-art methods on the CMU+MIT face dataset. Our detector achieves the best overall detection performance. Training our algorithm is 60 times faster than training the standard AdaBoost algorithm. It takes about 70 ms for our face detector to process a 320×240 image, which is comparable to Viola and Jones' detector.
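
A minimal binary PSO for feature selection can be sketched as below, assuming a toy objective in place of the SVM validation accuracy used in the paper: velocities are squashed through a sigmoid to give per-bit probabilities of selecting each feature.

```python
import math
import random

def bpso(score, n_feats, particles=15, iters=40, seed=3):
    """Binary PSO sketch: each particle is a feature mask; velocities
    pass through a sigmoid to become per-bit selection probabilities."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(particles)]
    V = [[0.0] * n_feats for _ in range(particles)]
    pbest = [list(x) for x in X]
    gbest = list(max(X, key=score))
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(n_feats):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]                      # inertia
                           + 1.4 * r1 * (pbest[i][d] - x[d])  # cognitive
                           + 1.4 * r2 * (gbest[d] - x[d]))    # social
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))       # sigmoid
                x[d] = 1 if rng.random() < prob else 0
            if score(x) > score(pbest[i]):
                pbest[i] = list(x)
        gbest = max(pbest, key=score)
    return gbest

# toy objective: features 0 and 2 are informative, the rest are noise;
# a small size penalty rewards minimal feature subsets
useful = {0, 2}
def score(mask):
    hits = sum(1 for i, g in enumerate(mask) if g and i in useful)
    return hits - 0.1 * sum(mask)   # accuracy proxy minus subset size cost

best = bpso(score, n_feats=8)
print(best)
```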

  16. Structure based lead optimization approach in discovery of selective DPP4 inhibitors.

    PubMed

    Ghate, Manjunath; Jain, Shailesh V

    2013-05-01

    Diabetes mellitus is a chronic, progressive metabolic disorder that has profound consequences for individuals, families, and society. To date, the main available oral antidiabetic medications target either insulin resistance (metformin, glitazones) or insulin deficiency (sulfonylureas, glinides), but both leave therapeutic shortfalls. Modern oral hypoglycemic agents may therefore be advanced alongside, or in place of, traditional therapies. The lower risk of hypoglycemic events compared with other insulinotropic or insulin-sensitizing agents makes DPP-4 inhibitors very promising candidates for a more physiological treatment of type-2 diabetes. Only some DPP-4 inhibitors are currently used for the treatment of type 2 diabetes (T2DM), and various inhibitors are currently undergoing animal and human testing. A number of catalytically active DPPs distinct from DPP-4 (DPP II, FAP, DPP-8, and DPP-9) have been described whose inhibition is associated with side effects and toxicity. Discovering potent, selective, and safer drugs in a shorter time frame and at reduced cost requires an innovative approach to designing novel inhibitors. This review article focuses on the status of advanced lead candidates of the DPP group and their binding affinity with the active-site residues of the target structure, which helps in the discovery of potent and selective DPP-4 inhibitors by a lead optimization approach.

  17. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    PubMed

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.

  18. On the incomplete architecture of human ontogeny. Selection, optimization, and compensation as foundation of developmental theory.

    PubMed

    Baltes, P B

    1997-04-01

    Drawing on both evolutionary and ontogenetic perspectives, the basic biological-genetic and social-cultural architecture of human development is outlined. Three principles are involved. First, evolutionary selection pressure predicts a negative age correlation, and therefore, genome-based plasticity and biological potential decrease with age. Second, for growth aspects of human development to extend further into the life span, culture-based resources are required at ever-increasing levels. Third, because of age-related losses in biological plasticity, the efficiency of culture is reduced as life span development unfolds. Joint application of these principles suggests that the life span architecture becomes more and more incomplete with age. Degree of completeness can be defined as the ratio between gains and losses in functioning. Two examples illustrate the implications of the life span architecture proposed. The first is a general theory of development involving the orchestration of 3 component processes: selection, optimization, and compensation. The second considers the task of completing the life course in the sense of achieving a positive balance between gains and losses for all age levels. This goal is increasingly more difficult to attain as human development is extended into advanced old age.

  19. Acceptance and Commitment Therapy and Selective Optimization with Compensation for Institutionalized Older People with Chronic Pain.

    PubMed

    Alonso-Fernández, Miriam; López-López, Almudena; Losada, Andres; González, José Luis; Wetherell, Julie Loebach

    2016-02-01

    Recent studies support the efficacy of Acceptance and Commitment Therapy (ACT) for people with chronic pain. In addition, Selective Optimization with Compensation (SOC) strategies can help the elderly with chronic pain to accept their chronic condition and increase functional autonomy. Our aim was to analyze the efficacy of an ACT treatment program combined with training in SOC strategies for elderly people with chronic pain living in nursing homes. 101 participants (mean age = 82.26; SD = 10.00; 78.6% female) were randomized to the intervention condition (ACT-SOC) or to a minimal support group (MS). Complete data are available for 53 participants (ACT-SOC: n = 27; MS: n = 26). Assessments of functional performance, pain intensity, pain acceptance, SOC strategies, emotional well-being and catastrophizing beliefs were done preintervention and postintervention. Significant time-by-intervention changes (P = 0.05) were found in acceptance, pain-related anxiety, compensation strategies, and pain interference in walking ability. Simple-effects changes were found in acceptance (P = 0.01), selection strategies (P = 0.05), catastrophizing beliefs (P = 0.03), depressive symptoms (P = 0.05), pain anxiety (P = 0.01) and pain interference in mood and walking ability (P = 0.03) in the ACT-SOC group. No significant changes were found in the MS group. These results suggest that an ACT intervention combined with training in SOC strategies could help older people with pain to improve their emotional well-being and their functional capability.

  20. A bi-objective optimization approach for exclusive bus lane selection and scheduling design

    NASA Astrophysics Data System (ADS)

    Khoo, Hooi Ling; Eng Teoh, Lay; Meng, Qiang

    2014-07-01

    This study proposes a methodology to solve the integrated problems of selection and scheduling of the exclusive bus lane. The selection problem intends to determine which roads (links) should have a lane reserved for buses while the scheduling problem intends to find the time period of the application. It is formulated as a bi-objective optimization model that aims to minimize the total travel time of non-bus traffic and buses simultaneously. The proposed model formulation is solved by the hybrid non-dominated sorting genetic algorithm with Paramics. The results show that the proposed methodology is workable. Sets of Pareto solutions are obtained indicating that a trade-off between buses and non-bus traffic for the improvement of the bus transit system is necessary when the exclusive bus lane is applied. This allows the engineer to choose the best solutions that could balance the performance of both modes in a multimode transport system environment to achieve a sustainable transport system.
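
The Pareto solution sets produced by the hybrid non-dominated sorting GA reduce to a standard dominance test over the two objectives; the (bus delay, non-bus delay) pairs below are hypothetical.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every (minimized) objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset, preserving input order."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# invented (bus travel time, non-bus travel time) pairs for candidate
# exclusive-lane selections/schedules
sols = [(10, 30), (12, 25), (15, 20), (14, 26), (11, 31)]
print(pareto_front(sols))  # [(10, 30), (12, 25), (15, 20)]
```

Points such as (14, 26) drop out because another candidate is better on both objectives; the survivors form the trade-off curve from which the engineer picks a balanced design.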

  1. Optimally invasive exposure in revision total hip arthroplasty: a guide to selection and technique.

    PubMed

    Toms, Andrew; Greidanus, Nelson; Garbuz, Donald; Masri, Bassam A; Duncan, Clive P

    2006-01-01

    Revision total hip arthroplasty requires a careful surgical plan. Selection of the appropriate exposure is an essential step for success. Exposure is important not only for the complete and safe visualization and extraction of components and cement, but also for the achievement of a stable construct at the end of the procedure. In addition, controlled exposure minimizes intraoperative complications and bone and soft-tissue damage, essential considerations for eradication of infection. Three questions need to be addressed at the preoperative stage: (1) Is this a straightforward revision that can be handled with a standard approach? (2) Is this a more complex revision requiring an extensile exposure? (3) Is this an unusual revision requiring special techniques to allow adequate access that cannot be obtained using standard extensile techniques? Each group of exposures presents three further possibilities, each of which has specific indications, advantages, and disadvantages. In conjunction with the preoperative analysis, this knowledge should enable the revision surgeon to select the most appropriate approach, resulting in optimal exposure for each individual revision scenario.

  2. End-to-End Rate-Distortion Optimized MD Mode Selection for Multiple Description Video Coding

    NASA Astrophysics Data System (ADS)

    Heng, Brian A.; Apostolopoulos, John G.; Lim, Jae S.

    2006-12-01

    Multiple description (MD) video coding can be used to reduce the detrimental effects caused by transmission over lossy packet networks. A number of approaches have been proposed for MD coding, where each provides a different tradeoff between compression efficiency and error resilience. How effectively each method achieves this tradeoff depends on the network conditions as well as on the characteristics of the video itself. This paper proposes an adaptive MD coding approach which adapts to these conditions through the use of adaptive MD mode selection. The encoder in this system is able to accurately estimate the expected end-to-end distortion, accounting for both compression and packet loss-induced distortions, as well as for the bursty nature of channel losses and the effective use of multiple transmission paths. With this model of the expected end-to-end distortion, the encoder selects between MD coding modes in a rate-distortion (R-D) optimized manner to most effectively trade off compression efficiency for error resilience. We show how this approach adapts to both the local characteristics of the video and the network conditions, and demonstrate the resulting gains in performance using an H.264-based adaptive MD video coder.
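
The R-D optimized mode decision can be sketched as a Lagrangian cost comparison, mixing compression and loss-induced distortion by the packet-loss probability. The mode table, rates, and distortion numbers below are invented for illustration.

```python
def select_mode(modes, loss_prob, lam):
    """Pick the MD mode minimizing the expected Lagrangian cost
    J = E[D] + lambda * R, where E[D] mixes the compression distortion
    (packet arrives) with the concealment distortion (packet lost)."""
    def cost(mode):
        name, rate, d_enc, d_loss = mode
        e_dist = (1 - loss_prob) * d_enc + loss_prob * d_loss
        return e_dist + lam * rate
    return min(modes, key=cost)[0]

# hypothetical (name, rate in kbps, encode distortion, loss distortion)
modes = [("single-description", 100, 2.0, 40.0),
         ("md-light",           120, 3.0, 15.0),
         ("md-heavy",           160, 4.0,  6.0)]

print(select_mode(modes, loss_prob=0.01, lam=0.05))  # single-description
print(select_mode(modes, loss_prob=0.40, lam=0.05))  # md-heavy
```

At low loss rates the redundancy of MD modes is wasted rate, so the single-description mode wins; as the loss probability grows, the concealment term dominates and heavier redundancy pays off.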

  3. Mechanistic understanding of brain drug disposition to optimize the selection of potential neurotherapeutics in drug discovery.

    PubMed

    Loryan, Irena; Sinha, Vikash; Mackie, Claire; Van Peer, Achiel; Drinkenburg, Wilhelmus; Vermeulen, An; Morrison, Denise; Monshouwer, Mario; Heald, Donald; Hammarlund-Udenaes, Margareta

    2014-08-01

    The current project was undertaken with the aim of proposing and testing an in-depth integrative analysis of the neuropharmacokinetic (neuroPK) properties of new chemical entities (NCEs), thereby optimizing the routine evaluation and selection of novel neurotherapeutics. Forty compounds covering a wide range of physicochemical properties and various CNS targets were investigated. The combinatory mapping approach was used to assess the extent of transport across the blood-brain and cellular barriers via estimation of unbound-compound brain (Kp,uu,brain) and cell (Kp,uu,cell) partitioning coefficients. Intra-brain distribution was evaluated using the brain slice method. Intra- and sub-cellular distribution was estimated via calculation of unbound-drug cytosolic and lysosomal partitioning coefficients. Assessment of Kp,uu,brain revealed extensive variability in brain penetration properties across compounds, with a prevalence of compounds actively effluxed at the blood-brain barrier. Kp,uu,cell was valuable for identification of compounds with a tendency to accumulate intracellularly. Prediction of cytosolic and lysosomal partitioning provided insight into subcellular accumulation. Integration of the neuroPK parameters with pharmacodynamic readouts demonstrated the value of the proposed approach in the evaluation of target engagement and NCE selection. With the rather easily performed combinatory mapping approach, it was possible to provide quantitative information supporting decision making in the drug discovery setting.
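
The unbound partition coefficient Kp,uu,brain that flags active efflux can be computed from total concentrations and unbound fractions. This is one common approximation (the brain slice method used in the paper estimates the unbound brain concentration differently), and the values below are invented.

```python
def kp_uu(c_brain, fu_brain, c_plasma, fu_plasma):
    """Kp,uu,brain = unbound brain / unbound plasma concentration,
    approximated as (C_brain * fu_brain) / (C_plasma * fu_plasma).
    Values well below 1 suggest active efflux at the blood-brain
    barrier; values near 1 suggest passive equilibration."""
    return (c_brain * fu_brain) / (c_plasma * fu_plasma)

# invented steady-state totals and unbound fractions
k = kp_uu(c_brain=50.0, fu_brain=0.02, c_plasma=100.0, fu_plasma=0.05)
print(round(k, 2))  # 0.2 -> net efflux dominates
```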

  4. Nursing performance under high workload: a diary study on the moderating role of selection, optimization and compensation strategies.

    PubMed

    Baethge, Anja; Müller, Andreas; Rigotti, Thomas

    2016-03-01

    The aim of this study was to investigate whether selective optimization with compensation constitutes an individualized action strategy for nurses wanting to maintain job performance under high workload. High workload is a major threat to healthcare quality and performance. Selective optimization with compensation is considered to enhance the efficient use of intra-individual resources and, therefore, is expected to act as a buffer against the negative effects of high workload. The study applied a diary design. Over five consecutive workday shifts, self-report data on workload was collected at three randomized occasions during each shift. Self-reported job performance was assessed in the evening. Self-reported selective optimization with compensation was assessed prior to the diary reporting. Data were collected in 2010. Overall, 136 nurses from 10 German hospitals participated. Selective optimization with compensation was assessed with a nine-item scale that was specifically developed for nursing. The NASA-TLX scale indicating the pace of task accomplishment was used to measure workload. Job performance was assessed with one item each concerning performance quality and forgetting of intentions. There was a weaker negative association between workload and both indicators of job performance in nurses with a high level of selective optimization with compensation, compared with nurses with a low level. Considering the separate strategies, selection and compensation turned out to be effective. The use of selective optimization with compensation is conducive to nurses' job performance under high workload levels. This finding is in line with calls to empower nurses' individual decision-making. © 2015 John Wiley & Sons Ltd.

  5. Quantum optimal control of the isotope-selective rovibrational excitation of diatomic molecules

    NASA Astrophysics Data System (ADS)

    Kurosaki, Yuzuru; Yokoyama, Keiichi

    2017-08-01

    We carry out optimal control theory calculations for isotope-selective pure rotational and vibrational-rotational excitations of diatomic molecules. A fifty-fifty mixture of the diatomic isotopologues 7Li37Cl and 7Li35Cl is considered, and the molecules are irradiated with a control pulse. In the wave packet propagation we employ a method that is quantum mechanically rigorous for the two-dimensional system including both the radial and angular motions. We investigate quantum control of the isotope-selective pure rotational excitation for two total times, 1280000 and 2560000 a.u. (31.0 and 61.9 ps), and of the vibrational-rotational excitation for three total times, 640000, 1280000, and 2560000 a.u. (15.5, 31.0, and 61.9 ps). The initial state is set so that both isotopologues are in the ground vibrational and rotational levels, v = 0 and J = 0. The target state for pure rotational excitation is 7Li37Cl (v = 0, J = 1) and 7Li35Cl (v = 0, J = 0); that for vibrational-rotational excitation is 7Li37Cl (v = 1, J = 1) and 7Li35Cl (v = 0, J = 0). The obtained final yields are quite high, and those for the longest total time are calculated to be nearly 1.0. When the total time is 1280000 a.u., the final yields for the pure rotational excitation are slightly smaller than those for the vibrational-rotational excitation. This is because the isotope shift (the difference in transition energy between the two isotopologues) for the pure rotational transition between low-lying levels is much smaller than that for the vibrational-rotational transition. We thus theoretically succeed in controlling the isotope-selective excitations of diatomic molecules using a method that includes both radial and angular motions quantum mechanically.
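
The small rotational isotope shift identified above as the limiting factor can be estimated from reduced masses alone, since the rotational constant B scales as 1/mu. The rounded atomic masses below are approximate illustrative values.

```python
def reduced_mass(m1, m2):
    """Reduced mass mu = m1*m2/(m1+m2); the rotational constant B of a
    rigid rotor scales as 1/mu, so heavier isotopologues have smaller B
    and lower rotational line frequencies."""
    return m1 * m2 / (m1 + m2)

# approximate atomic masses in unified atomic mass units
m_li7, m_cl35, m_cl37 = 7.016, 34.969, 36.966
mu35 = reduced_mass(m_li7, m_cl35)   # 7Li35Cl
mu37 = reduced_mass(m_li7, m_cl37)   # 7Li37Cl

# fractional isotope shift of the J = 0 -> 1 rotational line (freq ~ 2B)
shift = 1.0 - mu35 / mu37
print(f"{100 * shift:.2f}% lower line frequency for 7Li37Cl")
```

A shift below one percent is what forces the long pulse durations for pure rotational selectivity; the vibrational-rotational transition has a larger relative shift and can be discriminated with shorter pulses.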

  6. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built
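
The statement that a grating period of several wavelengths produces multiple diffraction orders follows directly from the grating equation; this generic sketch simply counts the propagating orders.

```python
import math

def propagating_orders(period_um, wavelength_um, theta_inc_deg=0.0):
    """List the propagating diffraction orders m of a grating using the
    grating equation sin(theta_m) = sin(theta_inc) + m * lambda / d;
    an order propagates only while |sin(theta_m)| <= 1."""
    s0 = math.sin(math.radians(theta_inc_deg))
    return [m for m in range(-10, 11)
            if abs(s0 + m * wavelength_um / period_um) <= 1.0]

# a 3-um-period grating at 1.5 um supports five orders at normal incidence
print(propagating_orders(period_um=3.0, wavelength_um=1.5))  # [-2, -1, 0, 1, 2]
# a sub-wavelength period supports only the zeroth order, the regime in
# which a frequency selective surface acts as a pure filter
print(propagating_orders(period_um=0.5, wavelength_um=1.5))  # [0]
```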

  7. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  8. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths, producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exact modal matching, integral equation or finite element methods can be used for design. A major difference, though, is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built

  9. Process optimization for lattice-selective wet etching of crystalline silicon structures

    NASA Astrophysics Data System (ADS)

    Dixson, Ronald G.; Guthrie, William F.; Allen, Richard A.; Orji, Ndubuisi G.; Cresswell, Michael W.; Murabito, Christine E.

    2016-01-01

    Lattice-selective etching of silicon is used in a number of applications, but it is particularly valuable in those for which the lattice-defined sidewall angle can be beneficial to the functional goals. A relatively small but important niche application is the fabrication of tip characterization standards for critical dimension atomic force microscopes (CD-AFMs). CD-AFMs are commonly used as reference tools for linewidth metrology in semiconductor manufacturing. Accurate linewidth metrology using CD-AFM, however, is critically dependent upon calibration of the tip width. Two national metrology institutes and at least two commercial vendors have explored the development of tip calibration standards using lattice-selective etching of crystalline silicon. The National Institute of Standards and Technology standard of this type is called the single crystal critical dimension reference material. These specimens, which are fabricated using a lattice-plane-selective etch on (110) silicon, exhibit near vertical sidewalls and high uniformity and can be used to calibrate CD-AFM tip width to a standard uncertainty of less than 1 nm. During the different generations of this project, we evaluated variations of the starting material and process conditions. Some of our starting materials required a large etch bias to achieve the desired linewidths. During the optimization experiment described in this paper, we found that for potassium hydroxide etching of the silicon features, it was possible to independently tune the target linewidth and minimize the linewidth nonuniformity. Consequently, this process is particularly well suited for small-batch fabrication of CD-AFM linewidth standards.

  10. Optimization of Cu-Zn Massive Sulphide Flotation by Selective Reagents

    NASA Astrophysics Data System (ADS)

    Soltani, F.; Koleini, S. M. J.; Abdollahy, M.

    2014-10-01

    Selective flotation of base metal sulphide minerals can be achieved by using selective reagents. Sequential flotation of chalcopyrite-sphalerite from Taknar (Iran) massive sulphide ore with 3.5 % Zn and 1.26 % Cu was studied. A D-optimal design of response surface methodology was used. Four mixed collector types (Aero238 + SIPX, Aero3477 + SIPX, TC1000 + SIPX and X231 + SIPX), two depressant systems (CuCN-ZnSO4 and dextrin-ZnSO4), pH and ZnSO4 dosage were considered as operational factors in the first stage of flotation. Different conditions of pH, CuSO4 dosage and SIPX dosage were studied for sphalerite flotation from the first-stage tailings. Aero238 + SIPX induced better selectivity for chalcopyrite against pyrite and sphalerite. Dextrin-ZnSO4 was as effective as CuCN-ZnSO4 in sphalerite-pyrite depression. Under optimum conditions, Cu recovery, Zn recovery and pyrite content in the Cu concentrate were 88.99 %, 33.49 % and 1.34 %, respectively, using Aero238 + SIPX as the mixed collector and CuCN-ZnSO4 as the depressant system, at a ZnSO4 dosage of 200 g/t and pH 10.54. When CuCN was used in the first stage, CuSO4 consumption increased and Zn recovery decreased during the second stage. Maximum Zn recovery was 72.19 % using 343.66 g/t of CuSO4, 22.22 g/t of SIPX and pH 9.99 in the second stage.

  11. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm procedure to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method, called chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
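
The swarm search described above can be sketched as a minimal loop: chaotic (logistic-map) initialization of the swarm location, plus an osphresis phase that mixes global random moves with local mutation-style perturbations. This is a hedged illustration, not the authors' implementation: the toy quadratic objective stands in for the paper's SVM cross-validation fitness, and every constant below is an assumption.

```python
import random

def logistic_map(x, n):
    """Generate n chaotic values in (0, 1) using the logistic map with r = 4."""
    seq = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    return seq

def foa_minimize(objective, bounds, swarm=20, iters=100, seed=1):
    """Minimal chaotic fruit-fly-style search with a mutation step.

    bounds: list of (low, high) per dimension. The objective is a stand-in
    for a validation-error fitness; all parameter values are assumptions.
    """
    rng = random.Random(seed)
    # Chaotic initialization of the swarm location (one value per dimension).
    chaos = logistic_map(rng.random() * 0.9 + 0.05, len(bounds))
    loc = [lo + c * (hi - lo) for c, (lo, hi) in zip(chaos, bounds)]
    best, best_val = list(loc), objective(loc)
    for _ in range(iters):
        for _ in range(swarm):
            if rng.random() < 0.5:  # global move: explore the whole space
                cand = [rng.uniform(lo, hi) for lo, hi in bounds]
            else:                   # mutation: perturb around the swarm location
                cand = [min(hi, max(lo, x + rng.gauss(0.0, 0.1 * (hi - lo))))
                        for x, (lo, hi) in zip(loc, bounds)]
            val = objective(cand)
            if val < best_val:
                best, best_val = list(cand), val
        loc = best  # vision phase: the swarm flies to the best food source
    return best, best_val

# Toy quadratic standing in for SVM validation error over two hyperparameters.
sol, val = foa_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                        [(-5.0, 5.0), (-5.0, 5.0)])
```

In a real CIFOA-SVM setting the objective would wrap cross-validated SVM accuracy over (C, gamma) plus a feature mask; the loop structure stays the same.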

  12. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm procedure to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method, called chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096

  13. A novel low-noise linear-in-dB intermediate frequency variable-gain amplifier for DRM/DAB tuners

    NASA Astrophysics Data System (ADS)

    Keping, Wang; Zhigong, Wang; Jianzheng, Zhou; Xuemei, Lei; Mingzhu, Zhou

    2009-03-01

    A broadband CMOS intermediate frequency (IF) variable-gain amplifier (VGA) for DRM/DAB tuners is presented. The VGA comprises two cascaded stages: one for noise canceling and the other for signal summing. The chip is fabricated in a standard 0.18 μm 1P6M RF CMOS process of SMIC. Measured results show a good linear-in-dB gain characteristic over a 28 dB dynamic gain range of -10 to 18 dB. The amplifier operates in the frequency range of 30-700 MHz and consumes 27 mW at a 1.8 V supply with the on-chip test buffer. The minimum noise figure is only 3.1 dB at maximum gain, and the input-referred 1 dB gain compression point at the minimum gain is -3.9 dBm.

  14. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    NASA Astrophysics Data System (ADS)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, so further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments were implemented to improve the understanding of the limitations and advantages of SLS at an educational level. The experiments are based on two different workstations used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material under varied process parameters on a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces as a function of various process parameters. Finally, quality attributes are tested, including warpage, dimensional accuracy and tensile strength. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile test machine allows testing of tensile strength and interpretation of stress-strain curves. The developed laboratory experiments are suitable for teaching students the influence of processing parameters. In this context, they will be able to optimize the input parameters depending on the component to be manufactured and to increase the overall quality of the final workpiece.

  15. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii

  16. Neural network cascade optimizes microRNA biomarker selection for nasopharyngeal cancer prognosis.

    PubMed

    Zhu, Wenliang; Kan, Xuan

    2014-01-01

    MicroRNAs (miRNAs) have been shown to be promising biomarkers in predicting cancer prognosis. However, inappropriate or poorly optimized processing and modeling of miRNA expression data can negatively affect prediction performance. Here, we propose a holistic solution for miRNA biomarker selection and prediction model building. This work introduces the use of a neural network cascade, a cascaded constitution of small artificial neural network units, for evaluating miRNA expression and patient outcome. A miRNA microarray dataset of nasopharyngeal carcinoma was retrieved from Gene Expression Omnibus to illustrate the methodology. Results indicated a nonlinear relationship between miRNA expression and patient death risk, implying that direct comparison of expression values is inappropriate. However, this method performs transformation of miRNA expression values into a miRNA score, which linearly measures death risk. Spearman correlation was calculated between miRNA scores and survival status for each miRNA. Finally, a nine-miRNA signature was optimized to predict death risk after nasopharyngeal carcinoma by establishing a neural network cascade consisting of 13 artificial neural network units. Area under the ROC was 0.951 for the internal validation set and had a prediction accuracy of 83% for the external validation set. In particular, the established neural network cascade was found to have strong immunity against noise interference that disturbs miRNA expression values. This study provides an efficient and easy-to-use method that aims to maximize clinical application of miRNAs in prognostic risk assessment of patients with cancer.
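
One concrete step in the pipeline described above is computing the Spearman correlation between each miRNA's score and patient survival status. A minimal stdlib sketch of that screening step follows; the score and status values are hypothetical, not the paper's data.

```python
def ranks(values):
    """1-based average ranks; ties share the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values starting at position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(x, y):
    """Spearman rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical miRNA scores against binary survival status (1 = death).
scores = [0.9, 0.2, 0.7, 0.1, 0.8, 0.3]
status = [1, 0, 1, 0, 1, 0]
rho = spearman(scores, status)
```

Ranking miRNAs by |rho| is one plausible way such a nine-miRNA signature could be shortlisted before the cascade is trained.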

  17. Optimization of Culture Parameters for Maximum Polyhydroxybutyrate Production by Selected Bacterial Strains Isolated from Rhizospheric Soils.

    PubMed

    Lathwal, Priyanka; Nehra, Kiran; Singh, Manpreet; Jamdagni, Pragati; Rana, Jogender S

    2015-01-01

    The enormous applications of conventional non-biodegradable plastics have led towards their increased usage and accumulation in the environment. This has become one of the major causes of global environmental concern in the present century. Polyhydroxybutyrate (PHB), a biodegradable plastic, is known to have properties similar to conventional plastics, thus exhibiting a potential for replacing conventional non-degradable plastics. In the present study, a total of 303 different bacterial isolates were obtained from soil samples collected from the rhizospheric area of three crops, viz., wheat, mustard and sugarcane. All the isolates were screened for PHB production using the Sudan Black staining method, and 194 isolates were found to be PHB positive. Based upon the amount of PHB produced, the isolates were divided into three categories: high, medium and low producers. Representative isolates from each category were selected for biochemical characterization and for optimization of various culture parameters (carbon source, nitrogen source, C/N ratio, pH, temperature and incubation time) to maximize PHB accumulation. The highest PHB yield was obtained when the culture medium was supplemented with glucose as the carbon source and ammonium sulphate at a concentration of 1.0 g/l as the nitrogen source, and by maintaining the C/N ratio of the medium at 20:1. The physical growth parameters which supported maximum PHB accumulation included a pH of 7.0 and an incubation temperature of 30 °C for a period of 48 h. A few isolates exhibited high PHB accumulation under optimized conditions, thus showing a potential for their industrial exploitation.

  18. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select

    PubMed Central

    Seo, Do Chan; Ladoni, Moslem; Brunk, Eric; Becker, Mark W.

    2016-01-01

    Purpose Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. Objective Measure the effect of graphic elements (boxing information, grouping information, symbol use and color coding) to optimize a label for comparison with those typical of commercial medical devices. Design Participants viewed 54 trials on a computer screen. Each trial comprised two labels that were identical with regard to graphics but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select a label according to a given criterion (e.g., latex containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Participants Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences and via a targeted e-mail to AST members. Results Symbol presence, color coding and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit regarding the probability of correct choice (P<0.0001; LSM: 97.3%; UCL: 98.4%; LCL: 95.5%), as compared to the two labels we created based on commercial designs (92.0%; 94.7%, 87.9% and 89.8%; 93.0%, 85.3%) and time to selection. Conclusions Our study provides data regarding design factors, namely: color coding, symbol use and grouping of critical information that can be used to significantly enhance

  19. Evolutionary game theoretic strategy for optimal drug delivery to influence selection pressure in treatment of HIV-1.

    PubMed

    Wu, Yu; Zhang, Mingjun; Wu, Jing; Zhao, Xiaopeng; Xia, Lijin

    2012-02-01

    Cytotoxic T-lymphocyte (CTL) escape mutation is associated with long-term behaviors of human immunodeficiency virus type 1 (HIV-1). Recent studies indicate heterogeneous behaviors of reversible and conservative mutants while the selection pressure changes. The purpose of this study is to optimize the selection pressure to minimize the long-term virus load. The results can be used to assist in delivery of highly loaded cognate peptide-pulsed dendritic cells (DC) into lymph nodes that could change the selection pressure. This mechanism may be employed for controlled drug delivery. A mathematical model is proposed in this paper to describe the evolutionary dynamics involving viruses and T cells. We formulate the optimization problem into the framework of evolutionary game theory, and solve for the optimal control of the selection pressure as a neighborhood invader strategy. The strategy dynamics can be obtained to evolve the immune system to the best controlled state. The study may shed light on optimal design of HIV-1 therapy based on optimization of adaptive CTL immune response.
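
The interplay between selection pressure and escape mutants described above can be illustrated with textbook replicator dynamics. This is a generic sketch, not the paper's model: the two fitness functions and the pressure values are invented for illustration only.

```python
def replicator_step(freqs, fitness, dt=0.01):
    """One Euler step of replicator dynamics: dx_i/dt = x_i * (f_i - mean f)."""
    avg = sum(x * f for x, f in zip(freqs, fitness))
    new = [x + dt * x * (f - avg) for x, f in zip(freqs, fitness)]
    total = sum(new)
    return [x / total for x in new]  # renormalize to keep frequencies summing to 1

def evolve(freqs, pressure, steps=2000):
    """Evolve wild type vs. escape mutant under a fixed CTL selection pressure.

    Hypothetical fitness functions: the wild type loses fitness quickly with
    pressure, while the escape mutant pays a replication cost but resists it.
    """
    for _ in range(steps):
        fitness = [1.0 - pressure, 0.8 - 0.2 * pressure]
        freqs = replicator_step(freqs, fitness)
    return freqs

low = evolve([0.9, 0.1], pressure=0.1)   # weak pressure: wild type persists
high = evolve([0.9, 0.1], pressure=0.8)  # strong pressure: escape mutant takes over
```

Optimizing the long-term virus load then amounts to choosing the pressure whose evolutionary endpoint is least harmful, which is the kind of question the paper poses in game-theoretic terms.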

  20. Feature selection for linear SVMs under uncertain data: robust optimization based on difference of convex functions algorithms.

    PubMed

    Le Thi, Hoai An; Vo, Xuan Thanh; Pham Dinh, Tao

    2014-11-01

    In this paper, we consider the problem of feature selection for linear SVMs on uncertain data, a condition that is inherently prevalent in almost all datasets. Using principles of Robust Optimization, we propose robust schemes to handle data under an ellipsoidal model and a box model of uncertainty. The difficulty of treating the ℓ0-norm in the feature selection problem is overcome by using appropriate approximations together with Difference of Convex functions (DC) programming and DC Algorithms (DCA). The computational results show that the proposed robust optimization approaches are superior to a traditional approach in immunizing against perturbation of the data.
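
A common way to make the ℓ0-norm tractable, as alluded to above, is to replace it with a smooth concave surrogate that admits a DC decomposition. The sketch below uses the exponential surrogate Σ(1 − exp(−α|w_i|)); whether this is the paper's exact choice is not stated here, so treat it as illustrative.

```python
import math

def l0(w, tol=1e-12):
    """Exact zero-norm: count of nonzero components."""
    return sum(1 for x in w if abs(x) > tol)

def l0_surrogate(w, alpha):
    """Smooth concave surrogate sum_i (1 - exp(-alpha * |w_i|)).

    One standard DC-decomposable approximation of the zero-norm; as alpha
    grows, the surrogate tends to the exact count of nonzeros.
    """
    return sum(1.0 - math.exp(-alpha * abs(x)) for x in w)

w = [0.0, 1.5, -0.3, 0.0, 2.0]
loose = l0_surrogate(w, alpha=1.0)    # coarse approximation
tight = l0_surrogate(w, alpha=50.0)   # close to the exact count l0(w) = 3
```

DCA then alternates between linearizing the concave part and solving the resulting convex subproblem, which is what makes this surrogate attractive for SVM feature selection.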

  1. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  2. Protein purification using chromatography: selection of type, modelling and optimization of operating conditions.

    PubMed

    Asenjo, J A; Andrews, B A

    2009-01-01

    To achieve a high level of purity in the purification of recombinant proteins for therapeutic or analytical application, it is necessary to use several chromatographic steps. There is a range of techniques available including anion and cation exchange, which can be carried out at different pHs, hydrophobic interaction chromatography, gel filtration and affinity chromatography. In the case of a complex mixture of partially unknown proteins or a clarified cell extract, there are many different routes one can take in order to choose the minimum and most efficient number of purification steps to achieve a desired level of purity (e.g. 98%, 99.5% or 99.9%). This review shows how an initial 'proteomic' characterization of the complex mixture of target protein and protein contaminants can be used to select the most efficient chromatographic separation steps in order to achieve a specific level of purity with a minimum number of steps. The chosen methodology was implemented in a computer-based Expert System. Two algorithms were developed: the first was used to select the most efficient purification method to separate a protein from its contaminants, based on the physicochemical properties of the protein product and the protein contaminants; the second was used to predict the number and concentration of contaminants after each separation, as well as protein product purity. The application of the Expert System approach was experimentally tested and validated with a mixture of four proteins, and the experimental validation was also carried out with a supernatant of Bacillus subtilis producing a recombinant beta-1,3-glucanase. Once the type of chromatography is chosen, optimization of the operating conditions is essential. Chromatographic elution curves for a three-protein mixture (alpha-lactoalbumin, ovalbumin and beta-lactoglobulin), carried out under different flow rates and ionic strength conditions, were simulated using two different mathematical

  3. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow
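
The training-data strategy reported above (proportional allocation of 20,000 pixels with a 600-pixel floor and an 8,000-pixel cap per class) can be sketched directly. The class counts below are hypothetical, standing in for per-class pixel availability over a Landsat-scene-sized area.

```python
def allocate_training(class_counts, total=20000, floor=600, cap=8000):
    """Allocate training pixels proportionally to class occurrence, then clamp
    each class to [floor, cap]; a class cannot exceed its available pixels."""
    grand = sum(class_counts.values())
    alloc = {}
    for name, count in class_counts.items():
        n = round(total * count / grand)   # proportional share of the budget
        n = max(floor, min(cap, n))        # clamp to [600, 8000] per class
        alloc[name] = min(n, count)        # never request more than exists
    return alloc

# Hypothetical per-class pixel availability for one Landsat-scene-sized area.
counts = {"forest": 500000, "cropland": 300000, "urban": 40000,
          "water": 30000, "wetland": 5000}
plan = allocate_training(counts)
```

The floor keeps rare classes (here, wetland) represented, while the cap stops dominant classes (forest) from swamping the Random Forest training set, which is the imbalance problem the abstract describes.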

  4. The relationship between PMI (manA) gene expression and optimal selection pressure in Indica rice transformation.

    PubMed

    Gui, Huaping; Li, Xia; Liu, Yubo; Han, Kai; Li, Xianggan

    2014-07-01

    An efficient mannose selection system was established for transformation of the Indica cultivar IR58025B. Different selection pressures were required to achieve optimum transformation frequency for different PMI selectable marker cassettes. This study was conducted to establish an efficient transformation system for Indica rice, cultivar IR58025B. Four combinations of two promoters, rice Actin 1 and maize Ubiquitin 1, and two manA genes, a native gene from E. coli (PMI-01) and a synthetic maize codon-optimized gene (PMI-09), were compared under various concentrations of mannose. Different selection pressures were required for different gene cassettes to achieve the corresponding optimum transformation frequency (TF). TFs as high as 54 and 53% were obtained when 5 g/L mannose was used for selection of the prActin-PMI-01 cassette and 7.5 g/L mannose for selection of prActin-PMI-09, respectively. TFs of 67 and 56% were obtained when 7.5 and 15 g/L mannose were used for selection of prUbi-PMI-01 and prUbi-PMI-09, respectively. We conclude that higher TFs can be achieved for different gene cassettes when an optimum selection pressure is applied. By investigating the PMI expression level in transgenic calli and leaves, we found a significant positive correlation between the protein expression level and the optimal selection pressure: higher optimal selection pressure is required for constructs that confer higher expression of PMI protein. The single copy rate of transgenic events for the prActin-PMI-01 cassette was lower than that for the other three cassettes. We speculate that some low copy events with low protein expression levels might not have been able to survive mannose selection.

  5. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their default values. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, optimization of parameters was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon at each of eleven years. In an alternative run, errors at the last year (at the
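
The stated objective function (the sum of relative errors between simulated and measured carbon pools across years) is straightforward to express. The series below are hypothetical, shortened to three years and two pools purely for illustration.

```python
def sum_relative_errors(simulated, measured):
    """Calibration objective: sum over carbon pools and years of
    |simulated - observed| / |observed|."""
    total = 0.0
    for pool, obs_series in measured.items():
        for sim, obs in zip(simulated[pool], obs_series):
            total += abs(sim - obs) / abs(obs)
    return total

# Hypothetical 3-year carbon series (kgC/m^2) for two pools.
measured = {"leaf": [0.10, 0.12, 0.13], "wood": [1.0, 1.4, 1.9]}
simulated = {"leaf": [0.09, 0.12, 0.14], "wood": [1.1, 1.3, 2.0]}
err = sum_relative_errors(simulated, measured)
```

An optimizer such as Dakota would repeatedly run the ecosystem model with candidate parameter vectors and minimize this scalar; using relative rather than absolute errors keeps the small leaf pool from being drowned out by the large woody pools.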

  6. A data driven model for optimal orthosis selection in children with cerebral palsy.

    PubMed

    Ries, Andrew J; Novacheck, Tom F; Schwartz, Michael H

    2014-09-01

    A statistical orthosis selection model was developed using the Random Forest Algorithm (RFA), and its performance and potential clinical benefit were evaluated. The model predicts which of five orthosis designs - solid (SAFO), posterior leaf spring (PLS), hinged (HAFO), supra-malleolar (SMO), or foot orthosis (FO) - will provide the best gait outcome for individuals with diplegic cerebral palsy (CP). Gait outcome was defined as the change in Gait Deviation Index (GDI) between walking while wearing an orthosis and walking barefoot (ΔGDI = GDI_orthosis - GDI_barefoot). Model development was carried out using retrospective data from 476 individuals who wore one of the five orthosis designs bilaterally. Clinical benefit was estimated by predicting the optimal orthosis and ΔGDI for 1016 individuals (age: 12.6 (6.7) years), 540 of whom did not have an existing orthosis prescription. Among limbs with an orthosis, the model agreed with the prescription only 14% of the time. For 56% of limbs without an orthosis, the model agreed that no orthosis was expected to provide benefit. Under the current standard of care (i.e., existing orthosis prescriptions), ΔGDI is only +0.4 points on average. Using the orthosis prediction model, the average ΔGDI for orthosis users was estimated to improve to +5.6 points. These results suggest that an orthosis selection model derived from the RFA can significantly improve outcomes from orthosis use in the diplegic CP population. Further validation of the model using data from other centers and a prospective study is warranted.
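
    The selection rule implied by this record can be sketched as follows: predict ΔGDI for each limb under each of the five designs and recommend the design with the largest predicted gain (or no orthosis when none is expected to help). The `predict_dgdi` callable below is a hypothetical stand-in for the trained Random Forest model, not the authors' code.

```python
# Hypothetical sketch of the orthosis selection rule described above.
# predict_dgdi(features, design) stands in for the trained Random Forest
# model; the feature representation is left unspecified here.
DESIGNS = ["SAFO", "PLS", "HAFO", "SMO", "FO"]

def recommend(predict_dgdi, features):
    """Return (recommended design or "none", predicted dGDI per design)."""
    scores = {d: predict_dgdi(features, d) for d in DESIGNS}
    best = max(scores, key=scores.get)
    # Recommend no orthosis when no design is predicted to beat barefoot gait.
    return (best if scores[best] > 0 else "none"), scores
```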

  7. Analysis of boutique arrays: a universal method for the selection of the optimal data normalization procedure.

    PubMed

    Uszczyńska, Barbara; Zyprych-Walczak, Joanna; Handschuh, Luiza; Szabelska, Alicja; Kaźmierczak, Maciej; Woronowicz, Wiesława; Kozłowski, Piotr; Sikorski, Michał M; Komarnicki, Mieczysław; Siatkowski, Idzi; Figlerowicz, Marek

    2013-09-01

    DNA microarrays, which are among the most popular genomic tools, are widely applied in biology and medicine. Boutique arrays, which are small, spotted, dedicated microarrays, constitute an inexpensive alternative to whole-genome screening methods. The data extracted from each microarray-based experiment must be transformed and processed prior to further analysis to eliminate any technical bias. The normalization of the data is the most crucial step of microarray data pre-processing and this process must be carefully considered as it has a profound effect on the results of the analysis. Several normalization algorithms have been developed and implemented in data analysis software packages. However, most of these methods were designed for whole-genome analysis. In this study, we tested 13 normalization strategies (ten for double-channel data and three for single-channel data) available on R Bioconductor and compared their effectiveness in the normalization of four boutique array datasets. The results revealed that boutique arrays can be successfully normalized using standard methods, but not every method is suitable for each dataset. We also suggest a universal seven-step workflow that can be applied for the selection of the optimal normalization procedure for any boutique array dataset. The described workflow enables the evaluation of the investigated normalization methods based on the bias and variance values for the control probes, a differential expression analysis and a receiver operating characteristic curve analysis. The analysis of each component results in a separate ranking of the normalization methods. A combination of the ranks obtained from all the normalization procedures facilitates the selection of the most appropriate normalization method for the studied dataset and determines which methods can be used interchangeably.
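
    The final step of the workflow described here combines the per-component rankings of the normalization methods into one choice. The paper's exact combination rule is not reproduced in this record; the sketch below uses mean rank as one plausible aggregation.

```python
# Hypothetical sketch of the rank-combination step described above: each
# evaluation component (control-probe bias, control-probe variance,
# differential-expression analysis, ROC analysis) yields its own ordering
# of the normalization methods, and the orderings are combined by mean rank.
# Mean rank is an assumption; the record does not specify the rule.
def combine_rankings(rankings):
    """rankings: list of lists, each ordering method names best-to-worst.
    Returns the methods sorted by average rank across all components."""
    methods = rankings[0]
    avg = {m: sum(r.index(m) for r in rankings) / len(rankings) for m in methods}
    return sorted(methods, key=avg.get)
```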

  8. Selecting Optimal Peptides for Targeted Proteomic Experiments in Human Plasma Using in vitro Synthesized Proteins as Analytical Standards

    PubMed Central

    Bollinger, James G.; Stergachis, Andrew B.; Johnson, Richard S.; Egertson, Jarrett D.; MacCoss, Michael J.

    2017-01-01

    In targeted proteomics, the development of robust methodologies is dependent upon the selection of a set of optimal peptides for each protein-of-interest. Unfortunately, predicting which peptides and respective product ion transitions provide the greatest signal-to-noise ratio in a particular assay matrix is complicated. Using in vitro synthesized proteins as analytical standards, we report here an empirically driven method for the selection of said peptides in a human plasma assay matrix. PMID:26867746

  9. Selecting Optimal Peptides for Targeted Proteomic Experiments in Human Plasma Using In Vitro Synthesized Proteins as Analytical Standards.

    PubMed

    Bollinger, James G; Stergachis, Andrew B; Johnson, Richard S; Egertson, Jarrett D; MacCoss, Michael J

    2016-01-01

    In targeted proteomics, the development of robust methodologies is dependent upon the selection of a set of optimal peptides for each protein-of-interest. Unfortunately, predicting which peptides and respective product ion transitions provide the greatest signal-to-noise ratio in a particular assay matrix is complicated. Using in vitro synthesized proteins as analytical standards, we report here an empirically driven method for the selection of said peptides in a human plasma assay matrix.

  10. Personalizing colon cancer adjuvant therapy: selecting optimal treatments for individual patients.

    PubMed

    Dienstmann, Rodrigo; Salazar, Ramon; Tabernero, Josep

    2015-06-01

    For more than three decades, postoperative chemotherapy (initially fluoropyrimidines and, more recently, combinations with oxaliplatin) has reduced the risk of tumor recurrence and improved survival for patients with resected colon cancer. Although universally recommended for patients with stage III disease, there is no consensus about the survival benefit of postoperative chemotherapy in stage II colon cancer. The most recent adjuvant clinical trials have not shown any value for adding targeted agents, namely bevacizumab and cetuximab, to standard chemotherapies in stage III disease, despite improved outcomes in the metastatic setting. However, biomarker analyses of multiple studies strongly support the feasibility of refining risk stratification in colon cancer by factoring in molecular characteristics with pathologic tumor staging. In stage II disease, for example, microsatellite instability supports observation after surgery. Furthermore, the value of BRAF or KRAS mutations as additional risk factors in stage III disease is greater when microsatellite status and tumor location are taken into account. Validated predictive markers of adjuvant chemotherapy benefit for stage II or III colon cancer are lacking, but intensive research is ongoing. Recent advances in understanding the biologic hallmarks and drivers of early-stage disease as well as the micrometastatic environment are expected to translate into therapeutic strategies tailored to select patients. This review focuses on the pathologic, molecular, and gene expression characterizations of early-stage colon cancer; new insights into prognostication; and emerging predictive biomarkers that could ultimately help define the optimal adjuvant treatments for patients in routine clinical practice.

  11. EEG channel selection using particle swarm optimization for the classification of auditory event-related potentials.

    PubMed

    Gonzalez, Alejandro; Nambu, Isao; Hokari, Haruhide; Wada, Yasuhiro

    2014-01-01

    Brain-machine interfaces (BMI) rely on the accurate classification of event-related potentials (ERPs) and their performance greatly depends on the appropriate selection of classifier parameters and features from dense-array electroencephalography (EEG) signals. Moreover, in order to achieve a portable and more compact BMI for practical applications, it is also desirable to use a system capable of accurate classification using information from as few EEG channels as possible. In the present work, we propose a method for classifying P300 ERPs using a combination of Fisher Discriminant Analysis (FDA) and a multiobjective hybrid real-binary Particle Swarm Optimization (MHPSO) algorithm. Specifically, the algorithm searches for the set of EEG channels and classifier parameters that simultaneously maximize the classification accuracy and minimize the number of used channels. The performance of the method is assessed through offline analyses on datasets of auditory ERPs from sound discrimination experiments. The proposed method achieved a higher classification accuracy than that achieved by traditional methods while also using fewer channels. It was also found that the number of channels used for classification can be significantly reduced without greatly compromising the classification accuracy.
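
    The two competing objectives driving the hybrid PSO in this record can be sketched as below: each particle carries a binary channel mask (plus real-valued classifier parameters, omitted here), and is scored on classification accuracy (to maximize) and channel count (to minimize). The `accuracy_fn` callable is a hypothetical stand-in for the FDA classifier evaluation.

```python
# Hypothetical sketch of the two objectives described above. accuracy_fn
# stands in for estimating FDA classification accuracy on the channel
# subset; the real-valued classifier parameters are omitted for brevity.
def objectives(mask, accuracy_fn):
    """mask: list of 0/1 flags over EEG channels.
    Returns (accuracy to maximize, channel count to minimize)."""
    n_channels = sum(mask)
    acc = accuracy_fn(mask) if n_channels > 0 else 0.0
    return acc, n_channels

def dominates(a, b):
    """Pareto dominance for (accuracy, channel-count) objective pairs."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b
```

    A multiobjective PSO keeps the non-dominated particles under `dominates` as its archive of candidate channel subsets.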

  12. A Geographic Analysis of Optimal Signage Location Selection in a Scenic Area

    NASA Astrophysics Data System (ADS)

    Ruan, Ling; Long, Ying; Zhang, Ling; Wu, Xiao Ling

    2016-06-01

    As an important part of scenic-area infrastructure services, the signage guiding system plays an indispensable role in wayfinding and in improving the quality of the tourism experience. This paper proposes an optimal method for signage location selection and direction content design in a scenic area based on geographic analysis. The objective of the research is to provide the best arrangement of a limited number of guiding boards in a tourism area so that they indicate routes from any entrance to any scenic spot. There are four steps to achieving this objective. First, the spatial distribution of the junctions of the scenic roads, the passageways, and the scenic spots is analyzed. Then, the number of times each scenic-road intersection appears on the shortest paths between all entrances and all scenic spots is calculated. Next, combining this with the grades of the scenic roads and scenic spots, the importance of each road intersection is estimated quantitatively. Finally, according to the importance of all road intersections, the most suitable locations for signage guiding boards can be provided. In addition, the method is applied to the Ming Tomb scenic area in China and the result is compared with the existing signage guiding layout.
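
    The counting step described here can be sketched as below: for every (entrance, scenic-spot) pair, find a shortest path through the road network and tally how often each intersection lies on such a path. The sketch assumes an unweighted adjacency-list graph; the paper's road-grade and spot-grade weighting would then scale these tallies.

```python
from collections import deque

# Hypothetical sketch of the shortest-path counting step described above.
# The graph is an unweighted adjacency list {node: [neighbors]}; weighted
# roads and grade factors from the paper are not modeled here.
def shortest_path(graph, start, goal):
    """Breadth-first shortest path from start to goal, or None."""
    prev, queue = {start: None}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

def intersection_counts(graph, entrances, spots):
    """Tally how often each intermediate node lies on an entrance-to-spot path."""
    counts = {}
    for e in entrances:
        for s in spots:
            for node in (shortest_path(graph, e, s) or [])[1:-1]:
                counts[node] = counts.get(node, 0) + 1
    return counts
```

    Intersections with the highest counts would be the prime candidates for guiding boards.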

  13. Wind selection and drift compensation optimize migratory pathways in a high-flying moth.

    PubMed

    Chapman, Jason W; Reynolds, Don R; Mouritsen, Henrik; Hill, Jane K; Riley, Joe R; Sivell, Duncan; Smith, Alan D; Woiwod, Ian P

    2008-04-08

    Numerous insect species undertake regular seasonal migrations in order to exploit temporary breeding habitats [1]. These migrations are often achieved by high-altitude windborne movement at night [2-6], facilitating rapid long-distance transport, but seemingly at the cost of frequent displacement in highly disadvantageous directions (the so-called "pied piper" phenomenon [7]). This has led to uncertainty about the mechanisms migrant insects use to control their migratory directions [8, 9]. Here we show that, far from being at the mercy of the wind, nocturnal moths have unexpectedly complex behavioral mechanisms that guide their migratory flight paths in seasonally favorable directions. Using entomological radar, we demonstrate that free-flying individuals of the migratory noctuid moth Autographa gamma actively select fast, high-altitude airstreams moving in a direction that is highly beneficial for their autumn migration. They also exhibit common orientation close to the downwind direction, thus maximizing the rectilinear distance traveled. Most unexpectedly, we find that when winds are not closely aligned with the moth's preferred heading (toward the SSW), they compensate for cross-wind drift, thus increasing the probability of reaching their overwintering range. We conclude that nocturnally migrating moths use a compass and an inherited preferred direction to optimize their migratory track.

  14. Optimal selection of piezoelectric substrates and crystal cuts for SAW-based pressure and temperature sensors.

    PubMed

    Zhang, Xiangwen; Wang, Fei-Yue; Li, Li

    2007-06-01

    In this paper, the perturbation method is used to study the velocity shift of surface acoustic waves (SAW) caused by surface pressure and temperature variations of piezoelectric substrates. Effects of pressures and temperatures on elastic, piezoelectric, and dielectric constants of piezoelectric substrates are fully considered as well as the initial stresses and boundary conditions. First, frequency pressure/temperature coefficients are introduced to reflect the relationship between the SAW resonant frequency and the pressure/temperature of the piezoelectric substrates. Second, delay pressure/temperature coefficients are introduced to reflect the relationship among the SAW delay time/phase and SAW delay line-based sensors' pressure and temperature. An objective function for performance evaluation of piezoelectric substrates is then defined in terms of their effective SAW coupling coefficients, power flow angles (PFA), acoustic propagation losses, and pressure and temperature coefficients. Finally, optimal selections of piezo-electric substrates and crystal cuts for SAW-based pressure, temperature, and pressure/temperature sensors are derived by calculating the corresponding objective function values among the range of X-cut, Y-cut, Z-cut, and rotated Y-cut quartz, lithium niobate, and lithium tantalate crystals in different propagation directions.

  15. EEG Channel Selection Using Particle Swarm Optimization for the Classification of Auditory Event-Related Potentials

    PubMed Central

    Hokari, Haruhide

    2014-01-01

    Brain-machine interfaces (BMI) rely on the accurate classification of event-related potentials (ERPs) and their performance greatly depends on the appropriate selection of classifier parameters and features from dense-array electroencephalography (EEG) signals. Moreover, in order to achieve a portable and more compact BMI for practical applications, it is also desirable to use a system capable of accurate classification using information from as few EEG channels as possible. In the present work, we propose a method for classifying P300 ERPs using a combination of Fisher Discriminant Analysis (FDA) and a multiobjective hybrid real-binary Particle Swarm Optimization (MHPSO) algorithm. Specifically, the algorithm searches for the set of EEG channels and classifier parameters that simultaneously maximize the classification accuracy and minimize the number of used channels. The performance of the method is assessed through offline analyses on datasets of auditory ERPs from sound discrimination experiments. The proposed method achieved a higher classification accuracy than that achieved by traditional methods while also using fewer channels. It was also found that the number of channels used for classification can be significantly reduced without greatly compromising the classification accuracy. PMID:24982944

  16. Selection of plants for optimization of vegetative filter strips treating runoff from turfgrass.

    PubMed

    Smith, Katy E; Putnam, Raymond A; Phaneuf, Clifford; Lanza, Guy R; Dhankher, Om P; Clark, John M

    2008-01-01

    Runoff from turf environments, such as golf courses, is of increasing concern due to the associated chemical contamination of lakes, reservoirs, rivers, and ground water. Pesticide runoff due to fungicides, herbicides, and insecticides used to maintain golf courses in acceptable playing condition is a particular concern. One possible approach to mitigate such contamination is through the implementation of effective vegetative filter strips (VFS) on golf courses and other recreational turf environments. The objective of the current study was to screen ten aesthetically acceptable plant species for their ability to remove four commonly-used and degradable pesticides: chlorpyrifos (CP), chlorothalonil (CT), pendimethalin (PE), and propiconazole (PR) from soil in a greenhouse setting, thus providing invaluable information as to the species composition that would be most efficacious for use in VFS surrounding turf environments. Our results revealed that blue flag iris (Iris versicolor) (76% CP, 94% CT, 48% PE, and 33% PR were lost from soil after 3 mo of plant growth), eastern gama grass (Tripsacum dactyloides) (47% CP, 95% CT, 17% PE, and 22% PR were lost from soil after 3 mo of plant growth), and big blue stem (Andropogon gerardii) (52% CP, 91% CT, 19% PE, and 30% PR were lost from soil after 3 mo of plant growth) were excellent candidates for the optimization of VFS as buffer zones abutting turf environments. Blue flag iris was most effective at removing selected pesticides from soil and had the highest aesthetic value of the plants tested.

  17. Toward optimized light utilization in nanowire arrays using scalable nanosphere lithography and selected area growth.

    PubMed

    Madaria, Anuj R; Yao, Maoqing; Chi, Chunyung; Huang, Ningfeng; Lin, Chenxi; Li, Ruijuan; Povinelli, Michelle L; Dapkus, P Daniel; Zhou, Chongwu

    2012-06-13

    Vertically aligned, catalyst-free semiconducting nanowires hold great potential for photovoltaic applications, in which achieving scalable synthesis and optimized optical absorption simultaneously is critical. Here, we report combining nanosphere lithography (NSL) and selected area metal-organic chemical vapor deposition (SA-MOCVD) for the first time for scalable synthesis of vertically aligned gallium arsenide nanowire arrays, and surprisingly, we show that such nanowire arrays with patterning defects due to NSL can be as good as highly ordered nanowire arrays in terms of optical absorption and reflection. Wafer-scale patterning for nanowire synthesis was done using a polystyrene nanosphere template as a mask. Nanowires grown from substrates patterned by NSL show similar structural features to those patterned using electron beam lithography (EBL). Reflection of photons from the NSL-patterned nanowire array was used as a measure of the effect of defects present in the structure. Experimentally, we show that GaAs nanowires as short as 130 nm show reflection of <10% over the visible range of the solar spectrum. Our results indicate that a highly ordered nanowire structure is not necessary: despite the "defects" present in NSL-patterned nanowire arrays, their optical performance is similar to "defect-free" structures patterned by more costly, time-consuming EBL methods. Our scalable approach for synthesis of vertical semiconducting nanowires can have application in high-throughput and low-cost optoelectronic devices, including solar cells.

  18. [System parameters selection and optimization of tunable diode laser absorption spectroscopy].

    PubMed

    Gao, Nan; Du, Zhen-Hui; Tang, Mia; Yang, Jie-Wen; Yang, Chun-Mei; Wang, Yan

    2010-12-01

    The performance of a tunable diode laser absorption spectroscopy (TDLAS) system is affected by modulation parameters such as the modulation index, modulation frequency, scanning amplitude, and scanning frequency, yet in practical measurements there is no definite basis for selecting them. To address this problem, the influence of the modulation parameters on the second-harmonic signal was observed experimentally, guided by theory, and the basis and method for optimizing the modulation parameters for various system functions and demands were summarized by analyzing signal characteristics including amplitude, signal-to-noise ratio, symmetry, and peak width. For concentration or temperature detection, the amplitude and signal-to-noise ratio take priority, which requires an optimal modulation index together with lower modulation and scanning frequencies. For pressure detection deduced from the lineshape, signal symmetry and peak width are more important when setting the modulation parameters according to practical demands. The scanning amplitude should be adjusted to obtain complete signal waveforms, and the scanning frequency can then be adjusted according to the system's speed and accuracy requirements. The experimental results provide a definite basis for establishing the working state of such systems.

  19. Discrepancy among the synonymous codons with respect to their selection as optimal codon in bacteria

    PubMed Central

    Satapathy, Siddhartha Sankar; Powdel, Bhesh Raj; Buragohain, Alak Kumar; Ray, Suvendra Kumar

    2016-01-01

    The different triplets encoding the same amino acid, termed synonymous codons, are not equally abundant in a genome. Factors such as G + C% and tRNA abundance are known to influence their frequencies in a genome. However, the order of the nucleotides within each codon per se might be another factor affecting its abundance. Among the synonymous codons for a given amino acid, some are preferentially used in highly expressed genes; these are referred to as the ‘optimal codons’ (OCs). In this study, we compared the OCs of the 18 amino acids in 221 species of bacteria. We observed amino acid-specific influences on the selection of OCs. Phylogeny also influences the choice of OCs for some amino acids, such as Glu, Gln, Lys, and Leu. The phenomenon of codon bias is further supported by comparative studies of the abundance values of synonymous codons with the same G + C content. It is likely that the order of the nucleotides in the triplet codon is also involved in the phenomenon of codon usage bias in organisms. PMID:27426467

  20. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine.

    PubMed

    Xi, Maolong; Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun

    2016-01-01

    This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.
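
    The fitness evaluation described in this record (scoring a candidate gene subset by leave-one-out cross-validation) can be sketched as below. The SVM is replaced by generic `train`/`predict` callables, which are stand-ins for illustration; the encoding of the gene mask follows the binary 0-1 formulation in the abstract.

```python
# Hypothetical sketch of the LOOCV fitness evaluation described above: a
# candidate binary gene mask is scored by leave-one-out cross-validation
# accuracy of a classifier trained on the selected genes only (an SVM in
# the paper; any train/predict pair can stand in here).
def loocv_accuracy(samples, labels, mask, train, predict):
    """samples: list of feature vectors; mask: 0/1 flags over genes."""
    selected = [[x for x, keep in zip(s, mask) if keep] for s in samples]
    correct = 0
    for i in range(len(selected)):
        # Train on all samples except the i-th, then test on the i-th.
        model = train(selected[:i] + selected[i + 1:], labels[:i] + labels[i + 1:])
        correct += predict(model, selected[i]) == labels[i]
    return correct / len(selected)
```

    BQPSO would call this for each particle's mask, favoring masks with high accuracy and few selected genes.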

  1. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine

    PubMed Central

    Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun

    2016-01-01

    This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms. PMID:27642363

  2. Comparing the Selection and Placement of Best Management Practices in Improving Water Quality Using a Multiobjective Optimization and Targeting Method

    PubMed Central

    Chiang, Li-Chi; Chaubey, Indrajeet; Maringanti, Chetan; Huang, Tao

    2014-01-01

    Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, which minimize pollutant losses and the BMP placement areas. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (Soil and Water Assessment Tool—SWAT). For the targeting method, an optimum BMP option was implemented in critical areas in the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, which consist of grazing management, vegetated filter strips (VFS), and poultry litter applications were considered. The results showed that the optimization is less effective when vegetated filter strips (VFS) are not considered, and it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method. PMID:24619160
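
    The targeting method described in this record can be sketched as below: sub-areas are ranked by their simulated pollutant losses, and the chosen BMP is placed in the worst contributors first until a target load reduction is reached. The data layout is an illustrative assumption; in the paper the losses come from SWAT simulations.

```python
# Hypothetical sketch of the targeting method described above. losses and
# reductions would come from watershed-model (SWAT) runs; plain dicts are
# used here for illustration.
def target_bmp_areas(losses, reductions, goal):
    """losses: {area: baseline pollutant loss}; reductions: {area: loss
    reduction if the BMP is applied there}; goal: required total reduction.
    Returns (areas treated, total reduction achieved)."""
    placed, achieved = [], 0.0
    for area in sorted(losses, key=losses.get, reverse=True):
        if achieved >= goal:
            break
        placed.append(area)
        achieved += reductions[area]
    return placed, achieved
```

    Unlike the genetic-algorithm search, this greedy placement needs only one ranking pass, which is why the record reports much shorter computation times for targeting.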

  3. Comparing the selection and placement of best management practices in improving water quality using a multiobjective optimization and targeting method.

    PubMed

    Chiang, Li-Chi; Chaubey, Indrajeet; Maringanti, Chetan; Huang, Tao

    2014-03-11

    Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, which minimize pollutant losses and the BMP placement areas. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (Soil and Water Assessment Tool-SWAT). For the targeting method, an optimum BMP option was implemented in critical areas in the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, which consist of grazing management, vegetated filter strips (VFS), and poultry litter applications were considered. The results showed that the optimization is less effective when vegetated filter strips (VFS) are not considered, and it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method.

  4. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    NASA Astrophysics Data System (ADS)

    Ferreira, José C.; Fonseca, Carlos M.; Gaspar-Cunha, António

    2007-04-01

    Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto-optimal solutions plays an important role in multi-objective optimization, and the problem is considered solved when the Pareto-optimal set, i.e., the set of non-dominated solutions, is found. Multi-objective evolutionary algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from it. Thus, a methodology is needed that can select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier) while taking into account the preferences of a Decision Maker (DM). A different method, based on a weighted stress function, is proposed. It integrates the user's preferences in order to find the region of the Pareto frontier that best matches those preferences. The method was tested on benchmark problems with two and three criteria, and on a polymer extrusion problem. This methodology efficiently selects the best Pareto-frontier region for the specified relative importance of the criteria.
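
    The record does not reproduce the weighted stress function itself. As a simpler stand-in for the same task (picking one solution from a non-dominated set according to the Decision Maker's relative importances), the sketch below uses a weighted Chebyshev distance to the ideal point; this is an illustrative assumption, not the authors' method.

```python
# Simplified stand-in for preference-based selection from a Pareto set:
# weighted Chebyshev distance to the ideal point. The paper's weighted
# stress function is NOT reproduced here.
def pick_preferred(front, weights):
    """front: list of objective tuples (all to be minimized);
    weights: DM's relative importances, one per criterion."""
    ideal = [min(col) for col in zip(*front)]  # best value per criterion

    def score(sol):
        return max(w * (f - z) for w, f, z in zip(weights, sol, ideal))

    return min(front, key=score)
```

    Larger weight on a criterion pulls the chosen solution toward the end of the frontier that is best on that criterion.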

  5. Insights into the Experiences of Older Workers and Change: Through the Lens of Selection, Optimization, and Compensation

    ERIC Educational Resources Information Center

    Unson, Christine; Richardson, Margaret

    2013-01-01

    Purpose: The study examined the barriers faced, the goals selected, and the optimization and compensation strategies of older workers in relation to career change. Method: Thirty open-ended interviews, 12 in the United States and 18 in New Zealand, were conducted, recorded, transcribed verbatim, and analyzed for themes. Results: Barriers to…

  7. Simultaneous optimization of variables influencing selectivity and elution strength in micellar liquid chromatography. Effect of organic modifier and micelle concentration.

    PubMed

    Strasters, J K; Breyer, E D; Rodgers, A H; Khaledi, M G

    1990-07-06

    Previously, the simultaneous enhancement of separation selectivity and elution strength was reported in micellar liquid chromatography (MLC) using hybrid eluents of water, organic solvent, and micelles. The practical implication of this phenomenon is that better separations can be achieved in shorter analysis times by using hybrid eluents. Since both the micelle concentration and the volume fraction of organic modifier influence selectivity and solvent strength, only an investigation of the effects of simultaneously varying these parameters reveals the full separation capability of the method; i.e., the common reversed-phase practice of sequentially adjusting the solvent strength first and then improving selectivity is inefficient for MLC with hybrid eluents. This is illustrated in this paper with two examples: the optimization of selectivity in the separation of a mixture of phenols, and the optimization of a resolution-based criterion for the separation of a number of amino acids and small peptides. The large number of variables involved in the MLC separation process necessitates a structured approach to developing practical applications of this technique. A regular change in retention behavior is observed as the surfactant concentration and the concentration of organic modifier are varied, which enables successful prediction of retention times. Consequently, interpretive optimization strategies such as the iterative regression method are applicable.

  8. Optimization of an indazole series of selective estrogen receptor degraders: Tumor regression in a tamoxifen-resistant breast cancer xenograft.

    PubMed

    Govek, Steven P; Nagasawa, Johnny Y; Douglas, Karensa L; Lai, Andiliy G; Kahraman, Mehmet; Bonnefous, Celine; Aparicio, Anna M; Darimont, Beatrice D; Grillot, Katherine L; Joseph, James D; Kaufman, Joshua A; Lee, Kyoung-Jin; Lu, Nhin; Moon, Michael J; Prudente, Rene Y; Sensintaffar, John; Rix, Peter J; Hager, Jeffrey H; Smith, Nicholas D

    2015-11-15

    Selective estrogen receptor degraders (SERDs) have shown promise for the treatment of ER+ breast cancer. Disclosed herein is the continued optimization of our indazole series of SERDs. Exploration of ER degradation and antagonism in vitro followed by in vivo antagonism and oral exposure culminated in the discovery of indazoles 47 and 56, which induce tumor regression in a tamoxifen-resistant breast cancer xenograft.

  9. Effect of Selection of Design Parameters on the Optimization of a Horizontal Axis Wind Turbine via Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Alpman, Emre

    2014-06-01

    The effect of selecting the twist angle and chord length distributions on wind turbine blade design was investigated by performing aerodynamic optimization of a two-bladed, stall-regulated horizontal axis wind turbine. Twist angle and chord length distributions were defined using Bezier curves with 3, 5, 7, and 9 control points uniformly distributed along the span. Optimizations performed using a micro-genetic algorithm with populations of 5, 10, 15, and 20 individuals showed that the number of control points clearly affected the outcome of the process; however, the effects were different for different population sizes. The results also showed the superiority of the micro-genetic algorithm over a standard genetic algorithm for the selected population sizes. Optimizations were also performed using a macroevolutionary algorithm, and the resulting best blade design was compared with that yielded by the micro-genetic algorithm.
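
    A Bezier parameterization of this kind reduces each spanwise distribution to a handful of control-point values that a genetic algorithm can evolve. A minimal sketch of the curve evaluation (de Casteljau's algorithm); the control-point values are hypothetical, not the study's:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    de Casteljau's algorithm; control_points is a list of
    (normalized_span_position, value) pairs."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical 5-control-point twist distribution (degrees) over the
# normalized blade span; a genetic algorithm would evolve the y-values.
twist_ctrl = [(0.0, 20.0), (0.25, 12.0), (0.5, 7.0), (0.75, 4.0), (1.0, 2.0)]
twist_at_mid = bezier_point(twist_ctrl, 0.5)[1]  # smooth twist at mid-parameter
```

    With 3, 5, 7, or 9 control points the optimizer's search space has 3 to 9 values per distribution, which is why the number of control points shapes what the genetic algorithm can reach.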

  10. Optimizing selective cutting strategies for maximum carbon stocks and yield of Moso bamboo forest using BIOME-BGC model.

    PubMed

    Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing

    2017-04-15

    The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was adapted to managed Moso bamboo forests by incorporating the age structure, specific ecological processes, and management measures of Moso bamboo forest. A field selective cutting experiment was conducted in nine plots with three cutting intensities (high, moderate, and low) during 2010-2013, and the biomass of these plots was measured for model validation. Four selective cutting scenarios were then simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages, and intensities. The improved model matched the observed aboveground carbon density and yield of the different plots, with relative errors ranging from 9.83% to 15.74%. The results of the different selective cutting scenarios suggested that the optimal selective cutting measure is to cut 30% of culms of age 6, 80% of culms of age 7, and all culms thereafter (above age 8) in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests.

  11. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery

    PubMed Central

    2011-01-01

    The central nervous system (CNS) is the major area that is affected by aging. Alzheimer’s disease (AD), Parkinson’s disease (PD), brain cancer, and stroke are the CNS diseases that will cost trillions of dollars for their treatment. Achievement of appropriate blood–brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many of the non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges, and to aid in the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and a property modification strategy for the lead optimization process. A list of substructural units that may be useful for CNS drug design is also provided. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å2 (25–60 Å2), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740–970 Å3, (vi) solvent accessible surface area of 460–580 Å2, and (vii) positive QikProp parameter CNS. The ranges within parentheses may be used during lead optimization. One violation of this proposed profile may be acceptable. The

  12. Automatised selection of load paths to construct reduced-order models in computational damage micromechanics: from dissipation-driven random selection to Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Goury, Olivier; Amsallem, David; Bordas, Stéphane Pierre Alain; Liu, Wing Kam; Kerfriden, Pierre

    2016-08-01

    In this paper, we present new reliable model order reduction strategies for computational micromechanics. The difficulties lie mainly in the high dimensionality of the parameter space represented by any load path applied to the representative volume element. We pay special attention to the challenge of selecting an exhaustive snapshot set. This is addressed first by random sampling of energy-dissipating load paths and then, in a more advanced way, by Bayesian optimization associated with an interlocked division of the parameter space. Results show that we can ensure the selection of an exhaustive snapshot set from which a reliable reduced-order model can be built.

  13. Establishing patient-specific criteria for selecting the optimal upper extremity vascular access procedure

    PubMed Central

    Woo, Karen; Ulloa, Jesus; Allon, Michael; Carsten, Christopher G.; Chemla, Eric S.; Henry, Mitchell L.; Huber, Thomas S.; Lawson, Jeffrey H.; Lok, Charmaine E.; Peden, Eric K.; Scher, Larry; Sidawy, Anton; Maggard-Gibbons, Melinda; Cull, David

    2017-01-01

    index and functional status were not associated with AVG appropriateness. To simulate the surgeon’s decision-making, scenarios were combined to create situations with the same patient characteristics and both AVF and AVG options for access. Of these 864 clinical situations, 311 (36%) were rated appropriate for AVG but inappropriate or indeterminate for AVF. Conclusions The results of this study indicate that patient-specific situations exist wherein AVG is as appropriate as or more appropriate than AVF. These results provide patient-specific recommendations for clinicians to optimize vascular access selection criteria, to standardize care, and to inform payers and policy. Indeterminate scenarios will guide future research. PMID:28222990

  14. Individual selection of gait retraining strategies is essential to optimally reduce medial knee load during gait.

    PubMed

    Gerbrands, T A; Pisters, M F; Vanwanseele, B

    2014-08-01

    The progression of medial knee osteoarthritis seems closely related to a high external knee adduction moment, which could be reduced through gait retraining. We aimed to determine the retraining strategy that reduces this knee moment most effectively during gait, and to determine whether the same strategy is the most effective for everyone. Thirty-seven healthy participants underwent 3D gait analysis. After normal walking was recorded, participants received verbal instructions on four gait strategies (Trunk Lean, Medial Thrust, Reduced Vertical Acceleration, Toe Out). Knee adduction moment and strategy-specific kinematics were calculated for all conditions. The overall knee adduction moment peak was reduced by Medial Thrust (-0.08 Nm/Bw·Ht) and Trunk Lean (-0.07 Nm/Bw·Ht), while the impulse was reduced by 0.03 Nms/Bw·Ht in both conditions. Toeing out reduced the late-stance peak and impulse significantly, but the overall peak was not affected. Reducing vertical acceleration at initial contact did not reduce the overall peak. Strategy-specific kinematics (trunk lean angle, knee adduction angle, first peak of the vertical ground reaction force, foot progression angle) showed that multiple parameters were affected by all conditions. Medial Thrust was the most effective strategy in 43% of the participants, while Trunk Lean reduced the external knee adduction moment most in 49%. With similar kinematics, the reduction of the knee adduction moment peak and impulse was significantly different between these groups. Although Trunk Lean and Medial Thrust reduced the external knee adduction moment overall, individual selection of the gait retraining strategy seems vital to optimally reduce dynamic knee load during gait. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies

    PubMed Central

    Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara

    2017-01-01

    In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study’s sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20–40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely approximate the power of sequencing the full sample by selecting for sequencing only a small subset of the individuals. PMID:27256766
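
    The simulated-annealing subset selection at the heart of this approach can be sketched generically as follows. The score function here is a toy stand-in for the expected association power G-STRATEGY actually computes; the weights are invented.

```python
import math
import random

def anneal_subset(candidates, k, score, n_iter=5000, t0=1.0, seed=0):
    """Generic simulated-annealing subset selection: choose k items from
    `candidates` to maximize `score(subset)` by proposing one-for-one
    swaps and accepting worse moves with a temperature-dependent
    probability. A sketch of the technique, not G-STRATEGY itself."""
    rng = random.Random(seed)
    current = set(rng.sample(candidates, k))
    cur_val = score(current)
    best, best_val = set(current), cur_val
    for i in range(n_iter):
        temp = t0 * (1 - i / n_iter) + 1e-9  # linear cooling schedule
        # Propose swapping one selected item for one unselected item.
        out_item = rng.choice(sorted(current))
        in_item = rng.choice([c for c in candidates if c not in current])
        proposal = (current - {out_item}) | {in_item}
        new_val = score(proposal)
        if new_val > cur_val or rng.random() < math.exp((new_val - cur_val) / temp):
            current, cur_val = proposal, new_val
            if cur_val > best_val:
                best, best_val = set(current), cur_val
    return best

# Toy example: an "informativeness" weight per individual; the real
# objective would be expected power given phenotypes and pedigree data.
weights = {i: w for i, w in enumerate([3, 1, 4, 1, 5, 9, 2, 6])}
chosen = anneal_subset(list(weights), 3, lambda s: sum(weights[i] for i in s))
```

    For an additive toy score like this a greedy method would suffice; annealing earns its keep when the power of a subset depends jointly on who is in it, as with related individuals.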

  16. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which commonly appears as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global shifts of peaks due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity solves the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis proves the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.

  17. Optimal selection of on-site generation with combined heat and power applications

    SciTech Connect

    Siddiqui, Afzal S.; Marnay, Chris; Bailey, Owen; Hamachi LaCommare, Kristina

    2004-11-30

    While demand for electricity continues to grow, expansion of the traditional electricity supply system, or macrogrid, is constrained and is unlikely to keep pace with the growing thirst western economies have for electricity. Furthermore, no compelling case has been made that perpetual improvement in the overall power quality and reliability (PQR) delivered is technically possible or economically desirable. An alternative path to providing high PQR for sensitive loads would be to generate close to them in microgrids, such as the Consortium for Electric Reliability Technology Solutions (CERTS) Microgrid. Distributed generation would alleviate the pressure for endless improvement in macrogrid PQR and might allow the establishment of a sounder, economically based level of universal grid service. Energy conversion from available fuels to electricity close to loads can also provide combined heat and power (CHP) opportunities that can significantly improve the economics of small-scale on-site power generation, especially in hot climates where the waste heat serves absorption-cycle cooling equipment that displaces expensive on-peak electricity. An optimization model, the Distributed Energy Resources Customer Adoption Model (DER-CAM), developed at Berkeley Lab, identifies the energy-bill-minimizing combination of on-site generation and heat recovery equipment for sites, given their electricity and heat requirements, the tariffs they face, and a menu of available equipment. DER-CAM is used to conduct a systemic energy analysis of a southern California naval base building and demonstrates a typical current economic on-site power opportunity. Results achieve cost reductions of about 15 percent with DER, depending on the tariff. Furthermore, almost all of the energy is provided on-site, indicating that modest cost savings can be achieved when the microgrid is free to select distributed generation and heat recovery equipment in order to minimize its overall costs.

  18. Controversial issues of optimal surgical timing and patient selection in the treatment planning of otosclerosis.

    PubMed

    Shiao, An-Suey; Kuo, Chin-Lung; Cheng, Hsiu-Lien; Wang, Mao-Che; Chu, Chia-Huei

    2014-05-01

    The aim of this study was to analyze the impact of clinical factors on the outcomes of otosclerosis surgery and to support patients' access to evidence-based information in pre-operative counseling to optimize their choices. A total of 109 ears in 93 patients undergoing stapes surgery in a tertiary referral center were included. Variables with a potential impact on hearing outcomes were recorded, with an emphasis on factors that are readily available pre-operatively. Hearing success was defined as a post-operative air-bone gap ≤10 dB. Logistic regression analysis was used to determine the factors independently contributing to the prediction of hearing success. The mean follow-up period was 18.0 months. Univariate and multivariate analyses indicated that none of the pre-operative factors (piston type, age, sex, affected side, tinnitus, vertigo, and pre-operative hearing thresholds) affected hearing success significantly (all p > 0.05). In conclusion, the self-crimping Nitinol piston provides hearing outcomes comparable with conventional manual-crimping prostheses. However, the Nitinol piston offers a technical simplification of the surgical procedure and an easier surgical choice for patients. In addition, age is not a detriment to hearing gain and may instead result in better use of hearing aids in older adults, thus facilitating social hearing recovery. Finally, hearing success does not depend on the extent of pre-operative hearing loss. Hence, patients with poor cochlear function should not be considered poor candidates for surgery. The predictive model has established recommendations for otologists for better case selection, and factors that are readily available pre-operatively may inform patients more explicitly about expected post-operative audiometric results.

  19. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems.

    PubMed

    Stephanie, Robinson; Margie, Lachman; Elizabeth, Rickenbach

    2016-03-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to aging, low cognitive resources, and daily stress in relation to everyday memory problems. We examined whether SOC usage varied by age and level of constraints, and if the relationship between resources and memory problems was mitigated by SOC usage. A daily diary paradigm was used to explore day-to-day fluctuations in these relationships. Participants (n=145, ages 22 to 94) completed a baseline interview and a daily diary for seven consecutive days. Multilevel models examined between- and within-person relationships between daily SOC use, daily stressors, cognitive resources, and everyday memory problems. Middle-aged adults had the highest SOC usage, although older adults also showed high SOC use if they had high cognitive resources. More SOC strategies were used on high stress compared to low stress days. Moreover, the relationship between daily stress and memory problems was buffered by daily SOC use, such that on high-stress days, those who used more SOC strategies reported fewer memory problems than participants who used fewer SOC strategies. The paradox of resources and SOC use can be qualified by the type of resource-limitation. Deficits in global resources were not tied to SOC usage or benefits. Conversely, under daily constraints tied to stress, the use of SOC increased and led to fewer memory problems.

  20. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    PubMed Central

    Stephanie, Robinson; Margie, Lachman; Elizabeth, Rickenbach

    2015-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to aging, low cognitive resources, and daily stress in relation to everyday memory problems. We examined whether SOC usage varied by age and level of constraints, and if the relationship between resources and memory problems was mitigated by SOC usage. A daily diary paradigm was used to explore day-to-day fluctuations in these relationships. Participants (n=145, ages 22 to 94) completed a baseline interview and a daily diary for seven consecutive days. Multilevel models examined between- and within-person relationships between daily SOC use, daily stressors, cognitive resources, and everyday memory problems. Middle-aged adults had the highest SOC usage, although older adults also showed high SOC use if they had high cognitive resources. More SOC strategies were used on high stress compared to low stress days. Moreover, the relationship between daily stress and memory problems was buffered by daily SOC use, such that on high-stress days, those who used more SOC strategies reported fewer memory problems than participants who used fewer SOC strategies. The paradox of resources and SOC use can be qualified by the type of resource-limitation. Deficits in global resources were not tied to SOC usage or benefits. Conversely, under daily constraints tied to stress, the use of SOC increased and led to fewer memory problems. PMID:26997686

  1. Synthesis and purification of iodoaziridines involving quantitative selection of the optimal stationary phase for chromatography.

    PubMed

    Boultwood, Tom; Affron, Dominic P; Bull, James A

    2014-05-16

    The highly diastereoselective preparation of cis-N-Ts-iodoaziridines through reaction of diiodomethyllithium with N-Ts aldimines is described. Diiodomethyllithium is prepared by the deprotonation of diiodomethane with LiHMDS, in a THF/diethyl ether mixture, at -78 °C in the dark. These conditions are essential for the stability of the LiCHI2 reagent generated. The subsequent dropwise addition of N-Ts aldimines to the preformed diiodomethyllithium solution affords an amino-diiodide intermediate, which is not isolated. Rapid warming of the reaction mixture to 0 °C promotes cyclization to afford iodoaziridines with exclusive cis-diastereoselectivity. The addition and cyclization stages of the reaction are mediated in one reaction flask by careful temperature control. Due to the sensitivity of the iodoaziridines to purification, assessment of suitable methods of purification is required. A protocol to assess the stability of sensitive compounds to stationary phases for column chromatography is described. This method is suitable for application to new iodoaziridines, or to other potentially sensitive novel compounds, and consequently may find application in a range of synthetic projects. The procedure involves firstly the assessment of the reaction yield, prior to purification, by (1)H NMR spectroscopy with comparison to an internal standard. Portions of the impure product mixture are then exposed to slurries of various stationary phases appropriate for chromatography, in a solvent system suitable as the eluent in flash chromatography. After stirring for 30 min to mimic chromatography, followed by filtering, the samples are analyzed by (1)H NMR spectroscopy. Calculated yields for each stationary phase are then compared to that initially obtained from the crude reaction mixture. The results obtained provide a quantitative assessment of the stability of the compound to the different stationary phases; hence the optimal phase can be selected. The choice of basic alumina, modified to

  2. In vitro selection of optimal DNA substrates for T4 RNA ligase

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1993-01-01

    We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 RNA ligase. We find that the ensemble of selected sequences ligated about 10 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, the majority of the selected sequences approximated a well-defined consensus sequence.

  3. In vitro selection of optimal DNA substrates for T4 RNA ligase

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1993-01-01

    We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 RNA ligase. We find that the ensemble of selected sequences ligated about 10 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, the majority of the selected sequences approximated a well-defined consensus sequence.

  4. Optimizing candidate selection--a vision in business limited conference. 1-2 December 1998, Basel, Switzerland.

    PubMed

    Audus, K L

    1999-02-01

    The pharmaceutical industry is faced with filtering hundreds of thousands of compounds to identify successful drug candidates. Given these numbers, how does the pharmaceutical industry identify optimal therapeutic agents rapidly, efficiently, economically and successfully, with the ultimate result of the patient receiving the best drug? The conference summarized the present and future requirements for evaluating emerging technologies, integrating that technology into a filter for large and growing numbers of compounds, building and linking diverse knowledge bases, and establishing predictive foundations that will optimize and accelerate drug discovery and development. Specific conference topics focused on organizational and management approaches as well as some of the major technologies and emerging techniques for supporting drug candidate selection and optimization. It is predicted that the pharmaceutical industry will be synthesizing and screening a million or more compounds for multiple therapeutic targets in the near future. Pulling together the resources of current and emerging technology, knowledge, and multidisciplinary teamwork, so that discovery and selection of successful drug candidates from this large pool of compounds can take place rapidly, is a significant challenge. This conference focused on the organizational issues and experimental tools that can provide for a shortening of discovery time, identification of current and future selection techniques and criteria, the linking of technologies and business strategies to reduce risk, and novel processes for optimizing candidates more quickly and efficiently. The conference was directed at industrial scientists involved in all stages along the drug discovery and development interface. This conference was well-attended, with approximately 100 participants.

  5. A multi-objective optimization approach for the selection of working fluids of geothermal facilities: Economic, environmental and social aspects.

    PubMed

    Martínez-Gomez, Juan; Peña-Lamas, Javier; Martín, Mariano; Ponce-Ortega, José María

    2017-12-01

    The selection of the working fluid for Organic Rankine Cycles has traditionally been addressed through systematic heuristic methods, which perform a characterization and prior selection considering mainly one objective, thus avoiding a selection that simultaneously considers the objectives related to sustainability and safety. The objective of this work is to propose a methodology for the optimal selection of the working fluid for Organic Rankine Cycles. The model is presented as a multi-objective approach, which simultaneously considers economic, environmental, and safety aspects. The economic objective function considers the profit obtained by selling the energy produced. Safety was evaluated in terms of individual risk for each of the components of the Organic Rankine Cycle and was formulated as a function of the operating conditions and hazardous properties of each working fluid. The environmental function is based on carbon dioxide emissions, considering carbon dioxide mitigation, emissions due to the use of cooling water, as well as emissions due to material release. The methodology was applied to the case of geothermal facilities to select the optimal working fluid, although it can be extended to waste heat recovery. The results show that hydrocarbons represent better solutions; among a list of 24 working fluids, toluene is selected as the best fluid. Copyright © 2017 Elsevier Ltd. All rights reserved.
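
    A multi-objective screening of this kind is often reduced to filtering the Pareto-optimal candidates before trading off the remaining objectives. A minimal sketch, with invented (cost, risk, emissions) scores rather than the paper's economic, safety, and environmental functions:

```python
def pareto_front(options):
    """Return the names of options that are not dominated by any other
    option; all objectives are treated as minimized. `options` maps a
    name to a tuple of objective values."""
    front = []
    for name, vals in options.items():
        dominated = any(
            all(o[i] <= vals[i] for i in range(len(vals))) and
            any(o[i] < vals[i] for i in range(len(vals)))
            for other, o in options.items() if other != name
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical normalized (cost, individual risk, CO2) scores per fluid;
# the actual objective functions of the study are not reproduced here.
fluids = {
    "toluene": (1.0, 0.4, 0.5),
    "fluid_a": (1.2, 0.6, 0.7),  # worse than toluene on every objective
    "fluid_b": (0.9, 0.8, 0.6),
}
print(pareto_front(fluids))
```

    Here fluid_a is dominated by toluene on all three objectives and drops out, while toluene and fluid_b remain as genuine trade-offs for the final selection.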

  6. Modeling surgical tool selection patterns as a "traveling salesman problem" for optimizing a modular surgical tool system.

    PubMed

    Nelson, Carl A; Miller, David J; Oleynikov, Dmitry

    2008-01-01

    As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
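
    For a small tool set the described formulation can be brute-forced directly: treat tools as graph nodes, tool-change frequencies as edge weights, and search the cyclic orderings (the traveling-salesman tour). The tool names and frequencies below are hypothetical, not from the paper:

```python
import itertools

def best_tool_order(tools, change_freq):
    """Brute-force the traveling-salesman tour over a small tool set:
    find the cyclic ordering that places frequently exchanged tool
    pairs next to each other (maximizing adjacent-pair frequency
    minimizes time spent moving between distant tool slots)."""
    first, rest = tools[0], tools[1:]

    def cycle_score(order):
        ring = (first,) + order
        return sum(change_freq[frozenset((ring[i], ring[(i + 1) % len(ring)]))]
                   for i in range(len(ring)))

    best = max(itertools.permutations(rest), key=cycle_score)
    return (first,) + best

# Hypothetical tool-change frequencies observed across procedures.
freq = {
    frozenset(("grasper", "scissors")): 9,
    frozenset(("grasper", "needle")): 2,
    frozenset(("grasper", "cautery")): 4,
    frozenset(("scissors", "needle")): 7,
    frozenset(("scissors", "cautery")): 1,
    frozenset(("needle", "cautery")): 6,
}
order = best_tool_order(["grasper", "scissors", "needle", "cautery"], freq)
```

    Fixing the first tool removes rotations of the same cycle from the search; exhaustive enumeration is fine for a handful of tools, while larger sets would need a TSP heuristic.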

  7. Robust design optimization of the vibrating rotor-shaft system subjected to selected dynamic constraints

    NASA Astrophysics Data System (ADS)

    Stocki, R.; Szolc, T.; Tauzowski, P.; Knabel, J.

    2012-05-01

    The tendency, commonly observed nowadays, toward weight minimization of rotor-shafts in rotating machinery leads to a decrease of shaft bending rigidity, making dangerous stress concentrations and rubbing effects more probable. Thus, determining the optimal balance between reducing the rotor-shaft weight and assuring its admissible bending flexibility is the major goal of this study. The random nature of residual unbalances of the rotor-shaft as well as the randomness of journal-bearing stiffness have been taken into account in the framework of robust design optimization. Such a formulation of the optimization problem leads to an optimal design that combines an acceptable structural weight with robustness with respect to uncertainties in residual unbalances - the main source of bending vibrations causing the rubbing effects. The applied robust optimization technique is based on using Latin hypercubes in scatter analysis of the vibration response. The so-called optimal Latin hypercubes are used as experimental plans for building kriging approximations of the objective and constraint functions. The proposed method has been applied to the optimization of a typical single-span rotor-shaft of an 8-stage centrifugal compressor.
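
    Latin hypercube sampling, the experimental-plan backbone mentioned here, can be sketched in a few lines (this is basic random LHS, not the "optimal" Latin hypercubes of the study; the design variables and bounds are hypothetical):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Basic Latin hypercube sampling: each variable's range is cut into
    n_samples equal strata, each stratum is sampled exactly once, and
    the strata are randomly paired across variables."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)                 # random pairing across variables
        width = (hi - lo) / n_samples
        columns.append([lo + (s + rng.random()) * width for s in strata])
    return list(zip(*columns))

# Hypothetical uncertain inputs: residual unbalance (kg*m) and
# journal-bearing stiffness (N/m), scattered for the vibration analysis.
plan = latin_hypercube(5, [(1e-4, 5e-4), (1e7, 5e7)])
```

    Compared with plain Monte Carlo, the stratification guarantees coverage of each variable's full range even with few samples, which is what makes small kriging training plans workable.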

  8. Potent and selective inhibitors of the TASK-1 potassium channel through chemical optimization of a bis-amide scaffold

    PubMed Central

    Flaherty, Daniel P.; Simpson, Denise S.; Miller, Melissa; Maki, Brooks E.; Zou, Beiyan; Shi, Jie; Wu, Meng; McManus, Owen B.; Aubé, Jeffrey; Li, Min; Golden, Jennifer E.

    2014-01-01

    TASK-1 is a two-pore domain potassium channel that is important to modulating cell excitability, most notably in the context of neuronal pathways. In order to leverage TASK-1 for therapeutic benefit, its physiological role needs better characterization; however, designing selective inhibitors that avoid the closely related TASK-3 channel has been challenging. In this study, a series of bis-amide derived compounds were found to demonstrate improved TASK-1 selectivity over TASK-3 compared to reported inhibitors. Optimization of a marginally selective hit led to analog 35 which displays a TASK-1 IC50 = 16 nM with 62-fold selectivity over TASK-3 in an orthogonal electrophysiology assay. PMID:25017033

  9. Drug efficiency: a new concept to guide lead optimization programs towards the selection of better clinical candidates.

    PubMed

    Braggio, Simone; Montanari, Dino; Rossi, Tino; Ratti, Emiliangelo

    2010-07-01

    As a result of their wide acceptance and conceptual simplicity, drug-like concepts are having a major influence on the drug discovery process, particularly in the selection of the 'optimal' absorption, distribution, metabolism, excretion and toxicity and physicochemical parameter space. While they have undisputable value when assessing the potential of lead series or evaluating the inherent risk of a portfolio of drug candidates, they prove much less useful when weighing up compounds for the selection of the best potential clinical candidate. We introduce the concept of drug efficiency as a new tool both to guide drug discovery program teams during the lead optimization phase and to better assess the developability potential of a drug candidate.

  10. High-Efficiency Nonfullerene Polymer Solar Cell Enabling by Integration of Film-Morphology Optimization, Donor Selection, and Interfacial Engineering.

    PubMed

    Zhang, Xin; Li, Weiping; Yao, Jiannian; Zhan, Chuanlang

    2016-06-22

    Carrier mobility is a vital factor determining the electrical performance of organic solar cells. In this paper we report that a high-efficiency nonfullerene organic solar cell (NF-OSC) with a power conversion efficiency of 6.94 ± 0.27% was obtained by optimizing the hole and electron transport through judicious selection of the polymer donor and engineering of the film morphology and cathode interlayers: (1) a combination of solvent annealing and solvent vapor annealing optimizes the film morphology and hence both hole and electron mobilities, leading to a trade-off between fill factor and short-circuit current density (Jsc); (2) the judicious selection of the polymer donor affords higher hole and electron mobilities, giving a higher Jsc; and (3) engineering the cathode interlayer affords a higher electron mobility, which leads to a significant increase in electrical current generation and ultimately in the power conversion efficiency (PCE).

  11. Sulfonamides as Selective NaV1.7 Inhibitors: Optimizing Potency and Pharmacokinetics While Mitigating Metabolic Liabilities.

    PubMed

    Weiss, Matthew M; Dineen, Thomas A; Marx, Isaac E; Altmann, Steven; Boezio, Alessandro A; Bregman, Howard; Chu-Moyer, Margaret Y; DiMauro, Erin F; Feric Bojic, Elma; Foti, Robert S; Gao, Hua; Graceffa, Russell F; Gunaydin, Hakan; Guzman-Perez, Angel; Huang, Hongbing; Huang, Liyue; Jarosh, Michael; Kornecook, Thomas; Kreiman, Charles R; Ligutti, Joseph; La, Daniel S; Lin, Min-Hwa Jasmine; Liu, Dong; Moyer, Bryan D; Nguyen, Hanh Nho; Peterson, Emily A; Rose, Paul E; Taborn, Kristin; Youngblood, Beth D; Yu, Violeta L; Fremeau, Robert T

    2017-03-13

    Several reports have recently emerged regarding the identification of heteroarylsulfonamides as NaV1.7 inhibitors that demonstrate high levels of selectivity over other NaV isoforms. The optimization of a series of internal NaV1.7 leads that address a number of metabolic liabilities including bioactivation, PXR activation, as well as CYP3A4 induction and inhibition led to the identification of potent and selective inhibitors that demonstrated favorable pharmacokinetic profiles and were devoid of the aforementioned liabilities. Key to achieving this within a series prone to transporter-mediated clearance was the identification of a small range of optimal cLogD values and the discovery of subtle PXR SAR that was not lipophilicity-dependent. This enabled the identification of compound 20 which was advanced into a target engagement pharmacodynamic model where it exhibited robust reversal of histamine-induced scratching bouts in mice.

  12. Optimal design and patient selection for interventional trials using radiogenomic biomarkers: A REQUITE and Radiogenomics consortium statement.

    PubMed

    De Ruysscher, Dirk; Defraene, Gilles; Ramaekers, Bram L T; Lambin, Philippe; Briers, Erik; Stobart, Hilary; Ward, Tim; Bentzen, Søren M; Van Staa, Tjeerd; Azria, David; Rosenstein, Barry; Kerns, Sarah; West, Catharine

    2016-12-01

    The optimal design and patient selection for interventional trials in radiogenomics seem trivial at first sight. However, radiogenomic markers do not give binary information as, for example, targetable mutation biomarkers do. Here, the risk of developing severe side effects is continuous, with the incidence of side effects increasing with higher doses and/or volumes. In addition, a multi-SNP assay will produce a predicted probability of developing side effects and will require one or more cut-off thresholds for classifying risk into discrete categories. A classical biomarker trial design is therefore not optimal, whereas a risk-factor stratification approach is more appropriate. Patient selection is crucial and should be based on the dose-response relations for a specific endpoint. Alternatives to the standard treatment should be available, and the preferences of patients should be taken into account. These points are discussed in detail.

  13. Predicting genomic selection efficiency to optimize calibration set and to assess prediction accuracy in highly structured populations.

    PubMed

    Rincent, R; Charcosset, A; Moreau, L

    2017-08-09

    We propose a criterion to predict genomic selection efficiency for structured populations. This criterion is useful for defining an optimal calibration set and for estimating prediction reliability in multiparental populations. Genomic selection refers to the use of genotypic information to predict the performance of selection candidates. It has been shown that prediction accuracy depends on various parameters, including the composition of the calibration set (CS). Assessing the level of accuracy of a given prediction scenario is of the highest importance because it can be used to optimize CS sampling before collecting phenotypes and, once breeding values are predicted, it informs breeders about the reliability of these predictions. Different criteria have been proposed for optimizing CS sampling in highly diverse panels, which can be useful for screening collections of genotypes. But plant breeders often work on structured material such as biparental or multiparental populations, for which these criteria are less well adapted. We derived from the generalized coefficient of determination (CD) theory different criteria to optimize CS sampling and to assess the reliability associated with predictions in structured populations. These criteria were evaluated on two nested association mapping (NAM) populations and two highly diverse panels of maize. They were efficient for sampling optimized CSs in most situations. They could also estimate, at least partly, the reliability associated with predictions between NAM families, but they could not estimate differences in the reliability associated with predictions of NAM families using the highly diverse panels as calibration sets. We illustrate that the CD criteria can be adapted to various prediction scenarios, including inter- and intra-family predictions, resulting in higher prediction accuracies.

  14. Selection of appropriate training and validation set chemicals for modelling dermal permeability by U-optimal design.

    PubMed

    Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E

    2013-01-01

    Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the predictive power of a QSAR model. Early QSAR models selected training sets in a subjective way, and the solutes in the training set were relatively homogeneous. More recently, statistical methods such as D-optimal design or space-filling design have been applied, but such methods are not always ideal. This paper describes a comprehensive procedure for selecting training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].

  15. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix of 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.

  16. Discovery and optimization of indazoles as potent and selective interleukin-2 inducible T cell kinase (ITK) inhibitors.

    PubMed

    Pastor, Richard M; Burch, Jason D; Magnuson, Steven; Ortwine, Daniel F; Chen, Yuan; De La Torre, Kelly; Ding, Xiao; Eigenbrot, Charles; Johnson, Adam; Liimatta, Marya; Liu, Yichin; Shia, Steven; Wang, Xiaolu; Wu, Lawren C; Pei, Zhonghua

    2014-06-01

    There is evidence that small molecule inhibitors of the non-receptor tyrosine kinase ITK, a component of the T-cell receptor signaling cascade, could represent a novel asthma therapeutic class. Moreover, given the expected chronic dosing regimen of any asthma treatment, highly selective as well as potent inhibitors would be strongly preferred in any potential therapeutic. Here we report hit-to-lead optimization of a series of indazoles that demonstrate sub-nanomolar inhibitory potency against ITK with strong cellular activity and good kinase selectivity. We also elucidate the binding mode of these inhibitors by solving the X-ray crystal structures of the complexes.

  17. Discovery of 7-aminofuro[2,3-c]pyridine inhibitors of TAK1: optimization of kinase selectivity and pharmacokinetics.

    PubMed

    Hornberger, Keith R; Chen, Xin; Crew, Andrew P; Kleinberg, Andrew; Ma, Lifu; Mulvihill, Mark J; Wang, Jing; Wilde, Victoria L; Albertella, Mark; Bittner, Mark; Cooke, Andrew; Kadhim, Salam; Kahler, Jennifer; Maresca, Paul; May, Earl; Meyn, Peter; Romashko, Darlene; Tokar, Brianna; Turton, Roy

    2013-08-15

    The kinase selectivity and pharmacokinetic optimization of a series of 7-aminofuro[2,3-c]pyridine inhibitors of TAK1 is described. The intersection of insights from molecular modeling, computational prediction of metabolic sites, and in vitro metabolite identification studies resulted in a simple and unique solution to both of these problems. These efforts culminated in the discovery of compound 13a, a potent, relatively selective inhibitor of TAK1 with good pharmacokinetic properties in mice, which was active in an in vivo model of ovarian cancer.

  18. Growth Optimal Portfolio Selection Under Proportional Transaction Costs with Obligatory Diversification

    SciTech Connect

    Duncan, T.; Pasik-Duncan, B.; Stettner, L.

    2011-02-15

    A continuous-time, long-run growth optimal (optimal logarithmic utility) portfolio with proportional transaction costs, consisting of a fixed proportional cost and a cost proportional to the volume of the transaction, is considered. The asset prices are modeled as exponents of diffusions with jumps whose parameters depend on a finite-state Markov process of economic factors. An obligatory portfolio diversification is introduced, according to which it is required to invest at least a fixed small portion of wealth in each asset.

  19. Boosting the discriminatory power of sparse survival models via optimization of the concordance index and stability selection.

    PubMed

    Mayr, Andreas; Hofner, Benjamin; Schmid, Matthias

    2016-07-22

    When constructing new biomarker or gene signature scores for time-to-event outcomes, the underlying aims are to develop a discrimination model that helps to predict whether patients have a poor or good prognosis and to identify the most influential variables for this task. In practice, this is often done by fitting Cox models. Those are, however, not necessarily optimal with respect to the resulting discriminatory power and are based on restrictive assumptions. We present a combined approach to automatically select and fit sparse discrimination models for potentially high-dimensional survival data based on boosting a smooth version of the concordance index (C-index). Due to this objective function, the resulting prediction models are optimal with respect to their ability to discriminate between patients with longer and shorter survival times. The gradient boosting algorithm is combined with the stability selection approach to enhance and control its variable selection properties. The resulting algorithm fits prediction models based on the rankings of the survival times and automatically selects only the most stable predictors. The performance of the approach, which works best for small numbers of informative predictors, is demonstrated in a large-scale simulation study: C-index boosting in combination with stability selection is able to identify a small subset of informative predictors from a much larger set of non-informative ones while controlling the per-family error rate. In an application to discover biomarkers for breast cancer patients based on gene expression data, stability selection yielded sparser models and the resulting discriminatory power was higher than with lasso-penalized Cox regression models. The combination of stability selection and C-index boosting can be used to select small numbers of informative biomarkers and to derive new prediction rules that are optimal with respect to their discriminatory power. Stability selection controls the per-family error rate.
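
    The concordance index that this approach boosts can be sketched independently of the boosting machinery. Below is a minimal (unoptimized) implementation of Harrell's C-index for right-censored data, assuming higher scores indicate higher risk (shorter survival):

```python
import numpy as np

def concordance_index(time, event, score):
    """Harrell's C-index: the fraction of comparable patient pairs that
    the risk score orders correctly.

    A pair (i, j) is comparable when the earlier time is an observed
    event (event[i] == 1 and time[j] > time[i]); score ties count as
    half-concordant.
    """
    time, event, score = map(np.asarray, (time, event, score))
    concordant = ties = comparable = 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # censored earlier time: pair not comparable
        for j in range(n):
            if time[j] > time[i]:
                comparable += 1
                if score[i] > score[j]:
                    concordant += 1       # higher risk died earlier: correct
                elif score[i] == score[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A perfectly anti-ranked score gives 0, random scoring gives about 0.5, and perfect ranking gives 1; the boosting approach replaces this step-function count with a smooth surrogate so it can be used as a differentiable objective.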

  20. Automatic selection of an optimal systolic and diastolic reconstruction windows for dual-source CT coronary angiography

    NASA Astrophysics Data System (ADS)

    Seifarth, H.; Puesken, M.; Wienbeck, S.; Maintz, D.; Heindel, W.; Juergens, K.-U.

    2008-03-01

    Purpose: To assess the performance of a motion map algorithm to automatically determine the optimal systolic and diastolic reconstruction window for coronary CT Angiography using Dual Source CT. Materials and Methods: Dual Source coronary CT angiography data sets (Somatom Definition, Siemens Medical Solutions) from 50 consecutive patients were included in the analysis. Optimal systolic and diastolic reconstruction windows were determined using a motion map algorithm (BestPhase, Siemens Medical Solutions). Additionally data sets were reconstructed in 5% steps throughout the RR-interval. For each major vessel (RCA, LAD and LCX) an optimal systolic and diastolic reconstruction window was manually determined by two independent readers using volume rendering displays. Image quality was rated using a five-point scale (1 = no motion artifacts, 5 = severe motion artifacts over entire length of the vessel). Results: The mean heart rate during the scan was 72.4bpm (+/-15.8bpm). Median systolic and diastolic reconstruction windows using the BestPhase algorithm were at 37% and 73% RR. The median manually selected systolic reconstruction window was 35 %, 30% and 35% for RCA, LAD, and LCX. For all vessels the median observer selected diastolic reconstruction window was 75%. Mean image quality using the BestPhase algorithm was 2.4 +/-0.9 for systolic reconstructions and 1.9 +/-1.1 for diastolic reconstructions. Using the manual approach, the mean image quality was 1.9 +/-0.5 and 1.7 +/-0.8 respectively. There was a significant difference in image quality between automatically and manually determined systolic reconstructions (p<0.01) but there was no significant difference in image quality in diastolic reconstructions. Conclusion: Automatic determination of the optimal reconstruction interval using the BestPhase algorithm is feasible and yields reconstruction windows similar to observer selected reconstruction windows. 
    In diastolic reconstructions, overall image quality is similar.

  1. Pyrido pyrimidinones as selective agonists of the high affinity niacin receptor GPR109A: optimization of in vitro activity.

    PubMed

    Peters, Jens-Uwe; Kühne, Holger; Dehmlow, Henrietta; Grether, Uwe; Conte, Aurelia; Hainzl, Dominik; Hertel, Cornelia; Kratochwil, Nicole A; Otteneder, Michael; Narquizian, Robert; Panousis, Constantinos G; Ricklin, Fabienne; Röver, Stephan

    2010-09-15

    Pyrido pyrimidinones are selective agonists of the human high-affinity niacin receptor GPR109A (HM74A). They show no activity on the highly homologous low-affinity receptor GPR109B (HM74). Starting from a high-throughput screening hit, the in vitro activity of the pyrido pyrimidinones was significantly improved, providing lead compounds suitable for further optimization.

  2. Optimal frequency selection of multi-channel O2-band different absorption barometric radar for air pressure measurements

    NASA Astrophysics Data System (ADS)

    Lin, Bing; Min, Qilong

    2017-02-01

    Through theoretical analysis, an optimal selection of frequencies for O2 differential absorption radar systems for air pressure field measurements is achieved. The required differential absorption optical depth between a radar frequency pair is 0.5. With this required value and further considerations of water vapor absorption and contamination of radio wave transmission, frequency pairs for the presently considered radar system are obtained. Significant impacts on the general design of differential absorption remote sensing systems are expected from these results.

  3. The Why of Waiting: How mathematical Best-Choice Models demonstrate optimality of a Refractory Period in Habitat Selection

    NASA Astrophysics Data System (ADS)

    Brugger, M. F.; Waymire, E. C.; Betts, M. G.

    2010-12-01

    When brush mice, fruit flies, and other animals disperse from their natal site, they are immediately tasked with selecting new habitat, and must do so in such a way as to optimize their chances of surviving and breeding. Habitat selection connects the fields of behavioral ecology and landscape ecology by describing the role that the physical quality of habitat plays in the selection process. Interestingly, observations indicate a strategy that occurs with a certain prescribed statistical regularity. It has been demonstrated (Stamps, Davis, Blozis, Boundy-Mills, Anim. Behav., 2007) that brush mice and fruit flies employ a refractory period: a period wherein a disperser, after leaving its natal site, will not accept highly-preferred natural habitats. Assuming this behavior has adaptive benefit, the apparent optimality of this strategy is mirrored in mathematical models of stochastic optimization. In one such model, the classical Best Choice Problem, a selector views some permutation of the numbers {1, ..., n} one by one, seeing only their relative ranks, and then either selects that element or discards it. The goal is to choose the best element. The optimal strategy is to wait until the ⌈n/e⌉-th element and then pick the first element that is better than all those already seen; this may demonstrate why refractory periods have adaptive benefit. We present three extensions of the Best Choice Problem: a partial ordering on the set of elements (Kubicki & Morayne, SIAM J. Discrete Math., 2005), a new goal of minimizing the expected rank (Chow, Moriguti, Robbins, Samuels, Israel J. Math., 1964), and a general utility function (Gusein-Zade, Theory of Prob. and Applications, 1966) allowing the top r sites to be equally desirable. These extensions relate to ecological phenomena not represented by the classical problem. In each, we discuss the effect on the duration or existence of the refractory period.
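
    The ⌈n/e⌉ waiting rule of the classical Best Choice Problem is easy to check by simulation. The sketch below (function names are illustrative) estimates the success probability of the rule "observe the first r candidates without committing, then accept the first one better than everything seen so far":

```python
import math
import random

def best_choice_trial(n, r, rng):
    """One trial of the classical best-choice rule with a refractory
    period of length r; returns True if the overall best was chosen."""
    ranks = list(range(n))  # 0 = worst, n - 1 = best
    rng.shuffle(ranks)
    threshold = max(ranks[:r])          # best candidate seen while waiting
    for x in ranks[r:]:
        if x > threshold:
            return x == n - 1           # accepted: was it the overall best?
    return ranks[-1] == n - 1           # never triggered: stuck with the last

def success_rate(n, r, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(best_choice_trial(n, r, rng) for _ in range(trials)) / trials

n = 100
r_opt = math.ceil(n / math.e)  # = 37: the ⌈n/e⌉ waiting rule
# success_rate(n, r_opt) comes out near the theoretical 1/e ≈ 0.37 bound
```

Waiting much less (or much more) than n/e measurably lowers the success probability, which is the sense in which the refractory period is optimal.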

  4. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 1. Screening of optimal extraction conditions using a D-optimal experimental design.

    PubMed

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A D-optimal design was constructed to optimize allergen extraction efficiency simultaneously from roasted, non-roasted, defatted, and non-defatted almond, hazelnut, peanut, and pistachio flours using three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various conditions of ionic strength, buffer-to-protein ratio, extraction temperature, and extraction duration. Statistical analysis showed that roasting and non-defatting significantly lowered protein recovery for all nuts. Increasing the temperature and the buffer-to-protein ratio during extraction significantly increased protein recovery, whereas increasing the extraction time had no significant impact. The impact of the three buffers on protein recovery varied significantly among the nuts. Depending on the extraction conditions, protein recovery varied from 19% to 95% for peanut, 31% to 73% for almond, 17% to 64% for pistachio, and 27% to 88% for hazelnut. A modulation by the buffer type and ionic strength of the protein and immunoglobulin E binding profiles of the extracts was evidenced, where high protein recovery levels did not always correlate with high immunoreactivity.
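
    A D-optimal design such as the one above picks experimental runs that maximize det(X'X), i.e. minimize the volume of the confidence ellipsoid of the fitted model parameters. The following is only an illustrative greedy sketch of that criterion (production D-optimal software uses exchange algorithms, not this naive search):

```python
import numpy as np

def greedy_d_optimal(X, k, rng=None):
    """Greedily pick k rows of the candidate design matrix X that
    (locally) maximize det(X_S' X_S), the D-optimality criterion."""
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    chosen = [int(rng.integers(n))]          # arbitrary starting run
    while len(chosen) < k:
        best_i, best_det = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            S = X[chosen + [i]]
            d = np.linalg.det(S.T @ S)       # information-matrix determinant
            if d > best_det:
                best_det, best_i = d, i
        chosen.append(best_i)
    return chosen
```

For a straight-line model over candidates in [-1, 1], the criterion drives the selection to the extreme levels, matching the classical result that D-optimal two-point designs for a line sit at the ends of the factor range.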

  5. Selection of optimal welding condition for GTA pulse welding in root-pass of V-groove butt joint

    NASA Astrophysics Data System (ADS)

    Yun, Seok-Chul; Kim, Jae-Woong

    2010-12-01

    In the manufacture of high-quality welds or pipelines, a full-penetration weld has to be made along the weld joint. Therefore, root-pass welding is very important, and its conditions have to be selected carefully. In this study, an experimental method for the selection of optimal welding conditions is proposed for gas tungsten arc (GTA) pulse welding of the root pass, which is laid along a V-grooved butt-weld joint. This method uses response surface analysis, in which the width and height of the back bead are chosen as quality variables of the weld. The overall desirability function, which combines the desirability functions for the two quality variables, is used as the objective function to obtain the optimal welding conditions. In our experiments, the target values of back bead width and height were 4 mm and zero, respectively, for a V-grooved butt-weld joint of a 7-mm-thick steel plate. The optimal welding conditions yielded a back bead profile (bead width and height) of 4.012 mm and 0.02 mm. A series of welding tests revealed that a uniform, full-penetration weld bead can be obtained by adopting the optimal welding conditions determined by the proposed method.
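
    An overall desirability function of the kind used here is commonly built Derringer-Suich style: each response gets an individual desirability in [0, 1], and the objective is their geometric mean. The sketch below uses the abstract's targets (width 4 mm, height 0 mm) but assumed tolerances, since the paper's exact desirability shapes are not given:

```python
import numpy as np

def desirability_target(y, target, tol):
    """Individual desirability: 1 at the target value, falling linearly
    to 0 at target +/- tol (a simple triangular target-is-best form)."""
    return max(0.0, 1.0 - abs(y - target) / tol)

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any d = 0 forces the
    overall score to 0, so every response must stay acceptable."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))

# Back-bead width target 4 mm, height target 0 mm (tolerances assumed):
d_width = desirability_target(4.012, target=4.0, tol=1.0)
d_height = desirability_target(0.02, target=0.0, tol=0.5)
D = overall_desirability([d_width, d_height])
```

Maximizing D over the fitted response surfaces of both quality variables then yields a single compromise set of welding conditions.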

  6. Commentary: Why Pharmaceutical Scientists in Early Drug Discovery Are Critical for Influencing the Design and Selection of Optimal Drug Candidates.

    PubMed

    Landis, Margaret S; Bhattachar, Shobha; Yazdanian, Mehran; Morrison, John

    2017-07-28

    This commentary reflects the collective view of pharmaceutical scientists from four different organizations with extensive experience in the field of drug discovery support. Herein, a discussion is presented on current and future approaches for the selection of optimal, developable drug candidates. Over the past two decades, developability assessment programs have been implemented with the intention of improving physicochemical and metabolic properties. However, the complexity of both new drug targets and non-traditional drug candidates provides continuing challenges for developing formulations for optimal drug delivery. The need for more enabling technologies to deliver drug candidates has necessitated an even more active role for pharmaceutical scientists in influencing many key molecular parameters during compound optimization and selection. This enhanced role begins at the early in vitro screening stages, where key learnings regarding the interplay of molecular structure and pharmaceutical property relationships can be derived. The performance of drug candidates in formulations intended to support key in vivo studies provides important information on chemotype-formulation compatibility relationships. Structure modifications to support the selection of the solid form are also important to consider, and predictive in silico models are being rapidly developed in this area. Ultimately, the role of pharmaceutical scientists in drug discovery now extends beyond rapid solubility screening, early form assessment, and data delivery. This multidisciplinary role has evolved to include proactively taking part in molecular design to better align solid form and formulation requirements and enhance developability potential.

  7. Optimized fabrication of Ca-P/PHBV nanocomposite scaffolds via selective laser sintering for bone tissue engineering.

    PubMed

    Duan, Bin; Cheung, Wai Lam; Wang, Min

    2011-03-01

    Biomaterials for scaffolds and scaffold fabrication techniques are two key elements in scaffold-based tissue engineering. Nanocomposites that consist of biodegradable polymers and osteoconductive bioceramic nanoparticles and advanced scaffold manufacturing techniques, such as rapid prototyping (RP) technologies, have attracted much attention for developing new bone tissue engineering strategies. In the current study, poly(hydroxybutyrate-co-hydroxyvalerate) (PHBV) microspheres and calcium phosphate (Ca-P)/PHBV nanocomposite microspheres were fabricated using the oil-in-water (O/W) and solid-in-oil-in-water (S/O/W) emulsion solvent evaporation methods. The microspheres with suitable sizes were then used as raw materials for scaffold fabrication via selective laser sintering (SLS), which is a mature RP technique. A three-factor three-level complete factorial design was applied to investigate the effects of the three factors (laser power, scan spacing, and layer thickness) in SLS and to optimize SLS parameters for producing good-quality PHBV polymer scaffolds and Ca-P/PHBV nanocomposite scaffolds. The plots of the main effects of these three factors and the three-dimensional response surface were constructed and discussed. Based on the regression equation, optimized PHBV scaffolds and Ca-P/PHBV scaffolds were fabricated using the optimized values of SLS parameters. Characterization of optimized PHBV scaffolds and Ca-P/PHBV scaffolds verified the optimization process. It has also been demonstrated that SLS has the capability of constructing good-quality, sophisticated porous structures of complex shape, which some tissue engineering applications may require.

  8. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for unknown nonlinear stochastic hybrid systems with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model, reducing the on-line system identification time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noises, and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested based on the innovation-process error estimated by the Kalman filter estimation algorithm, and a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection.
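
    The innovation-monitoring and covariance-resetting idea can be illustrated on a scalar Kalman filter. This is a generic sketch, not the paper's multivariable NARMAX scheme; the model coefficients, threshold, and covariance boost are all assumed values:

```python
import numpy as np

def kalman_with_reset(ys, a=1.0, c=1.0, q=0.01, r=0.1,
                      thresh=4.0, boost=100.0):
    """Scalar Kalman filter (x' = a x + w, y = c x + v) that monitors the
    normalized innovation; when it exceeds `thresh` sigma (a crude fault
    indicator), the estimate covariance is boosted so the filter
    re-converges quickly to the post-fault behavior."""
    x, p = 0.0, 1.0
    estimates, alarms = [], []
    for y in ys:
        # time update
        x, p = a * x, a * p * a + q
        # innovation and its predicted variance
        s = c * p * c + r
        nu = y - c * x
        if abs(nu) / np.sqrt(s) > thresh:   # innovation consistency test
            p += boost                      # covariance reset on alarm
            s = c * p * c + r
            alarms.append(True)
        else:
            alarms.append(False)
        # measurement update
        k = p * c / s
        x, p = x + k * nu, (1 - k * c) * p
        estimates.append(x)
    return np.array(estimates), alarms
```

On a signal that jumps abruptly (an "abrupt fault"), the alarm fires at the jump and the boosted covariance lets the estimate snap to the new level instead of converging slowly.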

  9. Optimization of 2-phenylcyclopropylmethylamines as selective serotonin 2C receptor agonists and their evaluation as potential antipsychotic agents.

    PubMed

    Cheng, Jianjun; Giguère, Patrick M; Onajole, Oluseye K; Lv, Wei; Gaisin, Arsen; Gunosewoyo, Hendra; Schmerberg, Claire M; Pogorelov, Vladimir M; Rodriguiz, Ramona M; Vistoli, Giulio; Wetsel, William C; Roth, Bryan L; Kozikowski, Alan P

    2015-02-26

    The discovery of a new series of compounds that are potent, selective 5-HT2C receptor agonists is described herein as we continue our efforts to optimize the 2-phenylcyclopropylmethylamine scaffold. Modifications focused on the alkoxyl substituent present on the aromatic ring led to the identification of improved ligands with better potency at the 5-HT2C receptor and excellent selectivity against the 5-HT2A and 5-HT2B receptors. ADMET studies coupled with a behavioral test using the amphetamine-induced hyperactivity model identified four compounds possessing drug-like profiles and having antipsychotic properties. Compound (+)-16b, which displayed an EC50 of 4.2 nM at 5-HT2C, no activity at 5-HT2B, and an 89-fold selectivity against 5-HT2A, is one of the most potent and selective 5-HT2C agonists reported to date. The likely binding mode of this series of compounds to the 5-HT2C receptor was also investigated in a modeling study, using optimized models incorporating the structures of β2-adrenergic receptor and 5-HT2B receptor.

  10. A modified binary particle swarm optimization for selecting the small subset of informative genes from gene expression data.

    PubMed

    Mohamad, Mohd Saberi; Omatu, Sigeru; Deris, Safaai; Yoshioka, Michifumi

    2011-11-01

    Gene expression data are expected to be of significant help in the development of efficient cancer diagnosis and classification platforms. In order to select a small subset of informative genes from such data for cancer classification, many researchers have recently analyzed gene expression data using various computational intelligence methods. However, due to the small number of samples compared to the huge number of genes (high dimensionality), irrelevant genes, and noisy genes, many of the computational methods face difficulties in selecting this small subset. Thus, we propose an improved (modified) binary particle swarm optimization to select a small subset of informative genes relevant for cancer classification. In the proposed method, we introduce a particle speed that sets the rate at which a particle changes its position, and we propose a rule for updating particle positions. By performing experiments on ten different gene expression datasets, we found that the performance of the proposed method is superior to previous related work, including the conventional version of binary particle swarm optimization (BPSO), in terms of classification accuracy and the number of selected genes. The proposed method also requires lower running times than BPSO.
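
    The position-update mechanism underlying binary PSO can be sketched as follows. This follows the standard Kennedy-Eberhart sigmoid formulation, not the authors' modified speed rule; the function name and the toy fitness are illustrative:

```python
import numpy as np

def bpso_feature_select(fitness, n_features, n_particles=20, iters=50,
                        w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimal binary PSO for selecting a 0/1 feature (gene) subset.

    Velocities are real-valued; a sigmoid maps each velocity component
    to the probability of the corresponding bit being 1. `fitness`
    takes a 0/1 vector and returns a value to be minimized.
    """
    rng = np.random.default_rng(rng)
    pos = (rng.random((n_particles, n_features)) < 0.5).astype(float)
    vel = 0.1 * rng.standard_normal((n_particles, n_features))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random(vel.shape)
        r2 = rng.random(vel.shape)
        # velocity update: inertia plus pulls toward personal/global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # sigmoid transfer: each bit is set with probability sigmoid(v)
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest
```

In a gene-selection setting, `fitness` would typically combine cross-validated classification error with a penalty on the number of selected genes.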

  11. Design and optimization of a multi-element piezoelectric transducer for mode-selective generation of guided waves

    NASA Astrophysics Data System (ADS)

    Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice

    2016-07-01

A novel multi-element piezoelectric transducer (MEPT) is designed, optimized, machined, and experimentally tested to improve structural health monitoring systems through mode-selective generation of guided waves (GW) in an isotropic structure. GW generation using typical piezoceramics complicates signal processing, and consequently damage detection, because at any driving frequency at least the two fundamental symmetric (S0) and antisymmetric (A0) modes are generated. To prevent this, a mode-selective transducer design based on the MEPT is proposed. A numerical method is first developed to extract the interfacial stress between a single piezoceramic element and a host structure; this stress is then used as the input to an analytical model that predicts GW propagation through the thickness of an isotropic plate. Two novel objective functions are proposed to optimize the interfacial shear stress, both suppressing the unwanted mode(s) and maximizing the desired mode. Simplicity and low manufacturing cost are the two main targets driving the design of the MEPT. A prototype MEPT is then manufactured using laser micro-machining, and an experimental procedure is presented to validate its performance as a new solution for mode-selective GW generation. Experimental tests demonstrate the high capability of the MEPT for mode-selective GW generation: the unwanted mode is suppressed by a factor of up to 170 compared with the results obtained with a single piezoceramic.
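A minimal sketch of the mode-suppression idea: if each element's contribution to the modal amplitudes is approximately linear, driving the elements with a vector in the null space of the unwanted-mode row cancels that mode exactly while leaving the desired mode nonzero. The 2x2 excitation-to-mode matrix below is hypothetical, not taken from the paper's stress-based models:

```python
# hypothetical excitation-to-mode matrix: H[m][k] is the amplitude of mode m
# (0 = S0, 1 = A0) produced by unit drive on element k; the paper's interfacial
# stress models would supply the real entries
H = [[1.0, 0.8],
     [0.6, -0.9]]

# a drive vector in the null space of the A0 row cancels A0 exactly
u = [H[1][1], -H[1][0]]
scale = max(abs(x) for x in u)
u = [x / scale for x in u]          # scale so the largest element drive is 1

S0 = sum(H[0][k] * u[k] for k in range(2))   # desired mode amplitude
A0 = sum(H[1][k] * u[k] for k in range(2))   # unwanted mode amplitude (~0)
print(S0, A0)
```

With more elements than modes to suppress, the leftover degrees of freedom can be spent maximizing the desired-mode amplitude, which mirrors the paper's pair of objective functions.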

  12. Scaffold Ranking and Positional Scanning Utilized in the Discovery of nAChR-Selective Compounds Suitable for Optimization Studies

    PubMed Central

    Wu, Jinhua; Zhang, Yaohong; Maida, Laura E.; Santos, Radleigh G.; Welmaker, Gregory S.; LaVoi, Travis M.; Nefzi, Adel; Yu, Yongping; Houghten, Richard A.; Toll, Lawrence; Giulianotti, Marc A.

    2014-01-01

Nicotine binds to nicotinic acetylcholine receptors (nAChRs), which exist as many different subtypes. The α4β2 nAChR is the most prevalent subtype in the brain and has the most evidence linking it to nicotine-seeking behavior. Herein we report the use of mixture-based combinatorial libraries for the rapid discovery of a series of α4β2 nAChR-selective compounds. Further chemistry optimization provided compound 301, which was characterized as a selective α4β2 nAChR antagonist. This compound displayed no agonist activity but blocked nicotine-induced depolarization of HEK cells with an IC50 of approximately 430 nM. Compound 301 demonstrated nearly 500-fold selectivity in binding and 40-fold functional selectivity for α4β2 over α3β4 nAChRs. In total, over 5 million compounds were assessed through the use of just 170 samples to identify a series of structural analogues suitable for future optimization toward the goal of developing clinically relevant smoking cessation medications. PMID:24274400

  13. Regression metamodels of an optimal genomic testing strategy in dairy cattle when selection intensity is low

    USDA-ARS?s Scientific Manuscript database

    Genomic testing of dairy cattle increases reliability and can be used to select animals with superior genetic merit. Genomic testing is not free and not all candidates for selection should necessarily be tested. One common algorithm used to compare alternative decisions is time-consuming and not eas...

  14. Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    DTIC Science & Technology

    2004-09-01

optimization problems with stochastic objective functions and a mixture of design variable types. The generalized pattern search (GPS) class of algorithms is...provide computational enhancements to the basic algorithm. Implementation alternatives include the use of modern R&S procedures designed to provide...

  15. Selecting Segmental Errors in Non-Native Dutch for Optimal Pronunciation Training

    ERIC Educational Resources Information Center

    Neri, Ambra; Cucchiarini, Catia; Strik, Helmer

    2006-01-01

    The current emphasis in second language teaching lies in the achievement of communicative effectiveness. In line with this approach, pronunciation training is nowadays geared towards helping learners avoid serious pronunciation errors, rather than eradicating the finest traces of foreign accent. However, to devise optimal pronunciation training…

  16. Item Selection for the Development of Short Forms of Scales Using an Ant Colony Optimization Algorithm

    ERIC Educational Resources Information Center

    Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.

    2008-01-01

    This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…
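The pheromone-driven search that ACO applies to item selection can be sketched as follows: each ant samples a candidate short form with item probabilities proportional to pheromone, and pheromone on the best form found so far is reinforced while the rest evaporates. The item pool size, the per-item "quality" values (a toy stand-in for the model-fit and validity criteria the article actually optimizes), and the ACO constants are all illustrative assumptions:

```python
import random

random.seed(1)

N_ITEMS = 12
SHORT_FORM = 4
# hypothetical per-item quality values standing in for psychometric criteria
quality = [0.2, 0.9, 0.1, 0.8, 0.3, 0.7, 0.2, 0.9, 0.4, 0.1, 0.6, 0.3]

def score(items):
    return sum(quality[i] for i in items)

def aco_short_form(n_ants=15, iters=40, evaporation=0.1, deposit=0.5):
    pheromone = [1.0] * N_ITEMS
    best, best_s = None, float("-inf")
    for _ in range(iters):
        for _ in range(n_ants):
            # each ant assembles a short form; an item's selection probability
            # is proportional to its pheromone level
            items = set()
            while len(items) < SHORT_FORM:
                avail = [i for i in range(N_ITEMS) if i not in items]
                wts = [pheromone[i] for i in avail]
                items.add(random.choices(avail, weights=wts)[0])
            s = score(items)
            if s > best_s:
                best, best_s = set(items), s
        # evaporate everywhere, then reinforce items of the best form so far
        for i in range(N_ITEMS):
            pheromone[i] *= 1.0 - evaporation
        for i in best:
            pheromone[i] += deposit * best_s
    return sorted(best), best_s

form, form_score = aco_short_form()
print("short form items:", form)
```

In the article the fitness of a candidate short form comes from fitting a measurement model to respondent data rather than from fixed per-item scores; only the search loop is sketched here.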

  17. Optimal sensor selection for noisy binary detection in stochastic pooling networks

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Li, Feng; Amblard, P.-O.; Grant, Alex J.

    2013-08-01

Stochastic pooling networks (SPNs) are a useful model for understanding and explaining how naturally occurring encoding of stochastic processes can arise in sensor systems ranging from macroscopic social networks to neuron populations and nanoscale electronics. Due to the interaction of nonlinearity, random noise, and redundancy, SPNs support various unexpected emergent features, such as suprathreshold stochastic resonance, but most existing mathematical results are restricted to the simplest case, in which all sensors in a network are identical. Nevertheless, numerical results on information transmission have shown that in the presence of independent noise, the optimal configuration of an SPN is partially heterogeneous in its sensor parameters: the optimal solution comprises clusters of identical sensors, with each cluster taking different parameter values. In this paper, we consider an SPN model of a binary hypothesis detection task and show mathematically that the optimal solution for a specific bound on detection performance is also given by clustered heterogeneity, such that measurements made by sensors with identical parameters should either all be excluded from the detection decision or all included. We also derive an algorithm for numerically finding the optimal solution and illustrate its utility with several examples, including a model of parallel sensory neurons with Poisson firing characteristics.
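The all-or-none cluster property can be illustrated on a small toy instance. The sketch below scores each sensor subset by the deflection coefficient of the pooled Poisson count, a simple detection index used here as a stand-in for the bound the paper actually optimizes; the sensor rate pairs are hypothetical:

```python
from itertools import chain, combinations

# hypothetical sensor parameters (firing rate under H0, rate under H1);
# sensors with identical tuples form one "cluster" in the paper's sense
sensors = [(1.0, 3.0)] * 3 + [(2.0, 2.5)] * 3 + [(5.0, 5.2)] * 2

def deflection(subset):
    # deflection coefficient of the pooled Poisson count: squared mean shift
    # between hypotheses over the variance under H1
    if not subset:
        return 0.0
    mean_shift = sum(sensors[i][1] - sensors[i][0] for i in subset)
    variance = sum(sensors[i][1] for i in subset)   # Poisson variance under H1
    return mean_shift ** 2 / variance

def best_subset():
    idx = range(len(sensors))
    subsets = chain.from_iterable(combinations(idx, k)
                                  for k in range(len(sensors) + 1))
    return max(subsets, key=deflection)

chosen = best_subset()
print("included sensors:", chosen)
```

For these toy rates the optimum includes exactly the first cluster of three identical sensors and excludes the other two clusters wholesale, consistent with the clustered-heterogeneity result; the exhaustive search here is of course only practical for small networks, whereas the paper derives a dedicated algorithm.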

  19. A Conceptual Framework for Procurement Decision Making Model to Optimize Supplier Selection: The Case of Malaysian Construction Industry

    NASA Astrophysics Data System (ADS)

    Chuan, Ngam Min; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Yong, Lee Choon; Ghazali, Azrul; Ezanee Rusli, Mohd; Itam, Zarina Binti; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

This paper examines the current state of the procurement system in Malaysia, specifically supplier selection in the construction industry. It proposes a comprehensive study of supplier selection metrics for infrastructure building, weighs the importance of each metric, and examines the relationships between the metrics among initiators, decision makers, buyers, and users. With a metrics hierarchy of criteria importance, a supplier selection process can be defined, repeated, and audited with fewer complications or difficulties. This will help the field of procurement improve, as this research develops and redefines the policies and procedures governing supplier selection. Such a systematic process enables optimization of supplier selection and thus increases value for every stakeholder, since the selection process is greatly simplified. A redefined policy and procedure not only increases a company's effectiveness and profit, but also positions the company to reach greater heights in the advancement of procurement in Malaysia.
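A weighted metrics hierarchy of the kind described above reduces, in its simplest form, to an additive scoring model: each supplier's rating on each criterion is multiplied by that criterion's weight and summed. The criteria names, weights, and ratings below are purely illustrative, not values from the paper's survey:

```python
# hypothetical criteria weights (summing to 1) and 1-5 supplier ratings
weights = {"cost": 0.35, "quality": 0.30, "delivery": 0.20, "service": 0.15}

suppliers = {
    "Supplier A": {"cost": 4, "quality": 3, "delivery": 5, "service": 4},
    "Supplier B": {"cost": 3, "quality": 5, "delivery": 4, "service": 3},
    "Supplier C": {"cost": 5, "quality": 2, "delivery": 3, "service": 5},
}

def weighted_score(ratings):
    # additive weighted score over the criteria hierarchy
    return sum(weights[c] * ratings[c] for c in weights)

ranked = sorted(suppliers, key=lambda s: weighted_score(suppliers[s]), reverse=True)
print(ranked)
```

Because the weights are explicit, such a scoring run is repeatable and auditable, which is exactly the property the paper argues a defined selection process should have.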

  20. Optimization of fermentation parameters to study the behavior of selected lactic cultures on soy solid state fermentation.

    PubMed

    Rodríguez de Olmos, A; Bru, E; Garro, M S

    2015-03-02

The use of solid state fermentation (SSF) has grown with the demand for natural and healthy products. Lactic acid bacteria and bifidobacteria play a leading role in the production of novel functional foods, yet their behavior in these systems is practically unknown. Soy is an excellent substrate for the production of functional foods because of its low cost and nutritional value. The aim of this work was to optimize the parameters involved in SSF using selected lactic cultures to improve a soybean substrate, as a possible strategy for the elaboration of new soy foods with enhanced functional and nutritional properties. Soy flour and selected lactic cultures were used under different conditions to optimize soy SSF. The measured responses were bacterial growth, free amino acids, and β-glucosidase activity, which were analyzed by response surface methodology. Based on the proposed statistical model, fermentation conditions were established by varying the moisture content of the soy substrate (50-80%) and the incubation temperature (31-43°C); the effect of inoculum amount was also investigated. These studies demonstrated the ability of the selected strains (Lactobacillus paracasei subsp. paracasei and Bifidobacterium longum) to grow, with strain-dependent behavior, in the SSF system. β-Glucosidase activity was evident in both strains, and L. paracasei subsp. paracasei increased free amino acids by the end of fermentation under the assayed conditions. The statistical model allowed optimization of the fermentation parameters of soy SSF by the selected lactic strains, and demonstrated the possibility of working with lower initial bacterial amounts while still obtaining results with significant technological impact.
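Once response surface methodology has produced a fitted second-order model, locating the optimum inside the experimental region is straightforward. The sketch below searches a hypothetical quadratic growth model over the moisture and temperature ranges stated above; every coefficient is an illustrative assumption, since the real ones would come from the paper's regression:

```python
# hypothetical fitted second-order response surface for bacterial growth
# (e.g. log CFU/g) as a function of moisture M (%) and temperature T (deg C)
def predicted_growth(M, T):
    return (-40.0 + 0.60 * M + 1.80 * T
            - 0.004 * M * M - 0.024 * T * T - 0.002 * M * T)

# search the experimental region: moisture 50-80 %, temperature 31-43 deg C
grid = ((M, T) for M in range(50, 81) for T in range(31, 44))
best_M, best_T = max(grid, key=lambda p: predicted_growth(*p))
print(best_M, best_T, round(predicted_growth(best_M, best_T), 3))
```

A grid search like this is the crudest way to exploit the model; since the surface is quadratic, the stationary point can also be found analytically by setting both partial derivatives to zero, provided it falls inside the experimental region.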