Science.gov

Sample records for optimal tuner selection

  1. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
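
    As a rough illustration of the kind of computation involved, the sketch below scores candidate tuner subsets by the theoretical steady-state Kalman estimation error they induce; the brute-force subset search, the build_model and W_of helpers, and the matrix shapes are assumptions made for demonstration, not the specific formulation of this record.

        # Hypothetical sketch: pick the tuner subset that minimizes the theoretical
        # steady-state Kalman mean-squared error in the parameters of interest.
        import numpy as np
        from itertools import combinations
        from scipy.linalg import solve_discrete_are

        def steady_state_error_cov(A, C, Q, R):
            # A-priori error covariance of the discrete Kalman filter, obtained
            # from the dual discrete algebraic Riccati equation.
            return solve_discrete_are(A.T, C.T, Q, R)

        def theoretical_mse(A, C, Q, R, W):
            # Mean-squared estimation error in the parameters of interest p = W x.
            P = steady_state_error_cov(A, C, Q, R)
            return float(np.trace(W @ P @ W.T))

        def best_tuner_subset(build_model, n_health, q, W_of):
            # Exhaustive stand-in for the multi-variable iterative search: try every
            # q-element subset of health parameters as the model tuning vector.
            best_subset, best_mse = None, np.inf
            for subset in combinations(range(n_health), q):
                A, C, Q, R = build_model(subset)          # reduced model for this choice
                mse = theoretical_mse(A, C, Q, R, W_of(subset))
                if mse < best_mse:
                    best_subset, best_mse = subset, mse
            return best_subset, best_mse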

  2. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  3. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  4. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.

  5. Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay

    2012-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.
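
    The toy loop below only illustrates the control-structure idea described above, regulating an estimated thrust rather than a measured surrogate such as fan speed; the first-order engine model, the noise level standing in for the estimator, and the PI gains are all assumptions, and none of this reflects CMAPSS40k or the OTKF.

        # Toy illustration: close a PI loop around an estimated thrust signal
        # instead of a measured surrogate. Plant, estimator, and gains are assumed.
        import numpy as np

        dt, kp, ki = 0.02, 0.8, 2.0            # sample time and assumed PI gains
        a, b = 0.95, 0.4                       # assumed fuel-flow -> thrust dynamics
        thrust_cmd, thrust, integ = 1.0, 0.0, 0.0
        rng = np.random.default_rng(0)

        for _ in range(500):
            thrust_hat = thrust + 0.005 * rng.standard_normal()  # on-board estimate
            err = thrust_cmd - thrust_hat
            integ += err * dt
            wf = kp * err + ki * integ         # fuel-flow command from PI on the estimate
            thrust = a * thrust + b * dt * wf  # assumed engine response

        print(f"final thrust ~ {thrust:.3f} (command {thrust_cmd})")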

  6. Fast ferrite tuner for the BNL synchrotron light source

    SciTech Connect

    Pivit, E.; Hanna, S.M.; Keane, J.

    1991-01-01

    A new type of ferrite tuner has been tested at BNL. The ferrite tuner uses garnet slabs partially filling a stripline. One of the important features of the tuner is that the ferrite is perpendicularly biased for operation above FMR, thus reducing the magnetic losses. A unique design was adopted to achieve efficient cooling. The principle of operation of the tuner as well as our preliminary results on tuning a 52 MHz cavity are reported. Optimized conditions under which we demonstrated linear tunability of 80 kHz are described. The tuner's losses and its effect on higher-order modes in the cavity are discussed.

  7. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.
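
    A minimal sketch of the piece-wise linear on-board model idea: linear models stored at a few breakpoints of a scheduling variable are blended at the current operating point. The breakpoints, matrices, and scheduling variable below are made-up placeholders, not CMAPSS40k data.

        # Hypothetical piece-wise linear model lookup with linear blending.
        import numpy as np

        breakpoints = np.array([0.6, 0.8, 1.0])            # assumed scheduling values
        A_tab = [np.array([[0.90]]), np.array([[0.93]]), np.array([[0.96]])]
        B_tab = [np.array([[0.10]]), np.array([[0.08]]), np.array([[0.05]])]

        def piecewise_linear_model(sched):
            # Clamp, locate the bracketing breakpoints, and blend linearly.
            sched = np.clip(sched, breakpoints[0], breakpoints[-1])
            i = min(np.searchsorted(breakpoints, sched, side="right") - 1,
                    len(breakpoints) - 2)
            w = (sched - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
            A = (1 - w) * A_tab[i] + w * A_tab[i + 1]
            B = (1 - w) * B_tab[i] + w * B_tab[i + 1]
            return A, B

        A, B = piecewise_linear_model(0.85)    # model blended between the 0.8 and 1.0 points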

  8. Circular piezoelectric bender laser tuners

    NASA Technical Reports Server (NTRS)

    Mcelroy, J. H.; Thompson, P. E.; Walker, H. E.; Johnson, E. H.; Radecki, D. J.; Reynolds, R. S.

    1972-01-01

    A circular piezoelectric bender laser tuner, intended to replace conventional laser tuners when mirror diameters up to 0.50 inch are sufficient, is described. The circular piezoelectric bender laser tuner offers much higher displacement per applied volt and permits laser control circuits to be fabricated using standard operational amplifiers, rather than the expensive high-voltage amplifiers required by conventional tuners. The cost of the device is more than one order of magnitude lower than that of conventional tuners, and the device is very rugged, with all mechanical resonances easily designed to be greater than 3 kHz. In addition to its use as a laser frequency tuner, the circular bender tuner should find many applications in interferometers and similar devices.

  9. Test of a coaxial blade tuner at HTS FNAL

    SciTech Connect

    Pischalnikov, Y.; Barbanotti, S.; Harms, E.; Hocker, A.; Khabiboulline, T.; Schappert, W.; Bosotti, A.; Pagani, C.; Paparella, R.

    2011-03-01

    A coaxial blade tuner has been selected for the 1.3 GHz SRF cavities of the Fermilab SRF Accelerator Test Facility. Results from tuner cold tests in the Fermilab Horizontal Test Stand are presented. Fermilab is constructing the SRF Accelerator Test Facility, a facility for accelerator physics research and development. This facility will contain a total of six cryomodules, each containing eight 1.3 GHz nine-cell elliptical cavities. Each cavity will be equipped with a Slim Blade Tuner designed by INFN Milan. The blade tuner incorporates both a stepper motor and piezo actuators to allow for both slow and fast cavity tuning. The stepper motor allows the cavity frequency to be statically tuned over a range of 500 kHz with an accuracy of several Hz. The piezos provide up to 2 kHz of dynamic tuning for compensation of Lorentz force detuning and variations in the He bath pressure. The first eight blade tuners were built at INFN Milan, but the remainder are being manufactured commercially following the INFN design. To date, more than 40 of the commercial tuners have been delivered.

  10. Electromagnetic SCRF Cavity Tuner

    SciTech Connect

    Kashikhin, V.; Borissov, E.; Foster, G.W.; Makulski, A.; Pischalnikov, Y.; Khabiboulline, T.

    2009-05-01

    A novel prototype of an SCRF cavity tuner is being designed and tested at Fermilab. This is a superconducting C-type iron dominated magnet having a 10 mm gap, axial symmetry, and a 1 Tesla field. Inside the gap is mounted a superconducting coil capable of moving ±1 mm and producing a longitudinal force up to ±1.5 kN. The static force applied to the RF cavity flanges provides long-term tuning of the cavity geometry to a nominal frequency. The same coil, powered by a fast AC current pulse, delivers mechanical perturbation for fast cavity tuning. This fast mechanical perturbation could be used to compensate for dynamic RF cavity detuning caused by cavity Lorentz forces and microphonics. A special configuration of the magnet system was designed and tested.

  11. LEB tuner made out of titanium alloy

    SciTech Connect

    Goren, Y.; Campbell, B.

    1991-09-01

    A proposed design of a closed shell tuner for the LEB cavity is presented. The tuner is made out of Ti alloy which has a high electrical resistivity as well as very good mechanical strength. Using this alloy results in a substantial reduction in the eddy current heating as well as allowing for faster frequency control.

  12. Inductive tuners for microwave driven discharge lamps

    DOEpatents

    Simpson, James E.

    1999-01-01

    An RF-powered electrodeless lamp utilizes an inductive tuner in the waveguide that couples the RF power to the lamp cavity, reducing reflected RF power and enabling the lamp to operate efficiently.

  13. ANT tuner retrofit for LEB cavity

    SciTech Connect

    Walling, L.; Goren, Y.; Kwiatkowski, S.

    1994-03-01

    This report describes a ferrite tuner design for the LEB cavity that utilizes ferrite-to-metallic-cooling-plate bonding techniques established in the high-power rf and microwave industry. A test tuner was designed to fit into the existing LEB-built magnet and onto the Grimm LEB Cavity. It will require a new vacuum window in order to attain maximal tuning range and high-voltage capability, as well as a longer center conductor with a different vacuum window connection than the Grimm center conductor. However, the new center conductor will be essentially identical to the Grimm center conductor in its basic construction and in the way it connects to the stand for support. The tuner is mechanically very similar to high-power stacked circulators built by ANT of Germany and was designed according to ANT's established engineering and design criteria and SSC LEB tuning and power requirements. The tuner design incorporates thin tiles of ferrite glued using a high-radiation-resistance epoxy to copper-plated stainless steel cooling plates of thickness 6.5 mm with water cooling channels inside the plates. The cooling plates constitute 16 pie-shaped segments arranged in a disk. They are electrically isolated from each other to suppress eddy currents. Five of these disks are arranged in parallel with high-pressure rf contacts between the plates at the outer radius. The end walls are slotted copper-plated stainless steel of thickness 3 mm.

  14. Mechanical design upgrade of the APS storage ring rf cavity tuner

    SciTech Connect

    Jones, J.; Bromberek, D.; Kang, Y.

    1997-08-01

    The Advanced Photon Source (APS) storage ring (SR) rf system employs four banks of four spherical, single-cell resonant cavities. Each cavity is tuned by varying the cavity volume through insertion/retraction of a copper piston located at the circumference of the cavity and oriented perpendicular to the accelerator beam. During the commissioning of the APS SR, the tuners and cavity tuner ports were prone to extensive arcing and overheating. The existing tuners were modified to eliminate the problems, and two new, redesigned tuners were installed. In both cases marked improvements were obtained in the tuner mechanical performance. As measured by tuner piston and flange surface temperatures, tuner heating has been reduced by a factor of five in the new version. Redesign considerations discussed include tuner piston-to-housing alignment, tuner piston and housing materials and cooling configurations, and tuner piston sliding electrical contacts. The tuner redesign is also distinguished by a modular, more maintainable assembly.

  15. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source

    SciTech Connect

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-15

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). The first stage of this can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I(FC) with the mobile plate tuner. The I(FC) is affected by the position of the mobile plate tuner in the chamber, which behaves like a circular cavity resonator. We aim to clarify the relation between the I(FC) and the ion saturation current in the ECRIS against the position of the mobile plate tuner. We obtained the result that the variation of the plasma density contributes largely to the variation of the I(FC) when we change the position of the mobile plate tuner.

  16. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source.

    PubMed

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-01

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). The first stage of this can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I(FC) with the mobile plate tuner. The I(FC) is affected by the position of the mobile plate tuner in the chamber, which behaves like a circular cavity resonator. We aim to clarify the relation between the I(FC) and the ion saturation current in the ECRIS against the position of the mobile plate tuner. We obtained the result that the variation of the plasma density contributes largely to the variation of the I(FC) when we change the position of the mobile plate tuner.

  17. Laser tuners using circular piezoelectric benders

    NASA Technical Reports Server (NTRS)

    Mcelroy, J. H.; Thompson, P. E.; Walker, H. E.; Johnson, E. H.; Radecki, D. J.; Reynolds, R. S.

    1975-01-01

    The paper presents the results of an experimental evaluation of a new type of piezoelectric ceramic device designed for use as a laser mirror tuner. Thin plates made from various materials were assembled into a circular bimorph configuration and tested for linearity of movement, maximum travel, and resonant frequency for varying conditions of clamping torque and mirror loading values. Most of the devices tested could accept mirror diameters up to approximately 1.3 cm and maintain a resonant frequency above 2 kHz. Typical mirror translation without measurable tilt was plus or minus 20 micrometers or greater for applied voltages of less than plus or minus 300 V.

  18. Feedback controlled hybrid fast ferrite tuners

    SciTech Connect

    Remsen, D.B.; Phelps, D.A.; deGrassie, J.S.; Cary, W.P.; Pinsker, R.I.; Moeller, C.P.; Arnold, W.; Martin, S.; Pivit, E.

    1993-09-01

    A low power ANT-Bosch fast ferrite tuner (FFT) was successfully tested into (1) the lumped circuit equivalent of an antenna strap with dynamic plasma loading, and (2) a plasma loaded antenna strap in DIII-D. When the FFT accessible mismatch range was phase-shifted to encompass the plasma-induced variation in reflection coefficient, the 50 Ω source was matched (to within the desired 1.4 : 1 voltage standing wave ratio). The time required to achieve this match (i.e., the response time) was typically a few hundred milliseconds, mostly due to a relatively slow network analyzer-computer system. The response time for the active components of the FFT was 10 to 20 msec, or much faster than the present state-of-the-art for dynamic stub tuners. Future FFT tests are planned that will utilize the DIII-D computer (capable of submillisecond feedback control), as well as several upgrades to the active control circuit, to produce an FFT feedback control system with a response time approaching 1 msec.

  19. Fast Tuner R&D for RIA

    SciTech Connect

    Rusnak, B; Shen, S

    2003-08-19

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Where most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. System studies indicate that for the low-β driver linac alone, running the cavities with no fast tuner may cost 50% or more than an RF system employing a voltage controlled reactance (VCX) or other type of fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab, is presented.
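
    The mismatch described above follows from the usual relation between loaded quality factor and cavity bandwidth; the numbers in this back-of-envelope check are illustrative assumptions, not RIA design values.

        # Illustrative arithmetic: the full -3 dB bandwidth of a cavity is f0 / QL.
        f0 = 115.0e6                # cavity frequency, Hz (assumed)
        QL = 1.0e7                  # loaded quality factor under light beam loading (assumed)
        bandwidth = f0 / QL         # beam-loaded bandwidth, Hz
        microphonic_window = 100.0  # Hz, within the 80-150 Hz range quoted above
        print(f"beam-loaded bandwidth ~ {bandwidth:.1f} Hz")
        print(f"microphonic window exceeds it by ~{microphonic_window / bandwidth:.0f}x")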

  20. Characterization of CNRS Fizeau wedge laser tuner

    NASA Astrophysics Data System (ADS)

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  1. Characterization of CNRS Fizeau wedge laser tuner

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  2. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2011-03-01

    Analysis and modeling is presented for a fast microwave tuner to operate at 700 MHz which incorporates ferroelectric elements whose dielectric permittivity can be rapidly altered by application of an external voltage. This tuner could be used to correct unavoidable fluctuations in the resonant frequency of superconducting cavities in accelerator structures, thereby greatly reducing the RF power needed to drive the cavities. A planar test version of the tuner has been tested at low levels of RF power, but at 1300 MHz to minimize the physical size of the test structure. This test version comprises one-third of the final version. The tests show performance in good agreement with simulations, but with losses in the ferroelectric elements that are too large for practical use, and with issues in bonding of ferroelectric elements to the metal walls of the tuner structure.

  3. Fast Ferroelectric L-Band Tuner for ILC Cavities

    SciTech Connect

    Hirshfield, Jay L

    2010-03-15

    Design, analysis, and low-power tests are described on a 1.3 GHz ferroelectric tuner that could find application in the International Linear Collider or in Project X at Fermi National Accelerator Laboratory. The tuner configuration utilizes a three-deck sandwich imbedded in a WR-650 waveguide, in which ferroelectric bars are clamped between conducting plates that allow the tuning bias voltage to be applied. Use of a reduced one-third structure allowed tests of critical parameters of the configuration, including phase shift, loss, and switching speed. Issues that were revealed that require improvement include reducing loss tangent in the ferroelectric material, development of a reliable means of brazing ferroelectric elements to copper parts of the tuner, and simplification of the mechanical design of the configuration.

  4. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-07-03

    Design, analysis, and low-power tests are described on a ferroelectric tuner concept that could be used for controlling external coupling to RF cavities for the superconducting Energy Recovery Linac (ERL) in the electron cooler of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner configuration utilizes several small donut-shaped ferroelectric assemblies, which allow the design to be simpler and more flexible, as compared to previous designs. Design parameters for 704 and 1300 MHz versions of the tuner are given. Simulation results point to efficient performance that could reduce by a factor-of-ten the RF power levels required for driving superconducting cavities in the BNL ERL.

  5. A Fast Double Mode Tuner for Antenna Matching

    NASA Astrophysics Data System (ADS)

    Martin, S.; Arnold, W.; Pivit, E.

    1992-01-01

    To match a microwave transmitter to different loading conditions, two networks are presented. Based on a Fast Ferrite Tuner (FFT), a Double Stub Tuner (DST) is developed. The operating principle of an FFT, which consists of a stripline partially filled with microwave ferrites, may be described in terms of tunable reactances. The DST consists of two FFTs connected by a transformation line. An advanced development is the Double Mode Tuner (DMT). It uses two different modes, which can be tuned independently by varying the DC magnetic field. Thus, series and parallel reactances are introduced into a coaxial system and a T-matching network results. Both matching networks can be tuned within 20 ms and handle power levels up to 2 MW. Typical applications are in the frequency range from 25 MHz to 300 MHz with load reflection coefficients from 0.0 to 0.5 at all phases.

  6. Fast 704 MHz Ferroelectric Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-04-12

    The Omega-P SBIR project described in this Report has as its goal the development, test, and evaluation of a fast electrically-controlled L-band tuner for BNL Energy Recovery Linac (ERL) in the Electron Ion Collider (EIC) upgrade of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner, that employs an electrically-controlled ferroelectric component, is to allow fast compensation to cavity resonance changes. In ERLs, there are several factors which significantly affect the amount of power required from the wall-plug to provide the RF-power level necessary for the operation. When beam loading is small, the power requirements are determined by (i) ohmic losses in cavity walls, (ii) fluctuations in amplitude and/or phase for beam currents, and (iii) microphonics. These factors typically require a substantial change in the coupling between the cavity and the feeding line, which results in an intentional broadening of the cavity bandwidth, which in turn demands a significant amount of additional RF power. If beam loading is not small, there is a variety of beam-drive phase instabilities to be managed, and microphonics will still remain an issue, so there remain requirements for additional power. Moreover ERL performance is sensitive to changes in beam arrival time, since any such change is equivalent to phase instability with its vigorous demands for additional power. In this Report, we describe the new modular coaxial tuner, with specifications suitable for the 704 MHz ERL application. The device would allow changing the RF-coupling during the cavity filling process in order to effect significant RF power savings, and also will provide rapid compensation for beam imbalance and allow for fast stabilization against phase fluctuations caused by microphonics, beam-driven instabilities, etc. The tuner is predicted to allow a reduction of about ten times in the required power from the RF source, as compared to a compensation system

  7. Waveguide Stub Tuner Analysis for CEBAF Machine Application

    SciTech Connect

    Haipeng Wang; Michael Tiefenback

    2004-08-01

    Three-stub WR650 waveguide tuners have been used on the CEBAF superconducting cavities for two changes of the external quality factors (Qext): increasing the Qext from 3.4-7.6 x 10^6 to 8 x 10^6 on 5-cell cavities to reduce klystron power at operating gradients and decreasing the Qext from 1.7-2.4 x 10^7 to 8 x 10^6 on 7-cell cavities to simplify control of Lorentz force detuning. To understand the reactive tuning effects in the machine operations with beam current and mechanical tuning, a network analysis model was developed. The S parameters of the stub tuner were simulated by MAFIA and measured on the bench. We used this stub tuner model to study tuning range, sensitivity, and frequency pulling, as well as cold waveguide (WG) and window heating problems. Detailed experimental results are compared against this model. Pros and cons of this stub tuner application are summarized.

  8. A hydrogen maser with cavity auto-tuner for timekeeping

    NASA Technical Reports Server (NTRS)

    Lin, C. F.; He, J. W.; Zhai, Z. C.

    1992-01-01

    A hydrogen maser frequency standard for timekeeping was developed at the Shanghai Observatory. The maser employs a fast cavity auto-tuner, which can detect and compensate for the frequency drift of the high-Q resonant cavity with a short time constant by means of a signal injection method, so that the long-term frequency stability of the maser standard is greatly improved. The cavity auto-tuning system and some maser data obtained from the atomic time comparison are described.

  9. Testing of the new tuner design for the CEBAF 12 GeV upgrade SRF cavities

    SciTech Connect

    Edward Daly; G. Davis; William Hicks

    2005-05-01

    The new tuner design for the 12 GeV Upgrade SRF cavities consists of a coarse mechanical tuner and a fine piezoelectric tuner. The mechanism provides a 30:1 mechanical advantage, is pre-loaded at room temperature and tunes the cavities in tension only. All of the components are located in the insulating vacuum space and attached to the helium vessel, including the motor, harmonic drive and piezoelectric actuators. The requirements and detailed design are presented. Measurements of range and resolution of the coarse tuner are presented and discussed.

  10. Feature Selection via Chaotic Antlion Optimization

    PubMed Central

    Zawbaa, Hossam M.; Emary, E.; Grosan, Crina

    2016-01-01

    Background: Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine learning task. In biology, for instance, the advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the more informative markers along with performing a high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate the feature selection problem as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the data training fitting) while minimizing the number of features used. Results: We propose an optimization approach for the feature selection problem that considers a “chaotic” version of the antlion optimizer method, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I that limits the random walk range of the ants/prey. This variable is increased iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. The quasi-linear decrease in the variable I may lead to immature convergence in some cases and trapping in local minima in other cases. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics. PMID:26963715
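
    The snippet below is only a simplified reading of the idea in this abstract, driving the exploration-controlling ratio with a chaotic logistic map rather than a purely quasi-linear schedule; the constants and the way the chaotic value perturbs the random-walk bound are assumptions, not the exact chaotic antlion algorithm.

        # Simplified sketch: chaotic modulation of the random-walk bound.
        def logistic_map(x, r=4.0):
            # Classic chaotic map on (0, 1).
            return r * x * (1.0 - x)

        def walk_bound_schedule(n_iter, base_bound=1.0, x0=0.7):
            bounds, x = [], x0
            for t in range(1, n_iter + 1):
                x = logistic_map(x)                     # chaotic value in (0, 1)
                shrink = 1.0 + 10.0 * t / n_iter        # quasi-linear growth of the ratio I
                bounds.append(base_bound * x / shrink)  # chaotic perturbation of the bound
            return bounds

        bounds = walk_bound_schedule(100)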

  11. DESIGN CONSIDERATIONS FOR THE MECHANICAL TUNER OF THE RHIC ELECTRON COOLER RF CAVITY.

    SciTech Connect

    Rank, J.; Ben-Zvi, I.; Hahn, G.; McIntyre, G.; Daly, E.; Preble, J.

    2005-05-16

    The ECX Project, Brookhaven Lab's predecessor to the RHIC e-Cooler, includes a prototype RF tuner mechanism capable of both coarse and fast tuning. This tuner concept, adapted originally from a DESY design, has longer stroke and significantly higher loads attributable to the very stiff ECX cavity shape. Structural design, kinematics, controls, thermal and RF issues are discussed and certain improvements are proposed.

  12. Optimizing Clinical Research Participant Selection with Informatics.

    PubMed

    Weng, Chunhua

    2015-11-01

    Clinical research participants are often not reflective of real-world patients due to overly restrictive eligibility criteria. Meanwhile, unselected participants introduce confounding factors and reduce research efficiency. Biomedical informatics, especially Big Data increasingly made available from electronic health records, offers promising aids to optimize research participant selection through data-driven transparency.

  13. State-space self-tuner for on-line adaptive control

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.

    1994-01-01

    Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost, high-performance microprocessors. Recently, we have developed a new hybrid state-space self-tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
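
    As one concrete ingredient of such a scheme, the function below shows a standard recursive least-squares update with a forgetting factor, the kind of on-line parameter-estimation step a state-space self-tuner can embed alongside the Kalman state estimator; it is a generic textbook form, not the authors' specific algorithm.

        # Generic recursive least-squares (RLS) update with forgetting factor.
        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            # theta: parameter estimate (n,), P: covariance (n, n),
            # phi: regressor (n,), y: new scalar output, lam: forgetting factor.
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + phi.T @ P @ phi)        # estimation gain
            theta = theta + (K * (y - phi.T @ theta)).ravel()
            P = (P - K @ phi.T @ P) / lam                # covariance update
            return theta, P

        theta, P = np.zeros(3), 1e3 * np.eye(3)
        theta, P = rls_update(theta, P, np.array([1.0, 0.5, -0.2]), 0.7)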

  14. Active Learning With Optimal Instance Subset Selection.

    PubMed

    Fu, Yifan; Zhu, Xingquan; Elmagarmid, A K

    2013-04-01

    Active learning (AL) traditionally relies on some instance-based utility measures (such as uncertainty) to assess individual instances and label the ones with the maximum values for training. In this paper, we argue that such approaches cannot produce good labeling subsets mainly because instances are evaluated independently without considering their interactions, and individuals with maximal ability do not necessarily form an optimal instance subset for learning. Alternatively, we propose to achieve AL with optimal subset selection (ALOSS), where the key is to find an instance subset with a maximum utility value. To achieve the goal, ALOSS simultaneously considers the following: 1) the importance of individual instances and 2) the disparity between instances, to build an instance-correlation matrix. As a result, AL is transformed to a semidefinite programming problem to select a k-instance subset with a maximum utility value. Experimental results demonstrate that ALOSS outperforms state-of-the-art approaches for AL.
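
    A greedy stand-in for the subset-selection idea: the paper formulates the problem as semidefinite programming over an instance-correlation matrix, while the sketch below only illustrates the same utility-versus-redundancy trade-off with a simple heuristic; the inputs and the penalty weight are assumptions.

        # Greedy utility-minus-redundancy subset selection (illustrative only).
        import numpy as np

        def greedy_subset(utility, corr, k, penalty=0.5):
            # utility: (n,) individual scores; corr: (n, n) instance correlations.
            selected, candidates = [], set(range(len(utility)))
            while len(selected) < k and candidates:
                def score(i):
                    redundancy = max((corr[i, j] for j in selected), default=0.0)
                    return utility[i] - penalty * redundancy
                best = max(candidates, key=score)
                selected.append(best)
                candidates.remove(best)
            return selected

        utility = np.array([0.9, 0.85, 0.8, 0.4])
        corr = np.array([[1.0, 0.95, 0.1, 0.0],
                         [0.95, 1.0, 0.2, 0.0],
                         [0.1, 0.2, 1.0, 0.3],
                         [0.0, 0.0, 0.3, 1.0]])
        print(greedy_subset(utility, corr, k=2))   # picks 0, then 2 (1 is redundant with 0)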

  15. Optimal remediation policy selection under general conditions

    SciTech Connect

    Wang, M.; Zheng, C.

    1997-09-01

    A new simulation-optimization model has been developed for the optimal design of ground-water remediation systems under a variety of field conditions. The model couples genetic algorithm (GA), a global search technique inspired by biological evolution, with MODFLOW and MT3D, two commonly used ground-water flow and solute transport codes. The model allows for multiple management periods in which optimal pumping/injection rates vary with time to reflect the changes in the flow and transport conditions during the remediation process. The objective function of the model incorporates multiple cost terms including the drilling cost, the installation cost, and the costs to extract and treat the contaminated ground water. The simulation-optimization model is first applied to a typical two-dimensional pump-and-treat example with one and three management periods to demonstrate the effectiveness and robustness of the new model. The model is then applied to a large-scale three-dimensional field problem to determine the minimum pumping needed to contain an existing contaminant plume. The optimal solution as determined in this study is compared with a previous solution based on trial-and-error selection.

  16. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  17. Selected Isotopes for Optimized Fuel Assembly Tags

    SciTech Connect

    Gerlach, David C.; Mitchell, Mark R.; Reid, Bruce D.; Gesh, Christopher J.; Hurley, David E.

    2008-10-01

    In support of our ongoing signatures project, we present information on 3 isotopes selected for possible application in optimized tags that could be applied to fuel assemblies to provide an objective measure of burnup. 1. Important factors for an optimized tag are compatibility with the reactor environment (corrosion resistance), low radioactive activation, at least 2 stable isotopes, moderate neutron absorption cross-section, which gives significant changes in isotope ratios over typical fuel assembly irradiation levels, and ease of measurement in the SIMS machine. 2. From the candidate isotopes presented in the 3rd FY 08 Quarterly Report, the most promising appear to be Titanium, Hafnium, and Platinum. The other candidate isotopes (Iron, Tungsten) exhibited inadequate corrosion resistance and/or had neutron capture cross-sections either too high or too low for the burnup range of interest.

  18. A Broadband and Low Cost Monolithic BiCMOS Tuner Chip

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Hui; Cheng, Ting-Yuan; Chang, Chun-Yen

    2004-11-01

    This circuit is a single BiCMOS chip used in cable TV set-top converters, cable modems, cable TV tuners and digital TVs. It is a double conversion tuner (DCT) structure, which has better performance characteristics than the single conversion tuner (SCT). The single chip comprises two LNAs, one AGC, two LO buffers, two VCOs, two resistor-type, double-balanced Gilbert cell mixers, two synthesizers and ESD protection. Its total gain is 45.70 dB, and its total noise figure is 4.3-7.9 dB. The RF input range covers not only 50-860 MHz but also 50-1000 MHz, so the chip can be applied in cable TV tuners and cable modems. The power consumption is 0.885 W from a 3 V supply, and the die size is 4.8 mm².

  19. Tests of a tuner for a 325 MHz SRF spoke resonator

    SciTech Connect

    Pishchalnikov, Y.; Borissov, E.; Khabiboulline, T.; Madrak, R.; Pilipenko, R.; Ristori, L.; Schappert, W.

    2011-03-01

    Fermilab is developing 325 MHz SRF spoke cavities for the proposed Project X. A compact fast/slow tuner has been developed for final tuning of the resonance frequency of the cavity after cooling down to operating temperature and to compensate for microphonics and Lorentz force detuning [2]. The modified tuner design and results of 4.5 K tests of the first prototype are presented. The performance of lever tuners for the SSR1 spoke resonator prototype has been measured during recent CW and pulsed tests in the Fermilab SCTF. The tuner met or exceeded all design goals and has been used to successfully: (1) Bring the cold cavity to the operating frequency; (2) Compensate for dynamic Lorentz force detuning; and (3) Compensate for frequency detuning of the cavity due to changes in the He bath pressure.

  20. Selectively-informed particle swarm optimization

    PubMed Central

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
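
    A schematic of the selectively-informed velocity update described above: a hub particle averages the personal bests of all its neighbors, while a non-hub particle follows only its best-performing neighbor. The inertia and acceleration constants, the degree threshold separating hubs from non-hubs, and the data layout are illustrative assumptions.

        # Sketch of a selectively-informed PSO velocity update (minimization).
        import numpy as np

        def sipso_velocity(v, x, pbest, pbest_fit, neighbors, i,
                           w=0.7, c=1.5, hub_degree=6, rng=None):
            # v, x, pbest: (n_particles, dim) arrays; pbest_fit: per-particle fitness;
            # neighbors[i]: list of neighbor indices of particle i in the network.
            rng = rng or np.random.default_rng()
            nbrs = neighbors[i]
            if len(nbrs) >= hub_degree:                  # hub: fully informed
                target = np.mean([pbest[j] for j in nbrs], axis=0)
            else:                                        # non-hub: best neighbor only
                target = pbest[min(nbrs, key=lambda j: pbest_fit[j])]
            return w * v[i] + c * rng.random(x.shape[1]) * (target - x[i])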

  1. Proof-of-principle Experiment of a Ferroelectric Tuner for the 1.3 GHz Cavity

    SciTech Connect

    Choi,E.M.; Hahn, H.; Shchelkunov, S. V.; Hirshfield, J.; Kazakov, S.

    2009-01-01

    A novel tuner has been developed by the Omega-P company to achieve fast control of the accelerator RF cavity frequency. The tuner is based on a ferroelectric material whose dielectric constant varies as a function of applied voltage. Tests using a Brookhaven National Laboratory (BNL) 1.3 GHz electron gun cavity have been carried out for a proof-of-principle experiment of the ferroelectric tuner. Two different methods were used to determine the frequency change achieved with the ferroelectric tuner (FT). The first method is based on an S11 measurement at the tuner port to find the reactive impedance change when the voltage is applied. The reactive impedance change is then used to estimate the cavity frequency shift. The second method is a direct S21 measurement of the frequency shift in the cavity with the tuner connected. The estimated frequency change from the reactive impedance measurement due to 5 kV is in the range between 3.2 kHz and 14 kHz, while 9 kHz is the result from the direct measurement. The two methods are in reasonable agreement. A detailed description of the experiment and the analysis is given in the paper.

  2. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  3. Optimal test selection for prediction uncertainty reduction

    SciTech Connect

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  4. Optimal selection of biochars for remediating metals ...

    EPA Pesticide Factsheets

    Approximately 500,000 abandoned mines across the U.S. pose a considerable, pervasive risk to human health and the environment due to possible exposure to the residuals of heavy metal extraction. Historically, a variety of chemical and biological methods have been used to reduce the bioavailability of the metals at mine sites. Biochar with its potential to complex and immobilize heavy metals, is an emerging alternative for reducing bioavailability. Furthermore, biochar has been reported to improve soil conditions for plant growth and can be used for promoting the establishment of a soil-stabilizing native plant community to reduce offsite movement of metal-laden waste materials. Because biochar properties depend upon feedstock selection, pyrolysis production conditions, and activation procedures used, they can be designed to meet specific remediation needs. As a result biochar with specific properties can be produced to correspond to specific soil remediation situations. However, techniques are needed to optimally match biochar characteristics with metals contaminated soils to effectively reduce metal bioavailability. Here we present experimental results used to develop a generalized method for evaluating the ability of biochar to reduce metals in mine spoil soil from an abandoned Cu and Zn mine. Thirty-eight biochars were produced from approximately 20 different feedstocks and produced via slow pyrolysis or gasification, and were allowed to react with a f

  5. Perpendicularly Biased YIG Tuners for the Fermilab Recycler 52.809 MHz Cavities

    SciTech Connect

    Madrak, R.; Kashikhin, V.; Makarov, A.; Wildman, D.

    2013-09-13

    For NOvA and future experiments requiring high intensity proton beams, Fermilab is in the process of upgrading the existing accelerator complex for increased proton production. One such improvement is to reduce the Main Injector cycle time, by performing slip stacking, previously done in the Main Injector, in the now repurposed Recycler Ring. Recycler slip stacking requires new tuneable RF cavities, discussed separately in these proceedings. These are quarter wave cavities resonant at 52.809 MHz with a 10 kHz tuning range. The 10 kHz range is achieved by use of a tuner which has an electrical length of approximately one half wavelength at 52.809 MHz. The tuner is constructed from 3 1/8 inch diameter rigid coaxial line, with 5 inches of its length containing perpendicularly biased, Al-doped Yttrium Iron Garnet (YIG). The tuner design, measurements, and high power test results are presented.
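
    As a quick consistency check on the half-wavelength figure quoted above: in free space the electrical half wavelength at 52.809 MHz is a little under 3 m, and the garnet-loaded section's higher effective permittivity and permeability shorten the physical line accordingly. The snippet only restates that arithmetic.

        # Free-space half wavelength at the Recycler RF frequency.
        c = 299_792_458.0            # speed of light, m/s
        f = 52.809e6                 # Hz
        half_wave = c / f / 2.0      # electrical half wavelength in free space
        print(f"lambda/2 at {f / 1e6:.3f} MHz ~ {half_wave:.2f} m")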

  6. Tuner control system of Spoke012 SRF cavity for C-ADS injector I

    NASA Astrophysics Data System (ADS)

    Liu, Na; Sun, Yi; Wang, Guang-Wei; Mi, Zheng-Hui; Lin, Hai-Ying; Wang, Qun-Yao; Liu, Rong; Ma, Xin-Peng

    2016-09-01

    A new tuner control system for spoke superconducting radio frequency (SRF) cavities has been developed and applied to cryomodule I of the C-ADS injector I at the Institute of High Energy Physics, Chinese Academy of Sciences. We have successfully implemented the tuner controller based on Programmable Logic Controller (PLC) for the first time and achieved a cavity tuning phase error of ±0.7° (about ±4 Hz peak to peak) in the presence of electromechanical coupled resonance. This paper presents preliminary experimental results based on the PLC tuner controller under proton beam commissioning. Supported by Proton linac accelerator I of China Accelerator Driven sub-critical System (Y12C32W129)

  7. Optimization of a crossing system using mate selection

    PubMed Central

    Li, Yongjun; van der Werf, Julius H. J.; Kinghorn, Brian P.

    2006-01-01

    A simple model based on one single identified quantitative trait locus (QTL) in a two-way crossing system was used to demonstrate the power of mate selection algorithms as a natural means of opportunistic line development for optimization of crossbreeding programs over multiple generations. Mate selection automatically invokes divergent selection in two parental lines for an over-dominant QTL and increased frequency of the favorable allele toward fixation in the sire-line for a fully-dominant QTL. It was concluded that an optimal strategy of line development could be found by mate selection algorithms for a given set of parameters such as genetic model of QTL, breeding objective and initial frequency of the favorable allele in the base populations, etc. The same framework could be used in other scenarios, such as programs involving crossing to exploit breed effects and heterosis. In contrast to classical index selection, this approach to mate selection can optimize long-term responses. PMID:16492372

  8. Optimization of a crossing system using mate selection.

    PubMed

    Li, Yongjun; van der Werf, Julius H J; Kinghorn, Brian P

    2006-01-01

    A simple model based on one single identified quantitative trait locus (QTL) in a two-way crossing system was used to demonstrate the power of mate selection algorithms as a natural means of opportunistic line development for optimization of crossbreeding programs over multiple generations. Mate selection automatically invokes divergent selection in two parental lines for an over-dominant QTL and increased frequency of the favorable allele toward fixation in the sire-line for a fully-dominant QTL. It was concluded that an optimal strategy of line development could be found by mate selection algorithms for a given set of parameters such as genetic model of QTL, breeding objective and initial frequency of the favorable allele in the base populations, etc. The same framework could be used in other scenarios, such as programs involving crossing to exploit breed effects and heterosis. In contrast to classical index selection, this approach to mate selection can optimize long-term responses.

  9. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  10. Optimization of ultrasonic transducers for selective guided wave actuation

    NASA Astrophysics Data System (ADS)

    Miszczynski, Mateusz; Packo, Pawel; Zbyrad, Paulina; Stepinski, Tadeusz; Uhl, Tadeusz; Lis, Jerzy; Wiatr, Kazimierz

    2016-04-01

    The application of guided waves using surface-bonded piezoceramic transducers for nondestructive testing (NDT) and Structural Health Monitoring (SHM) has shown great potential. However, due to the difficulty in identification of individual wave modes resulting from their dispersive and multi-modal nature, selective mode excitation methods are highly desired. The presented work focuses on an optimization-based approach to the design of a piezoelectric transducer for selective guided wave generation. The concept of the presented framework involves a Finite Element Method (FEM) model in the optimization process. The material of the transducer is optimized in a topological sense with the aim of tuning piezoelectric properties for actuation of specific guided wave modes.

  11. Digital logic optimization using selection operators

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor); Cameron, Eric G. (Inventor); Gambles, Jody W. (Inventor)

    2004-01-01

    According to the invention, a digital design method for manipulating a digital circuit netlist is disclosed. In one step, a first netlist is loaded. The first netlist is comprised of first basic cells that are comprised of first kernel cells. The first netlist is manipulated to create a second netlist. The second netlist is comprised of second basic cells that are comprised of second kernel cells. A percentage of the first and second kernel cells are selection circuits. There is less chip area consumed in the second basic cells than in the first basic cells. The second netlist is stored. In various embodiments, the percentage could be 2% or more, 5% or more, 10% or more, 20% or more, 30% or more, or 40% or more.

  12. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
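
    A minimal sketch of the sequential budget-allocation idea for selecting the top-m designs is given below. It uses a simple uncertainty-near-the-boundary heuristic on synthetic designs; it is not the asymptotically optimal allocation derived in the record above, and the design means, variances and budget are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical designs: the true means are unknown to the procedure.
true_means = np.array([1.0, 0.9, 0.8, 0.5, 0.45, 0.4, 0.2, 0.1])
k, m = len(true_means), 3            # select the top 3 of 8 designs
budget, n0, batch = 2000, 10, 10

def simulate(i, n):
    return rng.normal(true_means[i], 1.0, size=n)

samples = [list(simulate(i, n0)) for i in range(k)]
spent = n0 * k
while spent < budget:
    means = np.array([np.mean(s) for s in samples])
    sems = np.array([np.std(s, ddof=1) / np.sqrt(len(s)) for s in samples])
    order = np.argsort(-means)
    # Boundary between the tentative top-m set and the rest.
    boundary = 0.5 * (means[order[m - 1]] + means[order[m]])
    # Heuristic: give the next batch to the design whose membership in the
    # top-m set is least certain (smallest standardized distance to the boundary).
    i = int(np.argmin(np.abs(means - boundary) / sems))
    samples[i].extend(simulate(i, batch))
    spent += batch

means = np.array([np.mean(s) for s in samples])
print("selected top-m designs:", sorted(np.argsort(-means)[:m].tolist()))
```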

  13. Selection of Structures with Grid Optimization, in Multiagent Data Warehouse

    NASA Astrophysics Data System (ADS)

    Gorawski, Marcin; Bańkowski, Sławomir; Gorawski, Michał

    The query optimization problem in database and data warehouse management systems is quite similar. Changes to join sequences, projections and selections, usage of indexes, and aggregations are all decided during the analysis of an execution schedule. The main goal of these changes is to decrease the query response time. The optimization operation is often dedicated to a single node. This paper proposes optimization for grid or cluster data warehouses / databases. Tests were conducted in a multi-agent environment, and the optimization focused not only on a single node but on the whole system as well. A new idea is proposed here with multi-criteria optimization that is based on user-given parameters. Depending on query time, admissible result errors, and the level of system usage, task results were obtained along with grid optimization.

  14. Fast Simulation and Optimization Tool to Explore Selective Neural Stimulation.

    PubMed

    Dali, Mélissa; Rossel, Olivier; Guiraud, David

    2016-06-13

    In functional electrical stimulation, selective stimulation of axons is desirable to activate a specific target, in particular muscular function. This implies stimulating a fascicle without activating neighboring ones, i.e., being spatially selective. Spatial selectivity is achieved by the use of multicontact cuff electrodes over which the stimulation current is distributed. Because of the large number of parameters involved, numerical simulations provide a way to find and optimize the electrode configuration. The present work offers a computationally effective scheme and associated tool chain capable of simulating the electrode-nerve interface and finding the best spread of current to achieve spatial selectivity.

  15. Fast Simulation and Optimization Tool to Explore Selective Neural Stimulation

    PubMed Central

    Dali, Mélissa; Rossel, Olivier; Guiraud, David

    2016-01-01

    In functional electrical stimulation, selective stimulation of axons is desirable to activate a specific target, in particular muscular function. This implies stimulating a fascicle without activating neighboring ones, i.e., being spatially selective. Spatial selectivity is achieved by the use of multicontact cuff electrodes over which the stimulation current is distributed. Because of the large number of parameters involved, numerical simulations provide a way to find and optimize the electrode configuration. The present work offers a computationally effective scheme and associated tool chain capable of simulating the electrode-nerve interface and finding the best spread of current to achieve spatial selectivity. PMID:27990231

  16. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    PubMed

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnoses and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, firstly, the Fisher-Markov selector is used to choose a fixed number of gene data. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Then, discrete biogeography based optimization, called DBBO, is obtained by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, with three classifiers used under the 10-fold cross-validation method. In order to show the effectiveness and efficiency of the algorithm, the proposed algorithm is tested on four breast cancer dataset benchmarks. Compared with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than or at least comparable with previous methods from the literature when considering the quality of the solutions obtained.

  17. A technique for monitoring fast tuner piezoactuator preload forces for superconducting rf cavities

    SciTech Connect

    Pischalnikov, Y.; Branlard, J.; Carcagno, R.; Chase, B.; Edwards, H.; Orris, D.; Makulski, A.; McGee, M.; Nehring, R.; Poloubotko, V.; Sylvester, C.; /Fermilab

    2007-06-01

    The technology for mechanically compensating Lorentz Force detuning in superconducting RF cavities has already been developed at DESY. One technique is based on commercial piezoelectric actuators and was successfully demonstrated on TESLA cavities [1]. Piezo actuators for fast tuners can operate in a frequency range up to several kHz; however, it is very important to maintain a constant static force (preload) on the piezo actuator in the range of 10 to 50% of its specified blocking force. Determining the preload force during cool-down, warm-up, or re-tuning of the cavity is difficult without instrumentation, and exceeding the specified range can permanently damage the piezo stack. A technique based on strain gauge technology for superconducting magnets has been applied to fast tuners for monitoring the preload on the piezoelectric assembly. The design and testing of piezo actuator preload sensor technology is discussed. Results from measurements of preload sensors installed on the tuner of the Capture Cavity II (CCII)[2] tested at FNAL are presented. These results include measurements during cool-down, warmup, and cavity tuning along with dynamic Lorentz force compensation.

  18. Multidimensional Adaptive Testing with Optimal Design Criteria for Item Selection

    ERIC Educational Resources Information Center

    Mulder, Joris; van der Linden, Wim J.

    2009-01-01

    Several criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the…

  19. Training set optimization under population structure in genomic selection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...

  20. Optimal Financial Aid Policies for a Selective University.

    ERIC Educational Resources Information Center

    Ehrenberg, Ronald G.; Sherman, Daniel R.

    1984-01-01

    This paper provides a model of optimal financial aid policies for a selective university. The model implies that the financial aid package to be offered to each category of admitted applicants depends on the elasticity of the fraction who accept offers of admission with respect to the financial aid package offered them. (Author/SSH)

  1. Optimal selection of nodes to propagate influence on networks

    NASA Astrophysics Data System (ADS)

    Sun, Yifan

    2016-11-01

    How to optimize the spreading process on networks has been a hot issue in complex networks, marketing, epidemiology, finance, etc. In this paper, we investigate a problem of optimizing the spreading locally: identifying a fixed number of nodes as seeds which would maximize the propagation of influence to their direct neighbors. All the nodes except the selected seeds are assumed not to spread their influence to their neighbors. This problem can be mapped onto a spin glass model with a fixed magnetization. We provide a message-passing algorithm based on replica-symmetric mean-field theory in statistical physics, which can find a nearly optimal set of seeds. Extensive numerical results on computer-generated random networks and real-world networks demonstrate that this algorithm has a better performance than several other optimization algorithms.
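
    For comparison with the record above, the sketch below implements only a greedy max-coverage baseline for the same local-spreading objective (choose seeds so that the union of their direct neighbors is as large as possible) on a toy adjacency list; it is not the message-passing algorithm described in the abstract.

```python
# Greedy baseline: pick a fixed number of seeds maximizing the number of
# distinct direct neighbors covered. Toy graph for illustration only.
toy_graph = {
    0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0, 5},
    4: {2, 5, 6}, 5: {3, 4}, 6: {4, 7}, 7: {6},
}

def greedy_seeds(adj, num_seeds):
    seeds, covered = [], set()
    for _ in range(num_seeds):
        # Pick the node adding the most not-yet-covered neighbors.
        best = max(
            (v for v in adj if v not in seeds),
            key=lambda v: len(adj[v] - covered),
        )
        seeds.append(best)
        covered |= adj[best]
    return seeds, covered

seeds, covered = greedy_seeds(toy_graph, num_seeds=2)
print("seeds:", seeds, "direct neighbors covered:", len(covered))
```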

  2. Efficient and scalable Pareto optimization by evolutionary local selection algorithms.

    PubMed

    Menczer, F; Degeratu, M; Street, W N

    2000-01-01

    Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
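
    The following is a minimal agent-based sketch of the local-selection idea: individuals accumulate energy from finite, shared environmental bins and reproduce when their energy exceeds a fixed threshold rather than by competing directly with one another. The niche mapping, energy constants and population size are invented for illustration; this is not the ELSA implementation used in the experiments above.

```python
import random

random.seed(1)

BINS, REPLENISH, THRESHOLD, COST = 10, 5.0, 10.0, 0.5

def niche(x):
    # Toy mapping from an individual's trait to an environmental bin.
    return int(x * BINS) % BINS

population = [{"x": random.random(), "energy": 2.0} for _ in range(30)]
for step in range(200):
    # Finite, shared resources: each occupied bin's replenishment is split
    # among its occupants, which is what maintains diversity.
    occupants = {}
    for ind in population:
        occupants.setdefault(niche(ind["x"]), []).append(ind)
    for inds in occupants.values():
        share = REPLENISH / len(inds)
        for ind in inds:
            ind["energy"] += share - COST
    # Local selection: compare accumulated energy to a fixed threshold,
    # not to other individuals.
    next_pop = []
    for ind in population:
        if ind["energy"] <= 0:
            continue                      # dies
        next_pop.append(ind)
        if ind["energy"] > THRESHOLD:     # reproduces, splitting its energy
            ind["energy"] /= 2
            child_x = min(max(ind["x"] + random.gauss(0, 0.05), 0.0), 1.0)
            next_pop.append({"x": child_x, "energy": ind["energy"]})
    population = next_pop

print("population size:", len(population))
print("occupied niches:", sorted({niche(i["x"]) for i in population}))
```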

  3. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  4. Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs).

    PubMed

    Handlon, Anthony L; Schaller, Lee T; Leesnitzer, Lisa M; Merrihew, Raymond V; Poole, Chuck; Ulrich, John C; Wilson, Joseph W; Cadilla, Rodolfo; Turnbull, Philip

    2016-01-14

    A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%).

  5. Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs)

    PubMed Central

    2015-01-01

    A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%). PMID:26819671

  6. Optimized LOWESS normalization parameter selection for DNA microarray data

    PubMed Central

    Berger, John A; Hautaniemi, Sampsa; Järvinen, Anna-Kaarina; Edgren, Henrik; Mitra, Sanjit K; Astola, Jaakko

    2004-01-01

    Background Microarray data normalization is an important step for obtaining data that are reliable and usable for subsequent analysis. One of the most commonly utilized normalization techniques is the locally weighted scatterplot smoothing (LOWESS) algorithm. However, a much overlooked concern with the LOWESS normalization strategy deals with choosing the appropriate parameters. Parameters are usually chosen arbitrarily, which may reduce the efficiency of the normalization and result in non-optimally normalized data. Thus, there is a need to explore LOWESS parameter selection in greater detail. Results and discussion In this work, we discuss how to choose parameters for the LOWESS method. Moreover, we present an optimization approach for obtaining the fraction of data points utilized in the local regression and analyze results for local print-tip normalization. The optimization procedure determines the bandwidth parameter for the local regression by minimizing a cost function that represents the mean-squared difference between the LOWESS estimates and the normalization reference level. We demonstrate the utility of the systematic parameter selection using two publicly available data sets. The first data set consists of three self versus self hybridizations, which allow for a quantitative study of the optimization method. The second data set contains a collection of DNA microarray data from a breast cancer study utilizing four breast cancer cell lines. Our results show that different parameter choices for the bandwidth window yield dramatically different calibration results in both studies. Conclusions Results derived from the self versus self experiment indicate that the proposed optimization approach is a plausible solution for estimating the LOWESS parameters, while results from the breast cancer experiment show that the optimization procedure is readily applicable to real-life microarray data normalization. In summary, the systematic approach to obtain critical
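
    A minimal sketch of bandwidth-parameter selection for LOWESS is shown below, using statsmodels' lowess on a synthetic self-versus-self array. A simple held-out residual criterion stands in for the cost function described in the record (the mean-squared difference between the LOWESS estimates and the normalization reference level); the data and the grid of candidate fractions are illustrative only.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)

# Synthetic self-vs-self MA data: an intensity-dependent dye bias plus noise.
A = rng.uniform(6, 16, 2000)                               # average log intensity
M = 0.4 * np.sin(A / 3.0) + rng.normal(0, 0.3, A.size)     # log ratio

idx = rng.permutation(A.size)
tr, te = idx[:1500], idx[1500:]

def heldout_cost(frac):
    fit = lowess(M[tr], A[tr], frac=frac)            # sorted (A, smoothed M) pairs
    pred = np.interp(A[te], fit[:, 0], fit[:, 1])    # evaluate the curve at held-out points
    return np.mean((M[te] - pred) ** 2)

fracs = np.linspace(0.05, 0.8, 16)
best = fracs[int(np.argmin([heldout_cost(f) for f in fracs]))]
print(f"selected LOWESS bandwidth fraction: {best:.2f}")
```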

  7. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    PubMed Central

    Faltermeier, Rupert; Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real time application of this method at an ICU it is inevitable to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we will introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250

  8. Optimization of the selective frequency damping parameters using model reduction

    NASA Astrophysics Data System (ADS)

    Cunha, Guilherme; Passaggia, Pierre-Yves; Lazareff, Marc

    2015-09-01

    In the present work, an optimization methodology to compute the best control parameters, χ and Δ, for the selective frequency damping method is presented. The optimization does not suppose any a priori knowledge of the flow physics, nor of the underlying numerical methods, and is especially suited for simulations requiring a large quantity of grid elements and processors. It allows for obtaining an optimal convergence rate to a steady state of the damped Navier-Stokes system. This is achieved using the Dynamic Mode Decomposition, which is a snapshot-based method, to estimate the eigenvalues associated with the globally unstable dynamics. Validation test cases are presented for the numerical configurations of a laminar flow past a 2D cylinder, a separated boundary-layer over a shallow bump, and a 3D turbulent stratified-Poiseuille flow.
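
    The sketch below illustrates the snapshot-based ingredient of such an approach: a standard Dynamic Mode Decomposition on synthetic snapshots estimates the dominant growth rate and frequency, followed by a common rule-of-thumb mapping to the selective frequency damping parameters (χ somewhat larger than the growth rate, Δ of order the inverse frequency). The snapshot data, truncation rank and the final parameter mapping are illustrative assumptions, not the optimization of the record above.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, nt, nx = 0.1, 120, 60

# Synthetic snapshots containing one weakly growing oscillatory mode plus noise.
sigma, omega = 0.15, 2.0                     # true growth rate and angular frequency
x = np.linspace(0, 1, nx)
t = np.arange(nt) * dt
field = np.exp(sigma * t)[None, :] * np.sin(omega * t)[None, :] * np.sin(np.pi * x)[:, None]
snapshots = field + 0.01 * rng.standard_normal((nx, nt))

# Standard DMD on the snapshot pair (X, Y).
X, Y = snapshots[:, :-1], snapshots[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 6                                         # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
Atilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)
eigs = np.linalg.eigvals(Atilde).astype(complex)
mu = np.log(eigs) / dt                        # continuous-time eigenvalues

lead = mu[np.argmax(mu.real)]
growth, freq = lead.real, abs(lead.imag)
print(f"estimated growth rate ~ {growth:.3f}, frequency ~ {freq:.3f}")
# Rule-of-thumb starting point, not the optimized values of the paper:
print(f"chi   ~ {2 * growth:.3f}  (damping gain, larger than the growth rate)")
print(f"Delta ~ {1.0 / max(freq, 1e-9):.3f}  (filter width of order 1/omega)")
```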

  9. Tuner and radiation shield for planar electron paramagnetic resonance microresonators

    SciTech Connect

    Narkowicz, Ryszard; Suter, Dieter

    2015-02-15

    Planar microresonators provide a large boost of sensitivity for small samples. They can be manufactured lithographically to a wide range of target parameters. The coupler between the resonator and the microwave feedline can be integrated into this design. To optimize the coupling and to compensate manufacturing tolerances, it is sometimes desirable to have a tuning element available that can be adjusted when the resonator is connected to the spectrometer. This paper presents a simple design that allows one to bring undercoupled resonators into the condition for critical coupling. In addition, it also reduces radiation losses and thereby increases the quality factor and the sensitivity of the resonator.

  10. State-Selective Excitation of Quantum Systems via Geometrical Optimization.

    PubMed

    Chang, Bo Y; Shin, Seokmin; Sola, Ignacio R

    2015-09-08

    We lay out the foundations of a general method of quantum control via geometrical optimization. We apply the method to state-selective population transfer using ultrashort transform-limited pulses between manifolds of levels that may represent, e.g., state-selective transitions in molecules. Assuming that certain states can be prepared, we develop three implementations: (i) preoptimization, which implies engineering the initial state within the ground manifold or electronic state before the pulse is applied; (ii) postoptimization, which implies engineering the final state within the excited manifold or target electronic state, after the pulse; and (iii) double-time optimization, which uses both types of time-ordered manipulations. We apply the schemes to two important dynamical problems: To prepare arbitrary vibrational superposition states on the target electronic state and to select weakly coupled vibrational states. Whereas full population inversion between the electronic states only requires control at initial time in all of the ground vibrational levels, only very specific superposition states can be prepared with high fidelity by either pre- or postoptimization mechanisms. Full state-selective population inversion requires manipulating the vibrational coherences in the ground electronic state before the optical pulse is applied and in the excited electronic state afterward, but not during all times.

  11. Optimization of topical gels with betamethasone dipropionate: selection of gel forming and optimal cosolvent system.

    PubMed

    Băiţan, Mariana; Lionte, Mihaela; Moisuc, Lăcrămioara; Gafiţanu, Eliza

    2011-01-01

    The purpose of these studies was to develop a 0.05% betamethasone gel characterized by physical-chemical stability and good release properties. The preliminary studies were designed to select the gel-forming agents and the excipients compatible with betamethasone dipropionate. In order to formulate a clear gel without particles of drug substances in suspension, a solvent system for the drug substance was selected. The content of drug substance released, the rheological and in vitro release tests were the tools used for the optimal formulation selection. A stable carbomer gel was obtained by solubilization of betamethasone dipropionate in a vehicle composed of 40% PEG 400, 10% ethanol and 5% Transcutol.

  12. An R-D optimized transcoding resilient motion vector selection

    NASA Astrophysics Data System (ADS)

    Aminlou, Alireza; Semsarzadeh, Mehdi; Fatemi, Omid

    2014-12-01

    Selection of the motion vector (MV) has a significant impact on the quality of an encoded, and particularly a transcoded, video in terms of rate-distortion (R-D) performance. The conventional motion estimation process, in most existing video encoders, ignores the rate of residuals by utilizing only the rate and distortion of the motion compensation step. This approach implies that the selected MV depends on the quantization parameter. Hence, the same MV that has been selected for high bit rate compression may not be suitable for low bit rate ones when transcoding the video with the motion information reuse technique, resulting in R-D performance degradation. In this paper, we propose an R-D optimized motion selection criterion that takes into account the effect of the residual rate in the MV selection process. Based on the proposed criterion, a new two-piece Lagrange multiplier selection is introduced for the motion estimation process. Analytical evaluations indicate that our proposed scheme results in MVs that are less sensitive to changes in bit rate or quantization parameter. As a result, MVs in the encoded bitstream may be used even after the encoded sequence has been transcoded to a lower bit rate one using re-quantization. Simulation results indicate that the proposed technique improves the quality performance of coding and transcoding without any computational overhead.
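
    A toy block-matching loop illustrating this kind of criterion is sketched below: the Lagrangian cost J = D + λ(R_mv + R_res) includes an estimate of the residual rate, so the chosen motion vector is less tied to a particular quantization setting. The bit-rate models, λ value and test frames are placeholders, not the two-piece multiplier selection proposed in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 8                                     # block size
ref = rng.integers(0, 256, (64, 64)).astype(float)
cur = np.roll(ref, (2, 3), axis=(0, 1)) + rng.normal(0, 4, ref.shape)
by, bx = 24, 24                           # top-left corner of the current block
block = cur[by:by + B, bx:bx + B]

def mv_bits(dy, dx):
    # Crude motion-vector rate model: larger vectors cost more bits.
    return 2 * (abs(dy) + abs(dx)) + 2

def residual_bits(residual):
    # Crude residual rate proxy: scaled sum of absolute differences.
    return np.abs(residual).sum() / 16.0

lam = 4.0                                 # placeholder Lagrange multiplier
best = None
for dy in range(-4, 5):
    for dx in range(-4, 5):
        pred = ref[by + dy:by + dy + B, bx + dx:bx + dx + B]
        residual = block - pred
        D = (residual ** 2).mean()                       # distortion
        J = D + lam * (mv_bits(dy, dx) + residual_bits(residual))
        if best is None or J < best[0]:
            best = (J, (dy, dx))
print("selected motion vector:", best[1])
```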

  13. Field of view selection for optimal airborne imaging sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Barnard, P. Werner; Fildis, Halidun; Erbudak, Mustafa; Senger, Tolga; Alpman, Mehmet E.

    2014-05-01

    The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has major impact on the overall performance of the system. Conducting a market survey from published data on sensors used in stabilized airborne targeting systems shows a trend of ever narrowing FOVs housed in smaller and lighter volumes. This approach promotes the ever increasing geometric resolution provided by narrower FOVs, while it seemingly ignores the influences the FOV selection has on the sensor's sensitivity, the effects of diffraction, the influences of sight line jitter and collectively the overall system performance. This paper presents a trade-off methodology to select the optimal FOV for an imaging sensor that is limited in aperture diameter by mechanical constraints (such as space/volume available and window size) by balancing the influences FOV has on sensitivity and resolution and thereby optimizing the system's performance. The methodology may be applied to staring array based imaging sensors across all wavebands from visible/day cameras through to long wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given that highlights the performance advantages that can be gained by maximizing the aperture diameters and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.

  14. ICRF antenna matching system with ferrite tuners for the Alcator C-Mod tokamak

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Binus, A.; Wukitch, S. J.; Koert, P.; Murray, R.; Pfeiffer, A.

    2015-12-01

    Real-time fast ferrite tuning (FFT) has been successfully implemented on the ICRF antennas on Alcator C-Mod. The former prototypical FFT system on the E-port 2-strap antenna has been upgraded using new ferrite tuners that have been designed specifically for the operational parameters of the Alcator C-Mod ICRF system (˜ 80 MHz). Another similar FFT system, with two ferrite tuners and one fixed-length stub, has been installed on the transmission line of the D-port 2-strap antenna. These two systems share a Linux-server-based real-time controller. These FFT systems are able to achieve and maintain the reflected power to the transmitters to less than 1% in real time during the plasma discharges under almost all plasma conditions, and help ensure reliable high power operation of the antennas. The innovative field-aligned (FA) 4-strap antenna on J-port has been found to have an interesting feature of loading insensitivity vs. plasma conditions. This feature allows us to significantly improve the matching for the FA J-port antenna by installing carefully designed stubs on the two transmission lines. The reduction of the RF voltages in the transmission lines has enabled the FA J-port antenna to deliver 3.7 MW RF power to plasmas out of the 4 MW source power in high performance I-mode plasmas.

  15. Theoretical Analysis of Triple Liquid Stub Tuner Impedance Matching for ICRH on Tokamaks

    NASA Astrophysics Data System (ADS)

    Du, Dan; Gong, Xueyu; Yin, Lan; Xiang, Dong; Li, Jingchun

    2015-12-01

    The impedance matching is crucial for continuous wave operation of ion cyclotron resonance heating (ICRH) antennae with high power injection into plasmas. A sudden increase in the reflected radio frequency power due to an impedance mismatch of the ICRH system is an issue which must be solved for present-day and future fusion reactors. This paper presents a method for theoretical analysis of ICRH system impedance matching for a triple liquid stub tuner under plasma operational conditions. The relationship of the antenna input impedance with the plasma parameters and operating frequency is first obtained using a global solution. Then, the relations of the plasma parameters and operating frequency with the matching liquid heights are indirectly obtained through numerical simulation according to transmission line theory and matching conditions. The method provides an alternative theoretical method, rather than measurements, to study triple liquid stub tuner impedance matching for ICRH, which may be beneficial for the design of ICRH systems on tokamaks. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2014GB108002, 2013GB107001), National Natural Science Foundation of China (Nos. 11205086, 11205053, 11375085, and 11405082), the Construct Program of Fusion and Plasma Physics Innovation Team in Hunan Province, China (No. NHXTD03), the Natural Science Foundation of Hunan Province, China (No. 2015JJ4044)

  16. Brachytherapy for clinically localized prostate cancer: optimal patient selection.

    PubMed

    Kollmeier, Marisa A; Zelefsky, Michael J

    2011-10-01

    The objective of this review is to present an overview of each modality and delineate how to best select patients who are optimal candidates for these treatment approaches. Prostate brachytherapy as a curative modality for clinically localized prostate cancer has become increasingly utilized over the past decade; 25% of all early cancers are now treated this way in the United States (1). The popularity of this treatment strategy lies in the highly conformal nature of radiation dose, low morbidity, patient convenience, and high efficacy rates. Prostate brachytherapy can be delivered by either a permanent interstitial radioactive seed implantation (low dose rate [LDR]) or a temporary interstitial insertion of iridium-192 (Ir192) afterloading catheters. The objective of both of these techniques is to deliver a high dose of radiation to the prostate gland while exposing normal surrounding tissues to minimal radiation dose. Brachytherapy techniques are ideal to achieve this goal given the close proximity of the radiation source to tumor and sharp fall off of the radiation dose cloud proximate to the source. Brachytherapy provides a powerful means of delivering dose escalation above and beyond that achievable with intensity-modulated external beam radiotherapy alone. Careful selection of appropriate patients for these therapies, however, is critical for optimizing both disease-related outcomes and treatment-related toxicity.

  17. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  18. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  19. Optimizing Hammermill Performance Through Screen Selection and Hammer Design

    SciTech Connect

    Neal A. Yancey; Tyler L. Westover; Christopher T. Wright

    2013-01-01

    Background: Mechanical preprocessing, which includes particle size reduction and mechanical separation, is one of the primary operations in the feedstock supply system for a lignocellulosic biorefinery. It is the means by which raw biomass from the field or forest is mechanically transformed into an on-spec feedstock with characteristics better suited for the fuel conversion process. Results: This work provides a general overview of the objectives and methodologies of mechanical preprocessing and then presents experimental results illustrating (1) improved size reduction via optimization of hammer mill configuration, (2) improved size reduction via pneumatic-assisted hammer milling, and (3) improved control of particle size and particle size distribution through proper selection of grinder process parameters. Conclusion: Optimal grinder configuration for maximal process throughput and efficiency is strongly dependent on feedstock type and properties, such as moisture content. Tests conducted using an HG200 hammer grinder indicate that increasing the tip speed, optimizing hammer geometry, and adding pneumatic assist can increase grinder throughput by as much as 400%.

  20. Optimal Selection of Threshold Value 'r' for Refined Multiscale Entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2015-12-01

    Refined multiscale entropy (RMSE) technique was introduced to evaluate complexity of a time series over multiple scale factors 't'. Here threshold value 'r' is updated as 0.15 times SD of filtered scaled time series. The use of fixed threshold value 'r' in RMSE sometimes assigns very close resembling entropy values to certain time series at certain temporal scale factors and is unable to distinguish different time series optimally. The present study aims to evaluate RMSE technique by varying threshold value 'r' from 0.05 to 0.25 times SD of filtered scaled time series and finding optimal 'r' values for each scale factor at which different time series can be distinguished more effectively. The proposed RMSE was used to evaluate over HRV time series of normal sinus rhythm subjects, patients suffering from sudden cardiac death, congestive heart failure, healthy adult male, healthy adult female and mid-aged female groups as well as over synthetic simulated database for different datalengths 'N' of 3000, 3500 and 4000. The proposed RMSE results in improved discrimination among different time series. To enhance the computational capability, empirical mathematical equations have been formulated for optimal selection of threshold values 'r' as a function of SD of filtered scaled time series and datalength 'N' for each scale factor 't'.
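
    A compact sketch of the underlying computation is given below: coarse-grain the series at each scale, then compute sample entropy while sweeping the tolerance r from 0.05 to 0.25 times the SD of the scaled series. Plain averaging stands in for the refined (filtered) coarse-graining, the series is synthetic rather than HRV data, and the data length is shortened to keep the run small.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with tolerance r given in the same units as x."""
    n = len(x)
    def count(mm):
        tmpl = np.array([x[i:i + mm] for i in range(n - mm)])
        d = np.max(np.abs(tmpl[:, None, :] - tmpl[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(tmpl)) / 2      # exclude self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(1500)                 # synthetic stand-in for an HRV series

for scale in (1, 2, 5):
    y = coarse_grain(x, scale)
    for r_factor in (0.05, 0.15, 0.25):
        r = r_factor * np.std(y)              # tolerance as a fraction of the scaled SD
        e = sample_entropy(y, m=2, r=r)
        print(f"scale={scale}  r={r_factor:.2f}*SD  SampEn={e:.3f}")
```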

  1. Ant colony optimization with selective evaluation for feature selection in character recognition

    NASA Astrophysics Data System (ADS)

    Oh, Il-Seok; Lee, Jin-Seon

    2010-01-01

    This paper analyzes the size characteristics of the character recognition domain with the aim of developing a feature selection algorithm adequate for the domain. Based on the results, we further analyze the timing requirements of three popular feature selection algorithms: the greedy algorithm, the genetic algorithm, and ant colony optimization. For a rigorous timing analysis, we adopt the concept of the atomic operation. We propose a novel scheme called selective evaluation to improve the convergence of ACO. The scheme cuts down the computational load by excluding the evaluation of unnecessary or less promising candidate solutions. The scheme is realizable in ACO due to the valuable information in the pheromone trail, which helps identify those solutions. Experimental results showed that ACO with selective evaluation was promising both in timing requirements and recognition performance.
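
    The sketch below shows one way the selective-evaluation idea can be realized in an ACO loop for binary feature selection: candidate subsets are first ranked by their pheromone score and only the most promising fraction is passed to the expensive fitness evaluation. The synthetic fitness function, pheromone update rule and parameter values are illustrative assumptions, not the authors' scheme.

```python
import random

random.seed(0)

N_FEATURES, N_ANTS, N_ITER, EVAL_FRACTION = 20, 30, 40, 0.3
GOOD = set(range(0, 8))              # synthetic "informative" features

def true_fitness(subset):
    # Synthetic stand-in for an expensive classifier evaluation.
    return len(subset & GOOD) - 0.2 * len(subset - GOOD)

pheromone = [1.0] * N_FEATURES
best_subset, best_fit = None, float("-inf")
for it in range(N_ITER):
    # Each ant includes feature j with probability increasing in its pheromone.
    ants = []
    for _ in range(N_ANTS):
        subset = {j for j in range(N_FEATURES)
                  if random.random() < pheromone[j] / (1.0 + pheromone[j])}
        ants.append(subset)
    # Selective evaluation: rank candidates by mean pheromone of their features
    # and fully evaluate only the most promising fraction.
    scored = sorted(ants, reverse=True,
                    key=lambda s: sum(pheromone[j] for j in s) / max(1, len(s)))
    for subset in scored[: max(1, int(EVAL_FRACTION * N_ANTS))]:
        fit = true_fitness(subset)
        if fit > best_fit:
            best_subset, best_fit = subset, fit
    # Evaporate the trail and reinforce it with the current best subset.
    pheromone = [0.9 * p for p in pheromone]
    for j in best_subset:
        pheromone[j] += 0.5

print("best subset found:", sorted(best_subset), "fitness:", best_fit)
```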

  2. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. Conclusions We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498

  3. Optimizing Site Selection in Urban Areas in Northern Switzerland

    NASA Astrophysics Data System (ADS)

    Plenkers, K.; Kraft, T.; Bethmann, F.; Husen, S.; Schnellmann, M.

    2012-04-01

    There is a need to observe weak seismic events (M<2) in areas close to potential nuclear-waste repositories or nuclear power plants, in order to analyze the underlying seismo-tectonic processes and estimate their seismic hazard. We are therefore densifying the existing Swiss Digital Seismic Network in northern Switzerland by an additional 20 stations. The new network, which will be in operation by the end of 2012, aims at observing seismicity in northern Switzerland with a completeness of M_c=1.0 and a location error < 0.5 km in epicenter and < 2 km in focal depth. Monitoring of weak seismic events in this region is challenging, because the area of interest is densely populated and the geology is dominated by the Swiss molasse basin. An optimal network design and a thoughtful choice of station sites are, therefore, mandatory. To help with decision making we developed a step-wise approach to find the optimum network configuration. Our approach is based on standard network optimization techniques regarding the localization error. As a new feature, our approach uses an ambient noise model to compute expected signal-to-noise ratios for a given site. The ambient noise model uses information on land use and major infrastructure such as highways and train lines. We ran a series of network optimizations with an increasing number of stations until the requirements regarding localization error and magnitude of completeness are reached. The resulting network geometry serves as input for the site selection. Site selection is done by using a newly developed multi-step assessment scheme that takes into account local noise level, geology, infrastructure, and the costs necessary to realize the station. The assessment scheme weights the different parameters and the most promising sites are identified. In a first step, all potential sites are classified based on information from topographic maps and site inspection. In a second step, local noise conditions are measured at selected sites. We

  4. Influenza B vaccine lineage selection--an optimized trivalent vaccine.

    PubMed

    Mosterín Höpping, Ana; Fonville, Judith M; Russell, Colin A; James, Sarah; Smith, Derek J

    2016-03-18

    Epidemics of seasonal influenza viruses cause considerable morbidity and mortality each year. Various types and subtypes of influenza circulate in humans and evolve continuously such that individuals at risk of serious complications need to be vaccinated annually to keep protection up to date with circulating viruses. The influenza vaccine in most parts of the world is a trivalent vaccine, including an antigenically representative virus of recently circulating influenza A/H3N2, A/H1N1, and influenza B viruses. However, since the 1970s influenza B has split into two antigenically distinct lineages, only one of which is represented in the annual trivalent vaccine at any time. We describe a lineage selection strategy that optimizes protection against influenza B using the standard trivalent vaccine as a potentially cost effective alternative to quadrivalent vaccines.

  5. Digital Down Conversion Technology for Tevatron Beam Line Tuner at FNAL

    SciTech Connect

    Schappert, W.; Lorman, E.; Scarpine, V.; Ross, M.C.; Sebek, J.; Straumann, T.; /Fermilab /SLAC

    2008-03-17

    Fermilab is presently in Run II collider operations and is developing instrumentation to improve luminosity. Improving the orbit matching between accelerator components using a Beam Line Tuner (BLT) can improve the luminosity. Digital Down Conversion (DDC) has been proposed as a method for making more accurate beam position measurements. Fermilab has implemented a BLT system using a DDC technique to measure orbit oscillations during injections from the Main Injector to the Tevatron. The output of a fast ADC is downconverted and filtered in software. The system measures the x and y positions, the intensity, and the time of arrival for each proton or antiproton bunch, on a turn-by-turn basis, during the first 1024 turns immediately following injection. We present results showing position, intensity, and time of arrival for both injected and coasting beam. Initial results indicate a position resolution of ~20 to 40 microns and a phase resolution of ~25 ps.

  6. A CMOS Sub-GHz Wideband Low-Noise Amplifier for Digital TV Tuner Applications

    NASA Astrophysics Data System (ADS)

    Cha, Hyouk-Kyu

    A high performance highly integrated sub-GHz wideband differential low-noise amplifier (LNA) for terrestrial and cable digital TV tuner applications is realized in 0.18µm CMOS technology. A noise-canceling topology using a feed-forward current reuse common-source stage is presented to obtain low noise characteristics and high gain while achieving good wideband input matching within 48-860MHz. In addition, linearization methods are appropriately utilized to improve the linearity. The implemented LNA achieves a power gain of 20.9dB, a minimum noise figure of 2.8dB, and an OIP3 of 24.2dBm. The chip consumes 32mA of current at 1.8V power supply and the core die size is 0.21mm2.

  7. A dual molecular analogue tuner for dissecting protein function in mammalian cells

    PubMed Central

    Brosh, Ran; Hrynyk, Iryna; Shen, Jessalyn; Waghray, Avinash; Zheng, Ning; Lemischka, Ihor R.

    2016-01-01

    Loss-of-function studies are fundamental for dissecting gene function. Yet, methods to rapidly and effectively perturb genes in mammalian cells, and particularly in stem cells, are scarce. Here we present a system for simultaneous conditional regulation of two different proteins in the same mammalian cell. This system harnesses the plant auxin and jasmonate hormone-induced degradation pathways, and is deliverable with only two lentiviral vectors. It combines RNAi-mediated silencing of two endogenous proteins with the expression of two exogenous proteins whose degradation is induced by external ligands in a rapid, reversible, titratable and independent manner. By engineering molecular tuners for NANOG, CHK1, p53 and NOTCH1 in mammalian stem cells, we have validated the applicability of the system and demonstrated its potential to unravel complex biological processes. PMID:27230261

  8. Optimization of killer assays for yeast selection protocols.

    PubMed

    Lopes, C A; Sangorrín, M P

    2010-01-01

    A new optimized semiquantitative yeast killer assay is reported for the first time. The killer activity of 36 yeast isolates belonging to three species, namely, Metschnikowia pulcherrima, Wickerhamomyces anomala and Torulaspora delbrueckii, was tested with a view to potentially using these yeasts as biocontrol agents against the wine spoilage species Pichia guilliermondii and Pichia membranifaciens. The effectiveness of the classical streak-based (qualitative method) and the new semiquantitative techniques was compared. The percentage of yeasts showing killer activity was found to be higher by the semiquantitative technique (60%) than by the qualitative method (45%). In all cases, the addition of 1% NaCl into the medium allowed a better observation of the killer phenomenon. Important differences were observed in the killer capacity of different isolates belonging to the same killer species. The broadest spectrum of action was detected in isolates of W. anomala NPCC 1023 and 1025, and M. pulcherrima NPCC 1009 and 1013. We also brought experimental evidence supporting the importance of the adequate selection of the sensitive isolate to be used in killer evaluation. The new semiquantitative method proposed in this work enables visualization of the relationship between the number of yeasts tested and the growth of the inhibition halo (specific productivity). Hence, this experimental approach could become an interesting tool to be taken into account for killer yeast selection protocols.

  9. Optimized bioregenerative space diet selection with crew choice.

    PubMed

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
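
    A minimal linear-programming sketch of diet selection with a fixed crew-choice item is shown below, using scipy's linprog with made-up foods, equivalent-system-mass costs and nutrient bounds; it only illustrates the structure of such a model, not the study's actual model or data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical foods: ESM cost (kg/serving) and nutrient content per serving.
foods    = ["rice", "beans", "greens", "dessert"]
esm_cost = np.array([0.8, 1.0, 1.4, 2.5])        # objective: minimize total ESM
energy   = np.array([600, 500, 150, 450])        # kcal per serving
protein  = np.array([12, 22, 5, 4])              # g per serving
sodium   = np.array([5, 10, 40, 200])            # mg per serving

# Daily requirements (illustrative numbers only).
min_energy, min_protein, max_sodium = 2800, 70, 2400

# Crew choice: the dessert is fixed at one serving per day; optimize the rest.
bounds = [(0, 6), (0, 6), (0, 6), (1, 1)]

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so ">=" rows are negated.
A_ub = np.array([-energy, -protein, sodium])
b_ub = np.array([-min_energy, -min_protein, max_sodium])

res = linprog(esm_cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
assert res.success
for name, servings in zip(foods, res.x):
    print(f"{name:8s} {servings:5.2f} servings/day")
print(f"total ESM: {esm_cost @ res.x:.2f} kg/person-day")
```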

  10. Optimized bioregenerative space diet selection with crew choice

    NASA Technical Reports Server (NTRS)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.

  11. 75 FR 39437 - Optimizing the Security of Biological Select Agents and Toxins in the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... Executive Order 13546--Optimizing the Security of Biological Select Agents and Toxins in the United States... July 2, 2010 Optimizing the Security of Biological Select Agents and Toxins in the United States By the... and productive scientific enterprise that utilizes biological select agents and toxins (BSAT)...

  12. Applications of Optimal Building Energy System Selection and Operation

    SciTech Connect

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy

    2011-04-01

    Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large County Jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.

  13. Making the optimal decision in selecting protective clothing

    SciTech Connect

    Price, J. Mark

    2007-07-01

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered, including - Protection; - Personnel Contamination; - Cost; - Radwaste; - Comfort; - Convenience; - Logistics/Rad Material Considerations; - Reject Rate of Laundered Clothing; - Durability; - Security; - Personnel Safety including Heat Stress; - Disposition of Gloves and Booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials or switching to Single Use Protective Clothing (SUPC) from traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch. In other cases after switching to SUPC for a period of time, some plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelating factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  14. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  15. A low power 8th order elliptic low-pass filter for a CMMB tuner

    NASA Astrophysics Data System (ADS)

    Zheng, Gong; Bei, Chen; Xueqing, Hu; Yin, Shi; Foster, Dai Fa

    2011-09-01

    This paper presents an 8th order active-RC elliptic low-pass filter (LPF) for a direct conversion China Mobile Multimedia Broadcasting (CMMB) tuner with a 1 or 4 MHz -3 dB cutoff frequency (f-3dB). By using a novel gain-bandwidth-product (GBW) extension technique in designing the operational amplifiers (op-amps), the proposed filter achieves 71 dB stop-band rejection at 1.7 f-3dB to meet the stringent CMMB adjacent channel rejection (ACR) specifications while dissipating only 2.8 mA/channel from a 3 V supply. Its bias current can be further lowered to 2 mA/channel with only 0.5 dB of peaking measured at the filter's pass-band edge. Elaborate common-mode (CM) control circuits are applied to the filter op-amps to increase the common-mode rejection ratio (CMRR) and effectively reject large-signal common-mode interference. Measurement results show that the filter has 128 dBμVrms in-band IIP3 and more than 80 dB passband CMRR. Fabricated in a 0.35-μm SiGe BiCMOS process, the proposed filter occupies a 1.19 mm² die area.

  16. Update on RF System Studies and VCX Fast Tuner Work for the RIA Drive Linac

    SciTech Connect

    Rusnak, B; Shen, S

    2003-05-06

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Where most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. While simply overcoupling the coupler to the cavity can mitigate this problem to some degree, system studies indicate that for the low-β driver linac alone, this approach may cost 50% or more than an RF system employing a voltage controlled reactance (VCX) fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab, is presented here.

  17. To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom

    ERIC Educational Resources Information Center

    Ray, Darrell L.

    2010-01-01

    Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…

  18. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed on the classifier by using many features. Selecting an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem: for n features, the total number of possible feature subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier, and a support vector machine is used as the classifier. The experimental results show that the PSO-based and BBO-based algorithms select an optimal subset of features for classifying MCCs as benign or malignant better than the GA-based algorithm does.
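
    A wrapper-style GA feature search of the kind compared in this record can be sketched as follows. The data are a synthetic stand-in for the 50 MCC features (generated with scikit-learn's make_classification), and the population size, generation count, and mutation rate are illustrative choices rather than the authors' settings.

```python
# Sketch of GA-based wrapper feature selection with an SVM classification-rate fitness.
# Data and GA settings are illustrative assumptions, not the study's configuration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=380, n_features=50, n_informative=10, random_state=0)

def fitness(mask):
    """Correct classification rate of an SVM using only the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()

pop = rng.random((30, X.shape[1])) < 0.5            # initial population of binary feature masks
for _ in range(20):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                        # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02         # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```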

  19. An artificial system for selecting the optimal surgical team.

    PubMed

    Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco

    2015-01-01

    We introduce an intelligent system to optimize a team composition based on the team's historical outcomes and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of the team outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model makes it possible to determine the probability of all possible team compositions, even those for which no procedure has been recorded. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved in procedures with unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.

  20. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    ERIC Educational Resources Information Center

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  1. Using Fisher Information Criteria for Chemical Sensor Selection via Convex Optimization Methods

    DTIC Science & Technology

    2016-11-16

    ... parametrized to select the best sensors after an optimization procedure. Due to the positive definite nature of the Fisher information matrix, convex optimization may be used to ...

  2. Polyhedral Interpolation for Optimal Reaction Control System Jet Selection

    NASA Technical Reports Server (NTRS)

    Gefert, Leon P.; Wright, Theodore

    2014-01-01

    An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.

  3. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  4. Self-Selection, Optimal Income Taxation, and Redistribution

    ERIC Educational Resources Information Center

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  5. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    ... such as the Cauchy, where A is a constant multiple of the identity. In fact, for the Cauchy distribution asymptotically optimal spacing sequences for ...

  6. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Lang, V. T.; Choi, S. B.

    2015-06-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass.
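
    A stripped-down version of the torque-constrained mass minimization can be posed with SciPy's SLSQP solver. The torque expression below is a basic Bingham-plastic disc approximation and the mass model is invented for illustration; the paper itself derives torque from a Herschel-Bulkley model and uses finite element analysis, so treat this only as a sketch of the optimization structure.

```python
# Toy constrained design sketch: minimize MR brake mass subject to a required braking torque.
# The torque and mass models are simplified illustrative assumptions, not the paper's models.
import numpy as np
from scipy.optimize import minimize

rho_steel, rho_mrf = 7800.0, 3000.0       # kg/m^3, assumed densities
tau_y, required_torque = 40e3, 10.0       # Pa (field-on yield stress), N*m

def mass(x):
    r_disc, t_disc, gap = x
    housing = rho_steel * np.pi * (r_disc + 0.01) ** 2 * (t_disc + 2 * gap + 0.01)
    fluid = rho_mrf * np.pi * r_disc ** 2 * 2 * gap
    return housing + fluid

def torque(x):
    r_disc, t_disc, gap = x
    # Yield-stress contribution of two disc faces: T = (4/3) * pi * tau_y * r^3
    return 4.0 / 3.0 * np.pi * tau_y * r_disc ** 3

x0 = np.array([0.05, 0.005, 0.001])       # initial radius, thickness, gap (m)
res = minimize(mass, x0, method="SLSQP",
               bounds=[(0.02, 0.15), (0.003, 0.02), (0.0005, 0.002)],
               constraints=[{"type": "ineq", "fun": lambda x: torque(x) - required_torque}])
print("design (r, t, gap):", np.round(res.x, 4), "mass [kg]:", round(mass(res.x), 2))
```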

  7. Optimizing drilling performance using a selected drilling fluid

    DOEpatents

    Judzis, Arnis [Salt Lake City, UT; Black, Alan D [Coral Springs, FL; Green, Sidney J [Salt Lake City, UT; Robertson, Homer A [West Jordan, UT; Bland, Ronald G [Houston, TX; Curry, David Alexander [The Woodlands, TX; Ledgerwood, III, Leroy W.

    2011-04-19

    To improve drilling performance, a drilling fluid is selected based on one or more criteria and to have at least one target characteristic. Drilling equipment is used to drill a wellbore, and the selected drilling fluid is provided into the wellbore during drilling with the drilling equipment. The at least one target characteristic of the drilling fluid includes an ability of the drilling fluid to penetrate into formation cuttings during drilling to weaken the formation cuttings.

  8. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Choi, S. B.

    2012-01-01

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel-Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given.

  9. Optimizing the yield and selectivity of high purity nanoparticle clusters

    NASA Astrophysics Data System (ADS)

    Pease, Leonard F.

    2011-05-01

    Here we investigate the parameters that govern the yield and selectivity of small clusters composed of nanoparticles using a Monte Carlo simulation that accounts for spatial and dimensional distributions in droplet and nanoparticle density and size. Clustering nanoparticles presents a powerful paradigm with which to access properties not otherwise available using individual molecules, individual nanoparticles or bulk materials. However, the governing parameters that precisely tune the yield and selectivity of clusters fabricated via an electrospray droplet evaporation method followed by purification with differential mobility analysis (DMA) remain poorly understood. We find that the product of the electrospray droplet mean diameter to the third power and nanoparticle concentration governs the yield of individual clusters, while the ratio of the nanoparticle standard deviation to the mean diameter governs the selectivity. The resulting, easily accessible correlations may be used to minimize undesirable clustering, such as protein aggregation in the biopharmaceutical industry, and maximize the yield of a particular type of cluster for nanotechnology and energy applications.

  10. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, including the largest gap method, and exhaustive brute force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of PEM fuel cell can be predicted with good quality.
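
    One way to picture the exhaustive search over sensor subsets is to score each candidate set by how well its rows of the sensitivity matrix observe the health parameters, trading that score against the number of sensors. The sketch below uses a random sensitivity matrix and the smallest noise-weighted singular value as the criterion, which is an illustrative stand-in rather than the exact criterion used in the paper.

```python
# Brute-force sensor subset selection from a sensitivity matrix.
# The matrix, noise levels, and scoring criterion are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_params = 8, 3
S = rng.normal(size=(n_sensors, n_params))      # sensitivity of each sensor to each health parameter
noise = rng.uniform(0.05, 0.3, size=n_sensors)  # assumed sensor noise levels

def score(subset):
    """Observability score: smallest singular value of the noise-weighted sensitivity rows."""
    rows = S[list(subset)] / noise[list(subset), None]
    return np.linalg.svd(rows, compute_uv=False).min()

best = max(
    (c for k in range(n_params, n_sensors + 1) for c in itertools.combinations(range(n_sensors), k)),
    key=lambda c: score(c) / len(c),            # trade observability against sensor-set size
)
print("selected sensors:", best, "score:", round(score(best), 3))
```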

  11. Selecting radiotherapy dose distributions by means of constrained optimization problems.

    PubMed

    Alfonso, J C L; Buttazzo, G; García-Archilla, B; Herrero, M A; Núñez, L

    2014-05-01

    The main steps in planning radiotherapy consist in selecting for any patient diagnosed with a solid tumor (i) a prescribed radiation dose on the tumor, (ii) bounds on the radiation side effects on nearby organs at risk and (iii) a fractionation scheme specifying the number and frequency of therapeutic sessions during treatment. The goal of any radiotherapy treatment is to deliver on the tumor a radiation dose as close as possible to that selected in (i), while at the same time conforming to the constraints prescribed in (ii). To this day, considerable uncertainties remain concerning the best manner in which such issues should be addressed. In particular, the choice of a prescription radiation dose is mostly based on clinical experience accumulated on the particular type of tumor considered, without any direct reference to quantitative radiobiological assessment. Interestingly, mathematical models for the effect of radiation on biological matter have existed for quite some time, and are widely acknowledged by clinicians. However, the difficulty to obtain accurate in vivo measurements of the radiobiological parameters involved has severely restricted their direct application in current clinical practice. In this work, we first propose a mathematical model to select radiation dose distributions as solutions (minimizers) of suitable variational problems, under the assumption that key radiobiological parameters for tumors and organs at risk involved are known. Second, by analyzing the dependence of such solutions on the parameters involved, we then discuss the manner in which the use of those minimizers can improve current decision-making processes to select clinical dosimetries when (as is generally the case) only partial information on model radiosensitivity parameters is available. A comparison of the proposed radiation dose distributions with those actually delivered in a number of clinical cases strongly suggests that solutions of our mathematical model can be

  12. Application’s Method of Quadratic Programming for Optimization of Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro

    Investors and fund managers face the problem of portfolio selection, that is, determining the kind and quantity of investments among several brands. We have developed a method that obtains an optimal stock portfolio two to three times more rapidly than the conventional method of efficient universal optimization. The method is characterized by dividing the quadratic matrix of the utility function and the constraint matrices into several sub-matrices, exploiting the structure of these matrices.
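
    For context, the underlying portfolio selection problem is a quadratic program. The sketch below states a generic mean-variance version with cvxpy and random data; it does not implement the authors' sub-matrix decomposition, which is what provides their speed-up.

```python
# Generic mean-variance portfolio quadratic program, for context only; this illustrates
# the optimization problem the authors accelerate, not their structured-matrix method.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 6                                              # number of candidate brands/assets
mu = rng.uniform(0.02, 0.12, size=n)               # expected returns (illustrative)
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 0.01 * np.eye(n)             # positive-definite covariance matrix
risk_aversion = 3.0

w = cp.Variable(n)
utility = mu @ w - risk_aversion * cp.quad_form(w, Sigma)   # quadratic utility function
problem = cp.Problem(cp.Maximize(utility), [cp.sum(w) == 1, w >= 0])
problem.solve()
print("weights:", np.round(w.value, 3), "expected return:", round(float(mu @ w.value), 4))
```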

  13. Optimization of gene sequences under constant mutational pressure and selection

    NASA Astrophysics Data System (ADS)

    Kowalczuk, M.; Gierlik, A.; Mackiewicz, P.; Cebrat, S.; Dudek, M. R.

    1999-12-01

    We have analyzed the influence of constant mutational pressure and selection on the nucleotide composition of DNA sequences of various sizes, represented by the genes of the Borrelia burgdorferi genome. With the help of MC simulations we have found that longer DNA sequences accumulate far fewer base substitutions per unit sequence length than short sequences. This leads us to the conclusion that the accuracy of replication may determine the size of the genome.

  14. Optimal band selection for dimensionality reduction of hyperspectral imagery

    NASA Technical Reports Server (NTRS)

    Stearns, Stephen D.; Wilson, Bruce E.; Peterson, James R.

    1993-01-01

    Hyperspectral images have many bands requiring significant computational power for machine interpretation. During image pre-processing, regions of interest that warrant full examination need to be identified quickly. One technique for speeding up the processing is to use only a small subset of bands to determine the 'interesting' regions. The problem addressed here is how to determine the fewest bands required to achieve a specified performance goal for pixel classification. The band selection problem has been addressed previously by Chen et al., Ghassemian et al., Henderson et al., and Kim et al. Some popular techniques for reducing the dimensionality of a feature space, such as principal components analysis, reduce dimensionality by computing new features that are linear combinations of the original features. However, such approaches require measuring and processing all the available bands before the dimensionality is reduced. Our approach, adapted from previous multidimensional signal analysis research, is simpler and achieves dimensionality reduction by selecting bands. Feature selection algorithms are used to determine which combination of bands has the lowest probability of pixel misclassification. Two elements required by this approach are a choice of objective function and a choice of search strategy.
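
    The band selection idea can be illustrated with a simple sequential forward search that greedily adds the band giving the lowest cross-validated misclassification until a performance goal is met. The data, classifier, and stopping threshold below are assumptions for illustration, not the paper's configuration.

```python
# Sequential forward band selection on synthetic data: greedily add the band that most
# reduces cross-validated misclassification. A generic sketch, not the paper's exact algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=40, n_informative=6, random_state=0)

def error(bands):
    return 1.0 - cross_val_score(GaussianNB(), X[:, bands], y, cv=5).mean()

selected, target_error = [], 0.10
while len(selected) < X.shape[1]:
    remaining = [b for b in range(X.shape[1]) if b not in selected]
    best_band = min(remaining, key=lambda b: error(selected + [b]))
    selected.append(best_band)
    if error(selected) <= target_error:           # stop once the performance goal is met
        break
print("bands:", selected, "misclassification:", round(error(selected), 3))
```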

  15. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2008-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, defendable sensor suite to address system health assessment requirements.

  16. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2007-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.

  17. Selection of optimal composition-control parameters for friable materials

    SciTech Connect

    Pak, Yu.N.; Vdovkin, A.V.

    1988-05-01

    A method for composition analysis of coal and minerals is proposed which uses scattered gamma radiation and does away with preliminary sample preparation to ensure homogeneous particle density, surface area, and size. Reduction of the error induced by material heterogeneity has previously been achieved by rotation of the control object during analysis. A further refinement is proposed which addresses the necessity that the contribution of the radiation scattered from each individual surface to the total intensity be the same. This is achieved by providing a constant linear rate of travel for the irradiated spot through back-and-forth motion of the sensor. An analytical expression is given for the laws of motion for the sensor and test tube which provides for uniform irradiated area movement along a path analogous to the Archimedes spiral. The relationships obtained permit optimization of measurement parameters in analyzing friable materials which are not uniform in grain size.

  18. A method to optimize selection on multiple identified quantitative trait loci

    PubMed Central

    Chakraborty, Reena; Moreau, Laurence; Dekkers, Jack CM

    2002-01-01

    A mathematical approach was developed to model and optimize selection on multiple known quantitative trait loci (QTL) and polygenic estimated breeding values in order to maximize a weighted sum of responses to selection over multiple generations. The model allows for linkage between QTL with multiple alleles and arbitrary genetic effects, including dominance, epistasis, and gametic imprinting. Gametic phase disequilibrium between the QTL and between the QTL and polygenes is modeled but polygenic variance is assumed constant. Breeding programs with discrete generations, differential selection of males and females and random mating of selected parents are modeled. Polygenic EBV obtained from best linear unbiased prediction models can be accommodated. The problem was formulated as a multiple-stage optimal control problem and an iterative approach was developed for its solution. The method can be used to develop and evaluate optimal strategies for selection on multiple QTL for a wide range of situations and genetic models. PMID:12081805

  19. Optimizing drug exposure to minimize selection of antibiotic resistance.

    PubMed

    Olofsson, Sara K; Cars, Otto

    2007-09-01

    The worldwide increase in antibiotic resistance is a concern for public health. The fact that the choice of dose and treatment duration can affect the selection of antibiotic-resistant mutants is becoming more evident, and an increased number of studies have used pharmacodynamic models to describe the drug exposure and pharmacodynamic breakpoints needed to minimize and predict the development of resistance. However, there remains a lack of sufficient data, and future work is needed to fully characterize these target drug concentrations. More knowledge is also needed of drug pharmacodynamics versus bacteria with different resistance mutations and susceptibility levels. The dosing regimens should exhibit high efficacy not only against susceptible wild-type bacteria but, preferably, also against mutated bacteria that may exist in low numbers in "susceptible" populations. Thus, to prolong the life span of existing and new antibiotics, it is important that dosing regimens be carefully selected on the basis of pharmacokinetic and pharmacodynamic properties that prevent emergence of preexisting and newly formed mutants.

  20. Stationary phase optimized selectivity liquid chromatography: Basic possibilities of serially connected columns using the "PRISMA" principle.

    PubMed

    Nyiredy, Sz; Szucs, Zoltán; Szepesy, L

    2007-07-20

    A new procedure (stationary phase optimized selectivity liquid chromatography: SOS-LC) is described for the optimization of the HPLC stationary phase, using serially connected columns and the principle of the "PRISMA" model. The retention factors (k) of the analytes were determined on three different stationary phases. Using these data, the k values were predicted for theoretically combined stationary phases. These predictions resulted in numerous theoretical intermediate separations, from among which only the optimal one was assembled and tested. The overall selectivity of this separation was better than that of any individual base stationary phase. SOS-LC is independent of the mechanism and the scale of separation.
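
    The prediction step can be sketched by assuming that the retention factor on a serially coupled column is roughly the length-fraction-weighted average of the retention factors on the base phases, and then searching a PRISMA-like grid of segment-length combinations for the best worst-pair selectivity. The retention data, the weighted-average assumption, and the selectivity criterion below are all illustrative and not taken from the paper.

```python
# Hedged sketch of an SOS-LC-style prediction: combine measured retention factors from
# three base phases as length-fraction-weighted averages and pick the best combination.
import itertools
import numpy as np

# Measured retention factors k of four analytes on three base phases (illustrative values).
k_base = np.array([[1.2, 3.5, 2.0],
                   [1.4, 3.3, 2.6],
                   [2.8, 1.1, 2.4],
                   [3.0, 1.3, 3.1]])

def predicted_k(fractions):
    """k on the coupled column, assuming a length-weighted average of segment k values."""
    return k_base @ np.asarray(fractions)

def worst_pair_selectivity(k):
    ks = np.sort(k)
    return float(np.min(ks[1:] / ks[:-1]))        # smallest alpha between neighbouring peaks

# Enumerate segment-length combinations on a 10% grid ("PRISMA"-style composition space).
grid = [f for f in itertools.product(np.arange(0, 1.05, 0.1), repeat=3) if abs(sum(f) - 1) < 1e-9]
best = max(grid, key=lambda f: worst_pair_selectivity(predicted_k(f)))
print("best length fractions:", np.round(best, 2),
      "worst-pair alpha:", round(worst_pair_selectivity(predicted_k(best)), 3))
```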

  1. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human
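
    The outer PSO loop over the objective weights (steps (i) through (iii)) can be sketched as follows. The inner plan optimization solver and the DVH-based evaluation function are replaced by toy stand-ins, since the real ones require a treatment planning system; only the loop structure mirrors the description above.

```python
# Skeleton of the outer PSO loop over objective weights described in the abstract.
# The inner "plan optimization solver" and the evaluation function are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_weights, n_particles, iters = 3, 12, 30

def inner_solver(weights):
    """Toy stand-in for the fluence/plan optimizer run with a given weight vector."""
    return np.array([1.0 / (w + 0.1) for w in weights])       # pretend "plan" quantities

def evaluate(plan):
    """Toy evaluation of dose-volume-histogram key points for the resulting plan."""
    return float(np.sum((plan - np.array([2.0, 5.0, 1.0])) ** 2))

pos = rng.uniform(0.1, 10.0, size=(n_particles, n_weights))   # each particle = weight vector
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_val = pos[0].copy(), np.inf

for _ in range(iters):
    for i in range(n_particles):
        val = evaluate(inner_solver(pos[i]))                  # step (ii): solve and score
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i].copy(), val
        if val < gbest_val:
            gbest, gbest_val = pos[i].copy(), val
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)     # step (iii): update the weights
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 10.0)

print("best weights:", np.round(gbest, 3), "score:", round(gbest_val, 4))
```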

  2. Determination of an Optimal Recruiting-Selection Strategy to Fill a Specified Quota of Satisfactory Personnel.

    ERIC Educational Resources Information Center

    Sands, William A.

    Managers of military and civilian personnel systems justifiably demand an estimate of the payoff in dollars and cents, which can be expected to result from the implementation of a proposed selection program. The Cost of Attaining Personnel Requirements (CAPER) Model provides an optimal recruiting-selection strategy for personnel decisions which…

  3. SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

    PubMed Central

    Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.

    2015-01-01

    We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min_{b ∈ ℝ^p} (1/2)‖y − Xb‖²_{ℓ2} + λ1|b|_(1) + λ2|b|_(2) + ⋯ + λp|b|_(p), where λ1 ≥ λ2 ≥ ⋯ ≥ λp ≥ 0 and |b|_(1) ≥ |b|_(2) ≥ ⋯ ≥ |b|_(p) are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values λBH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the α quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
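
    A minimal numerical companion to the abstract, assuming SciPy is available: it computes the λ_BH sequence and evaluates the sorted-ℓ1 (SLOPE) penalty for a given coefficient vector. It is not a SLOPE solver; fitting the estimator additionally requires the proximal operator of the sorted-ℓ1 norm.

```python
# Compute the BH-style lambda sequence and evaluate the sorted-L1 (SLOPE) penalty
# for a given coefficient vector; this is not a full SLOPE solver.
import numpy as np
from scipy.stats import norm

p, q = 10, 0.1
lam = norm.ppf(1 - np.arange(1, p + 1) * q / (2 * p))   # lambda_BH(i) = z(1 - i*q/(2p)), decreasing

def slope_penalty(b, lam):
    """Sorted-L1 norm: the largest |b| is paired with the largest lambda."""
    return float(np.sum(lam * np.sort(np.abs(b))[::-1]))

b = np.array([3.0, -0.5, 0.0, 1.2, 0.0, 0.0, -2.1, 0.0, 0.4, 0.0])
print("lambda_BH:", np.round(lam, 3))
print("penalty:", round(slope_penalty(b, lam), 3))
```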

  4. Selection for optimal crew performance - Relative impact of selection and training

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.

    1987-01-01

    An empirical study supporting Helmreich's (1986) theoretical work on the distinct manner in which training and selection impact crew coordination is presented. Training is capable of changing attitudes, while selection screens for stable personality characteristics. Training appears least effective for leadership, an area strongly influenced by personality. Selection is least effective for influencing attitudes about personal vulnerability to stress, which appear to be trained in resource management programs. Because personality correlates with attitudes before and after training, it is felt that selection may be necessary even with a leadership-oriented training curriculum.

  5. On the complexity of discrete feature selection for optimal classification.

    PubMed

    Peña, Jose M; Nilsson, Roland

    2010-08-01

    Consider a classification problem involving only discrete features that are represented as random variables with some prescribed discrete sample space. In this paper, we study the complexity of two feature selection problems. The first problem consists in finding a feature subset of a given size k that has minimal Bayes risk. We show that for any increasing ordering of the Bayes risks of the feature subsets (consistent with an obvious monotonicity constraint), there exists a probability distribution that exhibits that ordering. This implies that solving the first problem requires an exhaustive search over the feature subsets of size k. The second problem consists of finding the minimal feature subset that has minimal Bayes risk. In the light of the complexity of the first problem, one may think that solving the second problem requires an exhaustive search over all of the feature subsets. We show that, under mild assumptions, this is not true. We also study the practical implications of our solutions to the second problem.
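
    The first problem can be made concrete with a tiny discrete example: given a synthetic joint distribution over a binary class and four binary features, enumerate all feature subsets of size k and compute the Bayes risk of each, keeping the best. The distribution below is random and purely illustrative.

```python
# Tiny discrete illustration of the first problem: exhaustive search over feature
# subsets of size k for the one with minimal Bayes risk. The distribution is synthetic.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_features, k = 4, 2
# Joint probability table P(class, x1..x4) over a binary class (axis 0) and binary features.
joint = rng.random((2,) + (2,) * n_features)
joint /= joint.sum()

def bayes_risk(subset):
    """Risk of the Bayes classifier that only observes the features in `subset`."""
    keep = tuple(1 + f for f in subset)                        # axes of the kept features
    drop = tuple(ax for ax in range(1, n_features + 1) if ax not in keep)
    marg = joint.sum(axis=drop)                                # P(class, kept features)
    return float(np.sum(marg.sum(axis=0) - marg.max(axis=0)))  # expected misclassification

risks = {s: bayes_risk(s) for s in itertools.combinations(range(n_features), k)}
best = min(risks, key=risks.get)
print("best size-%d subset:" % k, best, "Bayes risk:", round(risks[best], 4))
```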

  6. Storage of human biospecimens: selection of the optimal storage temperature.

    PubMed

    Hubel, Allison; Spindler, Ralf; Skubitz, Amy P N

    2014-06-01

    Millions of biological samples are currently kept at low temperatures in cryobanks/biorepositories for long-term storage. The quality of the biospecimen when thawed, however, is determined not only by the processing of the biospecimen but by the storage conditions as well. The overall objective of this article is to describe the scientific basis for selecting a storage temperature for a biospecimen based on current scientific understanding. To that end, this article reviews some physical basics of the temperature, nucleation, and ice crystal growth present in biological samples stored at low temperatures (-20°C to -196°C), and our current understanding of the role of temperature on the activity of degradative molecules present in biospecimens. The scientific literature relevant to the stability of specific biomarkers in human fluid, cell, and tissue biospecimens is also summarized for the range of temperatures between -20°C and -196°C. These studies demonstrate the importance of storage temperature on the stability of critical biomarkers for fluid, cell, and tissue biospecimens.

  7. Optimizing landfill site selection by using land classification maps.

    PubMed

    Eskandari, M; Homaee, M; Mahmoodi, S; Pazira, E; Van Genuchten, M Th

    2015-05-01

    Municipal solid waste disposal is a major environmental concern throughout the world. Proper landfill siting involves many environmental, economic, technical, and sociocultural challenges. In this study, a new quantitative method for landfill siting that reduces the number of evaluation criteria, simplifies siting procedures, and enhances the utility of available land evaluation maps was proposed. The method is demonstrated by selecting a suitable landfill site near the city of Marvdasht in Iran. The approach involves two separate stages. First, necessary criteria for preliminary landfill siting using four constraints and eight factors were obtained from a land classification map initially prepared for irrigation purposes. Thereafter, the criteria were standardized using a rating approach and then weighted to obtain a suitability map for landfill siting, with ratings in a 0-1 domain and divided into five suitability classes. Results were almost identical to those obtained with a more traditional environmental landfill siting approach. Because of far fewer evaluation criteria, the proposed weighting method was much easier to implement while producing a more convincing database for landfill siting. The classification map also considered land productivity. In the second stage, the six best alternative sites were evaluated for final landfill siting using four additional criteria. Sensitivity analyses were furthermore conducted to assess the stability of the obtained ranking. Results indicate that the method provides a precise siting procedure that should convince all pertinent stakeholders.

  8. Applying optimal model selection in principal stratification for causal inference.

    PubMed

    Odondi, Lang'o; McNamee, Roseanne

    2013-05-20

    Noncompliance to treatment allocation is a key source of complication for causal inference. Efficacy estimation is likely to be compounded by the presence of noncompliance in both treatment arms of clinical trials where the intention-to-treat estimate provides a biased estimator for the true causal estimate even under homogeneous treatment effects assumption. Principal stratification method has been developed to address such posttreatment complications. The present work extends a principal stratification method that adjusts for noncompliance in two-treatment arms trials by developing model selection for covariates predicting compliance to treatment in each arm. We apply the method to analyse data from the Esprit study, which was conducted to ascertain whether unopposed oestrogen (hormone replacement therapy) reduced the risk of further cardiac events in postmenopausal women who survive a first myocardial infarction. We adjust for noncompliance in both treatment arms under a Bayesian framework to produce causal risk ratio estimates for each principal stratum. For mild values of a sensitivity parameter and using separate predictors of compliance in each arm, principal stratification results suggested that compliance with hormone replacement therapy only would reduce the risk for death and myocardial reinfarction by about 47% and 25%, respectively, whereas compliance with either treatment would reduce the risk for death by 13% and reinfarction by 60% among the most compliant. However, the results were sensitive to the user-defined sensitivity parameter.

  9. Plastic scintillation dosimetry: Optimal selection of scintillating fibers and scintillators

    SciTech Connect

    Archambault, Louis; Arsenault, Jean; Gingras, Luc; Sam Beddar, A.; Roy, Rene; Beaulieu, Luc

    2005-07-15

    Scintillation dosimetry is a promising avenue for evaluating dose patterns delivered by intensity-modulated radiation therapy plans or for the small fields involved in stereotactic radiosurgery. However, the increase in signal has been the goal for many authors. In this paper, a comparison is made between plastic scintillating fibers and plastic scintillator. The collection of scintillation light was measured experimentally for four commercial models of scintillating fibers (BCF-12, BCF-60, SCSF-78, SCSF-3HF) and two models of plastic scintillators (BC-400, BC-408). The emission spectra of all six scintillators were obtained by using an optical spectrum analyzer and they were compared with theoretical behavior. For scintillation in the blue region, the signal intensity of a singly clad scintillating fiber (BCF-12) was 120% of that of the plastic scintillator (BC-400). For the multiclad fiber (SCSF-78), the signal reached 144% of that of the plastic scintillator. The intensity of the green scintillating fibers was lower than that of the plastic scintillator: 47% for the singly clad fiber (BCF-60) and 77% for the multiclad fiber (SCSF-3HF). The collected light was studied as a function of the scintillator length and radius for a cylindrical probe. We found that symmetric detectors with nearly the same spatial resolution in each direction (2 mm in diameter by 3 mm in length) could be made with a signal equivalent to those of the more commonly used asymmetric scintillators. With augmentation of the signal-to-noise ratio in consideration, this paper presents a series of comparisons that should provide insight into selection of a scintillator type and volume for development of a medical dosimeter.

  10. Visualization of multi-property landscapes for compound selection and optimization

    NASA Astrophysics Data System (ADS)

    de la Vega de León, Antonio; Kayastha, Shilva; Dimova, Dilyana; Schultz, Thomas; Bajorath, Jürgen

    2015-08-01

    Compound optimization generally requires considering multiple properties in concert and reaching a balance between them. Computationally, this process can be supported by multi-objective optimization methods that produce numerical solutions to an optimization task. Since a variety of comparable multi-property solutions are usually obtained further prioritization is required. However, the underlying multi-dimensional property spaces are typically complex and difficult to rationalize. Herein, an approach is introduced to visualize multi-property landscapes by adapting the concepts of star and parallel coordinates from computer graphics. The visualization method is designed to complement multi-objective compound optimization. We show that visualization makes it possible to further distinguish between numerically equivalent optimization solutions and helps to select drug-like compounds from multi-dimensional property spaces. The methodology is intuitive, applicable to a wide range of chemical optimization problems, and made freely available to the scientific community.

  11. Visualization of multi-property landscapes for compound selection and optimization.

    PubMed

    de la Vega de León, Antonio; Kayastha, Shilva; Dimova, Dilyana; Schultz, Thomas; Bajorath, Jürgen

    2015-08-01

    Compound optimization generally requires considering multiple properties in concert and reaching a balance between them. Computationally, this process can be supported by multi-objective optimization methods that produce numerical solutions to an optimization task. Since a variety of comparable multi-property solutions are usually obtained further prioritization is required. However, the underlying multi-dimensional property spaces are typically complex and difficult to rationalize. Herein, an approach is introduced to visualize multi-property landscapes by adapting the concepts of star and parallel coordinates from computer graphics. The visualization method is designed to complement multi-objective compound optimization. We show that visualization makes it possible to further distinguish between numerically equivalent optimization solutions and helps to select drug-like compounds from multi-dimensional property spaces. The methodology is intuitive, applicable to a wide range of chemical optimization problems, and made freely available to the scientific community.

  12. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning.

    PubMed

    Li, Yongjie; Yao, Dezhong; Yao, Jonathan; Chen, Wufan

    2005-08-07

    Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of overextensive computation of the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved by cooperation and competition among the individuals themselves through generations. The optimization results of a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, the performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggested that the introduced PSO algorithm could act as a new promising solution to the beam angle optimization problem and potentially other optimization problems in IMRT, though further studies need to be investigated.

  13. Coupling between protein level selection and codon usage optimization in the evolution of bacteria and archaea.

    PubMed

    Ran, Wenqi; Kristensen, David M; Koonin, Eugene V

    2014-03-25

    The relationship between the selection affecting codon usage and selection on protein sequences of orthologous genes in diverse groups of bacteria and archaea was examined by using the Alignable Tight Genome Clusters database of prokaryote genomes. The codon usage bias is generally low, with 57.5% of the gene-specific optimal codon frequencies (Fopt) being below 0.55. This apparent weak selection on codon usage contrasts with the strong purifying selection on amino acid sequences, with 65.8% of the gene-specific dN/dS ratios being below 0.1. For most of the genomes compared, a limited but statistically significant negative correlation between Fopt and dN/dS was observed, which is indicative of a link between selection on protein sequence and selection on codon usage. The strength of the coupling between the protein level selection and codon usage bias showed a strong positive correlation with the genomic GC content. Combined with previous observations on the selection for GC-rich codons in bacteria and archaea with GC-rich genomes, these findings suggest that selection for translational fine-tuning could be an important factor in microbial evolution that drives the evolution of genome GC content away from mutational equilibrium. This type of selection is particularly pronounced in slowly evolving, "high-status" genes. A significantly stronger link between the two aspects of selection is observed in free-living bacteria than in parasitic bacteria and in genes encoding metabolic enzymes and transporters than in informational genes. These differences might reflect the special importance of translational fine-tuning for the adaptability of gene expression to environmental changes. The results of this work establish the coupling between protein level selection and selection for translational optimization as a distinct and potentially important factor in microbial evolution. IMPORTANCE Selection affects the evolution of microbial genomes at many levels, including both

  14. Rational optimization of tolC as a powerful dual selectable marker for genome engineering

    PubMed Central

    Gregg, Christopher J.; Lajoie, Marc J.; Napolitano, Michael G.; Mosberg, Joshua A.; Goodman, Daniel B.; Aach, John; Isaacs, Farren J.; Church, George M.

    2014-01-01

    Selection has been invaluable for genetic manipulation, although counter-selection has historically exhibited limited robustness and convenience. TolC, an outer membrane pore involved in transmembrane transport in E. coli, has been implemented as a selectable/counter-selectable marker, but counter-selection escape frequency using colicin E1 precludes using tolC for inefficient genetic manipulations and/or with large libraries. Here, we leveraged unbiased deep sequencing of 96 independent lineages exhibiting counter-selection escape to identify loss-of-function mutations, which offered mechanistic insight and guided strain engineering to reduce counter-selection escape frequency by ∼40-fold. We fundamentally improved the tolC counter-selection by supplementing a second agent, vancomycin, which reduces counter-selection escape by 425-fold compared to colicin E1 alone. Combining these improvements in a mismatch repair proficient strain reduced counter-selection escape frequency by 1.3E6-fold in total, making tolC counter-selection as effective as most selectable markers, and adding a valuable tool to the genome editing toolbox. These improvements permitted us to perform stable and continuous rounds of selection/counter-selection using tolC, enabling replacement of 10 alleles without requiring genotypic screening for the first time. Finally, we combined these advances to create an optimized E. coli strain for genome engineering that is ∼10-fold more efficient at achieving allelic diversity than previous best practices. PMID:24452804

  15. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis.

    PubMed

    Li, Qiang; Chen, Huiling; Huang, Hui; Zhao, Xuehua; Cai, ZhenNao; Tong, Changfei; Liu, Wenbin; Tian, Xin

    2017-01-01

    In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update the current positions of population in the discrete searching space, thus getting the optimal feature subset for the better classification purpose based on KELM. The proposed approach is compared against the original GA and GWO on the two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results have proven the superiority of the proposed method over the other two competitive counterparts.

  16. Optimization of meander line radiators for frequency selective surfaces by using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Bucuci, Stefania C.; Dumitrascu, Ana; Danisor, Alin; Berescu, Serban; Tamas, Razvan D.

    2015-02-01

    In this paper we propose the use of frequency selective surfaces based on meander line radiators, as targets for monitoring slow displacements with synthetic aperture radars. The optimization of the radiators is performed by using genetic algorithms on only two parameters i.e., gain and size. As an example, we have optimized a single meander antenna, resonating in the X-band, at 9.65 GHz.

  17. Compression of biomedical signals with mother wavelet optimization and best-basis wavelet packet selection.

    PubMed

    Brechet, Laurent; Lucas, Marie-Françoise; Doncarli, Christian; Farina, Dario

    2007-12-01

    We propose a novel scheme for signal compression based on the discrete wavelet packet transform (DWPT) decomposition. The mother wavelet and the basis of wavelet packets were optimized and the wavelet coefficients were encoded with a modified version of the embedded zerotree algorithm. This signal-dependent compression scheme was designed by a two-step process. The first (internal optimization) was the best basis selection that was performed for a given mother wavelet. For this purpose, three additive cost functions were applied and compared. The second (external optimization) was the selection of the mother wavelet based on the minimal distortion of the decoded signal given a fixed compression ratio. The mother wavelet was parameterized in the multiresolution analysis framework by the scaling filter, which is sufficient to define the entire decomposition in the orthogonal case. The method was tested on two sets of ten electromyographic (EMG) and ten electrocardiographic (ECG) signals that were compressed with compression ratios in the range of 50%-90%. For a 90% compression ratio of EMG (ECG) signals, the percent residual difference after compression decreased from (mean +/- SD) 48.6 +/- 9.9% (21.5 +/- 8.4%) with the discrete wavelet transform (DWT) using the wavelet leading to poorest performance to 28.4 +/- 3.0% (6.7 +/- 1.9%) with DWPT, with optimal basis selection and wavelet optimization. In conclusion, best basis selection and optimization of the mother wavelet through parameterization led to substantial improvement of performance in signal compression with respect to DWT and random selection of the mother wavelet. The method provides an adaptive approach for optimal signal representation for compression and can thus be applied to any type of biomedical signal.
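
    A much simpler cousin of the paper's scheme illustrates why the choice of mother wavelet matters for compression: keep only the largest discrete wavelet transform coefficients and compare the percent residual difference (PRD) across candidate wavelets. This sketch (assuming the PyWavelets package) uses a plain DWT and hard thresholding rather than the paper's optimized wavelet-packet best basis and zerotree coding.

```python
# Compare mother wavelets for compression of a synthetic signal: keep the largest 10% of
# DWT coefficients and compute the percent residual difference (PRD) after reconstruction.
import numpy as np
import pywt

t = np.linspace(0, 1, 2048)
signal = (np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

def prd_after_compression(x, wavelet, keep_fraction=0.1):
    coeffs = pywt.wavedec(x, wavelet, level=5)
    flat = np.concatenate(coeffs)
    threshold = np.quantile(np.abs(flat), 1 - keep_fraction)   # keep only the largest coefficients
    thresholded = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
    reconstructed = pywt.waverec(thresholded, wavelet)[: x.size]
    return 100.0 * np.linalg.norm(x - reconstructed) / np.linalg.norm(x)

for wavelet in ["haar", "db4", "sym8", "coif3"]:
    print(wavelet, "PRD = %.2f %%" % prd_after_compression(signal, wavelet))
```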

  18. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    SciTech Connect

    Wallace, G. M.; Fitzgerald, E.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.; Hillairet, J.

    2014-02-12

    Active stub tuning with a fast ferrite tuner (FFT) allows for the system to respond dynamically to changes in the plasma impedance such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the "forbidden region." The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma provided by a linear coupling model with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.

  19. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    NASA Astrophysics Data System (ADS)

    Wallace, G. M.; Fitzgerald, E.; Hillairet, J.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.

    2014-02-01

    Active stub tuning with a fast ferrite tuner (FFT) allows for the system to respond dynamically to changes in the plasma impedance such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the "forbidden region." The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma provided by a linear coupling model with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.

  20. Selective Optimization

    DTIC Science & Technology

    2015-07-06

  1. Selective waste collection optimization in Romania and its impact to urban climate

    NASA Astrophysics Data System (ADS)

    Mihai, Šercǎianu; Iacoboaea, Cristina; Petrescu, Florian; Aldea, Mihaela; Luca, Oana; Gaman, Florian; Parlow, Eberhard

    2016-08-01

    According to European Directives, transposed into national legislation, the Member States should have organized separate collection systems at least for paper, metal, plastic, and glass by 2015. In Romania, since 2011 only 12% of municipally collected waste has been recovered, the rest being stored in landfills, although storage is considered the last option in the waste hierarchy. At the same time, only 4% of municipal waste was collected selectively. Surveys have shown that Romanian people do not have selective collection bins close to their residences. The article aims to analyze the current situation in Romania in the field of waste collection and management and to propose a layout of selective collection containers, using geographic information systems tools, for a case study in Romania. Route optimization is performed based on remote sensing technologies and network analyst protocols. By optimizing the selective collection system, emissions of greenhouse gases, particles, and dust can be reduced.

  2. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of the data, and a band/feature selection method needs to be used for selecting an optimal subset of the original data bands. This study examined the efficiency of a genetic algorithm (GA) in band selection for remote sensing classification. A GA-based band selection algorithm was designed in which a Bhattacharyya distance index, indicating the separability between classes of interest, is used as the fitness function. A binary string chromosome is designed in which each gene location has a value of 1 if the corresponding band is included in the subset and 0 if it is not. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
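
    As a rough illustration of this approach, the sketch below (an assumption-laden simplification, not the authors' MATLAB implementation) evolves binary band-selection chromosomes with a GA and scores each chromosome by the Bhattacharyya distance between two classes, computed on the selected bands under a Gaussian class model. The synthetic data, population size, and GA operators are toy choices.

```python
# Hedged sketch of GA-based band selection: binary chromosomes mark which
# bands are kept; fitness is the Bhattacharyya distance between two classes
# computed on the selected bands (Gaussian class models assumed).
import numpy as np

rng = np.random.default_rng(1)

def bhattacharyya(X1, X2):
    """Bhattacharyya distance between two classes under Gaussian assumptions."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False) + 1e-6 * np.eye(X1.shape[1])
    S2 = np.cov(X2, rowvar=False) + 1e-6 * np.eye(X2.shape[1])
    S = 0.5 * (S1 + S2)
    diff = m1 - m2
    term1 = 0.125 * diff @ np.linalg.solve(S, diff)
    _, logdet_s = np.linalg.slogdet(S)
    _, logdet_1 = np.linalg.slogdet(S1)
    _, logdet_2 = np.linalg.slogdet(S2)
    return term1 + 0.5 * (logdet_s - 0.5 * (logdet_1 + logdet_2))

def fitness(chrom, X1, X2):
    idx = np.flatnonzero(chrom)
    if idx.size < 2:
        return -np.inf
    return bhattacharyya(X1[:, idx], X2[:, idx])

# Toy hyperspectral-like data: 60 samples per class, 50 bands.
n_bands = 50
X1 = rng.normal(0.0, 1.0, (60, n_bands))
X2 = rng.normal(0.3, 1.0, (60, n_bands))

pop = rng.integers(0, 2, (30, n_bands))                  # random initial population
for _ in range(40):
    scores = np.array([fitness(c, X1, X2) for c in pop])
    parents = pop[np.argsort(scores)[-15:]]              # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, len(parents), 2)]
        cut = rng.integers(1, n_bands)                    # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_bands) < 0.02                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(c, X1, X2) for c in pop])]
print("selected bands:", np.flatnonzero(best))
```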

  3. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060
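
    The constriction-coefficient variant that the survey singles out can be sketched generically as below; this is a minimal global-best (star-topology) version applied to a toy range-based localization objective, so it does not reproduce the ring topology or the full benchmark of the paper. Anchor positions, noise level, and swarm settings are illustrative assumptions.

```python
# Sketch of PSO with Clerc's constriction coefficient, applied to a toy
# range-based localization objective (sum of squared range errors to anchors).
import numpy as np

rng = np.random.default_rng(2)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.1, 4)

def objective(p):
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

c1 = c2 = 2.05
phi = c1 + c2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))   # ~0.7298

n_particles, n_iter = 20, 100
x = rng.uniform(0, 10, (n_particles, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x = x + v
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("estimated position:", np.round(gbest, 2), "true:", true_pos)
```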

  4. Ant Colony Optimization Based Feature Selection Method for QEEG Data Classification

    PubMed Central

    Ozekes, Serhat; Gultekin, Selahattin; Tarhan, Nevzat

    2014-01-01

    Objective: Many applications, such as biomedical signal processing, require selecting a subset of the input features that represents the whole feature set. A feature selection algorithm has recently been proposed as a new approach for feature subset selection. Methods: A feature selection process using ant colony optimization (ACO) for 6-channel pre-treatment electroencephalogram (EEG) data from the theta and delta frequency bands is combined with a back propagation neural network (BPNN) classification method for 147 major depressive disorder (MDD) subjects. Results: BPNN classified R subjects with 91.83% overall accuracy and 95.55% subject detection sensitivity. The area under the ROC curve (AUC) after feature selection increased from 0.8531 to 0.911. The features selected by the optimization algorithm for the theta frequency band were Fp1, Fp2, F7, F8, and F3, reducing the feature subset from 12 to 5 features. Conclusion: The ACO feature selection algorithm improves the classification accuracy of BPNN. Using other feature selection algorithms or classifiers to compare the performance of each approach would be important to underline the validity and versatility of the designed combination. PMID:25110496
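
    A minimal sketch of the ACO ingredient described here, under stated assumptions: each ant samples a feature subset with probability proportional to pheromone^alpha times heuristic^beta, and pheromone is evaporated and then reinforced on the best subset of the iteration. The heuristic scores and the stand-in quality function below are placeholders for the EEG features and the BPNN accuracy used in the paper.

```python
# Hedged sketch of ACO-style feature selection: ants build subsets by sampling
# features with probability ~ pheromone**alpha * heuristic**beta; pheromone is
# reinforced on features appearing in the best subset of each iteration.
import numpy as np

rng = np.random.default_rng(3)
n_features, subset_size = 12, 5
alpha, beta_, rho = 1.0, 2.0, 0.1

# Placeholder per-feature heuristic and relevance; in the paper this role is
# played by EEG band features and classifier feedback.
heuristic = rng.random(n_features) + 0.1
relevance = heuristic + rng.normal(0, 0.05, n_features)

def subset_quality(subset):
    return relevance[list(subset)].sum()          # stand-in for classifier accuracy

pheromone = np.ones(n_features)
for _ in range(50):
    subsets = []
    for _ant in range(10):
        chosen = set()
        while len(chosen) < subset_size:
            weights = (pheromone ** alpha) * (heuristic ** beta_)
            weights[list(chosen)] = 0.0           # no repeats within one subset
            probs = weights / weights.sum()
            chosen.add(rng.choice(n_features, p=probs))
        subsets.append(chosen)
    pheromone *= (1.0 - rho)                      # evaporation
    best = max(subsets, key=subset_quality)
    pheromone[list(best)] += subset_quality(best) # reinforce the best subset

print("most reinforced features:", np.argsort(pheromone)[-subset_size:])
```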

  5. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    PubMed

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
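
    The multi-objective element can be illustrated independently of the PSO machinery: given candidate feature subsets evaluated on two objectives to be minimized (classification error and feature cost), the Pareto front is the set of non-dominated candidates. The sketch below uses random objective values purely for illustration.

```python
# Minimal sketch of the multi-objective ingredient: keep the non-dominated
# candidates under two objectives to be minimized simultaneously,
# classification error and total feature cost.  Objective values are toy data.
import numpy as np

rng = np.random.default_rng(4)

# Each row: (classification error, total cost of the selected features).
candidates = rng.random((50, 2))

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return np.all(a <= b) and np.any(a < b)

pareto_front = [c for i, c in enumerate(candidates)
                if not any(dominates(other, c)
                           for j, other in enumerate(candidates) if j != i)]

for error, cost in sorted(pareto_front, key=lambda r: r[0]):
    print(f"error = {error:.3f}, cost = {cost:.3f}")
```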

  6. Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection.

    PubMed

    Sun, Jun; Fang, Wei; Wu, Xiaojun; Palade, Vasile; Xu, Wenbo

    2012-01-01

    Quantum-behaved particle swarm optimization (QPSO), motivated by concepts from quantum mechanics and particle swarm optimization (PSO), is a probabilistic optimization algorithm belonging to the bare-bones PSO family. Although it has been shown to perform well in finding the optimal solutions for many optimization problems, there has so far been little analysis on how it works in detail. This paper presents a comprehensive analysis of the QPSO algorithm. In the theoretical analysis, we analyze the behavior of a single particle in QPSO in terms of probability measure. Since the particle's behavior is influenced by the contraction-expansion (CE) coefficient, which is the most important parameter of the algorithm, the goal of the theoretical analysis is to find out the upper bound of the CE coefficient, within which the value of the CE coefficient selected can guarantee the convergence or boundedness of the particle's position. In the experimental analysis, the theoretical results are first validated by stochastic simulations for the particle's behavior. Then, based on the derived upper bound of the CE coefficient, we perform empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient, in order to obtain generally good algorithmic performance in real world applications. Finally, a further performance comparison between QPSO and other variants of PSO on the benchmarks is made to show the efficiency of the QPSO algorithm with the proposed parameter control and selection methods.
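
    A compact sketch of the QPSO position update as it is commonly written (an assumed common formulation, not taken verbatim from the paper): each particle jumps around a local attractor p, a random convex mix of its personal best and the global best, with step size scaled by the contraction-expansion coefficient beta and the distance to the mean of the personal bests (mbest). The linearly decreasing beta schedule and the sphere benchmark are illustrative.

```python
# Sketch of the quantum-behaved PSO update: x = p +/- beta*|mbest - x|*ln(1/u),
# where p mixes the personal and global bests and beta is the CE coefficient.
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):                                      # toy benchmark to minimize
    return np.sum(x ** 2, axis=-1)

n_particles, dim, n_iter = 30, 10, 200
x = rng.uniform(-5, 5, (n_particles, dim))
pbest = x.copy()
pbest_val = sphere(pbest)
gbest = pbest[np.argmin(pbest_val)]

for it in range(n_iter):
    beta = 1.0 - 0.5 * it / n_iter                  # CE coefficient, 1.0 -> 0.5
    mbest = pbest.mean(axis=0)
    phi = rng.random((n_particles, dim))
    p = phi * pbest + (1.0 - phi) * gbest           # local attractors
    u = rng.uniform(1e-12, 1.0, (n_particles, dim))
    sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
    x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
    vals = sphere(x)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", sphere(gbest))
```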

  7. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850
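
    The distinguishing mechanism, the V-shaped transfer function, can be shown in a few lines: the real-valued velocity is mapped through V(v) = |tanh(v)|, which is interpreted as the probability of flipping the corresponding bit rather than of setting it directly. The choice of |tanh| and the toy vectors below are assumptions for illustration.

```python
# Sketch of the V-shaped transfer step in binary PSO: the velocity magnitude,
# mapped through V(v) = |tanh(v)|, gives the probability of *flipping* the
# corresponding bit (feature in / out), unlike S-shaped transfer functions
# that set the bit directly from a sigmoid.
import numpy as np

rng = np.random.default_rng(6)

def v_shaped(v):
    return np.abs(np.tanh(v))

def update_bits(bits, velocity):
    flip = rng.random(bits.shape) < v_shaped(velocity)
    return np.where(flip, 1 - bits, bits)

bits = rng.integers(0, 2, 20)            # current feature subset (1 = selected)
velocity = rng.normal(0, 1.5, 20)        # velocity from the usual PSO update
print("before:", bits)
print("after: ", update_bits(bits, velocity))
```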

  8. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. Methods: An image dataset including 1600 regions of interest (ROIs) in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions) was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, and local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) A sequential floating forward selection (SFFS) method, (3) A genetic algorithm (GA), and (4) A sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Results: Among these four methods, SFFS had the highest efficacy, requiring only 3%–5% of the computational time of the GA approach and yielding the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC
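
    Since SFFS came out best in this comparison, a generic sketch of the method may help: a forward step adds the single most useful feature, and a floating step then removes any feature whose exclusion improves the criterion. The criterion below is a cross-validated logistic-regression score on synthetic data (scikit-learn assumed), standing in for the ANN-based criterion of the paper.

```python
# Hedged sketch of sequential floating forward selection (SFFS) with a
# cross-validated classifier score as the selection criterion.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def score(features):
    if not features:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, sorted(features)], y, cv=5).mean()

selected, target_size = set(), 8
while len(selected) < target_size:
    # Forward step: add the single best feature.
    best_add = max(set(range(X.shape[1])) - selected, key=lambda f: score(selected | {f}))
    selected.add(best_add)
    # Floating step: drop any feature (except the one just added) whose
    # removal improves the criterion.
    improved = True
    while improved and len(selected) > 2:
        improved = False
        for f in list(selected - {best_add}):
            if score(selected - {f}) > score(selected):
                selected.remove(f)
                improved = True

print("selected features:", sorted(selected))
```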

  9. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    ERIC Educational Resources Information Center

    Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.

    2016-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…

  10. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    PubMed Central

    Liu, Sheng; Jin, Haiqiang; Mao, Xiaojun; Zhai, Binbin; Zhan, Ye; Feng, Xiaofei

    2013-01-01

    This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, for obtaining accurate matching cost, the original local stereo matching approach based on self-adapting matching window is integrated with two matching cost optimization strategies aiming at handling both borders and occlusion regions. Secondly, we employ a comprehensive smooth term to satisfy diverse smoothness request in real scene. Thirdly, a selective segmentation term is used for enforcing the plane trend constraints selectively on the corresponding segments to further improve the accuracy of depth results from object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches. PMID:23766717

  11. Optimization of a Dibenzodiazepine Hit to a Potent and Selective Allosteric PAK1 Inhibitor

    PubMed Central

    2015-01-01

    The discovery of inhibitors targeting novel allosteric kinase sites is very challenging. Such compounds, however, once identified could offer exquisite levels of selectivity across the kinome. Herein we report our structure-based optimization strategy of a dibenzodiazepine hit 1, discovered in a fragment-based screen, yielding highly potent and selective inhibitors of PAK1 such as 2 and 3. Compound 2 was cocrystallized with PAK1 to confirm binding to an allosteric site and to reveal novel key interactions. Compound 3 modulated PAK1 at the cellular level and due to its selectivity enabled valuable research to interrogate biological functions of the PAK1 kinase. PMID:26191365

  12. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
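
    The core of EGO is the expected-improvement (EI) acquisition: fit a surrogate to the runs completed so far, then rank candidate points by EI and run the most promising one. The sketch below assumes a scikit-learn Gaussian-process surrogate and a one-dimensional stand-in for the thermal model; the multi-start optimizer modification described in the record is represented only by scoring a dense candidate grid.

```python
# Sketch of the expected-improvement step at the core of EGO (minimization):
#   EI(x) = (f_best - mu(x)) * Phi(z) + sigma(x) * phi(z),  z = (f_best - mu)/sigma
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_model(x):                      # stand-in for the thermal simulation
    return np.sin(3 * x) + 0.5 * x

X_train = np.array([[0.1], [0.4], [0.9], [1.6], [2.8]])   # initial DoE points
y_train = expensive_model(X_train).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

candidates = np.linspace(0.0, 3.0, 400).reshape(-1, 1)
mu, sigma = gp.predict(candidates, return_std=True)
f_best = y_train.min()                       # current best (minimization)

z = (f_best - mu) / np.maximum(sigma, 1e-12)
ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

next_x = candidates[np.argmax(ei)]
print("next point to evaluate:", next_x)
```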

  13. A general method to select representative models for decision making and optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Shirangi, Mehrdad G.; Durlofsky, Louis J.

    2016-11-01

    The optimization of subsurface flow processes under geological uncertainty technically requires flow simulation to be performed over a large set of geological realizations for each function evaluation at every iteration of the optimizer. Because flow simulation over many permeability realizations (only permeability is considered to be uncertain in this study) may entail excessive computation, simulations are often performed for only a subset of 'representative' realizations. It is however challenging to identify a representative subset that provides flow statistics in close agreement with those from the full set, especially when the decision parameters (e.g., time-varying well pressures, well locations) are unknown a priori, as they are in optimization problems. In this work, we introduce a general framework, based on clustering, for selecting a representative subset of realizations for use in simulations involving 'new' sets of decision parameters. Prior to clustering, each realization is represented by a low-dimensional feature vector that contains a combination of permeability-based and flow-based quantities. Calculation of flow-based features requires the specification of a (base) flow problem and simulation over the full set of realizations. Permeability information is captured concisely through use of principal component analysis. By computing the difference between the flow response for the subset and the full set, we quantify the performance of various realization-selection methods. The impact of different weightings for flow and permeability information in the cluster-based selection procedure is assessed for a range of examples involving different types of decision parameters. These decision parameters are generated either randomly, in a manner that is consistent with the solutions proposed in global stochastic optimization procedures such as GA and PSO, or through perturbation around a base case, consistent with the solutions considered in pattern search

  14. A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization

    PubMed Central

    Tian, Shulin; Yang, Chenglin; Liu, Cheng

    2016-01-01

    The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally omits some failures during the selection, and this omission can introduce serious operational risks because these failures lead to severe propagation failures. This paper proposes a new failure sample selection method to solve the problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model; a new failure sample selection method is then proposed on the basis of the size of the SFPS. Compared with the traditional sampling plan, this method improves the coverage of tested failure samples, increases diagnostic capacity, and decreases the risk of use. PMID:27738424

  15. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    SciTech Connect

    Kim, Minsun Stewart, Robert D.; Phillips, Mark H.

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (Td), and the size and location of tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (Dmean ≤ 45 Gy), lungs (Dmean ≤ 20 Gy), cord (Dmax ≤ 45 Gy), esophagus (Dmax ≤ 63 Gy), and unspecified tissues (D05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D95 of tumor BED, as well as the equivalent uniform dose (EUD) for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of Td (3–100 days), tumor lag-time (Tk = 0–10 days), and the size of tumors on optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D95 were up to 19%, 21%, 20%, and 19% larger than those from conventional prescription, depending on Td and Tk used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating
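
    A minimal sketch of the fraction-number trade-off behind this record, under the common textbook form of the linear-quadratic BED with a repopulation correction, BED = N·d·(1 + d/(α/β)) − (ln 2/α)·max(T − Tk, 0)/Td. The parameter values, the one-fraction-per-day assumption, and the fixed-total-dose constraint below are illustrative; the paper instead enforces equivalent OAR constraints through full spatiotemporal optimization.

```python
# Sketch of how tumor BED varies with the number of fractions N under the
# linear-quadratic model with a simple repopulation correction.  All values
# are illustrative, not the paper's.
import numpy as np

alpha_beta = 10.0    # Gy, tumor fractionation sensitivity (alpha/beta)
alpha = 0.3          # 1/Gy
Td, Tk = 5.0, 7.0    # effective doubling time and lag time (days)
total_dose = 60.0    # Gy, fixed physical dose redistributed over N fractions

def tumor_bed(n_fractions, dose_per_fraction):
    T = n_fractions  # crude assumption: one fraction per day
    repop = (np.log(2) / alpha) * max(T - Tk, 0.0) / Td
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta) - repop

for n in (5, 10, 15, 20, 30):
    d = total_dose / n
    print(f"N = {n:2d}, d = {d:4.1f} Gy/fx, tumor BED = {tumor_bed(n, d):6.1f} Gy")
```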

  16. Optimizing SNR for indoor visible light communication via selecting communicating LEDs

    NASA Astrophysics Data System (ADS)

    Wang, Lang; Wang, Chunyue; Chi, Xuefen; Zhao, Linlin; Dong, Xiaoli

    2017-03-01

    In this paper, we investigate the LED layout that optimizes SNR by selecting communicating LEDs (C-LEDs) in an indoor visible light communication (VLC) system. Due to the inter-symbol interference (ISI) caused by the different arrival times of different optical rays, the SNR for any user is not optimal if a simple layout is adopted, so it is worthwhile to investigate the LED layout that achieves optimal SNR in an indoor VLC system. For a single user, the LED signal is divided into positive and negative components, which provide the power of the desired signal and the power of the ISI, respectively. We introduce the concept of the valid ratio (VR), which is the value of the positive component over the negative component, and propose a VR threshold-based LED selection scheme that chooses C-LEDs by their VRs. For downlink broadcast VLC with multiple users, the SNRs of all users differ for a given layout of C-LEDs, and it is difficult to find a layout that guarantees the BER of all users. To solve this problem, we propose an evolutionary algorithm (EA)-based scheme to optimize the SNR. The simulation results show that selecting C-LEDs in this way is an effective method to improve SNR.

  17. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  18. Isotope selective photoionization of NaK by optimal control: theory and experiment.

    PubMed

    Schäfer-Bung, Boris; Bonacić-Koutecký, Vlasta; Sauer, Franziska; Weber, Stefan M; Wöste, Ludger; Lindinger, Albrecht

    2006-12-07

    We present a joint theoretical and experimental study of the maximization of the isotopomer ratio (23)Na(39)K/(23)Na(41)K using tailored phase-only as well as amplitude- and phase-modulated femtosecond laser fields obtained in the framework of optimal control theory and the closed loop learning (CLL) technique. Good agreement between theoretically and experimentally optimized pulse shapes is achieved, which makes it possible to assign the optimized processes directly to the pulse shapes obtained by the experimental isotopomer-selective CLL approach. By analyzing the dynamics induced by the optimized pulses, we show that a mechanism involving the dephasing of the wave packets of the isotopomers (23)Na(39)K and (23)Na(41)K on the first excited state is responsible for highly isotope-selective ionization. Amplitude- and phase-modulated pulses, moreover, make it possible to establish the connection between the spectral components of the pulse and the corresponding occupied vibronic states. It is also shown that the leading features of the theoretically shaped pulses are independent of the initial conditions. Since the underlying processes can be assigned to individual features of the shaped pulses, we show that optimal control can be used as a tool for analysis.

  19. An experimental and theoretical investigation of a fuel system tuner for the suppression of combustion driven oscillations

    NASA Astrophysics Data System (ADS)

    Scarborough, David E.

    Manufacturers of commercial, power-generating, gas turbine engines continue to develop combustors that produce lower emissions of nitrogen oxides (NOx) in order to meet the environmental standards of governments around the world. Lean, premixed combustion technology is one technique used to reduce NOx emissions in many current power and energy generating systems. However, lean, premixed combustors are susceptible to thermo-acoustic oscillations, which are pressure and heat-release fluctuations that occur because of a coupling between the combustion process and the natural acoustic modes of the system. These pressure oscillations lead to premature failure of system components, resulting in very costly maintenance and downtime. Therefore, a great deal of work has gone into developing methods to prevent or eliminate these combustion instabilities. This dissertation presents the results of a theoretical and experimental investigation of a novel Fuel System Tuner (FST) used to damp detrimental combustion oscillations in a gas turbine combustor by changing the fuel supply system impedance, which controls the amplitude and phase of the fuel flowrate. When the FST is properly tuned, the heat release oscillations resulting from the fuel-air ratio oscillations damp, rather than drive, the combustor acoustic pressure oscillations. A feasibility study was conducted to prove the validity of the basic idea and to develop some basic guidelines for designing the FST. Acoustic models for the subcomponents of the FST were developed, and these models were experimentally verified using a two-microphone impedance tube. Models useful for designing, analyzing, and predicting the performance of the FST were developed and used to demonstrate the effectiveness of the FST. Experimental tests showed that the FST reduced the acoustic pressure amplitude of an unstable, model, gas-turbine combustor over a wide range of operating conditions and combustor configurations. Finally, combustor

  20. In Vitro Selection of Optimal DNA Substrates for Ligation by a Water-Soluble Carbodiimide

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1994-01-01

    We have used in vitro selection to investigate the sequence requirements for efficient template-directed ligation of oligonucleotides at 0 deg C using a water-soluble carbodiimide as condensing agent. We find that only 2 bp at each side of the ligation junction are needed. We also studied chemical ligation of substrate ensembles that we have previously selected as optimal by RNA ligase or by DNA ligase. As anticipated, we find that substrates selected with DNA ligase ligate efficiently with a chemical ligating agent, and vice versa. Substrates selected using RNA ligase are not ligated by the chemical condensing agent and vice versa. The implications of these results for prebiotic chemistry are discussed.

  1. Optoelectronic optimization of mode selective converter based on liquid crystal on silicon

    NASA Astrophysics Data System (ADS)

    Wang, Yongjiao; Liang, Lei; Yu, Dawei; Fu, Songnian

    2016-03-01

    We carry out a comprehensive optoelectronic optimization of a mode selective converter used for mode division multiplexing, based on liquid crystal on silicon (LCOS) operated in binary mode. The conversion error of the digital-to-analog converter (DAC) driving the LCOS is investigated quantitatively for the application of mode selective conversion. Results indicate that the DAC must have a resolution of 8 bits in order to achieve a high mode extinction ratio (MER) of 28 dB. In addition, both the fast-axis position error of the half-wave plate (HWP) and the rotation angle error of the Faraday rotator (FR) have a negative influence on the performance of mode selective conversion. However, commercial products provide enough angle error tolerance for the LCOS-based mode selective converter, taking both insertion loss (IL) and MER into account.

  2. Discovery, Optimization, and Characterization of Novel D2 Dopamine Receptor Selective Antagonists

    PubMed Central

    2015-01-01

    The D2 dopamine receptor (D2 DAR) is one of the most validated drug targets for neuropsychiatric and endocrine disorders. However, clinically approved drugs targeting D2 DAR display poor selectivity between the D2 and other receptors, especially the D3 DAR. This lack of selectivity may lead to undesirable side effects. Here we describe the chemical and pharmacological characterization of a novel D2 DAR antagonist series with excellent D2 versus D1, D3, D4, and D5 receptor selectivity. The final probe 65 was obtained through a quantitative high-throughput screening campaign, followed by medicinal chemistry optimization, to yield a selective molecule with good in vitro physical properties, metabolic stability, and in vivo pharmacokinetics. The optimized molecule may be a useful in vivo probe for studying D2 DAR signal modulation and could also serve as a lead compound for the development of D2 DAR-selective druglike molecules for the treatment of multiple neuropsychiatric and endocrine disorders. PMID:24666157

  3. An Approach to Feature Selection Based on Ant Colony Optimization and Rough Set

    NASA Astrophysics Data System (ADS)

    Wu, Junyun; Qiu, Taorong; Wang, Lu; Huang, Haiquan

    Feature selection plays an important role in many fields. This paper proposes a feature selection method that combines the rough set method with an ant colony optimization algorithm. The algorithm uses attribute dependency and attribute importance as the heuristic factors applied in the transition rules. Furthermore, the quality of classification based on the rough set method and the length of the feature subset are used to build the pheromone update strategy. Tests on data sets show that the proposed method is feasible.

  4. Discovery of GSK2656157: An Optimized PERK Inhibitor Selected for Preclinical Development.

    PubMed

    Axten, Jeffrey M; Romeril, Stuart P; Shu, Arthur; Ralph, Jeffrey; Medina, Jesús R; Feng, Yanhong; Li, William Hoi Hong; Grant, Seth W; Heerding, Dirk A; Minthorn, Elisabeth; Mencken, Thomas; Gaul, Nathan; Goetz, Aaron; Stanley, Thomas; Hassell, Annie M; Gampe, Robert T; Atkins, Charity; Kumar, Rakesh

    2013-10-10

    We recently reported the discovery of GSK2606414 (1), a selective first in class inhibitor of protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK), which inhibited PERK activation in cells and demonstrated tumor growth inhibition in a human tumor xenograft in mice. In continuation of our drug discovery program, we applied a strategy to decrease inhibitor lipophilicity as a means to improve physical properties and pharmacokinetics. This report describes our medicinal chemistry optimization culminating in the discovery of the PERK inhibitor GSK2656157 (6), which was selected for advancement to preclinical development.

  5. Maximal area and conformal welding heuristics for optimal slice selection in splenic volume estimation

    NASA Astrophysics Data System (ADS)

    Gutenko, Ievgeniia; Peng, Hao; Gu, Xianfeng; Barish, Mathew; Kaufman, Arie

    2016-03-01

    Accurate estimation of splenic volume is crucial for the determination of disease progression and response to treatment for diseases that result in enlargement of the spleen. However, there is no consensus with respect to the use of single or multiple one-dimensional, or volumetric measurement. Existing methods for human reviewers focus on measurement of cross diameters on a representative axial slice and craniocaudal length of the organ. We propose two heuristics for the selection of the optimal axial plane for splenic volume estimation: the maximal area axial measurement heuristic and the novel conformal welding shape-based heuristic. We evaluate these heuristics on time-variant data derived from both healthy and sick subjects and contrast them to established heuristics. Under certain conditions our heuristics are superior to standard practice volumetric estimation methods. We conclude by providing guidance on selecting the optimal heuristic for splenic volume estimation.

  6. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    NASA Astrophysics Data System (ADS)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  7. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1992-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  8. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  9. Imaging multicellular specimens with real-time optimized tiling light-sheet selective plane illumination microscopy

    PubMed Central

    Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang

    2016-01-01

    Despite the progress made in selective plane illumination microscopy, high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to respond to the challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying left-right symmetry breaking behaviour of C. elegans embryos. PMID:27004937

  10. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors

    PubMed Central

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the application of classification of a set of ginsengs using a metal-oxide sensor-based homemade electronic nose with linear discriminant analysis. Samples (315) were measured for nine kinds of ginsengs using 12 sensors. We investigated the classification performances of combinations of 12 sensors for the overall discrimination of combinations of nine ginsengs. The minimum numbers of sensors for discriminating each sample set to obtain an optimal classification performance were defined. The relation of the minimum numbers of sensors with number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach of sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212

  11. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since flow-based information collection incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  12. Selection, optimization, and compensation as strategies of life management: correction to Freund and Baltes (1998)

    PubMed

    Freund; Baltes

    1999-12-01

    Because of a scoring error, the data reported in Freund and Baltes (1998) were reanalyzed. Except for finding a lower positive manifold involving the 3 components of selection, optimization, and compensation (SOC), the outcome of this reanalysis supports the major findings previously reported: Old and very old participants of the Berlin Aging Study reporting SOC-related behaviors also reported higher levels of well-being and aging well. Corrected versions of Tables 3, 6, and 7 are presented.

  13. Ant-cuckoo colony optimization for feature selection in digital mammogram.

    PubMed

    Jona, J B; Nagaveni, N

    2014-01-15

    Digital mammography is the only effective screening method for detecting breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram, but not all of the features are essential for classification; identifying the relevant features is therefore the aim of this work. Feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that ants tend to walk along paths where the pheromone density is high, which slows the whole process; CS is therefore employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used together with the ACO to distinguish normal from abnormal mammograms. Experiments are conducted on the mini-MIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms. The results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques.

  14. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis

    PubMed Central

    Li, Qiang; Zhao, Xuehua; Cai, ZhenNao; Tong, Changfei; Liu, Wenbin; Tian, Xin

    2017-01-01

    In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update the current positions of population in the discrete searching space, thus getting the optimal feature subset for the better classification purpose based on KELM. The proposed approach is compared against the original GA and GWO on the two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results have proven the superiority of the proposed method over the other two competitive counterparts. PMID:28246543

  15. Optimal landing site selection based on safety index during planetary descent

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Ge, Dantong; Gao, Ai

    2017-03-01

    Landing safety is the primary concern in planetary exploration missions. With the development of precise landing technology, future missions require vehicles to land at places of great scientific interest, which are usually surrounded by rocks and craters. In order to perform a safe landing, the vehicle should be capable of detecting hazards, estimating its fuel consumption and touchdown performance, and locating a safe spot to land. The landing site selection process can be treated as an optimization problem which, however, cannot be solved efficiently by traditional optimization methods due to its complexity. Hence, the paper proposes a synthetic landing area assessment criterion, the safety index, as a solution to the problem; it selects the best landing site by assessing terrain safety, fuel consumption, and touchdown performance during descent. The computational effort is cut down by reducing the selection scope, and the optimal landing site is found through a quick one-dimensional search. A typical example based on the Mars Science Laboratory mission is simulated to demonstrate the capability of the method, and it shows that the proposed strategy effectively picks out a safe landing site for the mission. The safety index can be applied in various planetary descent phases and provides a reference for future mission designs.

  16. Successful aging at work: an applied study of selection, optimization, and compensation through impression management.

    PubMed

    Abraham, J D; Hansson, R O

    1995-03-01

    Although many abilities basic to human performance appear to decrease with age, research has shown that job performance does not generally show comparable declines. Baltes and Baltes (1990) have proposed a model of successful aging involving Selection, Optimization, and Compensation (SOC), that may help explain how individuals maintain important competencies despite age-related losses. In the present study, involving a total of 224 working adults ranging in age from 40 to 69 years, occupational measures of Selection, Optimization, and Compensation through impression management (Compensation-IM) were developed. The three measures were factorially distinct and reliable (Cronbach's alpha > .80). Moderated regression analyses indicated that: (1) the relationship between Selection and self-reported ability/performance maintenance increased with age (p < or = .05); and (2) the relationship between both Optimization and Compensation-IM and goal attainment (i.e., importance-weighted ability/performance maintenance) increased with age (p < or = .05). Results suggest that the SOC model of successful aging may be useful in explaining how older workers can maintain important job competencies. Correlational evidence also suggests, however, that characteristics of the job, workplace, and individual may mediate the initiation and effectiveness of SOC behaviors.

  17. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    SciTech Connect

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram; Baumgartner, Gerald; Ramanujam, J.; Sadayappan, Ponnuswamy

    2012-03-01

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

  18. A topography analysis incorporated optimization method for the selection and placement of best management practices.

    PubMed

    Shen, Zhenyao; Chen, Lei; Xu, Liang

    2013-01-01

    Best Management Practices (BMPs) are one of the most effective methods to control nonpoint source (NPS) pollution at a watershed scale. In this paper, the use of a topography analysis incorporated optimization method (TAIOM) was proposed, which integrates topography analysis with cost-effective optimization. The surface status, slope and the type of land use were evaluated as inputs for the optimization engine. A genetic algorithm program was coded to obtain the final optimization. The TAIOM was validated in conjunction with the Soil and Water Assessment Tool (SWAT) in the Yulin watershed in Southwestern China. The results showed that the TAIOM was more cost-effective than traditional optimization methods. The distribution of selected BMPs throughout landscapes comprising relatively flat plains and gentle slopes, suggests the need for a more operationally effective scheme, such as the TAIOM, to determine the practicability of BMPs before widespread adoption. The TAIOM developed in this study can easily be extended to other watersheds to help decision makers control NPS pollution.

  19. A multi-objective optimization tool for the selection and placement of BMPs for pesticide control

    NASA Astrophysics Data System (ADS)

    Maringanti, C.; Chaubey, I.; Arabi, M.; Engel, B.

    2008-07-01

    Pesticides (particularly atrazine used in corn fields) are the foremost source of water contamination in many of the water bodies in the Midwestern corn belt, exceeding the 3 ppb MCL established by the U.S. EPA for drinking water. Best management practices (BMPs), such as buffer strips and land management practices, have been proven to effectively reduce the pesticide pollution loads from agricultural areas. However, selection and placement of BMPs in watersheds to achieve an ecologically effective and economically feasible solution is a daunting task. BMP placement decisions under such complex conditions require a multi-objective optimization algorithm that searches for the best possible solution satisfying the given watershed management objectives. Genetic algorithms (GA) have been the most popular optimization algorithms for the BMP selection and placement problem. Most optimization models also had a dynamic linkage with the water quality model, which increased the computation time considerably, thus restricting their application to field-scale or relatively small (11- or 14-digit HUC) watersheds. However, most previous works have considered the two objectives individually during the optimization process by introducing a constraint on the other objective, thereby decreasing the degrees of freedom available to find the solution. In this study, the optimization for atrazine reduction is performed by considering the two objectives simultaneously using a multi-objective genetic algorithm (NSGA-II). The limitation of the dynamic linkage with a distributed parameter watershed model was overcome through the utilization of a BMP tool, a database that stores the pollution reduction and cost information of the different BMPs under consideration. The model was used for the selection and placement of BMPs in the Wildcat Creek Watershed (located in Indiana) for atrazine reduction. The most ecologically effective solution from the model had an annual atrazine concentration reduction

  20. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    PubMed

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for the support vector machine (SVM), involving selection of the kernel and margin parameter values, is usually time-consuming and greatly affects the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM) to improve the training efficiency of the SVM model. An experiment classifying an AVIRIS image of the Indian Pines site (USA) was then performed to test the novel CSSVM, with a traditional SVM classifier tuned by general grid-search cross-validation (GSSVM) used for comparison. Evaluation indexes including SVM model training time, overall classification accuracy (OA), and the Kappa index of both CSSVM and GSSVM were analyzed quantitatively. The OA of CSSVM on the test samples and the whole image is 85.1% and 81.58%, respectively, with differences from GSSVM within 0.08%; the Kappa indexes reach 0.8213 and 0.7728, with differences from GSSVM within 0.001; and the ratio of the model training time of CSSVM to that of GSSVM is between 1/6 and 1/10. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.

  1. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from portfolios containing a large number of securities. The past record of each security alone does not guarantee its future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred because it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.
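
    A minimal sketch of the crisp mean semi-absolute deviation LP that such a model reduces to is given below, using SciPy's linprog; the fuzzy-random return treatment, the λ parameter vector, and the ACO solver of the paper are not reproduced, and the return scenarios are synthetic.

```python
# Sketch of the crisp mean semi-absolute deviation (MSAD) portfolio LP.
# Return scenarios, the number of securities and the target return are made up.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, n = 60, 4                        # hypothetical: 60 periods, 4 securities
returns = rng.normal(loc=[0.010, 0.008, 0.012, 0.006],
                     scale=[0.05, 0.03, 0.07, 0.02], size=(T, n))
mean_ret = returns.mean(axis=0)
target = float(np.median(mean_ret))  # required expected return (achievable by construction)

# Decision vector z = [x_1..x_n, d_1..d_T]; minimize (1/T) * sum(d_t)
c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])

# d_t >= sum_j (mean_j - r_jt) x_j   <=>   (mean - r_t) @ x - d_t <= 0
A_dev = np.hstack([mean_ret - returns, -np.eye(T)])
b_dev = np.zeros(T)
# expected-return constraint: -mean @ x <= -target
A_ret = np.concatenate([-mean_ret, np.zeros(T)])[None, :]
A_ub = np.vstack([A_dev, A_ret])
b_ub = np.append(b_dev, -target)
# budget constraint: sum_j x_j = 1
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]
b_eq = [1.0]
bounds = [(0, 1)] * n + [(0, None)] * T   # long-only weights, d_t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("weights:", np.round(res.x[:n], 3), " MSAD risk:", round(res.fun, 5))
```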

  2. Temporal artifact minimization in sonoelastography through optimal selection of imaging parameters.

    PubMed

    Torres, Gabriela; Chau, Gustavo R; Parker, Kevin J; Castaneda, Benjamin; Lavarello, Roberto J

    2016-07-01

    Sonoelastography is an ultrasonic technique that uses Kasai's autocorrelation algorithms to generate qualitative images of tissue elasticity using external mechanical vibrations. In the absence of synchronization between the mechanical vibration device and the ultrasound system, the random initial phase and finite ensemble length of the data packets result in temporal artifacts in the sonoelastography frames and, consequently, in degraded image quality. In this work, the analytic derivation of an optimal selection of acquisition parameters (i.e., pulse repetition frequency, vibration frequency, and ensemble length) is developed in order to minimize these artifacts, thereby eliminating the need for complex device synchronization. The proposed rule was verified through experiments with heterogeneous phantoms, where the use of optimally selected parameters increased the average contrast-to-noise ratio (CNR) by more than 200% and reduced the CNR standard deviation by 400% when compared to the use of arbitrarily selected imaging parameters. Therefore, the results suggest that the rule for specific selection of acquisition parameters becomes an important tool for producing high quality sonoelastography images.

  3. On Sparse representation for Optimal Individualized Treatment Selection with Penalized Outcome Weighted Learning

    PubMed Central

    Song, Rui; Kosorok, Michael; Zeng, Donglin; Zhao, Yingqi; Laber, Eric; Yuan, Ming

    2015-01-01

    As a new strategy for treatment which takes individual heterogeneity into consideration, personalized medicine is of growing interest. Discovering individualized treatment rules (ITRs) for patients who have heterogeneous responses to treatment is one of the important areas in developing personalized medicine. As more and more information per individual is being collected in clinical studies and not all of the information is relevant for treatment discovery, variable selection becomes increasingly important in discovering individualized treatment rules. In this article, we develop a variable selection method based on penalized outcome weighted learning through which an optimal treatment rule is considered as a classification problem where each subject is weighted proportional to his or her clinical outcome. We show that the resulting estimator of the treatment rule is consistent and establish variable selection consistency and the asymptotic distribution of the estimators. The performance of the proposed approach is demonstrated via simulation studies and an analysis of chronic depression data. PMID:25883393

  4. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    PubMed

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is a popular and relatively new one among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons based on one specific indicator (a parameter describing safety or unsafety) can produce totally different rankings of the compared units, which makes it complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used the efficiencies (composite indexes) obtained by different DEA- and TOPSIS-based models to present the PROMETHEE-RS model for selecting the optimal composite-index method. The method for selecting the optimal composite index is based on three parameters (average correlation, average rank variation, and average cluster variation) fed into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia.
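
    As a concrete illustration of one of the composite-index building blocks mentioned above, the sketch below implements plain TOPSIS: normalize the decision matrix, weight it, and rank units by closeness to the ideal solution. The indicator values and weights are illustrative, not the study's data, and the DEA and PROMETHEE-RS steps are not shown.

```python
# Minimal TOPSIS sketch for ranking units (e.g., police departments) on
# several road-safety indicators; the indicator values and weights below
# are illustrative, not the study's data.
import numpy as np

# rows = units, columns = indicators (here all treated as "smaller is better")
X = np.array([[12.0, 0.8, 35.0],
              [ 9.0, 1.1, 28.0],
              [15.0, 0.6, 40.0],
              [10.0, 0.9, 30.0]])
weights = np.array([0.5, 0.3, 0.2])          # assumed indicator weights
benefit = np.array([False, False, False])    # all criteria are costs here

R = X / np.linalg.norm(X, axis=0)            # vector-normalize each column
V = R * weights                              # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))       # best per column
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))  # worst per column

d_plus = np.linalg.norm(V - ideal, axis=1)        # distance to ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1)  # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)          # TOPSIS composite index

for rank, i in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: unit {i}  closeness={closeness[i]:.3f}")
```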

  5. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589

  6. [Selection of background electrolyte in capillary zone electrophoresis by triangle and tetrahedron optimization methods].

    PubMed

    Sun, Guoxiang; Song, Wenjing; Lin, Ting

    2008-03-01

    The triangle and tetrahedron optimization methods were developed for the selection of the background electrolyte (BGE) in capillary zone electrophoresis (CZE). The chromatographic fingerprint index F and the chromatographic fingerprint relative index F(r) were used as the objective functions for the evaluation, and the aqueous extract of Saussurea involucrata was used as the sample. The BGE was composed of borax, boric acid, dibasic sodium phosphate and sodium dihydrogen phosphate solutions at different concentrations determined by the triangle and tetrahedron optimization methods. Re-optimization was carried out by adding an organic modifier to the BGE and adjusting the pH value. In the triangle method, when 50 mmol/L borax-150 mmol/L sodium dihydrogen phosphate (containing 3% acetonitrile) (1:1, v/v) was used as the BGE, the separation was considered satisfactory. In the tetrahedron method, the best BGE was 50 mmol/L borax-150 mmol/L sodium dihydrogen phosphate-200 mmol/L boric acid (1:1:2, v/v/v; pH adjusted to 8.55 with 0.1 mol/L sodium hydroxide). There were 28 peaks and 25 peaks under the two conditions, respectively. The results showed that the methods can be applied to the selection of the BGE in CZE of aqueous or ethanol extracts of traditional Chinese medicines.

  7. Memory control beliefs and everyday forgetfulness in adulthood: the effects of selection, optimization, and compensation strategies.

    PubMed

    Scheibner, Gunnar Benjamin; Leathem, Janet

    2012-01-01

    Controlling for age, gender, education, and self-rated health, the present study used regression analyses to examine the relationships between memory control beliefs and self-reported forgetfulness in the context of the meta-theory of Selective Optimization with Compensation (SOC). Findings from this online survey (N = 409) indicate that, among adult New Zealanders, a higher sense of memory control accounts for a 22.7% reduction in self-reported forgetfulness. Similarly, optimization was found to account for a 5% reduction in forgetfulness while the strategies of selection and compensation were not related to self-reports of forgetfulness. Optimization partially mediated the beneficial effects that some memory beliefs (e.g., believing that memory decline is inevitable and believing in the potential for memory improvement) have on forgetfulness. It was concluded that memory control beliefs are important predictors of self-reported forgetfulness while the support for the SOC model in the context of memory controllability and everyday forgetfulness is limited.

  8. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design.

    PubMed

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-07-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions.

  9. Particle swarm optimization for feature selection in classification: a multi-objective approach.

    PubMed

    Xue, Bing; Zhang, Mengjie; Browne, Will N

    2013-12-01

    Classification problems often have a large number of features in the data sets, but not all of them are useful for classification. Irrelevant and redundant features may even reduce the performance. Feature selection aims to choose a small number of relevant features to achieve similar or even better classification performance than using all features. It has two main conflicting objectives of maximizing the classification performance and minimizing the number of features. However, most existing feature selection algorithms treat the task as a single objective problem. This paper presents the first study on multi-objective particle swarm optimization (PSO) for feature selection. The task is to generate a Pareto front of nondominated solutions (feature subsets). We investigate two PSO-based multi-objective feature selection algorithms. The first algorithm introduces the idea of nondominated sorting into PSO to address feature selection problems. The second algorithm applies the ideas of crowding, mutation, and dominance to PSO to search for the Pareto front solutions. The two multi-objective algorithms are compared with two conventional feature selection methods, a single objective feature selection method, a two-stage feature selection algorithm, and three well-known evolutionary multi-objective algorithms on 12 benchmark data sets. The experimental results show that the two PSO-based multi-objective algorithms can automatically evolve a set of nondominated solutions. The first algorithm outperforms the two conventional methods, the single objective method, and the two-stage algorithm. It achieves comparable results with the existing three well-known multi-objective algorithms in most cases. The second algorithm achieves better results than the first algorithm and all other methods mentioned previously.

  10. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have both sufficient stiffness and large porosity, which conflict with each other since larger porosity leads to lower mechanical properties. This paper seeks the maximum-stiffness architecture under a constraint on volume fraction using a topology optimization approach; that is, the maximum porosity that can be achieved with predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7% were fabricated by the Selective Laser Melting (SLM) process and evaluated by compression testing. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were about 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process is a promising route for developing titanium implant materials that balance porosity and mechanical stiffness.

  11. Selection of optimal measures of growth and reproduction for the sublethal Leptocheirus plumulosus sediment bioassay

    SciTech Connect

    Gray, B.R.; Wright, R.B.; Duke, B.M.; Farrar, J.D.; Emery, V.L. Jr.; Brandon, D.L.; Moore, D.W.

    1998-11-01

    This article describes the selection process used to identify optimal measures of growth and reproduction for the proposed 28-d sublethal sediment bioassay with the estuarine amphipod Leptocheirus plumulosus. The authors used four criteria (relevance of each measure to its respective endpoint, signal-to-noise ratio, redundancy relative to other measures of the same endpoint, and cost) to evaluate nine growth and seven reproductive measures. Optimal endpoint measures were identified as those receiving relatively high scores for all or most criteria. Measures of growth scored similarly on all criteria, except for cost. The cost of the pooled (female plus male) growth measures was substantially lower than the cost of the female and male growth measures because the latter required more labor (by approx. 25 min per replicate). Pooled dry weight was identified as the optimal growth measure over pooled length because the latter required additional labor and nonstandard software and equipment. Embryo and neonate measures of reproduction exhibited wide differences in labor costs but yielded similar scores for other criteria. In contrast, brooding measures of reproduction scored relatively low on endpoint relevance, signal-to-noise ratio, and redundancy criteria. The authors recommend neonates/survivor as the optimal measure of L. plumulosus reproduction because it exhibited high endpoint relevance and signal-to-noise ratios, was redundant to other reproductive measures, and required minimal time.

  12. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection with SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order of explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
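
    A small sketch of the SVM-RFE step is shown below, using scikit-learn's RFE with a linear SVM to rank features and a plain grid search standing in for the Taguchi design; the toy wine dataset, the number of retained features, and the parameter grid are assumptions rather than the paper's setup.

```python
# Sketch of SVM-RFE feature ranking followed by a plain grid search over
# (C, gamma); the grid stands in for the Taguchi design used in the paper,
# and the toy dataset replaces the Dermatology/Zoo data.
from sklearn.datasets import load_wine
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

# Rank features with a linear SVM and keep the top 6 (RFE drops the feature
# with the smallest weight at each elimination step).
ranker = Pipeline([("scale", StandardScaler()),
                   ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=6))])
ranker.fit(X, y)
mask = ranker.named_steps["rfe"].support_
X_sel = X[:, mask]

# Tune C and gamma on the selected features (grid replaces the Taguchi array).
grid = GridSearchCV(
    Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))]),
    param_grid={"svm__C": [1, 10, 100], "svm__gamma": [0.001, 0.01, 0.1]},
    cv=5)
grid.fit(X_sel, y)
print("selected features:", mask.nonzero()[0])
print("best params:", grid.best_params_, " CV accuracy:", round(grid.best_score_, 3))
```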

  13. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection with SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order of explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.

  14. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

    In this paper, different approaches are considered for calculating the cosine factor that is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, and it is therefore more reliable to select one of the recommended time-averaged methods to optimize the field layout. The optimum annual weighted efficiencies for the small, medium, and large heliostat fields containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
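
    For reference, the instantaneous cosine factor evaluated by such layout codes follows from simple geometry: the mirror normal bisects the sun direction and the heliostat-to-receiver direction, and the cosine factor is the cosine of the resulting incidence angle. The sketch below computes it for placeholder sun vectors; it does not reproduce the Campo code or any solar-position model.

```python
# Geometric sketch of the instantaneous cosine factor used in heliostat
# layout codes.  The sun vectors below are placeholders, not the output of
# a solar-position model.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def cosine_factor(sun_dir, heliostat_pos, receiver_pos):
    """cos(incidence angle) for one heliostat at one instant."""
    s = unit(np.asarray(sun_dir, dtype=float))           # towards the sun
    t = unit(np.asarray(receiver_pos, dtype=float)
             - np.asarray(heliostat_pos, dtype=float))   # towards the receiver
    n = unit(s + t)                                       # mirror normal bisects s and t
    return float(np.dot(n, s))

heliostat = [80.0, 0.0, 0.0]          # m, east of the tower (hypothetical)
receiver = [0.0, 0.0, 100.0]          # tower-top receiver (hypothetical)

# A few hypothetical sun directions over a design day.
sun_dirs = [[0.5, -0.3, 0.8], [0.0, -0.4, 0.9], [-0.5, -0.3, 0.8]]
inst = [cosine_factor(s, heliostat, receiver) for s in sun_dirs]
print("instantaneous cosine factors:", [round(c, 3) for c in inst])
print("time-averaged cosine factor :", round(sum(inst) / len(inst), 3))
```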

  15. Optimal feature selection in the classification of synchronous fluorescence of petroleum oils

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khalid J.; Eastwood, DeLyle

    1996-03-01

    Pattern classification of UV-visible synchronous fluorescence spectra of petroleum oils is performed using a composite system developed by the authors. The system consists of three phases, namely feature extraction, feature selection, and pattern classification. Each of these phases is briefly reviewed, with particular focus on the feature selection method. Without assuming any particular classification algorithm, the method extracts as much information (features) from the spectra as conveniently possible and then applies the proposed successive feature elimination process to remove redundant features. From the remaining features a significantly smaller, yet optimal, feature subset is selected that enhances the recognition performance of the classifier. The successive feature elimination process and the optimal feature selection method are formally described and successfully applied to the classification of UV-visible synchronous fluorescence spectra. The features selected by the algorithm are used to classify twenty different sets of petroleum oils (the design set). A proximity index classifier using the Mahalanobis distance as the proximity criterion is developed using the smaller feature subset. The system was trained on the design set, on which the recognition performance was 100%. The recognition performance on the testing set was over 93%, with 28 out of 30 samples in six classes identified correctly. This performance is very encouraging. In addition, the method is computationally inexpensive and is equally useful for large data sets, as it always partitions the problem into a set of two-class problems. The method further reduces the burden of careful feature determination that a system designer usually faces during the initial design phase of a pattern classifier.
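
    The proximity index classifier mentioned above can be sketched directly: estimate class means and a pooled covariance from training spectra and assign each new spectrum to the class with the smallest Mahalanobis distance. The toy data below are random stand-ins for real fluorescence features.

```python
# Sketch of a proximity-index classifier that assigns a spectrum to the
# class with the smallest Mahalanobis distance, using a pooled covariance
# estimate; the toy "spectra" are random stand-ins for real fluorescence data.
import numpy as np

rng = np.random.default_rng(2)
n_per_class, n_features = 20, 8
classes = {"crude A": rng.normal(0.0, 1.0, (n_per_class, n_features)),
           "crude B": rng.normal(0.0, 1.0, (n_per_class, n_features)) + 2.0}

means = {c: X.mean(axis=0) for c, X in classes.items()}
pooled = sum(np.cov(X, rowvar=False) for X in classes.values()) / len(classes)
inv_pooled = np.linalg.inv(pooled)

def mahalanobis(x, mean):
    d = x - mean
    return float(d @ inv_pooled @ d)

def classify(x):
    """Assign x to the class whose mean is nearest in Mahalanobis distance."""
    return min(means, key=lambda c: mahalanobis(x, means[c]))

test_sample = rng.normal(2.0, 1.0, n_features)   # drawn near "crude B"
print("predicted class:", classify(test_sample))
```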

  16. Optimization of electron transfer dissociation via informed selection of reagents and operating parameters.

    PubMed

    Compton, Philip D; Strukl, Joseph V; Bai, Dina L; Shabanowitz, Jeffrey; Hunt, Donald F

    2012-02-07

    Electron transfer dissociation (ETD) has improved the mass spectrometric analysis of proteins and peptides with labile post-translational modifications and larger intact masses. Here, the parameters governing the reaction rate of ETD are examined experimentally. Currently, due to reagent injection and isolation events as well as longer reaction times, ETD spectra require significantly more time to acquire than collision-induced dissociation (CID) spectra (>100 ms), resulting in a trade-off in the dynamic range of tandem MS analyses when ETD-based methods are compared to CID-based methods. Through fine adjustment of reaction parameters and the selection of reagents with optimal characteristics, we demonstrate a drastic reduction in the time taken per ETD event. In fact, ETD can be performed with optimal efficiency in nearly the same time as CID at low precursor charge state (z = +3) and becomes faster at higher charge state (z > +3).

  17. Optimization of selection of chain amine scrubbers for CO2 capture.

    PubMed

    Al-Marri, Mohammed J; Khader, Mahmoud M; Giannelis, Emmanuel P; Shibl, Mohamed F

    2014-12-01

    In order to optimize the selection of a suitable amine molecule for CO2 scrubbers, a series of ab initio calculations were performed at the B3LYP/6-31+G(d,p) level of theory. Diethylenetriamine was used as a simple chain amine. Methyl and hydroxyl groups served as examples of electron donors, and electron withdrawing groups like trifluoromethyl and nitro substituents were also evaluated. Interaction distances and binding energies were employed as comparison operators. Moreover, natural bond orbital (NBO) analysis, namely the second order perturbation approach, was applied to determine whether the amine-CO2 interaction is chemical or physical. Different sizes of substituents affect the capture ability of diethylenetriamine. For instance, trifluoromethyl shields the nitrogen atom to which it attaches from the interaction with CO2. The results presented here provide a means of optimizing the choice of amine molecules for developing new amine scrubbers.

  18. Pretreatment of wastewater: optimal coagulant selection using Partial Order Scaling Analysis (POSA).

    PubMed

    Tzfati, Eran; Sein, Maya; Rubinov, Angelika; Raveh, Adi; Bick, Amos

    2011-06-15

    The jar test is a well-known tool for selecting chemicals for physical-chemical wastewater treatment. Jar test results show treatment efficiency in terms of suspended matter and organic matter removal. However, even with all these results, coagulant selection is not an easy task because one coagulant may remove suspended solids efficiently while at the same time increasing conductivity. This makes the final selection of coagulants very dependent on the relative importance assigned to each measured parameter. In this paper, the use of Partial Order Scaling Analysis (POSA) and multi-criteria decision analysis is proposed to support the selection of the coagulant and its concentration in a sequencing batch reactor (SBR). Starting from the parameters fixed by the jar-test results, these techniques allow the parameters to be weighted, according to the judgments of wastewater experts, and priorities to be established among coagulants. An evaluation of two commonly used coagulation/flocculation aids (alum and ferric chloride) was conducted, and based on the jar tests and the POSA model, ferric chloride (100 ppm) was the best choice. The results obtained show that POSA and multi-criteria techniques are useful tools for selecting the optimal chemicals for the physical-chemical treatment.

  19. Impact of cultivar selection and process optimization on ethanol yield from different varieties of sugarcane

    PubMed Central

    2014-01-01

    Background The development of 'energycane' varieties of sugarcane is underway, targeting the use of both sugar juice and bagasse for ethanol production. The current study evaluated a selection of such 'energycane' cultivars for the combined ethanol yields from juice and bagasse, by optimization of dilute acid pretreatment of bagasse for sugar yields. Method A central composite design under response surface methodology was used to investigate the effects of dilute acid pretreatment parameters followed by enzymatic hydrolysis on the combined sugar yield of bagasse samples. The pressed slurry generated from optimum pretreatment conditions (maximum combined sugar yield) was used as the substrate during batch and fed-batch simultaneous saccharification and fermentation (SSF) processes at different solid loadings and enzyme dosages, aiming to reach an ethanol concentration of at least 40 g/L. Results Significant variations were observed in sugar yields (xylose, glucose and combined sugar yield) from pretreatment-hydrolysis of bagasse from different cultivars of sugarcane. Up to a 33% difference in combined sugar yield between the best-performing varieties and industrial bagasse was observed at optimal pretreatment-hydrolysis conditions. Significant improvement in overall ethanol yield after SSF of the pretreated bagasse was also observed for the best-performing varieties (84.5 to 85.6%) compared to industrial bagasse (74.5%). The ethanol concentration showed inverse correlation with lignin content and the ratio of xylose to arabinose, but positive correlation with glucose yield from pretreatment-hydrolysis. The overall assessment of the cultivars showed greater improvement in the final ethanol concentration (26.9 to 33.9%) and combined ethanol yields per hectare (83 to 94%) for the best-performing varieties with respect to industrial sugarcane. Conclusions These results suggest that the selection of sugarcane variety to optimize ethanol

  20. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model

  1. Optimal selection of space transportation fleet to meet multi-mission space program needs

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Montoya, Alex J.

    1989-01-01

    A space program that spans several decades will comprise a collection of missions such as a low Earth orbit space station, a polar platform, a geosynchronous space station, a lunar base, a Mars astronaut mission, and a Mars base. The optimal selection of a fleet of several recoverable and expendable launch vehicles, upper stages, and interplanetary spacecraft necessary to logistically establish and support these space missions can be examined by means of a linear integer programming optimization model. Such a selection must be made because the economies of scale that come from producing large quantities of a few standard vehicle types, rather than many, will be needed to provide learning-curve effects that reduce the overall cost of space transportation if these future missions are to be affordable. Optimization model inputs come from data and from vehicle designs. Each launch vehicle currently in existence has a launch history, giving rise to statistical estimates of launch reliability. For future, not-yet-developed launch vehicles, theoretical reliabilities corresponding to the maturity of the launch vehicles' technology and the degree of design redundancy must be estimated. Also, each such launch vehicle has a certain historical or estimated development cost, tooling cost, and variable cost. The cost of a launch used in this paper includes the variable cost plus an amortized portion of the fixed and development costs. The integer linear programming model has several constraint equations based on assumptions about mission mass requirements, volume requirements, and the number of astronauts needed. The model minimizes launch vehicle logistic support cost and selects the most desirable launch vehicle fleet.
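
    A minimal sketch of such an integer linear program is given below using scipy.optimize.milp (available in SciPy 1.9 and later): choose non-negative integer launch counts per vehicle type to minimize total cost subject to cumulative mass, volume, and crew constraints. All vehicle names, costs, and capability numbers are invented for illustration.

```python
# Sketch of the fleet-selection integer program: choose how many launches of
# each vehicle type to buy so that total cost is minimized while cumulative
# mass, volume, and crew-capacity requirements are met.  All data are illustrative.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

vehicles = ["heavy expendable", "medium expendable", "crewed recoverable"]
cost = np.array([180.0, 90.0, 250.0])     # $M per launch (variable + amortized fixed)
mass = np.array([60.0, 25.0, 20.0])       # t to LEO per launch
volume = np.array([400.0, 180.0, 150.0])  # m^3 per launch
crew = np.array([0.0, 0.0, 6.0])          # astronauts per launch

req_mass, req_volume, req_crew = 400.0, 2500.0, 24.0

constraints = [
    LinearConstraint(mass, lb=req_mass),      # total delivered mass
    LinearConstraint(volume, lb=req_volume),  # total delivered volume
    LinearConstraint(crew, lb=req_crew),      # total crew transported
]
res = milp(c=cost, constraints=constraints,
           integrality=np.ones(3),            # all launch counts are integers
           bounds=Bounds(0, np.inf))          # non-negative launch counts
for name, n in zip(vehicles, res.x.round().astype(int)):
    print(f"{name:<20s} launches: {n}")
print("total cost: $%.0fM" % res.fun)
```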

  2. Screening and selection of synthetic peptides for a novel and optimized endotoxin detection method.

    PubMed

    Mujika, M; Zuzuarregui, A; Sánchez-Gómez, S; Martínez de Tejada, G; Arana, S; Pérez-Lorenzo, E

    2014-09-30

    The current validated endotoxin detection methods, in spite of being highly sensitive, present several drawbacks in terms of reproducibility, handling and cost. Therefore novel approaches are being carried out in the scientific community to overcome these difficulties. Remarkable efforts are focused on the development of endotoxin-specific biosensors. The key feature of these solutions relies on the proper definition of the capture protocol, especially of the bio-receptor or ligand. The aim of the presented work is the screening and selection of a synthetic peptide specifically designed for LPS detection, as well as the optimization of a procedure for its immobilization onto gold substrates for further application to biosensors.

  3. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    NASA Astrophysics Data System (ADS)

    Kul'ka, Jozef; Mantič, Martin; Kopas, Melichar; Faltinová, Eva; Kachman, Daniel

    2017-02-01

    The article presents a heuristic optimization approach for selecting a suitable transport connection within a city public transport system. The methodology was applied to part of the public transport network of Košice, the second largest city in the Slovak Republic, whose public transport network forms a complex system consisting of three transport modes, namely bus, tram, and trolley-bus. The solution focused on examining the individual transport services and their interconnection at the relevant interchange points.

  4. Burnout and job performance: the moderating role of selection, optimization, and compensation strategies.

    PubMed

    Demerouti, Evangelia; Bakker, Arnold B; Leiter, Michael

    2014-01-01

    The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance.

  5. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.

  6. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for detection of land and water features due to the information contained in reflectance signatures such as the bi-directional reflectance distribution function (BRDF). The reflectance signature directly shows the relative absorption and backscattering features of targets. These features can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. In real-time detection applications, optimal band selection is important in order to select the essential bands using the absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of the contrast (using only one band out of all possible combinations available within a hyperspectral image) is generalized in order to consider all possible combinations of wavelength-dependent contrasts in hyperspectral images. The inflection (defined here as an approximation of the second derivative) is also used to enhance the variations in the reflectance spectra, as well as in the contrast spectra, in order to assist optimal band selection. The results of the selection, in terms of target detection (false alarms and missed detections), are also compared with a previous feature detection method, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel. The sensor is mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore
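
    The sketch below illustrates the basic contrast-based selection idea: score each band by the normalized contrast between a target (e.g., weathered oil) spectrum and a background spectrum, and score band pairs by a simple band-ratio difference. Both spectra are synthetic placeholders, and the paper's generalized contrast and inflection measures are not reproduced.

```python
# Sketch of contrast-based band selection: score every band (and band pair)
# by the contrast between a target spectrum and a background spectrum, and
# keep the highest-contrast combination.  The spectra below are synthetic.
import numpy as np
from itertools import combinations

wavelengths = np.linspace(400, 900, 64)                  # nm, 64 bands
background = 0.20 + 0.05 * np.sin(wavelengths / 120.0)   # synthetic background
target = background + 0.10 * np.exp(-((wavelengths - 650) / 40.0) ** 2)

def contrast(a, b):
    """Normalized single-band contrast between two reflectances."""
    return abs(a - b) / (a + b)

# single-band contrast spectrum
c_single = contrast(target, background)
best_band = int(np.argmax(c_single))

# a simple two-band generalization: band-ratio difference as the score
def pair_contrast(i, j):
    return abs(target[i] / target[j] - background[i] / background[j])

best_pair = max(combinations(range(len(wavelengths)), 2),
                key=lambda p: pair_contrast(*p))

print(f"best single band : {wavelengths[best_band]:.0f} nm "
      f"(contrast {c_single[best_band]:.3f})")
print(f"best band pair   : {wavelengths[best_pair[0]]:.0f} nm / "
      f"{wavelengths[best_pair[1]]:.0f} nm")
```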

  7. Selective mapping: a strategy for optimizing the construction of high-density linkage maps.

    PubMed Central

    Vision, T J; Brown, D G; Shmoys, D B; Durrett, R T; Tanksley, S D

    2000-01-01

    Historically, linkage mapping populations have consisted of large, randomly selected samples of progeny from a given pedigree or cell lines from a panel of radiation hybrids. We demonstrate that, to construct a map with high genome-wide marker density, it is neither necessary nor desirable to genotype all markers in every individual of a large mapping population. Instead, a reduced sample of individuals bearing complementary recombinational or radiation-induced breakpoints may be selected for genotyping subsequent markers from a large, but sparsely genotyped, mapping population. Choosing such a sample can be reduced to a discrete stochastic optimization problem for which the goal is a sample with breakpoints spaced evenly throughout the genome. We have developed several different methods for selecting such samples and have evaluated their performance on simulated and actual mapping populations, including the Lister and Dean Arabidopsis thaliana recombinant inbred population and the GeneBridge 4 human radiation hybrid panel. Our methods quickly and consistently find much-reduced samples with map resolution approaching that of the larger populations from which they are derived. This approach, which we have termed selective mapping, can facilitate the production of high-quality, high-density genome-wide linkage maps. PMID:10790413

  8. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, in order to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest-neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods.

  9. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780

  10. Optimal sequence selection in proteins of known structure by simulated evolution.

    PubMed Central

    Hellinga, H W; Richards, F M

    1994-01-01

    Rational design of protein structure requires the identification of optimal sequences to carry out a particular function within a given backbone structure. A general solution to this problem requires that a potential function describing the energy of the system as a function of its atomic coordinates be minimized simultaneously over all available sequences and their three-dimensional atomic configurations. Here we present a method that explicitly minimizes a semiempirical potential function simultaneously in these two spaces, using a simulated annealing approach. The method takes the fixed three-dimensional coordinates of a protein backbone and stochastically generates possible sequences through the introduction of random mutations. The corresponding three-dimensional coordinates are constructed for each sequence by "redecorating" the backbone coordinates of the original structure with the corresponding side chains. These are then allowed to vary in their structure by random rotations around free torsional angles to generate a stochastic walk in configurational space. We have named this method protein simulated evolution, because, in loose analogy with natural selection, it randomly selects for allowed solutions in the sequence of a protein subject to the "selective pressure" of a potential function. Energies predicted by this method for sequences of a small group of residues in the hydrophobic core of the phage lambda cI repressor correlate well with experimentally determined biological activities. This "genetic selection by computer" approach has potential applications in protein engineering, rational protein design, and structure-based drug discovery. PMID:8016069
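
    As a rough illustration of the simulated-evolution idea, the sketch below runs a Metropolis-style simulated-annealing walk over a toy sequence space: propose a random point mutation, rescore a toy energy function, and accept uphill moves with Boltzmann probability. The alphabet, energy function, and cooling schedule are placeholders for a real rotamer library and semiempirical potential.

```python
# Sketch of "simulated evolution" as a simulated-annealing walk over sequence
# space.  The energy function here is a toy stand-in for a real potential.
import math
import random

random.seed(0)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"          # the 20 amino acids
TARGET = "LIVFMLAW"                        # toy "ideal" hydrophobic core

def energy(seq):
    """Toy energy: number of positions that differ from the target core."""
    return sum(a != b for a, b in zip(seq, TARGET))

seq = [random.choice(ALPHABET) for _ in range(len(TARGET))]
e = energy(seq)
temperature = 2.0
for step in range(5000):
    pos = random.randrange(len(seq))
    proposal = seq.copy()
    proposal[pos] = random.choice(ALPHABET)          # random point mutation
    e_new = energy(proposal)
    # Metropolis criterion: always accept downhill, sometimes accept uphill
    if e_new <= e or random.random() < math.exp((e - e_new) / temperature):
        seq, e = proposal, e_new
    temperature *= 0.999                             # geometric cooling

print("designed sequence:", "".join(seq), " energy:", e)
```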

  11. Optimizing the StackSlide setup and data selection for continuous-gravitational-wave searches in realistic detector data

    NASA Astrophysics Data System (ADS)

    Shaltev, M.

    2016-02-01

    The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.

  12. Optimality and stability of symmetric evolutionary games with applications in genetic selection.

    PubMed

    Huang, Yuanyuan; Hao, Yiping; Wang, Min; Zhou, Wen; Wu, Zhijun

    2015-06-01

    Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model for example the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first and second order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We also derive more efficient computational methods for the evaluation of the conditions than conventional approaches. We demonstrate how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games including some in the study of genetic disorders.

  13. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compound strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to obtain the different subregions' weights, and the weighted matching scores of the subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity. PMID:24683317

  14. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compound strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to obtain the different subregions' weights, and the weighted matching scores of the subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity.

  15. Feature Selection and Classifier Parameters Estimation for EEG Signals Peak Detection Using Particle Swarm Optimization

    PubMed Central

    Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection of EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework seeks the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236
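
    A simplified sketch of PSO-driven feature selection is shown below: each particle is a binary mask over features, velocities are squashed through a sigmoid to give bit probabilities, and fitness is cross-validated accuracy with a mild penalty on subset size. The classifier, dataset, and PSO constants are assumptions, and the classifier-parameter estimation and RA-PSO variant of the paper are omitted.

```python
# Simplified binary-PSO feature selection sketch.  The dataset, classifier
# and PSO constants below are assumptions for illustration only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X, y = load_breast_cancer(return_X_y=True)
n_particles, n_features, n_iter = 10, X.shape[1], 15
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration constants

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask == 1], y, cv=3).mean()
    return acc - 0.01 * mask.sum() / n_features   # mild penalty on subset size

pos = rng.integers(0, 2, (n_particles, n_features))   # binary feature masks
vel = rng.normal(0, 1, (n_particles, n_features))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                 # sigmoid transfer function
    pos = (rng.random(pos.shape) < prob).astype(int)  # resample binary positions
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected", int(gbest.sum()), "of", n_features, "features,",
      "fitness =", round(pbest_fit.max(), 3))
```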

  16. Cancer microarray data feature selection using multi-objective binary particle swarm optimization algorithm.

    PubMed

    Annavarapu, Chandra Sekhara Rao; Dara, Suresh; Banka, Haider

    2016-01-01

    Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gene expression data. Due to the high dimensionality of these data, a fast heuristic-based pre-processing technique is employed to remove some of the crude domain features from the initial feature set. Since the pre-processed and reduced features are still high dimensional, the proposed MOBPSO algorithm is used to find further feature subsets. The objective functions are suitably modeled by optimizing two conflicting objectives, i.e., the cardinality of the feature subsets and the distinctive capability of the selected subsets. As these two objective functions are conflicting in nature, they are well suited to multi-objective modeling. The experiments are carried out on benchmark gene expression datasets, i.e., the Colon, Lymphoma, and Leukaemia datasets available in the literature. The performance of the selected feature subsets is assessed by their classification accuracy, validated using 10-fold cross-validation. A detailed comparative study is also made to show the improvement or competitiveness of the proposed algorithm.

  17. Pareto archived dynamically dimensioned search with hypervolume-based selection for multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Masoud; Tolson, Bryan

    2013-12-01

    Pareto archived dynamically dimensioned search (PA-DDS) is a parsimonious multi-objective optimization algorithm with only one parameter to diminish the user's effort for fine-tuning algorithm parameters. This study demonstrates that hypervolume contribution (HVC) is a very effective selection metric for PA-DDS and Monte Carlo sampling-based HVC is very effective for higher dimensional problems (five objectives in this study). PA-DDS with HVC performs comparably to algorithms commonly applied to water resources problems (ɛ-NSGAII and AMALGAM under recommended parameter values). Comparisons on the CEC09 competition show that with sufficient computational budget, PA-DDS with HVC performs comparably to 13 benchmark algorithms and shows improved relative performance as the number of objectives increases. Lastly, it is empirically demonstrated that the total optimization runtime of PA-DDS with HVC is dominated (90% or higher) by solution evaluation runtime whenever evaluation exceeds 10 seconds/solution. Therefore, optimization algorithm runtime associated with the unbounded archive of PA-DDS is negligible in solving computationally intensive problems.
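
    For a two-objective minimization problem, the hypervolume contribution used as the selection metric can be computed directly, as in the sketch below: the contribution of an archived point is the hypervolume lost when that point is removed. The archive points and reference point are illustrative, and the Monte Carlo estimator used for higher-dimensional problems is not shown.

```python
# Sketch of hypervolume contribution (HVC) as a selection metric for a
# two-objective minimization problem.  Archive and reference point are made up.
def hypervolume_2d(points, ref):
    """Area dominated by a mutually nondominated point set (both objectives
    minimized) relative to reference point ref."""
    pts = sorted(points)                      # ascending in f1, so f2 descends
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # add the rectangle for this point
        prev_f2 = f2
    return hv

def hv_contributions(points, ref):
    """HVC of each point = total hypervolume minus hypervolume without it."""
    total = hypervolume_2d(points, ref)
    return [total - hypervolume_2d(points[:i] + points[i + 1:], ref)
            for i in range(len(points))]

archive = [(1.0, 9.0), (2.0, 6.0), (4.0, 4.0), (7.0, 2.0)]   # nondominated set
ref = (10.0, 10.0)
for p, hvc in zip(archive, hv_contributions(archive, ref)):
    print(f"point {p}: HVC = {hvc:.1f}")
```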

  18. Design-Optimization and Material Selection for a Proximal Radius Fracture-Fixation Implant

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Xie, X.; Arakere, G.; Grujicic, A.; Wagner, D. W.; Vallejo, A.

    2010-11-01

    The problem of optimal size, shape, and placement of a proximal radius-fracture fixation-plate is addressed computationally using a combined finite-element/design-optimization procedure. To expand the set of physiological loading conditions experienced by the implant during normal everyday activities of the patient, beyond those typically covered by the pre-clinical implant-evaluation testing procedures, the case of a wheel-chair push exertion is considered. Toward that end, a musculoskeletal multi-body inverse-dynamics analysis is carried out of a human propelling a wheelchair. The results obtained are used as input to a finite-element structural analysis for evaluation of the maximum stress and fatigue life of the parametrically defined implant design. While optimizing the design of the radius-fracture fixation-plate, realistic functional requirements pertaining to the attainment of the required level of the device safety factor and longevity/lifecycle were considered. It is argued that the type of analyses employed in the present work should be: (a) used to complement the standard experimental pre-clinical implant-evaluation tests (the tests which normally include a limited number of daily-living physiological loading conditions and which rely on single pass/fail outcomes/decisions with respect to a set of lower-bound implant-performance criteria) and (b) integrated early in the implant design and material/manufacturing-route selection process.

  19. Adaptive Optimal Control Using Frequency Selective Information of the System Uncertainty With Application to Unmanned Aircraft.

    PubMed

    Maity, Arnab; Hocht, Leonhard; Heise, Christian; Holzapfel, Florian

    2016-11-28

    A new efficient adaptive optimal control approach is presented in this paper, based on the indirect model reference adaptive control (MRAC) architecture, to improve the adaptation and tracking performance of an uncertain system. The system model accounts for both matched and unmatched unknown uncertainties, which can represent plant failures or damage as well as loss of input effectiveness. For adaptation of the unknown parameters of these uncertainties, the frequency selective learning approach is used. Its idea is to compute a filtered expression of the system uncertainty using multiple filters based on online instantaneous information, which is then used to augment the update law. It can adjust to a sudden change in system dynamics without relying on high adaptation gains and can achieve exponential parameter error convergence under certain conditions in the presence of structured matched and unmatched uncertainties. Additionally, the controller of the MRAC system is designed using a new optimal control method: a linear quadratic regulator-based formulation for both output regulation and command tracking problems that provides a closed-form control solution. The proposed overall approach is applied to the control of the lateral dynamics of an unmanned aircraft to show its effectiveness.

  20. Exemplar-Based Policy with Selectable Strategies and its Optimization Using GA

    NASA Astrophysics Data System (ADS)

    Ikeda, Kokolo; Kobayashi, Shigenobu; Kita, Hajime

    As an approach to dynamic control and decision-making problems, usually formulated as Markov Decision Processes (MDPs), we focus on direct policy search (DPS), where a policy is represented by a parameterized model and the parameters are optimized so as to maximize the evaluation function obtained by applying the parameterized policy to the problem. In this paper, a novel framework for DPS, exemplar-based policy optimization using a genetic algorithm (EBP-GA), is presented and analyzed. In this approach, the policy is composed of a set of virtual exemplars and a case-based action selector, and the set of exemplars is selected and evolved by a genetic algorithm. Here, an exemplar is a real or virtual piece of free-form, suggestive information such as ``take the action A at the state S'' or ``the state S1 is better to attain than S2''. One advantage of EBP-GA is the generalization and localization ability of its policy expression, based on case-based reasoning methods. Another advantage is that both the introduction of prior knowledge and the extraction of knowledge after optimization are relatively straightforward. These advantages are confirmed through the proposal of two new policy expressions, experiments on two different problems, and their analysis.

  1. Optimal energy window selection of a CZT-based small-animal SPECT for quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Park, Su-Jin; Yu, A. Ram; Choi, Yun Young; Kim, Kyeong Min; Kim, Hee-Joung

    2015-05-01

    Cadmium zinc telluride (CZT)-based small-animal single-photon emission computed tomography (SPECT) has desirable characteristics such as superior energy resolution, but data acquisition for SPECT imaging has been widely performed with a conventional energy window. The aim of this study was to determine the optimal energy window settings for technetium-99m (99mTc) and thallium-201 (201Tl), the most commonly used isotopes in SPECT imaging, using CZT-based small-animal SPECT for quantitative accuracy. We experimentally investigated quantitative measurements with respect to primary count rate, contrast-to-noise ratio (CNR), and scatter fraction (SF) within various energy window settings using Triumph X-SPECT. Two types of energy window settings were considered: an on-peak window and an off-peak window. In the on-peak window setting, energy centers were set on the photopeaks. In the off-peak window setting, the ratio of the energy differences from the photopeak to the lower and higher thresholds varied from 4:6 to 3:7. In addition, the energy-window width for 99mTc varied from 5% to 20%, and that for 201Tl varied from 10% to 30%. The results of this study enabled us to determine the optimal energy windows for each isotope in terms of primary count rate, CNR, and SF. We selected the optimal energy window that increases the primary count rate and CNR while decreasing SF. For 99mTc SPECT imaging, the energy window of 138-145 keV with a 5% width and off-peak ratio of 3:7 was determined to be the optimal energy window. For 201Tl SPECT imaging, the energy window of 64-85 keV with a 30% width and off-peak ratio of 3:7 was selected as the optimal energy window. Our results demonstrated that the proper energy window should be carefully chosen based on quantitative measurements in order to take advantage of the desirable characteristics of CZT-based small-animal SPECT. These results provided valuable reference information for the establishment of new protocol for CZT
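    One plausible reading of the window definitions above (the window width as a percentage of the photopeak energy, split below/above the peak by the off-peak ratio) can be expressed as a small helper; with a 140 keV photopeak, the 99mTc numbers reproduce approximately the 138-145 keV window quoted. The helper name and exact convention are assumptions for illustration.

    ```python
    # Small helper (an interpretation of the window definition in the abstract): the
    # window width is a percentage of the photopeak energy, and an off-peak ratio a:b
    # splits that width so a/(a+b) lies below the photopeak and b/(a+b) above it.
    def energy_window(photopeak_kev, width_pct, ratio_low, ratio_high):
        width = photopeak_kev * width_pct / 100.0
        frac_low = ratio_low / (ratio_low + ratio_high)
        return (photopeak_kev - frac_low * width,
                photopeak_kev + (1.0 - frac_low) * width)

    # 99mTc, 5% width, 3:7 off-peak ratio -> approximately the 138-145 keV window
    print(energy_window(140.0, 5.0, 3, 7))   # (137.9, 144.9)
    ```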

  2. Optimized diffusion of buck semen for saving genetic variability in selected dairy goat populations

    PubMed Central

    2011-01-01

    Background Current research on quantitative genetics has provided efficient guidelines for the sustainable management of selected populations: genetic gain is maximized while the loss of genetic diversity is maintained at a reasonable rate. However, actual selection schemes are complex, especially for large domestic species, and they have to take into account many operational constraints. This paper deals with the actual selection of dairy goats, where the challenge is to optimize the diffusion of buck semen in the field. Three objectives are considered simultaneously: i) natural service buck replacement (NSR); ii) goat replacement (GR); iii) semen distribution of young bucks to be progeny-tested. An appropriate optimization method is developed, which involves five analytical steps. Solutions are obtained by simulated annealing and the corresponding algorithms are presented in detail. Results The whole procedure was tested on two French goat populations (Alpine and Saanen breeds) and the results presented in the abstract are based on the average of the two breeds. The procedure induced an immediate acceleration of genetic gain in comparison with the current annual genetic gain (0.15 genetic standard deviation unit), as shown by two facts. First, the genetic level of replacement natural service (NS) bucks was predicted, 1.5 years ahead at the moment of reproduction, to be equivalent to that of the progeny-tested bucks in service, born from the current breeding scheme. Second, the genetic level of replacement goats was much higher than that of their dams (0.86 unit), which represented 6 years of selection, although dams were only 3 years older than their replacement daughters. This improved genetic gain could be achieved while decreasing inbreeding coefficients substantially. The inbreeding coefficient (%) of NS bucks was lower than that of the progeny-tested bucks (-0.17). Goats were also less inbred than their dams (-0.67). Conclusions It was possible to account for
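    For readers unfamiliar with the optimization machinery, the sketch below shows a generic simulated-annealing skeleton of the kind used to search such allocation problems; it is not the authors' five-step procedure, and `cost`, `neighbour` and `initial` are problem-specific placeholders supplied by the user.

    ```python
    # Generic simulated-annealing skeleton (not the authors' procedure): it only
    # illustrates the accept/reject core used to search assignment problems such as
    # allocating buck semen across herds under operational constraints.
    import math
    import random

    def simulated_annealing(initial, cost, neighbour, t0=1.0, alpha=0.995, n_iter=20000):
        current, current_cost = initial, cost(initial)
        best, best_cost = current, current_cost
        t = t0
        for _ in range(n_iter):
            candidate = neighbour(current)
            delta = cost(candidate) - current_cost
            # always accept improvements; accept worse moves with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
            t *= alpha                      # geometric cooling schedule
        return best, best_cost
    ```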

  3. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    SciTech Connect

    Stassi, D.; Ma, H.; Schmidt, T. G.; Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D.

    2016-01-15

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three
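    The phase-selection step can be summarized schematically as follows, assuming a per-slice, per-phase image-quality score and a per-slice flag marking slices with consistently located through-plane vessels; the circularity and edge-strength metrics themselves are not reproduced here.

    ```python
    # Schematic version of the phase-selection step: keep only slices containing
    # through-plane vessels, aggregate the image-quality metric across those slices,
    # and pick the phase with the best aggregate score.
    import numpy as np

    def select_best_phase(iq, vessel_consistent, phases):
        """iq: array (n_slices, n_phases); vessel_consistent: bool array (n_slices,)."""
        valid = iq[vessel_consistent]            # drop slices without through-plane vessels
        aggregate = valid.mean(axis=0)           # aggregate IQ metric across slices
        return phases[int(np.argmax(aggregate))]

    # e.g. phases reconstructed at 2% R-R intervals
    phases = np.arange(0, 100, 2)
    ```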

  4. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection

    PubMed Central

    Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-01-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal

  5. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    PubMed

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-09-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal

  6. Identification and Optimization of the First Highly Selective GLUT1 Inhibitor BAY‐876

    PubMed Central

    Siebeneicher, Holger; Cleve, Arwed; Rehwinkel, Hartmut; Neuhaus, Roland; Heisler, Iring; Müller, Thomas; Bauser, Marcus

    2016-01-01

    Abstract Despite the long‐known fact that the facilitative glucose transporter GLUT1 is one of the key players safeguarding the increase in glucose consumption of many tumor entities even under conditions of normal oxygen supply (known as the Warburg effect), only a few endeavors have been undertaken to find a GLUT1‐selective small‐molecule inhibitor. Because other transporters of the GLUT1 family are involved in crucial processes, these transporters should not be addressed by such an inhibitor. A high‐throughput screen against a library of ∼3 million compounds was performed to find a small molecule with this challenging potency and selectivity profile. The N‐(1H‐pyrazol‐4‐yl)quinoline‐4‐carboxamides were identified as an excellent starting point for further compound optimization. After extensive structure–activity relationship explorations, single‐digit nanomolar inhibitors with a selectivity factor of >100 against GLUT2, GLUT3, and GLUT4 were obtained. The most promising compound, BAY‐876 [N4‐[1‐(4‐cyanobenzyl)‐5‐methyl‐3‐(trifluoromethyl)‐1H‐pyrazol‐4‐yl]‐7‐fluoroquinoline‐2,4‐dicarboxamide], showed good metabolic stability in vitro and high oral bioavailability in vivo. PMID:27552707

  7. Online stimulus optimization rapidly reveals multidimensional selectivity in auditory cortical neurons.

    PubMed

    Chambers, Anna R; Hancock, Kenneth E; Sen, Kamal; Polley, Daniel B

    2014-07-02

    Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. By separately evaluating the amount of leverage each sound feature exerted on the overall firing rate, these findings reveal interdependencies between stimulus features as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches.

  8. Optimal feature selection for automated classification of FDG-PET in patients with suspected dementia

    NASA Astrophysics Data System (ADS)

    Serag, Ahmed; Wenzel, Fabian; Thiele, Frank; Buchert, Ralph; Young, Stewart

    2009-02-01

    FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as Alzheimer's disease (AD), Lewy body dementia (LBD), and Frontotemporal dementia (FTD), have been shown to induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with suspected dementia is not straightforward, since patients are imaged at different stages of progression of neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal subset of features for automated classification, in order to improve the classification accuracy of FDG-PET images in patients with suspected dementia. A novel feature selection method is proposed, and its performance is compared to existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection methods. This is demonstrated to provide high classification accuracy for FDG-PET brain images of normal controls and dementia patients, comparable with alternative approaches, and provides a compact set of selected features.

  9. Online Stimulus Optimization Rapidly Reveals Multidimensional Selectivity in Auditory Cortical Neurons

    PubMed Central

    Hancock, Kenneth E.; Sen, Kamal

    2014-01-01

    Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. By separately evaluating the amount of leverage each sound feature exerted on the overall firing rate, these findings reveal interdependencies between stimulus features as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches. PMID:24990917

  10. A strategy that iteratively retains informative variables for selecting optimal variable subset in multivariate calibration.

    PubMed

    Yun, Yong-Huan; Wang, Wei-Ting; Tan, Min-Li; Liang, Yi-Zeng; Li, Hong-Dong; Cao, Dong-Sheng; Lu, Hong-Mei; Xu, Qing-Song

    2014-01-07

    With the high dimensionality of modern datasets, creating effective methods that can select an optimal variable subset is a great challenge. In this study, a strategy that considers possible interaction effects among variables through random combinations is proposed, called iteratively retaining informative variables (IRIV). Moreover, the variables are classified into four categories: strongly informative, weakly informative, uninformative and interfering variables. On this basis, IRIV retains both the strongly and weakly informative variables in every iterative round until no uninformative and interfering variables exist. Three datasets were employed to investigate the performance of IRIV coupled with partial least squares (PLS). The results show that IRIV is a good alternative variable selection strategy when compared with three well-established and frequently used variable selection methods: genetic algorithm-PLS, Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS) and competitive adaptive reweighted sampling (CARS). The MATLAB source code of IRIV can be freely downloaded for academic research at the website: http://code.google.com/p/multivariate-calibration/downloads/list.
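    A rough sketch of the underlying idea follows (a simplification, not the published IRIV code): sample random variable subsets, cross-validate a PLS model on each, and call a variable informative when subsets containing it achieve a lower mean error than subsets excluding it. The function name and thresholds are illustrative assumptions.

    ```python
    # Simplified illustration of judging variable informativeness from random
    # variable combinations evaluated with cross-validated PLS models.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def informative_mask(X, y, n_subsets=500, n_components=2, rng=None):
        rng = np.random.default_rng(rng)
        n_vars = X.shape[1]
        include = rng.integers(0, 2, size=(n_subsets, n_vars), dtype=bool)
        errors = np.empty(n_subsets)
        for i, row in enumerate(include):
            if not row.any():                      # guard against empty subsets
                row[rng.integers(n_vars)] = True
            pls = PLSRegression(n_components=min(n_components, int(row.sum())))
            errors[i] = -cross_val_score(pls, X[:, row], y, cv=5,
                                         scoring="neg_mean_squared_error").mean()
        informative = np.zeros(n_vars, dtype=bool)
        for j in range(n_vars):
            with_j, without_j = errors[include[:, j]], errors[~include[:, j]]
            informative[j] = with_j.mean() < without_j.mean()
        return informative
    ```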

  11. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  12. Optimization methods for selecting founder individuals for captive breeding or reintroduction of endangered species.

    PubMed

    Miller, Webb; Wright, Stephen J; Zhang, Yu; Schuster, Stephan C; Hayes, Vanessa M

    2010-01-01

    Methods from genetics and genomics can be employed to help save endangered species. One potential use is to provide a rational strategy for selecting a population of founders for a captive breeding program. The hope is to capture most of the available genetic diversity that remains in the wild population, to provide a safe haven where representatives of the species can be bred, and eventually to release the progeny back into the wild. However, the founders are often selected based on a random-sampling strategy whose validity is based on unrealistic assumptions. Here we outline an approach that starts by using cutting-edge genome sequencing and genotyping technologies to objectively assess the available genetic diversity. We show how combinatorial optimization methods can be applied to these data to guide the selection of the founder population. In particular, we develop a mixed-integer linear programming technique that identifies a set of animals whose genetic profile is as close as possible to specified abundances of alleles (i.e., genetic variants), subject to constraints on the number of founders and their genders and ages.
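    The selection objective can be illustrated with a toy brute-force search over a small candidate pool; the paper formulates this as a mixed-integer linear program, which scales to realistic pool sizes and handles gender/age constraints, but the objective is the same: pooled allele frequencies as close as possible to the specified target abundances. The genotype coding and distance measure below are assumptions for illustration.

    ```python
    # Toy illustration of the selection objective (the paper uses mixed-integer linear
    # programming; here a brute-force search over a small candidate pool): pick k
    # founders whose pooled allele frequencies best match target abundances.
    from itertools import combinations
    import numpy as np

    def choose_founders(genotypes, target_freq, k):
        """genotypes: array (n_animals, n_loci) of allele counts in {0, 1, 2};
        target_freq: desired allele frequency per locus; returns founder indices."""
        best, best_dev = None, np.inf
        for subset in combinations(range(genotypes.shape[0]), k):
            freq = genotypes[list(subset)].mean(axis=0) / 2.0
            dev = np.abs(freq - target_freq).sum()   # L1 distance, as in an LP objective
            if dev < best_dev:
                best, best_dev = subset, dev
        return best, best_dev
    ```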

  13. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version inspired by dynamic growth of weeds colony. The authors of the present paper have modified the exIWO algorithm introducing a set of both deterministic and non-deterministic strategies of individuals' selection. The goal of the project was to evaluate the modified exIWO by testing its usefulness for multidimensional numerical functions optimization. The optimized functions: Griewank, Rastrigin, and Rosenbrock are frequently used as benchmarks because of their characteristics.
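    For reference, the standard forms of the three benchmark functions named above are given below; each has a global minimum value of 0, at the origin for Griewank and Rastrigin and at (1, ..., 1) for Rosenbrock.

    ```python
    # Standard definitions of the Griewank, Rastrigin, and Rosenbrock benchmarks
    # (x is a 1-D NumPy vector).
    import numpy as np

    def griewank(x):
        i = np.arange(1, x.size + 1)
        return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

    def rastrigin(x):
        return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    def rosenbrock(x):
        return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)
    ```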

  14. X-ray backscatter imaging for radiography by selective detection and snapshot: Evolution, development, and optimization

    NASA Astrophysics Data System (ADS)

    Shedlock, Daniel

    Compton backscatter imaging (CBI) is a single-sided imaging technique that uses the penetrating power of radiation and unique interaction properties of radiation with matter to image subsurface features. CBI has a variety of applications that include non-destructive interrogation, medical imaging, security and military applications. Radiography by selective detection (RSD), lateral migration radiography (LMR) and shadow aperture backscatter radiography (SABR) are different CBI techniques that are being optimized and developed. Radiography by selective detection (RSD) is a pencil beam Compton backscatter imaging technique that falls between highly collimated and uncollimated techniques. Radiography by selective detection uses a combination of single- and multiple-scatter photons from a projected area below a collimation plane to generate an image. As a result, the image has a combination of first- and multiple-scatter components. RSD techniques offer greater subsurface resolution than uncollimated techniques, at speeds at least an order of magnitude faster than highly collimated techniques. RSD scanning systems have evolved from a prototype into near market-ready scanning devices for use in a variety of single-sided imaging applications. The design has changed to incorporate state-of-the-art detectors and electronics optimized for backscatter imaging with an emphasis on versatility, efficiency and speed. The RSD system has become more stable, about 4 times faster, and 60% lighter while maintaining or improving image quality and contrast over the past 3 years. A new snapshot backscatter radiography (SBR) CBI technique, shadow aperture backscatter radiography (SABR), has been developed from concept and proof-of-principle to a functional laboratory prototype. SABR radiography uses digital detection media and shaded aperture configurations to generate near-surface Compton backscatter images without scanning, similar to how transmission radiographs are taken. Finally, a

  15. [Study on optimal selection of structure of vaneless centrifugal blood pump with constraints on blood perfusion and on blood damage indexes].

    PubMed

    Hu, Zhaoyan; Pan, Youlian; Chen, Zhenglong; Zhang, Tianyi; Lu, Lijun

    2012-12-01

    This paper aims to study the optimal selection of the structure of a vaneless centrifugal blood pump. The optimization objective is determined according to the requirements of clinical use. Possible schemes are worked out based on the structural features of the vaneless centrifugal blood pump, and the optimal structure is selected from these schemes under constraints on blood perfusion and blood damage indexes. Using this selection method, the optimal structural scheme can be identified from the possible schemes effectively. Numerical simulation of the optimized blood pump showed that the method, with its constraints on blood perfusion and blood damage, is adequate for selecting the optimal blood pump.

  16. Spatial filter and feature selection optimization based on EA for multi-channel EEG.

    PubMed

    Wang, Yubo; Mohanarangam, Krithikaa; Mallipeddi, Rammohan; Veluvolu, K C

    2015-01-01

    The EEG signals employed for BCI systems are generally band-limited. The band-limited multiple Fourier linear combiner (BMFLC) with Kalman filter was developed to obtain amplitude estimates of the EEG signal in a pre-fixed frequency band in real-time. However, the high-dimensionality of the feature vector caused by the application of BMFLC to multi-channel EEG based BCI deteriorates the performance of the classifier. In this work, we apply evolutionary algorithm (EA) to tackle this problem. The real-valued EA encodes both the spatial filter and the feature selection into its solution and optimizes it with respect to the classification error. Three BMFLC based BCI configurations are proposed. Our results show that the BMFLC-KF with covariance matrix adaptation evolution strategy (CMAES) has the best overall performance.

  17. Selection, optimization, and compensation as strategies of life management: correlations with subjective indicators of successful aging.

    PubMed

    Freund, A M; Baltes, P B

    1998-12-01

    The usefulness of self-reported processes of selection, optimization, and compensation (SOC) for predicting on a correlational level the subjective indicators of successful aging was examined. The sample of Berlin residents was a subset of the participants of the Berlin Aging Study. Three domains (marked by 6 variables) served as outcome measures of successful aging: subjective well-being, positive emotions, and absence of feelings of loneliness. Results confirm the central hypothesis of the SOC model: People who reported using SOC-related life-management behaviors (which were unrelated in content to the outcome measures) had higher scores on the 3 indicators of successful aging. The relationships obtained were robust even after controlling for other measures of successful mastery such as personal life investment, neuroticism, extraversion, openness, control beliefs, intelligence, subjective health, or age.

  18. Adapting to aging losses: do resources facilitate strategies of selection, compensation, and optimization in everyday functioning?

    PubMed

    Lang, Frieder R; Rieckmann, Nina; Baltes, Margret M

    2002-11-01

    Previous cross-sectional research has shown that older people who are rich in sensorimotor-cognitive and social-personality resources are better functioning in everyday life and exhibit fewer negative age differences than resource-poor adults. Longitudinal data from the Berlin Aging Study was used to examine these findings across a 4-year time interval and to compare cross-sectional indicators of adaptive everyday functioning among survivors and nonsurvivors. Apart from their higher survival rate, resource-rich older people (a) invest more social time with their family members, (b) reduce the diversity of activities within the most salient leisure domain, (c) sleep more often and longer during daytime, and (d) increase the variability of time investments across activities after 4 years. Overall, findings suggest a greater use of selection, compensation, and optimization strategies in everyday functioning among resource-rich older adults as compared with resource-poor older adults.

  19. PROBEmer: a web-based software tool for selecting optimal DNA oligos

    PubMed Central

    Emrich, Scott J.; Lowe, Mary; Delcher, Arthur L.

    2003-01-01

    PROBEmer (http://probemer.cs.loyola.edu) is a web-based software tool that enables a researcher to select optimal oligos for PCR applications and multiplex detection platforms including oligonucleotide microarrays and bead-based arrays. Given two groups of nucleic-acid sequences, a target group and a non-target group, the software identifies oligo sequences that occur in members of the target group, but not in the non-target group. To help predict potential cross hybridization, PROBEmer computes all near neighbors in the non-target group and displays their alignments. The software has been used to obtain genus-specific prokaryotic probes based on the 16S rRNA gene, gene-specific probes for expression analyses and PCR primers. In this paper, we describe how to use PROBEmer, the computational methods it employs, and experimental results for oligos identified by this software tool. PMID:12824409
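    The selection principle can be illustrated with a simplified k-mer sketch (not the PROBEmer implementation, which also scores near neighbors in the non-target group to predict cross-hybridization): keep candidate oligos that occur in every target sequence and in no non-target sequence. The requirement that an oligo appear in all targets is an assumption made for simplicity.

    ```python
    # Simplified illustration of group-specific oligo selection: enumerate k-mers
    # common to all target sequences and absent from all non-target sequences.
    def kmers(seq, k):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def candidate_oligos(targets, non_targets, k=20):
        shared = set.intersection(*(kmers(s, k) for s in targets))
        excluded = set.union(*(kmers(s, k) for s in non_targets)) if non_targets else set()
        return shared - excluded
    ```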

  20. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The resulting fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  1. Optimal pig donor selection in islet xenotransplantation: current status and future perspectives.

    PubMed

    Zhu, Hai-tao; Yu, Liang; Lyu, Yi; Wang, Bo

    2014-08-01

    Islet transplantation is an attractive treatment for type 1 diabetes mellitus. Xenotransplantation, using the pig as a donor, offers the possibility of an unlimited supply of islet grafts. Published studies have demonstrated that pig islets can function in diabetic primates for a long time (>6 months). However, pig-islet xenotransplantation must overcome the selection of an optimal pig donor, in order to obtain an adequate supply of high-quality islets, to reduce the xeno-antigenicity of the islets and prolong xenograft survival, and to translate experimental findings into clinical application. This review discusses the suitable pig donor for islet xenotransplantation in terms of pig age, strain, islet structure/function, and genetic modification.

  2. Compound feature selection and parameter optimization of ELM for fault diagnosis of rolling element bearings.

    PubMed

    Luo, Meng; Li, Chaoshun; Zhang, Xiaoyuan; Li, Ruhai; An, Xueli

    2016-11-01

    This paper proposes a hybrid system named HGSA-ELM for fault diagnosis of rolling element bearings, in which a real-valued gravitational search algorithm (RGSA) is employed to optimize the input weights and bias of the ELM, and the binary-valued GSA (BGSA) is used to select important features from a compound feature set. Three types of fault features, namely time- and frequency-domain features, energy features and singular value features, are extracted to compose the compound feature set by applying ensemble empirical mode decomposition (EEMD). For fault diagnosis of a typical rolling element bearing system with 56 working conditions, comparative experiments were designed to evaluate the proposed method. The results show that HGSA-ELM achieves significantly higher classification accuracy than its original version and methods reported in the literature.

  3. Research on the optimal selection method of image complexity assessment model index parameter

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Duan, Jin; Qian, Xiaofei; Xiao, Bo

    2015-10-01

    Target recognition is widely used in the national economy, space technology, national defense and other fields. There is a great difference between the difficulty of target recognition and that of target extraction. Image complexity evaluates the difficulty of extracting the target from its background, and it can be used as a prior evaluation index of a target recognition algorithm's effectiveness. From the perspective of measuring target and background characteristics, the paper describes image complexity metric parameters through quantitative, accurate mathematical relationships. To address the collinearity among the measurement parameters, the image complexity metric parameters are clustered using the gray correlation method. This enables the extraction and selection of metric parameters, improves the reliability and validity of the image complexity description and representation, and optimizes the image complexity assessment model. Experimental results demonstrate that when gray system theory is applied to image complexity analysis, the image complexity of target characteristics can be measured more accurately and effectively.

  4. Field trials for corrosion inhibitor selection and optimization, using a new generation of electrical resistance probes

    SciTech Connect

    Ridd, B.; Blakset, T.J.; Queen, D.

    1998-12-31

    Even with today's availability of corrosion resistant alloys, carbon steels protected by corrosion inhibitors still dominate the material selection for pipework in oil and gas production. Even though laboratory screening tests of corrosion inhibitor performance provide valuable data, the real performance of the chemical can only be studied through field trials, which provide the ultimate test of an inhibitor's effectiveness under actual operating conditions. A new generation of electrical resistance probe has been developed, allowing highly sensitive and immediate response to changes in corrosion rates in the internal environment of production pipework. Because of its high sensitivity, the probe responds to small changes in the corrosion rate, and it provides the corrosion engineer with a highly effective method of optimizing the use of inhibitor chemicals, resulting in confidence in corrosion control and minimizing detrimental environmental effects.

  5. MicroRNAs: Non-coding fine tuners of receptor tyrosine kinase signalling in cancer.

    PubMed

    Donzelli, Sara; Cioce, Mario; Muti, Paola; Strano, Sabrina; Yarden, Yosef; Blandino, Giovanni

    2016-02-01

    Emerging evidence points to a crucial role for non-coding RNAs in modulating homeostatic signaling under physiological and pathological conditions. MicroRNAs, the best-characterized non-coding RNAs to date, can exquisitely integrate spatial and temporal signals in complex networks, thereby conferring specificity and sensitivity on tissue responses to changes in the microenvironment. MicroRNAs appear to be preferential partners for Receptor Tyrosine Kinases (RTKs) in mediating signaling under stress conditions. Stress signaling can be especially relevant to disease. Here we focus on the ability of microRNAs to mediate RTK signaling in cancer, by acting as both tumor suppressors and oncogenes. We will provide a few general examples of microRNAs that modulate specific tumorigenic functions downstream of RTK signaling and integrate oncogenic signals from multiple RTKs. A special focus will be devoted to epidermal growth factor receptor (EGFR) signaling, a system offering relatively rich information. We will explore the role of selected microRNAs as bidirectional modulators of EGFR functions in cancer cells. In addition, we will present the emerging evidence for microRNAs being specifically modulated by oncogenic EGFR mutants and discuss how this impinges on EGFRmut-driven chemoresistance, which fits into tumor heterogeneity-driven cancer progression. Finally, we discuss how other non-coding RNA species are emerging as important modulators of cancer progression and why the scenario depicted herein is destined to become increasingly complex in the future.

  6. Markov Chain Model-Based Optimal Cluster Heads Selection for Wireless Sensor Networks

    PubMed Central

    Ahmed, Gulnaz; Zou, Jianhua; Zhao, Xi; Sadiq Fareed, Mian Muhammad

    2017-01-01

    A longer network lifetime for Wireless Sensor Networks (WSNs) is a goal which is directly related to energy consumption. This energy consumption issue becomes more challenging when the energy load is not properly distributed in the sensing area. The hierarchical clustering architecture is the best choice for this kind of issue. In this paper, we introduce a novel clustering protocol called Markov chain model-based optimal cluster heads (MOCHs) selection for WSNs. In our proposed model, we introduce a simple strategy for selecting the optimal number of cluster heads to overcome the problem of uneven energy distribution in the network. The attractiveness of our model is that the BS controls the number of cluster heads while the cluster heads control the cluster members in each cluster in such a restricted manner that a uniform and even load is ensured in each cluster. We perform an extensive range of simulations using five quality measures, namely: the lifetime of the network, stable and unstable regions in the lifetime of the network, throughput of the network, the number of cluster heads in the network, and the transmission time of the network to analyze the proposed model. We compare MOCHs against Sleep-awake Energy Efficient Distributed (SEED) clustering, Artificial Bee Colony (ABC), Zone Based Routing (ZBR), and Centralized Energy Efficient Clustering (CEEC) using the above-discussed quality metrics and found that the lifetime of the proposed model is almost 1095, 2630, 3599, and 2045 rounds (time steps) greater than SEED, ABC, ZBR, and CEEC, respectively. The obtained results demonstrate that MOCHs is better than SEED, ABC, ZBR, and CEEC in terms of energy efficiency and network throughput. PMID:28241492

  7. Markov Chain Model-Based Optimal Cluster Heads Selection for Wireless Sensor Networks.

    PubMed

    Ahmed, Gulnaz; Zou, Jianhua; Zhao, Xi; Sadiq Fareed, Mian Muhammad

    2017-02-23

    A longer network lifetime for Wireless Sensor Networks (WSNs) is a goal which is directly related to energy consumption. This energy consumption issue becomes more challenging when the energy load is not properly distributed in the sensing area. The hierarchical clustering architecture is the best choice for this kind of issue. In this paper, we introduce a novel clustering protocol called Markov chain model-based optimal cluster heads (MOCHs) selection for WSNs. In our proposed model, we introduce a simple strategy for selecting the optimal number of cluster heads to overcome the problem of uneven energy distribution in the network. The attractiveness of our model is that the BS controls the number of cluster heads while the cluster heads control the cluster members in each cluster in such a restricted manner that a uniform and even load is ensured in each cluster. We perform an extensive range of simulations using five quality measures, namely: the lifetime of the network, stable and unstable regions in the lifetime of the network, throughput of the network, the number of cluster heads in the network, and the transmission time of the network to analyze the proposed model. We compare MOCHs against Sleep-awake Energy Efficient Distributed (SEED) clustering, Artificial Bee Colony (ABC), Zone Based Routing (ZBR), and Centralized Energy Efficient Clustering (CEEC) using the above-discussed quality metrics and found that the lifetime of the proposed model is almost 1095, 2630, 3599, and 2045 rounds (time steps) greater than SEED, ABC, ZBR, and CEEC, respectively. The obtained results demonstrate that MOCHs is better than SEED, ABC, ZBR, and CEEC in terms of energy efficiency and network throughput.

  8. Charge optimization increases the potency and selectivity of a chorismate mutase inhibitor.

    PubMed

    Mandal, Ajay; Hilvert, Donald

    2003-05-14

    The highest affinity inhibitor for chorismate mutases, a conformationally constrained oxabicyclic dicarboxylate transition state analogue, was modified as suggested by computational charge optimization methods. As predicted, replacement of the C10 carboxylate in this molecule with a nitro group yields an even more potent inhibitor of a chorismate mutase from Bacillus subtilis (BsCM), but the magnitude of the improvement (roughly 3-fold, corresponding to a ΔΔG of -0.7 kcal/mol) is substantially lower than the gain of 2-3 kcal/mol binding free energy anticipated for the reduced desolvation penalty upon binding. Experiments with a truncated version of the enzyme show that the flexible C terminus, which was only partially resolved in the crystal structure and hence omitted from the calculations, provides favorable interactions with the C10 group that partially compensate for its desolvation. Although truncation diminishes the affinity of the enzyme for both inhibitors, the nitro derivative binds 1.7 kcal/mol more tightly than the dicarboxylate, in reasonable agreement with the calculations. Significantly, substitution of the C10 carboxylate with a nitro group also enhances the selectivity of inhibition of BsCM relative to a chorismate mutase from Escherichia coli (EcCM), which has a completely different fold and binding pocket, by 10-fold. These results experimentally verify the utility of charge optimization methods for improving interactions between proteins and low-molecular weight ligands.

  9. A Linked Simulation-Optimization (LSO) Model for Conjunctive Irrigation Management using Clonal Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Islam, Sirajul; Talukdar, Bipul

    2016-09-01

    A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were considered for reducing the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel approach of area reduction, in order to save computational time in simulation. The LSO model was applied to the irrigation command of the Pagladiya Dam Project in Assam, India. With a view to evaluating the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, the CSA was found to consume less computational time than the GA while converging to the optimal solution, owing to the modifications made to it.

  10. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
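    As a numerical (not closed-form) counterpart that makes the LQR ingredients concrete, the snippet below solves the algebraic Riccati equation for a single mass-spring-damper written in state-space form and recovers the state-feedback gain; Q and R are the weighting matrices whose selection the paper addresses, and the numerical values are arbitrary illustrative choices.

    ```python
    # LQR for a mass-spring-damper: solve the continuous algebraic Riccati equation
    # and recover the optimal state-feedback gain u = -Kx.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    m, k, c = 1.0, 4.0, 0.1                       # mass, stiffness, damping (illustrative)
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])  # states: position, velocity
    B = np.array([[0.0], [1.0 / m]])
    Q = np.diag([10.0, 1.0])                      # state weighting
    R = np.array([[0.1]])                         # control-effort weighting

    P = solve_continuous_are(A, B, Q, R)          # solves A'P + PA - PBR^-1B'P + Q = 0
    K = np.linalg.solve(R, B.T @ P)               # optimal gain
    print(K)
    ```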

  11. Dynamic optimization approach for integrated supplier selection and tracking control of single product inventory system with product discount

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Heru Tjahjana, R.

    2017-01-01

    In this paper, we propose a mathematical model in the form of a dynamic/multi-stage optimization to solve an integrated supplier selection and tracking control problem for a single-product inventory system with product discounts. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve the proposed optimization, determining the optimal supplier and the optimal product volume to be purchased from that supplier for each time period, so that the inventory level tracks a reference trajectory given by the decision maker at minimal total cost. We give a numerical experiment to evaluate the proposed model. From the results, the optimal supplier was determined for each time period and the inventory level followed the given reference well.
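    A simplified dynamic-programming sketch of the decision described above is shown below, assuming discrete order quantities and a piecewise-linear discount encoded in each supplier's price function; it illustrates the recursion only and is not the authors' model, and the tracking-cost weight and example numbers are assumptions.

    ```python
    # Toy DP: at each period choose a supplier and order volume so the inventory level
    # tracks a reference trajectory at minimal purchasing-plus-tracking cost.
    import math

    def plan(demand, reference, suppliers, volumes, horizon, track_weight=1.0):
        """suppliers: list of functions price(q) encoding piecewise-linear discounts."""
        best = {}  # memo: (period, inventory) -> (cost-to-go, decisions)

        def solve(t, inv):
            if t == horizon:
                return 0.0, []
            if (t, inv) in best:
                return best[(t, inv)]
            opt = (math.inf, [])
            for s, price in enumerate(suppliers):
                for q in volumes:
                    nxt = inv + q - demand[t]
                    if nxt < 0:                      # no backorders in this sketch
                        continue
                    stage = price(q) + track_weight * (nxt - reference[t]) ** 2
                    future, rest = solve(t + 1, nxt)
                    if stage + future < opt[0]:
                        opt = (stage + future, [(s, q)] + rest)
            best[(t, inv)] = opt
            return opt

        return solve(0, 0)

    # example: supplier 1 offers a unit-price discount on quantities above 50
    suppliers = [lambda q: 5.0 * q,
                 lambda q: 5.5 * q if q <= 50 else 5.5 * 50 + 4.0 * (q - 50)]
    cost, decisions = plan(demand=[40, 60, 30], reference=[10, 10, 10],
                           suppliers=suppliers, volumes=range(0, 101, 10), horizon=3)
    ```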

  12. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from that supplier so that the inventory level is located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.

  13. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend towards automation in the modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of an appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting-tool expert based on his knowledge base or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in data books/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using Mathworks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of an appropriate cutting tool and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  14. A bi-objective optimization approach for exclusive bus lane selection and scheduling design

    NASA Astrophysics Data System (ADS)

    Khoo, Hooi Ling; Eng Teoh, Lay; Meng, Qiang

    2014-07-01

    This study proposes a methodology to solve the integrated problems of selection and scheduling of the exclusive bus lane. The selection problem intends to determine which roads (links) should have a lane reserved for buses while the scheduling problem intends to find the time period of the application. It is formulated as a bi-objective optimization model that aims to minimize the total travel time of non-bus traffic and buses simultaneously. The proposed model formulation is solved by the hybrid non-dominated sorting genetic algorithm with Paramics. The results show that the proposed methodology is workable. Sets of Pareto solutions are obtained indicating that a trade-off between buses and non-bus traffic for the improvement of the bus transit system is necessary when the exclusive bus lane is applied. This allows the engineer to choose the best solutions that could balance the performance of both modes in a multimode transport system environment to achieve a sustainable transport system.

  15. On the incomplete architecture of human ontogeny. Selection, optimization, and compensation as foundation of developmental theory.

    PubMed

    Baltes, P B

    1997-04-01

    Drawing on both evolutionary and ontogenetic perspectives, the basic biological-genetic and social-cultural architecture of human development is outlined. Three principles are involved. First, evolutionary selection pressure predicts a negative age correlation, and therefore, genome-based plasticity and biological potential decrease with age. Second, for growth aspects of human development to extend further into the life span, culture-based resources are required at ever-increasing levels. Third, because of age-related losses in biological plasticity, the efficiency of culture is reduced as life span development unfolds. Joint application of these principles suggests that the life span architecture becomes more and more incomplete with age. Degree of completeness can be defined as the ratio between gains and losses in functioning. Two examples illustrate the implications of the life span architecture proposed. The first is a general theory of development involving the orchestration of 3 component processes: selection, optimization, and compensation. The second considers the task of completing the life course in the sense of achieving a positive balance between gains and losses for all age levels. This goal is increasingly more difficult to attain as human development is extended into advanced old age.

  16. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    Artificial immune systems are among the most recently introduced intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems characterized by multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is first proposed. The recombination operator generates promising candidate solutions that enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. Comparisons with the classic CSA, the CSA with the recombination operator (RCSA), and the CSA with the recombination and modified hypermutation operators (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
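    A minimal sketch of a basic clonal selection loop (clone, hypermutate, select) is given below on the sphere benchmark; it is not the RHCSA variant from the paper, and the population size, clone count, mutation scale, and generation budget are illustrative assumptions.

```python
import random

def sphere(x):
    # Sphere benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def clonal_selection(dim=10, pop=20, clones=5, gens=200, bound=5.0):
    antibodies = [[random.uniform(-bound, bound) for _ in range(dim)]
                  for _ in range(pop)]
    for _ in range(gens):
        antibodies.sort(key=sphere)
        offspring = []
        for rank, ab in enumerate(antibodies):
            # Better-ranked antibodies receive lower-amplitude hypermutation.
            scale = 0.1 + 0.9 * rank / (len(antibodies) - 1)
            for _ in range(clones):
                mutant = [min(bound, max(-bound, v + random.gauss(0, scale)))
                          for v in ab]
                offspring.append(mutant)
        # Keep the best individuals from parents plus mutated clones.
        antibodies = sorted(antibodies + offspring, key=sphere)[:pop]
    return antibodies[0], sphere(antibodies[0])

best, fitness = clonal_selection()
print(fitness)
```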

  17. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    PubMed

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.

  18. Structure based lead optimization approach in discovery of selective DPP4 inhibitors.

    PubMed

    Ghate, Manjunath; Jain, Shailesh V

    2013-05-01

    Diabetes mellitus is a chronic, progressive metabolic disorder that has profound consequences for individuals, families, and society. To date, the main available oral antidiabetic medications target either insulin resistance (metformin, glitazones) or insulin deficiency (sulfonylureas, glinides), but both leave shortfalls in treatment. Newer oral hypoglycemic agents may be used alongside or in place of traditional therapies. The lower risk of hypoglycemic events compared with other insulinotropic or insulin-sensitizing agents makes DPP-4 inhibitors very promising candidates for a more physiological treatment of type-2 diabetes. Only some DPP-4 inhibitors are currently used for the treatment of type 2 diabetes (T2DM), and various others are currently undergoing animal and human testing. A number of catalytically active DPPs distinct from DPP-4 (DPP II, FAP, DPP-8, and DPP-9) have been described that are associated with side effects and toxicity. Discovering potent, selective, and safer drugs in a shorter time frame and at reduced cost requires an innovative approach to designing novel inhibitors. This review article focuses on the status of advanced lead candidates of the DPP group and their binding affinity with the active-site residues of the target structure, which helps in the discovery of potent and selective DPP-4 inhibitors through a lead optimization approach.

  19. Combining heterogeneous features for face detection using multiscale feature selection with binary particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Pan, Hong; Xia, Si-Yu; Jin, Li-Zuo; Xia, Liang-Zheng

    2011-12-01

    We propose a fast multiscale face detector that boosts a set of SVM-based hierarchy classifiers constructed with two heterogeneous features, i.e., Multi-block Local Binary Patterns (MB-LBP) and Speeded Up Robust Features (SURF), at different image resolutions. In this hierarchical architecture, simple and fast classifiers using efficient MB-LBP descriptors remove large parts of the background in the low and intermediate scale layers, so that only the small percentage of background patches that look similar to faces requires a more accurate but slower classifier, which uses the distinctive SURF descriptor to avoid false classifications at the finest scale. By propagating only those patterns that are not classified as background, we quickly decrease the amount of data that needs to be processed. To lessen the training burden of the hierarchy classifier, in each scale layer a feature selection scheme using Binary Particle Swarm Optimization (BPSO) searches the entire feature space and retains the minimum number of discriminative features that give the highest classification rate on a validation set; these selected features are then fed into the SVM classifier. We compared the detection performance of the proposed face detector with other state-of-the-art methods on the CMU+MIT face dataset. Our detector achieves the best overall detection performance. The training time of our algorithm is 60 times faster than the standard Adaboost algorithm. It takes about 70 ms for our face detector to process a 320×240 image, which is comparable to Viola and Jones' detector.
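    Below is a minimal sketch of BPSO-style feature selection with a sigmoid transfer function, scoring each binary mask by SVM cross-validation accuracy on synthetic data; it is not the paper's MB-LBP/SURF pipeline, and the swarm size, iteration count, and coefficients are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    # Cross-validated accuracy of an SVM on the selected feature subset.
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_iters, w, c1, c2 = 10, 20, 0.7, 1.5, 1.5
pos = rng.integers(0, 2, size=(n_particles, X.shape[1]))
vel = rng.normal(0, 1, size=pos.shape)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))            # sigmoid transfer function
    pos = (rng.random(pos.shape) < prob).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "accuracy:", pbest_fit.max())
```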

  20. An Optimization Model for the Selection of Bus-Only Lanes in a City

    PubMed Central

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers’ route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model. PMID:26214001

  1. End-to-End Rate-Distortion Optimized MD Mode Selection for Multiple Description Video Coding

    NASA Astrophysics Data System (ADS)

    Heng, Brian A.; Apostolopoulos, John G.; Lim, Jae S.

    2006-12-01

    Multiple description (MD) video coding can be used to reduce the detrimental effects caused by transmission over lossy packet networks. A number of approaches have been proposed for MD coding, where each provides a different tradeoff between compression efficiency and error resilience. How effectively each method achieves this tradeoff depends on the network conditions as well as on the characteristics of the video itself. This paper proposes an adaptive MD coding approach which adapts to these conditions through the use of adaptive MD mode selection. The encoder in this system is able to accurately estimate the expected end-to-end distortion, accounting for both compression and packet loss-induced distortions, as well as for the bursty nature of channel losses and the effective use of multiple transmission paths. With this model of the expected end-to-end distortion, the encoder selects between MD coding modes in a rate-distortion (R-D) optimized manner to most effectively tradeoff compression efficiency for error resilience. We show how this approach adapts to both the local characteristics of the video and network conditions and demonstrates the resulting gains in performance using an H.264-based adaptive MD video coder.
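    The snippet below is a minimal sketch of rate-distortion optimized mode selection of the kind the abstract describes: each block's mode minimizes a Lagrangian cost combining expected end-to-end distortion (including a packet-loss term) and rate. The per-mode rate/distortion numbers, the loss model, and the lambda value are illustrative assumptions, not the paper's estimator.

```python
def expected_distortion(d_compression, d_loss, p_loss):
    """Blend compression distortion with concealment distortion under loss."""
    return (1.0 - p_loss) * d_compression + p_loss * d_loss

def select_mode(modes, lam, p_loss):
    """modes: dict name -> (rate_bits, d_compression, d_if_lost). Minimize J = D + lam * R."""
    costs = {name: expected_distortion(dc, dl, p_loss) + lam * rate
             for name, (rate, dc, dl) in modes.items()}
    return min(costs, key=costs.get), costs

block_modes = {
    "single_description": (1000, 2.0, 40.0),   # efficient but fragile under loss
    "md_two_descriptions": (1400, 3.5, 12.0),  # redundant but loss-resilient
}
for p in (0.0, 0.05, 0.2):
    best, _ = select_mode(block_modes, lam=0.01, p_loss=p)
    print(f"loss={p:.2f} -> {best}")
```

    As the loss probability rises, the selection switches from the single-description mode to the more resilient multiple-description mode, which mirrors the trade-off the paper exploits.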

  2. Process optimization for lattice-selective wet etching of crystalline silicon structures

    NASA Astrophysics Data System (ADS)

    Dixson, Ronald G.; Guthrie, William F.; Allen, Richard A.; Orji, Ndubuisi G.; Cresswell, Michael W.; Murabito, Christine E.

    2016-01-01

    Lattice-selective etching of silicon is used in a number of applications, but it is particularly valuable in those for which the lattice-defined sidewall angle can be beneficial to the functional goals. A relatively small but important niche application is the fabrication of tip characterization standards for critical dimension atomic force microscopes (CD-AFMs). CD-AFMs are commonly used as reference tools for linewidth metrology in semiconductor manufacturing. Accurate linewidth metrology using CD-AFM, however, is critically dependent upon calibration of the tip width. Two national metrology institutes and at least two commercial vendors have explored the development of tip calibration standards using lattice-selective etching of crystalline silicon. The National Institute of Standards and Technology standard of this type is called the single crystal critical dimension reference material. These specimens, which are fabricated using a lattice-plane-selective etch on (110) silicon, exhibit near vertical sidewalls and high uniformity and can be used to calibrate CD-AFM tip width to a standard uncertainty of less than 1 nm. During the different generations of this project, we evaluated variations of the starting material and process conditions. Some of our starting materials required a large etch bias to achieve the desired linewidths. During the optimization experiment described in this paper, we found that for potassium hydroxide etching of the silicon features, it was possible to independently tune the target linewidth and minimize the linewidth nonuniformity. Consequently, this process is particularly well suited for small-batch fabrication of CD-AFM linewidth standards.

  3. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  4. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  5. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied to an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method is called the chaotic fruit fly optimization algorithm (CIFOA)-SVM and has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.

  6. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm’s performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied to an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method is called the chaotic fruit fly optimization algorithm (CIFOA)-SVM and has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096

  7. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select

    PubMed Central

    Seo, Do Chan; Ladoni, Moslem; Brunk, Eric; Becker, Mark W.

    2016-01-01

    Purpose: Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. Objective: Measure the effect of graphic elements (boxing information, grouping information, symbol use and color-coding) to optimize a label for comparison with those typical of commercial medical devices. Design: Participants viewed 54 trials on a computer screen. Trials comprised two labels that were identical with regard to graphics, but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select the label according to a given criterion (e.g., latex containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Participants: Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences, and using a targeted e-mail of AST members. Results: Symbol presence, color coding and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit regarding the probability of correct choice ((P<0.0001) LSM; UCL, LCL: 97.3%; 98.4%, 95.5%)), as compared to the two labels we created based on commercial designs (92.0%; 94.7%, 87.9% and 89.8%; 93.0%, 85.3%) and time to selection. Conclusions: Our study provides data regarding design factors, namely: color coding, symbol use and grouping of critical information that can be used to significantly enhance

  8. Neural network cascade optimizes microRNA biomarker selection for nasopharyngeal cancer prognosis.

    PubMed

    Zhu, Wenliang; Kan, Xuan

    2014-01-01

    MicroRNAs (miRNAs) have been shown to be promising biomarkers in predicting cancer prognosis. However, inappropriate or poorly optimized processing and modeling of miRNA expression data can negatively affect prediction performance. Here, we propose a holistic solution for miRNA biomarker selection and prediction model building. This work introduces the use of a neural network cascade, a cascaded constitution of small artificial neural network units, for evaluating miRNA expression and patient outcome. A miRNA microarray dataset of nasopharyngeal carcinoma was retrieved from Gene Expression Omnibus to illustrate the methodology. Results indicated a nonlinear relationship between miRNA expression and patient death risk, implying that direct comparison of expression values is inappropriate. However, this method performs transformation of miRNA expression values into a miRNA score, which linearly measures death risk. Spearman correlation was calculated between miRNA scores and survival status for each miRNA. Finally, a nine-miRNA signature was optimized to predict death risk after nasopharyngeal carcinoma by establishing a neural network cascade consisting of 13 artificial neural network units. Area under the ROC was 0.951 for the internal validation set and had a prediction accuracy of 83% for the external validation set. In particular, the established neural network cascade was found to have strong immunity against noise interference that disturbs miRNA expression values. This study provides an efficient and easy-to-use method that aims to maximize clinical application of miRNAs in prognostic risk assessment of patients with cancer.
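    As a minimal sketch of one step the abstract describes, the code below ranks miRNAs by the Spearman correlation between a per-miRNA score and patient outcome and keeps the top nine as signature candidates; the score matrix and outcomes are random placeholders, not the nasopharyngeal carcinoma dataset, and the neural network cascade itself is not reproduced.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_patients, n_mirnas = 60, 30
scores = rng.normal(size=(n_patients, n_mirnas))   # stand-in for miRNA scores
outcome = rng.integers(0, 2, size=n_patients)      # 0 = survived, 1 = died

# Absolute Spearman correlation of each miRNA score with outcome.
correlations = [abs(spearmanr(scores[:, j], outcome)[0]) for j in range(n_mirnas)]
top = np.argsort(correlations)[::-1][:9]           # nine-miRNA signature size
print("candidate signature (miRNA indices):", top)
```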

  9. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    NASA Astrophysics Data System (ADS)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered as one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However the influence of the different process parameters on mechanical workpiece properties is still poorly studied, leading to the fact that further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments are implemented to improve the understanding of the SLS limitations and advantages on an educational level. Experiments are based on two different workstations, used to teach students the fundamentals of SLS. First of all a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the used material in accordance with varied process parameters to analyze a single-layered test piece. Second of all the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces in dependence on various process parameters. Finally quality attributes are tested including warpage, dimension accuracy or tensile strength. For dimension measurements and evaluation of the surface structure a telecentric lens in combination with a camera is used. A tensile test machine allows testing of the tensile strength and the interpreting of stress-strain curves. The developed laboratory experiments are suitable to teach students the influence of processing parameters. In this context they will be able to optimize the input parameters depending on the component which has to be manufactured and to increase the overall quality of the final workpiece.

  10. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in the lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii

  11. Feature selection for linear SVMs under uncertain data: robust optimization based on difference of convex functions algorithms.

    PubMed

    Le Thi, Hoai An; Vo, Xuan Thanh; Pham Dinh, Tao

    2014-11-01

    In this paper, we consider the problem of feature selection for linear SVMs on uncertain data, a situation that is inherently prevalent in almost all datasets. Using principles of Robust Optimization, we propose robust schemes to handle data under an ellipsoidal model and a box model of uncertainty. The difficulty in treating the ℓ0-norm in the feature selection problem is overcome by using appropriate approximations together with Difference of Convex functions (DC) programming and DC Algorithms (DCA). The computational results show that the proposed robust optimization approaches are superior to a traditional approach in immunizing against perturbations of the data.

  12. Protein purification using chromatography: selection of type, modelling and optimization of operating conditions.

    PubMed

    Asenjo, J A; Andrews, B A

    2009-01-01

    To achieve a high level of purity in the purification of recombinant proteins for therapeutic or analytical application, it is necessary to use several chromatographic steps. There is a range of techniques available including anion and cation exchange, which can be carried out at different pHs, hydrophobic interaction chromatography, gel filtration and affinity chromatography. In the case of a complex mixture of partially unknown proteins or a clarified cell extract, there are many different routes one can take in order to choose the minimum and most efficient number of purification steps to achieve a desired level of purity (e.g. 98%, 99.5% or 99.9%). This review shows how an initial 'proteomic' characterization of the complex mixture of target protein and protein contaminants can be used to select the most efficient chromatographic separation steps in order to achieve a specific level of purity with a minimum number of steps. The chosen methodology was implemented in a computer-based Expert System. Two algorithms were developed: the first was used to select the most efficient purification method to separate a protein from its contaminants, based on the physicochemical properties of the protein product and the protein contaminants; the second was used to predict the number and concentration of contaminants after each separation as well as the protein product purity. The application of the Expert System approach was experimentally tested and validated with a mixture of four proteins, and the experimental validation was also carried out with a supernatant of Bacillus subtilis producing a recombinant beta-1,3-glucanase. Once the type of chromatography is chosen, optimization of the operating conditions is essential. Chromatographic elution curves for a three-protein mixture (alpha-lactoalbumin, ovalbumin and beta-lactoglobulin), carried out under different flow rates and ionic strength conditions, were simulated using two different mathematical

  13. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow
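    A minimal sketch of the training-sample allocation rule described above follows: allocate a 20,000-pixel budget proportionally to class occurrence while enforcing a per-class minimum of 600 and maximum of 8,000 pixels. The class names and area fractions are illustrative, and the clamped totals may deviate slightly from the overall budget.

```python
def allocate_training_pixels(class_fractions, total=20000, floor=600, cap=8000):
    """class_fractions: dict class_name -> fraction of mapped area (sums to 1)."""
    return {c: min(cap, max(floor, round(f * total)))
            for c, f in class_fractions.items()}

fractions = {"forest": 0.45, "cropland": 0.30, "developed": 0.12,
             "water": 0.08, "wetland": 0.04, "barren": 0.01}
print(allocate_training_pixels(fractions))
```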

  14. Analysis of boutique arrays: a universal method for the selection of the optimal data normalization procedure.

    PubMed

    Uszczyńska, Barbara; Zyprych-Walczak, Joanna; Handschuh, Luiza; Szabelska, Alicja; Kaźmierczak, Maciej; Woronowicz, Wiesława; Kozłowski, Piotr; Sikorski, Michał M; Komarnicki, Mieczysław; Siatkowski, Idzi; Figlerowicz, Marek

    2013-09-01

    DNA microarrays, which are among the most popular genomic tools, are widely applied in biology and medicine. Boutique arrays, which are small, spotted, dedicated microarrays, constitute an inexpensive alternative to whole-genome screening methods. The data extracted from each microarray-based experiment must be transformed and processed prior to further analysis to eliminate any technical bias. The normalization of the data is the most crucial step of microarray data pre-processing and this process must be carefully considered as it has a profound effect on the results of the analysis. Several normalization algorithms have been developed and implemented in data analysis software packages. However, most of these methods were designed for whole-genome analysis. In this study, we tested 13 normalization strategies (ten for double-channel data and three for single-channel data) available on R Bioconductor and compared their effectiveness in the normalization of four boutique array datasets. The results revealed that boutique arrays can be successfully normalized using standard methods, but not every method is suitable for each dataset. We also suggest a universal seven-step workflow that can be applied for the selection of the optimal normalization procedure for any boutique array dataset. The described workflow enables the evaluation of the investigated normalization methods based on the bias and variance values for the control probes, a differential expression analysis and a receiver operating characteristic curve analysis. The analysis of each component results in a separate ranking of the normalization methods. A combination of the ranks obtained from all the normalization procedures facilitates the selection of the most appropriate normalization method for the studied dataset and determines which methods can be used interchangeably.

  15. A data driven model for optimal orthosis selection in children with cerebral palsy.

    PubMed

    Ries, Andrew J; Novacheck, Tom F; Schwartz, Michael H

    2014-09-01

    A statistical orthosis selection model was developed using the Random Forest Algorithm (RFA). The model's performance and potential clinical benefit were evaluated. The model predicts which of five orthosis designs - solid (SAFO), posterior leaf spring (PLS), hinged (HAFO), supra-malleolar (SMO), or foot orthosis (FO) - will provide the best gait outcome for individuals with diplegic cerebral palsy (CP). Gait outcome was defined as the change in Gait Deviation Index (GDI) between walking while wearing an orthosis and walking barefoot (ΔGDI=GDIOrthosis-GDIBarefoot). Model development was carried out using retrospective data from 476 individuals who wore one of the five orthosis designs bilaterally. Clinical benefit was estimated by predicting the optimal orthosis and ΔGDI for 1016 individuals (age: 12.6 (6.7) years), 540 of whom did not have an existing orthosis prescription. Among limbs with an orthosis, the model agreed with the prescription only 14% of the time. For 56% of limbs without an orthosis, the model agreed that no orthosis was expected to provide benefit. Using the current standard of care orthosis (i.e. existing orthosis prescriptions), ΔGDI is only +0.4 points on average. Using the orthosis prediction model, average ΔGDI for orthosis users was estimated to improve to +5.6 points. The results of this study suggest that an orthosis selection model derived from the RFA can significantly improve outcomes from orthosis use for the diplegic CP population. Further validation of the model is warranted using data from other centers and a prospective study.
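    The snippet below is a minimal sketch of the idea, not the clinical model: fit a random forest that predicts ΔGDI for a patient/orthosis combination, then recommend the design with the highest predicted gain. The gait features, outcomes, and one-hot encoding of designs are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

designs = ["SAFO", "PLS", "HAFO", "SMO", "FO"]
rng = np.random.default_rng(2)

# Synthetic training set: [gait features..., one-hot design] -> delta GDI.
n, n_feat = 300, 6
gait = rng.normal(size=(n, n_feat))
design_idx = rng.integers(0, len(designs), size=n)
one_hot = np.eye(len(designs))[design_idx]
X = np.hstack([gait, one_hot])
y = rng.normal(size=n)  # placeholder outcomes

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def recommend(patient_features):
    """Return the design with the largest predicted delta GDI for one patient."""
    rows = [np.concatenate([patient_features, np.eye(len(designs))[i]])
            for i in range(len(designs))]
    preds = model.predict(np.array(rows))
    return designs[int(np.argmax(preds))], preds

print(recommend(rng.normal(size=n_feat)))
```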

  16. Selecting Optimal Peptides for Targeted Proteomic Experiments in Human Plasma Using In Vitro Synthesized Proteins as Analytical Standards.

    PubMed

    Bollinger, James G; Stergachis, Andrew B; Johnson, Richard S; Egertson, Jarrett D; MacCoss, Michael J

    2016-01-01

    In targeted proteomics, the development of robust methodologies is dependent upon the selection of a set of optimal peptides for each protein-of-interest. Unfortunately, predicting which peptides and respective product ion transitions provide the greatest signal-to-noise ratio in a particular assay matrix is complicated. Using in vitro synthesized proteins as analytical standards, we report here an empirically driven method for the selection of said peptides in a human plasma assay matrix.

  17. Optimal selection of piezoelectric substrates and crystal cuts for SAW-based pressure and temperature sensors.

    PubMed

    Zhang, Xiangwen; Wang, Fei-Yue; Li, Li

    2007-06-01

    In this paper, the perturbation method is used to study the velocity shift of surface acoustic waves (SAW) caused by surface pressure and temperature variations of piezoelectric substrates. The effects of pressure and temperature on the elastic, piezoelectric, and dielectric constants of piezoelectric substrates are fully considered, as well as the initial stresses and boundary conditions. First, frequency pressure/temperature coefficients are introduced to reflect the relationship between the SAW resonant frequency and the pressure/temperature of the piezoelectric substrates. Second, delay pressure/temperature coefficients are introduced to reflect the relationship between the SAW delay time/phase and the pressure and temperature of SAW delay line-based sensors. An objective function for performance evaluation of piezoelectric substrates is then defined in terms of their effective SAW coupling coefficients, power flow angles (PFA), acoustic propagation losses, and pressure and temperature coefficients. Finally, optimal selections of piezoelectric substrates and crystal cuts for SAW-based pressure, temperature, and pressure/temperature sensors are derived by calculating the corresponding objective function values over the range of X-cut, Y-cut, Z-cut, and rotated Y-cut quartz, lithium niobate, and lithium tantalate crystals in different propagation directions.
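    A minimal sketch of an objective function of the kind described above is given below, scoring a candidate substrate/cut by combining its coupling coefficient, power flow angle, propagation loss, and pressure/temperature coefficients. The weights, sign conventions, and the pressure/temperature coefficient values in the candidate table are illustrative assumptions, not the paper's calculations.

```python
def pressure_sensor_objective(cut, w=(1.0, 0.5, 0.5, 2.0, 1.0)):
    """cut: dict with k2 (%), pfa (deg), loss, cp, ct entries."""
    return (w[0] * cut["k2"]          # strong electromechanical coupling
            - w[1] * abs(cut["pfa"])  # small power flow angle
            - w[2] * cut["loss"]      # low propagation loss
            + w[3] * abs(cut["cp"])   # high pressure sensitivity
            - w[4] * abs(cut["ct"]))  # low temperature cross-sensitivity

candidates = {  # hypothetical substrate/cut entries with illustrative values
    "ST-cut quartz": {"k2": 0.11, "pfa": 0.0, "loss": 0.02, "cp": 1.0, "ct": 0.1},
    "128Y-X LiNbO3": {"k2": 5.40, "pfa": 0.0, "loss": 0.05, "cp": 0.8, "ct": 1.5},
    "36Y-X LiTaO3":  {"k2": 4.70, "pfa": 0.0, "loss": 0.04, "cp": 0.9, "ct": 0.9},
}
best = max(candidates, key=lambda c: pressure_sensor_objective(candidates[c]))
print(best)
```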

  18. Selection of plants for optimization of vegetative filter strips treating runoff from turfgrass.

    PubMed

    Smith, Katy E; Putnam, Raymond A; Phaneuf, Clifford; Lanza, Guy R; Dhankher, Om P; Clark, John M

    2008-01-01

    Runoff from turf environments, such as golf courses, is of increasing concern due to the associated chemical contamination of lakes, reservoirs, rivers, and ground water. Pesticide runoff due to fungicides, herbicides, and insecticides used to maintain golf courses in acceptable playing condition is a particular concern. One possible approach to mitigate such contamination is through the implementation of effective vegetative filter strips (VFS) on golf courses and other recreational turf environments. The objective of the current study was to screen ten aesthetically acceptable plant species for their ability to remove four commonly-used and degradable pesticides: chlorpyrifos (CP), chlorothalonil (CT), pendimethalin (PE), and propiconazole (PR) from soil in a greenhouse setting, thus providing invaluable information as to the species composition that would be most efficacious for use in VFS surrounding turf environments. Our results revealed that blue flag iris (Iris versicolor) (76% CP, 94% CT, 48% PE, and 33% PR were lost from soil after 3 mo of plant growth), eastern gama grass (Tripsacum dactyloides) (47% CP, 95% CT, 17% PE, and 22% PR were lost from soil after 3 mo of plant growth), and big blue stem (Andropogon gerardii) (52% CP, 91% CT, 19% PE, and 30% PR were lost from soil after 3 mo of plant growth) were excellent candidates for the optimization of VFS as buffer zones abutting turf environments. Blue flag iris was most effective at removing selected pesticides from soil and had the highest aesthetic value of the plants tested.

  19. EEG Channel Selection Using Particle Swarm Optimization for the Classification of Auditory Event-Related Potentials

    PubMed Central

    Hokari, Haruhide

    2014-01-01

    Brain-machine interfaces (BMI) rely on the accurate classification of event-related potentials (ERPs) and their performance greatly depends on the appropriate selection of classifier parameters and features from dense-array electroencephalography (EEG) signals. Moreover, in order to achieve a portable and more compact BMI for practical applications, it is also desirable to use a system capable of accurate classification using information from as few EEG channels as possible. In the present work, we propose a method for classifying P300 ERPs using a combination of Fisher Discriminant Analysis (FDA) and a multiobjective hybrid real-binary Particle Swarm Optimization (MHPSO) algorithm. Specifically, the algorithm searches for the set of EEG channels and classifier parameters that simultaneously maximize the classification accuracy and minimize the number of used channels. The performance of the method is assessed through offline analyses on datasets of auditory ERPs from sound discrimination experiments. The proposed method achieved a higher classification accuracy than that achieved by traditional methods while also using fewer channels. It was also found that the number of channels used for classification can be significantly reduced without greatly compromising the classification accuracy. PMID:24982944

  20. Wind selection and drift compensation optimize migratory pathways in a high-flying moth.

    PubMed

    Chapman, Jason W; Reynolds, Don R; Mouritsen, Henrik; Hill, Jane K; Riley, Joe R; Sivell, Duncan; Smith, Alan D; Woiwod, Ian P

    2008-04-08

    Numerous insect species undertake regular seasonal migrations in order to exploit temporary breeding habitats [1]. These migrations are often achieved by high-altitude windborne movement at night [2-6], facilitating rapid long-distance transport, but seemingly at the cost of frequent displacement in highly disadvantageous directions (the so-called "pied piper" phenomenon [7]). This has led to uncertainty about the mechanisms migrant insects use to control their migratory directions [8, 9]. Here we show that, far from being at the mercy of the wind, nocturnal moths have unexpectedly complex behavioral mechanisms that guide their migratory flight paths in seasonally favorable directions. Using entomological radar, we demonstrate that free-flying individuals of the migratory noctuid moth Autographa gamma actively select fast, high-altitude airstreams moving in a direction that is highly beneficial for their autumn migration. They also exhibit common orientation close to the downwind direction, thus maximizing the rectilinear distance traveled. Most unexpectedly, we find that when winds are not closely aligned with the moth's preferred heading (toward the SSW), they compensate for cross-wind drift, thus increasing the probability of reaching their overwintering range. We conclude that nocturnally migrating moths use a compass and an inherited preferred direction to optimize their migratory track.

  1. A Geographic Analysis of Optimal Signage Location Selection in a Scenic Area

    NASA Astrophysics Data System (ADS)

    Ruan, Ling; Long, Ying; Zhang, Ling; Wu, Xiao Ling

    2016-06-01

    As an important part of scenic area infrastructure services, the signage guiding system plays an indispensable role in guiding the way and improving the quality of the tourism experience. This paper proposes a method for optimal signage location selection and direction content design in a scenic area based on geographic analysis. The objective of the research is to provide the best solution for arranging a limited number of guiding boards in a tourism area so that the way to any scenic spot from any entrance is shown. There are four steps to achieve this objective. First, the spatial distribution of the junctions of the scenic roads, the passageways, and the scenic spots is analyzed. Then, the number of times each road intersection lies on the shortest paths between all entrances and all scenic spots is calculated. Next, combining this with the grade of the scenic roads and scenic spots, the importance of each road intersection is estimated quantitatively. Finally, according to the importance of all road intersections, the most suitable locations for signage guiding boards are provided. In addition, the method is applied to the Ming Tomb scenic area in China and the result is compared with the existing signage layout.
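    A minimal sketch of the counting step described above is shown below: for every entrance/scenic-spot pair, trace the shortest path through the road network and count how often each junction is traversed; junctions with the highest counts become candidate signage locations. The toy network, node names, and edge weights are illustrative assumptions.

```python
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("entrance_1", "junction_A", 1.0), ("entrance_2", "junction_B", 1.0),
    ("junction_A", "junction_B", 0.5), ("junction_A", "junction_C", 1.2),
    ("junction_B", "junction_C", 0.8), ("junction_C", "spot_1", 0.6),
    ("junction_B", "spot_2", 1.1),
])
entrances = ["entrance_1", "entrance_2"]
spots = ["spot_1", "spot_2"]

counts = {}
for src, dst in itertools.product(entrances, spots):
    path = nx.shortest_path(G, src, dst, weight="weight")
    for node in path:
        if node.startswith("junction"):
            counts[node] = counts.get(node, 0) + 1

# Junctions crossed most often are the strongest candidates for guiding boards.
print(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))
```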

  2. Personalizing colon cancer adjuvant therapy: selecting optimal treatments for individual patients.

    PubMed

    Dienstmann, Rodrigo; Salazar, Ramon; Tabernero, Josep

    2015-06-01

    For more than three decades, postoperative chemotherapy-initially fluoropyrimidines and more recently combinations with oxaliplatin-has reduced the risk of tumor recurrence and improved survival for patients with resected colon cancer. Although universally recommended for patients with stage III disease, there is no consensus about the survival benefit of postoperative chemotherapy in stage II colon cancer. The most recent adjuvant clinical trials have not shown any value for adding targeted agents, namely bevacizumab and cetuximab, to standard chemotherapies in stage III disease, despite improved outcomes in the metastatic setting. However, biomarker analyses of multiple studies strongly support the feasibility of refining risk stratification in colon cancer by factoring in molecular characteristics with pathologic tumor staging. In stage II disease, for example, microsatellite instability supports observation after surgery. Furthermore, the value of BRAF or KRAS mutations as additional risk factors in stage III disease is greater when microsatellite status and tumor location are taken into account. Validated predictive markers of adjuvant chemotherapy benefit for stage II or III colon cancer are lacking, but intensive research is ongoing. Recent advances in understanding the biologic hallmarks and drivers of early-stage disease as well as the micrometastatic environment are expected to translate into therapeutic strategies tailored to select patients. This review focuses on the pathologic, molecular, and gene expression characterizations of early-stage colon cancer; new insights into prognostication; and emerging predictive biomarkers that could ultimately help define the optimal adjuvant treatments for patients in routine clinical practice.

  3. Discrepancy among the synonymous codons with respect to their selection as optimal codon in bacteria

    PubMed Central

    Satapathy, Siddhartha Sankar; Powdel, Bhesh Raj; Buragohain, Alak Kumar; Ray, Suvendra Kumar

    2016-01-01

    The different triplets encoding the same amino acid, termed synonymous codons, are not equally abundant in a genome. Factors such as G + C% and tRNA are known to influence their abundance in a genome. However, the order of the nucleotides in each codon per se might also be a factor affecting its abundance values. Of the synonymous codons for specific amino acids, some are preferentially used in highly expressed genes; these are referred to as the ‘optimal codons’ (OCs). In this study, we compared OCs of the 18 amino acids in 221 species of bacteria. It is observed that there is an amino acid-specific influence on the selection of OCs. There is also an influence of phylogeny on the choice of OCs for some amino acids such as Glu, Gln, Lys and Leu. The phenomenon of codon bias is also supported by comparative studies of the abundance values of synonymous codons with the same G + C content. The order of the nucleotides in the triplet codon is therefore likely also involved in the phenomenon of codon usage bias in organisms. PMID:27426467

  4. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine.

    PubMed

    Xi, Maolong; Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun

    2016-01-01

    This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.

  5. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine

    PubMed Central

    Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun

    2016-01-01

    This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms. PMID:27642363

  6. A novel low-noise linear-in-dB intermediate frequency variable-gain amplifier for DRM/DAB tuners

    NASA Astrophysics Data System (ADS)

    Keping, Wang; Zhigong, Wang; Jianzheng, Zhou; Xuemei, Lei; Mingzhu, Zhou

    2009-03-01

    A broadband CMOS intermediate frequency (IF) variable-gain amplifier (VGA) for DRM/DAB tuners is presented. The VGA comprises two cascaded stages: one for noise canceling and the other for signal summing. The chip is fabricated in a standard 0.18 μm 1P6M RF CMOS process of SMIC. Measured results show a good linear-in-dB gain characteristic over a 28 dB dynamic gain range of -10 to 18 dB. It can operate in the frequency range of 30-700 MHz and consumes 27 mW at a 1.8 V supply with the on-chip test buffer. The minimum noise figure is only 3.1 dB at maximum gain, and the input-referred 1 dB gain compression point at the minimum gain is -3.9 dBm.

  7. Comparing the Selection and Placement of Best Management Practices in Improving Water Quality Using a Multiobjective Optimization and Targeting Method

    PubMed Central

    Chiang, Li-Chi; Chaubey, Indrajeet; Maringanti, Chetan; Huang, Tao

    2014-01-01

    Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, which minimize pollutant losses and the BMP placement areas. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (Soil and Water Assessment Tool—SWAT). For the targeting method, an optimum BMP option was implemented in critical areas in the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, which consist of grazing management, vegetated filter strips (VFS), and poultry litter applications were considered. The results showed that the optimization is less effective when vegetated filter strips (VFS) are not considered, and it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method. PMID:24619160

  8. Insights into the Experiences of Older Workers and Change: Through the Lens of Selection, Optimization, and Compensation

    ERIC Educational Resources Information Center

    Unson, Christine; Richardson, Margaret

    2013-01-01

    Purpose: The study examined the barriers faced, the goals selected, and the optimization and compensation strategies of older workers in relation to career change. Method: Thirty open-ended interviews, 12 in the United States and 18 in New Zealand, were conducted, recorded, transcribed verbatim, and analyzed for themes. Results: Barriers to…

  9. Optimization of an indazole series of selective estrogen receptor degraders: Tumor regression in a tamoxifen-resistant breast cancer xenograft.

    PubMed

    Govek, Steven P; Nagasawa, Johnny Y; Douglas, Karensa L; Lai, Andiliy G; Kahraman, Mehmet; Bonnefous, Celine; Aparicio, Anna M; Darimont, Beatrice D; Grillot, Katherine L; Joseph, James D; Kaufman, Joshua A; Lee, Kyoung-Jin; Lu, Nhin; Moon, Michael J; Prudente, Rene Y; Sensintaffar, John; Rix, Peter J; Hager, Jeffrey H; Smith, Nicholas D

    2015-11-15

    Selective estrogen receptor degraders (SERDs) have shown promise for the treatment of ER+ breast cancer. Disclosed herein is the continued optimization of our indazole series of SERDs. Exploration of ER degradation and antagonism in vitro followed by in vivo antagonism and oral exposure culminated in the discovery of indazoles 47 and 56, which induce tumor regression in a tamoxifen-resistant breast cancer xenograft.

  10. Optimizing selective cutting strategies for maximum carbon stocks and yield of Moso bamboo forest using BIOME-BGC model.

    PubMed

    Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing

    2017-04-15

    The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was improved to suit managed Moso bamboo forests, which was adapted to include age structure, specific ecological processes and management measures of Moso bamboo forest. A field selective cutting experiment was done in nine plots with three cutting intensities (high-intensity, moderate-intensity and low-intensity) during 2010-2013, and biomass of these plots was measured for model validation. Then four selective cutting scenarios were simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages and intensities. The improved model matched the observed aboveground carbon density and yield of different plots, with a range of relative error from 9.83% to 15.74%. The results of different selective cutting scenarios suggested that the optimal selective cutting measure should be cutting 30% culms of age 6, 80% culms of age 7, and all culms thereafter (above age 8) in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests.

  11. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  12. Automatised selection of load paths to construct reduced-order models in computational damage micromechanics: from dissipation-driven random selection to Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Goury, Olivier; Amsallem, David; Bordas, Stéphane Pierre Alain; Liu, Wing Kam; Kerfriden, Pierre

    2016-08-01

    In this paper, we present new reliable model order reduction strategies for computational micromechanics. The difficulties lie mainly in the high dimensionality of the parameter space represented by any load path applied to the representative volume element. Particular attention is paid to the challenge of selecting an exhaustive snapshot set. This is treated first by random sampling of energy-dissipating load paths and then, in a more advanced way, by Bayesian optimization combined with an interlocked division of the parameter space. Results show that we can ensure the selection of an exhaustive snapshot set from which a reliable reduced-order model can be built.
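    The Bayesian-optimization stage described above can be sketched in a few lines, assuming a hypothetical scalar score for each candidate load path (a placeholder rom_error standing in for the dissipation- or error-based criterion) and a Gaussian-process surrogate with an upper-confidence-bound rule; this is an illustration, not the interlocked parameter-space division used in the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        # Placeholder black-box score: error of the current reduced-order model when
        # exercised on the load path parameterized by x (hypothetical, for illustration).
        def rom_error(x):
            return float(np.sin(3 * x[0]) + 0.1 * x[1] ** 2)

        rng = np.random.default_rng(0)
        dim, n_init, n_iter = 2, 5, 20
        X = rng.uniform(-1.0, 1.0, size=(n_init, dim))       # random initial load paths
        y = np.array([rom_error(x) for x in X])

        for _ in range(n_iter):
            gp = GaussianProcessRegressor().fit(X, y)
            cand = rng.uniform(-1.0, 1.0, size=(256, dim))   # candidate load paths
            mu, sigma = gp.predict(cand, return_std=True)
            x_next = cand[np.argmax(mu + 2.0 * sigma)]       # UCB: large predicted error, high uncertainty
            X = np.vstack([X, x_next])                       # enrich the snapshot set here
            y = np.append(y, rom_error(x_next))

        print("load paths selected for the snapshot set:", len(X))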

  13. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery

    PubMed Central

    2011-01-01

    The central nervous system (CNS) is the major area that is affected by aging. Alzheimer’s disease (AD), Parkinson’s disease (PD), brain cancer, and stroke are the CNS diseases that will cost trillions of dollars for their treatment. Achievement of appropriate blood–brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many of the non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges and to aid in the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and the chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and the property modification strategy during the lead optimization process. A list of substructural units that may be useful for CNS drug design was also provided here. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å2 (25–60 Å2), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740–970 Å3, (vi) solvent accessible surface area of 460–580 Å2, and (vii) positive QikProp parameter CNS. The ranges within parentheses may be used during lead optimization. One violation to this proposed profile may be acceptable. The
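    The numeric guidelines quoted above translate directly into a simple rule-based filter. A minimal sketch is given below, with hypothetical property names supplied by the caller; it merely counts violations of the stated ranges and applies the one-violation tolerance, and is not the authors' classification tree.

        def cns_profile_violations(props):
            """Count violations of the CNS property guidelines listed in the abstract.

            `props` is a dict with hypothetical keys: tpsa (A^2), n_nitrogens,
            n_linear_chains, n_polar_h, volume (A^3), sasa (A^2), qikprop_cns.
            """
            rules = [
                props["tpsa"] < 76,            # topological PSA < 76 A^2
                props["n_nitrogens"] >= 1,     # at least one nitrogen
                props["n_linear_chains"] < 7,  # fewer than seven linear chains outside rings
                props["n_polar_h"] < 3,        # fewer than three polar hydrogen atoms
                740 <= props["volume"] <= 970, # volume of 740-970 A^3
                460 <= props["sasa"] <= 580,   # solvent accessible surface area of 460-580 A^2
                props["qikprop_cns"] > 0,      # positive QikProp CNS parameter
            ]
            return sum(not ok for ok in rules)

        # One violation of the profile may still be acceptable, per the abstract.
        candidate = {"tpsa": 55, "n_nitrogens": 2, "n_linear_chains": 3,
                     "n_polar_h": 1, "volume": 820, "sasa": 500, "qikprop_cns": 1}
        print("CNS-like" if cns_profile_violations(candidate) <= 1 else "non-CNS-like")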

  14. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy suffers from a small-sample-size problem, which commonly manifests as overfitting when the number of training samples is smaller than the dimensionality of the data, because the time, cost, and human resources available for field surveys are limited. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts caused by instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: sparsity addresses the small-sample-size problem through dimensionality reduction, and grouping mitigates the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusing the spectral information with spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
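    For reference, the fused lasso criterion underlying this band selection can be written, in generic notation not taken from the paper, as a least-squares loss with two penalties: an l1 term that drives most band coefficients to zero (the dimensionality reduction) and a fusion term that encourages neighboring bands to share coefficients (the grouping that mitigates noise and peak shifts):

        \hat{\beta} = \arg\min_{\beta}\; \frac{1}{2}\sum_{i=1}^{n}\Big(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^{2}
                      + \lambda_1 \sum_{j=1}^{p} \lvert\beta_j\rvert
                      + \lambda_2 \sum_{j=2}^{p} \lvert\beta_j - \beta_{j-1}\rvert

    Here y_i is the measured biomass of sample i, x_{ij} the reflectance in band j, and λ1, λ2 the sparsity and fusion weights, typically chosen by cross-validation.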

  15. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    PubMed Central

    Stephanie, Robinson; Margie, Lachman; Elizabeth, Rickenbach

    2015-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to aging, low cognitive resources, and daily stress in relation to everyday memory problems. We examined whether SOC usage varied by age and level of constraints, and if the relationship between resources and memory problems was mitigated by SOC usage. A daily diary paradigm was used to explore day-to-day fluctuations in these relationships. Participants (n=145, ages 22 to 94) completed a baseline interview and a daily diary for seven consecutive days. Multilevel models examined between- and within-person relationships between daily SOC use, daily stressors, cognitive resources, and everyday memory problems. Middle-aged adults had the highest SOC usage, although older adults also showed high SOC use if they had high cognitive resources. More SOC strategies were used on high stress compared to low stress days. Moreover, the relationship between daily stress and memory problems was buffered by daily SOC use, such that on high-stress days, those who used more SOC strategies reported fewer memory problems than participants who used fewer SOC strategies. The paradox of resources and SOC use can be qualified by the type of resource-limitation. Deficits in global resources were not tied to SOC usage or benefits. Conversely, under daily constraints tied to stress, the use of SOC increased and led to fewer memory problems. PMID:26997686

  16. Controversial issues of optimal surgical timing and patient selection in the treatment planning of otosclerosis.

    PubMed

    Shiao, An-Suey; Kuo, Chin-Lung; Cheng, Hsiu-Lien; Wang, Mao-Che; Chu, Chia-Huei

    2014-05-01

    The aim of this study was to analyze the impact of clinical factors on the outcomes of otosclerosis surgery and support patients' access to evidence-based information in pre-operative counseling to optimize their choices. A total of 109 ears in 93 patients undergoing stapes surgery in a tertiary referral center were included. Variables with a potential impact on hearing outcomes were recorded, with an emphasis on factors that were readily available pre-operatively. Hearing success was defined as a post-operative air-bone gap ≤10 dB. Logistic regression analysis was used to determine the factors independently contributing to the prediction of hearing success. The mean follow-up period was 18.0 months. Univariate and multivariate analyses indicated that none of the pre-operative factors (piston type, age, sex, affected side, tinnitus, vertigo, and pre-operative hearing thresholds) affected hearing success significantly (all p > 0.05). In conclusion, self-crimping Nitinol piston provides comparable hearing outcomes with conventional manual-crimping prostheses. However, Nitinol piston offers a technical simplification of a surgical procedure and an easier surgical choice for patients. In addition, age is not a detriment to hearing gain and instead might result in better use of hearing aids in older adults, thus facilitating social hearing recovery. Finally, hearing success does not depend on the extent of pre-operative hearing loss. Hence, patients with poor cochlear function should not be considered poor candidates for surgery. The predictive model has established recommendations for otologists for better case selection, and factors that are readily available pre-operatively may inform patients more explicitly about expected post-operative audiometric results.

  17. Synthesis and purification of iodoaziridines involving quantitative selection of the optimal stationary phase for chromatography.

    PubMed

    Boultwood, Tom; Affron, Dominic P; Bull, James A

    2014-05-16

    The highly diastereoselective preparation of cis-N-Ts-iodoaziridines through reaction of diiodomethyllithium with N-Ts aldimines is described. Diiodomethyllithium is prepared by the deprotonation of diiodomethane with LiHMDS, in a THF/diethyl ether mixture, at -78 °C in the dark. These conditions are essential for the stability of the LiCHI2 reagent generated. The subsequent dropwise addition of N-Ts aldimines to the preformed diiodomethyllithium solution affords an amino-diiodide intermediate, which is not isolated. Rapid warming of the reaction mixture to 0 °C promotes cyclization to afford iodoaziridines with exclusive cis-diastereoselectivity. The addition and cyclization stages of the reaction are mediated in one reaction flask by careful temperature control. Due to the sensitivity of the iodoaziridines to purification, assessment of suitable methods of purification is required. A protocol to assess the stability of sensitive compounds toward stationary phases for column chromatography is described. This method is suitable for application to new iodoaziridines, or to other potentially sensitive novel compounds, and may consequently find application in a range of synthetic projects. The procedure involves, firstly, assessment of the reaction yield, prior to purification, by (1)H NMR spectroscopy with comparison to an internal standard. Portions of the impure product mixture are then exposed to slurries of various stationary phases appropriate for chromatography, in a solvent system suitable as the eluent in flash chromatography. After stirring for 30 min to mimic chromatography, followed by filtering, the samples are analyzed by (1)H NMR spectroscopy. Calculated yields for each stationary phase are then compared to that initially obtained from the crude reaction mixture. The results provide a quantitative assessment of the stability of the compound toward the different stationary phases, from which the optimal phase can be selected. The choice of basic alumina, modified to

  18. Optimal selection of on-site generation with combined heat andpower applications

    SciTech Connect

    Siddiqui, Afzal S.; Marnay, Chris; Bailey, Owen; HamachiLaCommare, Kristina

    2004-11-30

    While demand for electricity continues to grow, expansion of the traditional electricity supply system, or macrogrid, is constrained and is unlikely to keep pace with the growing thirst western economies have for electricity. Furthermore, no compelling case has been made that perpetual improvement in the overall power quality and reliability (PQR) delivered is technically possible or economically desirable. An alternative path to providing high PQR for sensitive loads is to generate close to them in microgrids, such as the Consortium for Electricity Reliability Technology Solutions (CERTS) Microgrid. Distributed generation would alleviate the pressure for endless improvement in macrogrid PQR and might allow the establishment of a sounder, economically based level of universal grid service. Energy conversion from available fuels to electricity close to loads can also provide combined heat and power (CHP) opportunities that significantly improve the economics of small-scale on-site power generation, especially in hot climates where the waste heat serves absorption-cycle cooling equipment that displaces expensive on-peak electricity. An optimization model, the Distributed Energy Resources Customer Adoption Model (DER-CAM), developed at Berkeley Lab, identifies the energy-bill-minimizing combination of on-site generation and heat recovery equipment for a site, given its electricity and heat requirements, the tariffs it faces, and a menu of available equipment. DER-CAM is used here to conduct a systemic energy analysis of a southern California naval base building and demonstrates a typical current economic on-site power opportunity. Results show cost reductions of about 15 percent with DER, depending on the tariff. Furthermore, almost all of the energy is provided on-site, indicating that modest cost savings can be achieved when the microgrid is free to select distributed generation and heat recovery equipment in order to minimize its overall costs.
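    The bill-minimizing choice of grid purchases versus on-site CHP generation can be illustrated, in a drastically simplified single-period form, as a small linear program; the tariff, generation cost, heat credit, demand, and capacity below are invented placeholders, not DER-CAM inputs.

        from scipy.optimize import linprog

        # Decision variables: kWh bought from the macrogrid, kWh generated on site.
        # Minimize total cost subject to meeting one period of electric demand; the
        # on-site unit earns a credit for recovered heat (CHP) against its fuel cost.
        tariff, gen_cost, heat_credit = 0.20, 0.12, 0.03    # $/kWh, placeholders
        demand_kwh, onsite_cap_kwh = 1000.0, 600.0

        c = [tariff, gen_cost - heat_credit]                # objective coefficients
        A_eq = [[1.0, 1.0]]                                 # grid + on-site = demand
        b_eq = [demand_kwh]
        bounds = [(0, None), (0, onsite_cap_kwh)]           # on-site limited by installed capacity

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        grid, onsite = res.x
        print(f"grid: {grid:.0f} kWh, on-site CHP: {onsite:.0f} kWh, cost: ${res.fun:.2f}")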

  19. In vitro selection of optimal DNA substrates for T4 RNA ligase

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1993-01-01

    We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 RNA ligase. We find that the ensemble of selected sequences ligated about 10 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, the majority of the selected sequences approximated a well-defined consensus sequence.

  20. Optimizing candidate selection--a vision in business limited conference. 1-2 December 1998, Basel, Switzerland.

    PubMed

    Audus, K L

    1999-02-01

    The pharmaceutical industry is faced with filtering hundreds of thousands of compounds to identify successful drug candidates. Given these numbers, how does the pharmaceutical industry identify optimal therapeutic agents rapidly, efficiently, economically and successfully, with the ultimate result of the patient receiving the best drug? The conference summarized the present and future requirements for evaluating emerging technologies, integrating that technology into a filter for large and growing numbers of compounds, building and linking diverse knowledge bases, and establishing predictive foundations that will optimize and accelerate drug discovery and development. Specific conference topics focused on organizational and management approaches as well as some of the major technologies and emerging techniques for supporting drug candidate selection and optimization. It is predicted that the pharmaceutical industry will be synthesizing and screening a million or more compounds for multiple therapeutic targets in the near future. Pulling together the resources of current and emerging technology, knowledge, and multidisciplinary teamwork, so that discovery and selection of successful drug candidates from this large pool of compounds can take place rapidly, is a significant challenge. This conference focused on the organizational issues and experimental tools that can provide for a shortening of discovery time, identification of current and future selection techniques and criteria, the linking of technologies and business strategies to reduce risk, and novel processes for optimizing candidates more quickly and efficiently. The conference was directed at industrial scientists involved in all stages along the drug discovery and development interface. This conference was well-attended, with approximately 100 participants.

  1. Potent and selective inhibitors of the TASK-1 potassium channel through chemical optimization of a bis-amide scaffold

    PubMed Central

    Flaherty, Daniel P.; Simpson, Denise S.; Miller, Melissa; Maki, Brooks E.; Zou, Beiyan; Shi, Jie; Wu, Meng; McManus, Owen B.; Aubé, Jeffrey; Li, Min; Golden, Jennifer E.

    2014-01-01

    TASK-1 is a two-pore domain potassium channel that is important to modulating cell excitability, most notably in the context of neuronal pathways. In order to leverage TASK-1 for therapeutic benefit, its physiological role needs better characterization; however, designing selective inhibitors that avoid the closely related TASK-3 channel has been challenging. In this study, a series of bis-amide derived compounds were found to demonstrate improved TASK-1 selectivity over TASK-3 compared to reported inhibitors. Optimization of a marginally selective hit led to analog 35 which displays a TASK-1 IC50 = 16 nM with 62-fold selectivity over TASK-3 in an orthogonal electrophysiology assay. PMID:25017033

  2. Selection of appropriate training and validation set chemicals for modelling dermal permeability by U-optimal design.

    PubMed

    Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E

    2013-01-01

    Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way and solutes in the training set were relatively homogenous. More recently, statistical methods such as D-optimal design or space-filling design have been applied but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].

  3. Discovery and optimization of indazoles as potent and selective interleukin-2 inducible T cell kinase (ITK) inhibitors.

    PubMed

    Pastor, Richard M; Burch, Jason D; Magnuson, Steven; Ortwine, Daniel F; Chen, Yuan; De La Torre, Kelly; Ding, Xiao; Eigenbrot, Charles; Johnson, Adam; Liimatta, Marya; Liu, Yichin; Shia, Steven; Wang, Xiaolu; Wu, Lawren C; Pei, Zhonghua

    2014-06-01

    There is evidence that small molecule inhibitors of the non-receptor tyrosine kinase ITK, a component of the T-cell receptor signaling cascade, could represent a novel asthma therapeutic class. Moreover, given the expected chronic dosing regimen of any asthma treatment, highly selective as well as potent inhibitors would be strongly preferred in any potential therapeutic. Here we report hit-to-lead optimization of a series of indazoles that demonstrate sub-nanomolar inhibitory potency against ITK with strong cellular activity and good kinase selectivity. We also elucidate the binding mode of these inhibitors by solving the X-ray crystal structures of the complexes.

  4. Discovery of 7-aminofuro[2,3-c]pyridine inhibitors of TAK1: optimization of kinase selectivity and pharmacokinetics.

    PubMed

    Hornberger, Keith R; Chen, Xin; Crew, Andrew P; Kleinberg, Andrew; Ma, Lifu; Mulvihill, Mark J; Wang, Jing; Wilde, Victoria L; Albertella, Mark; Bittner, Mark; Cooke, Andrew; Kadhim, Salam; Kahler, Jennifer; Maresca, Paul; May, Earl; Meyn, Peter; Romashko, Darlene; Tokar, Brianna; Turton, Roy

    2013-08-15

    The kinase selectivity and pharmacokinetic optimization of a series of 7-aminofuro[2,3-c]pyridine inhibitors of TAK1 is described. The intersection of insights from molecular modeling, computational prediction of metabolic sites, and in vitro metabolite identification studies resulted in a simple and unique solution to both of these problems. These efforts culminated in the discovery of compound 13a, a potent, relatively selective inhibitor of TAK1 with good pharmacokinetic properties in mice, which was active in an in vivo model of ovarian cancer.

  5. High-Efficiency Nonfullerene Polymer Solar Cell Enabling by Integration of Film-Morphology Optimization, Donor Selection, and Interfacial Engineering.

    PubMed

    Zhang, Xin; Li, Weiping; Yao, Jiannian; Zhan, Chuanlang

    2016-06-22

    Carrier mobility is a vital factor determining the electrical performance of organic solar cells. In this paper we report a high-efficiency nonfullerene organic solar cell (NF-OSC) with a power conversion efficiency of 6.94 ± 0.27%, obtained by optimizing hole and electron transport through judicious selection of the polymer donor and engineering of the film morphology and cathode interlayer: (1) a combination of solvent annealing and solvent vapor annealing optimizes the film morphology and hence both hole and electron mobilities, leading to a trade-off between fill factor and short-circuit current density (Jsc); (2) judicious selection of the polymer donor affords higher hole and electron mobilities, giving a higher Jsc; and (3) engineering the cathode interlayer affords a higher electron mobility, which leads to a significant increase in electrical current generation and ultimately in the power conversion efficiency (PCE).

  6. Sulfonamides as Selective NaV1.7 Inhibitors: Optimizing Potency and Pharmacokinetics While Mitigating Metabolic Liabilities.

    PubMed

    Weiss, Matthew M; Dineen, Thomas A; Marx, Isaac E; Altmann, Steven; Boezio, Alessandro A; Bregman, Howard; Chu-Moyer, Margaret Y; DiMauro, Erin F; Feric Bojic, Elma; Foti, Robert S; Gao, Hua; Graceffa, Russell F; Gunaydin, Hakan; Guzman-Perez, Angel; Huang, Hongbing; Huang, Liyue; Jarosh, Michael; Kornecook, Thomas; Kreiman, Charles R; Ligutti, Joseph; La, Daniel S; Lin, Min-Hwa Jasmine; Liu, Dong; Moyer, Bryan D; Nguyen, Hanh Nho; Peterson, Emily A; Rose, Paul E; Taborn, Kristin; Youngblood, Beth D; Yu, Violeta L; Fremeau, Robert T

    2017-03-13

    Several reports have recently emerged regarding the identification of heteroarylsulfonamides as NaV1.7 inhibitors that demonstrate high levels of selectivity over other NaV isoforms. The optimization of a series of internal NaV1.7 leads that address a number of metabolic liabilities including bioactivation, PXR activation, as well as CYP3A4 induction and inhibition led to the identification of potent and selective inhibitors that demonstrated favorable pharmacokinetic profiles and were devoid of the aforementioned liabilities. Key to achieving this within a series prone to transporter-mediated clearance was the identification of a small range of optimal cLogD values and the discovery of subtle PXR SAR that was not lipophilicity-dependent. This enabled the identification of compound 20 which was advanced into a target engagement pharmacodynamic model where it exhibited robust reversal of histamine-induced scratching bouts in mice.

  7. Optimal design and patient selection for interventional trials using radiogenomic biomarkers: A REQUITE and Radiogenomics consortium statement.

    PubMed

    De Ruysscher, Dirk; Defraene, Gilles; Ramaekers, Bram L T; Lambin, Philippe; Briers, Erik; Stobart, Hilary; Ward, Tim; Bentzen, Søren M; Van Staa, Tjeerd; Azria, David; Rosenstein, Barry; Kerns, Sarah; West, Catharine

    2016-12-01

    The optimal design and patient selection for interventional trials in radiogenomics seem trivial at first sight. However, unlike, for example, targetable mutation biomarkers, radiogenomic markers do not give binary information. Here, the risk of developing severe side effects is continuous, with the incidence of side effects increasing with higher doses and/or volumes. In addition, a multi-SNP assay will produce a predicted probability of developing side effects and will require one or more cut-off thresholds for classifying risk into discrete categories. A classical biomarker trial design is therefore not optimal, whereas a risk factor stratification approach is more appropriate. Patient selection is crucial and should be based on the dose-response relations for a specific endpoint. Alternatives to standard treatment should be available, taking into account the preferences of patients. These issues are discussed in detail.

  8. Automatic selection of an optimal systolic and diastolic reconstruction windows for dual-source CT coronary angiography

    NASA Astrophysics Data System (ADS)

    Seifarth, H.; Puesken, M.; Wienbeck, S.; Maintz, D.; Heindel, W.; Juergens, K.-U.

    2008-03-01

    Purpose: To assess the performance of a motion map algorithm to automatically determine the optimal systolic and diastolic reconstruction window for coronary CT Angiography using Dual Source CT. Materials and Methods: Dual Source coronary CT angiography data sets (Somatom Definition, Siemens Medical Solutions) from 50 consecutive patients were included in the analysis. Optimal systolic and diastolic reconstruction windows were determined using a motion map algorithm (BestPhase, Siemens Medical Solutions). Additionally data sets were reconstructed in 5% steps throughout the RR-interval. For each major vessel (RCA, LAD and LCX) an optimal systolic and diastolic reconstruction window was manually determined by two independent readers using volume rendering displays. Image quality was rated using a five-point scale (1 = no motion artifacts, 5 = severe motion artifacts over entire length of the vessel). Results: The mean heart rate during the scan was 72.4bpm (+/-15.8bpm). Median systolic and diastolic reconstruction windows using the BestPhase algorithm were at 37% and 73% RR. The median manually selected systolic reconstruction window was 35 %, 30% and 35% for RCA, LAD, and LCX. For all vessels the median observer selected diastolic reconstruction window was 75%. Mean image quality using the BestPhase algorithm was 2.4 +/-0.9 for systolic reconstructions and 1.9 +/-1.1 for diastolic reconstructions. Using the manual approach, the mean image quality was 1.9 +/-0.5 and 1.7 +/-0.8 respectively. There was a significant difference in image quality between automatically and manually determined systolic reconstructions (p<0.01) but there was no significant difference in image quality in diastolic reconstructions. Conclusion: Automatic determination of the optimal reconstruction interval using the BestPhase algorithm is feasible and yields reconstruction windows similar to observer selected reconstruction windows. In diastolic reconstructions overall image quality is similar
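    At its core, a motion-map style of phase selection picks the reconstruction phase whose image volumes change least between neighboring phases. The sketch below uses synthetic stand-in volumes and a mean-absolute-difference motion proxy; it illustrates the idea only and is not the vendor's BestPhase implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        # Synthetic stand-ins: one small "volume" per reconstruction phase (0-95% RR, 5% steps).
        phases = list(range(0, 100, 5))
        volumes = {p: rng.normal(size=(32, 32, 32)) for p in phases}

        def motion_metric(a, b):
            # Mean absolute difference between neighboring-phase volumes as a motion proxy.
            return float(np.mean(np.abs(a - b)))

        motion = {p: motion_metric(volumes[p], volumes[(p + 5) % 100]) for p in phases}
        systolic = min((p for p in phases if 20 <= p <= 50), key=motion.get)
        diastolic = min((p for p in phases if 60 <= p <= 90), key=motion.get)
        print(f"suggested systolic window: {systolic}% RR, diastolic window: {diastolic}% RR")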

  9. The Why of Waiting: How mathematical Best-Choice Models demonstrate optimality of a Refractory Period in Habitat Selection

    NASA Astrophysics Data System (ADS)

    Brugger, M. F.; Waymire, E. C.; Betts, M. G.

    2010-12-01

    When brush mice, fruit flies, and other animals disperse from their natal site, they are immediately tasked with selecting new habitat, and must do so in such a way as to optimize their chances of surviving and breeding. Habitat selection connects the fields of behavioral ecology and landscape ecology by describing the role the physical quality of habitat plays in the selection process. Interestingly, observations indicate a strategy that occurs with a certain prescribed statistical regularity. It has been demonstrated (Stamps, Davis, Blozis, Boundy-Mills, Anim. Behav., 2007) that brush mice and fruit flies employ a refractory period: a period wherein a disperser, after leaving its natal site, will not accept highly-preferred natural habitats. Assuming this behavior has adaptive benefit, the apparent optimality of this strategy is mirrored in mathematical models of stochastic optimization. In one such model, the Classical Best Choice Problem, a selector views some permutation of the numbers {1, ..., n} one-by-one, seeing only their relative ranks, and then either selects that element or discards it. The goal is to choose the best element. The optimal strategy is to wait until the ⌈n/e⌉th element and then pick the first element that is better than all those already seen; this may demonstrate why refractory periods have adaptive benefit. We present three extensions to the Best Choice Problem: a partial ordering on the set of elements (Kubicki & Morayne, SIAM J. Discrete Math., 2005), a new goal of minimizing the expected rank (Chow, Moriguti, Robbins, Samuels, Israel J. Math., 1964), and a general utility function (Gusein-Zade, Theory of Prob. and Applications, 1966), allowing the top r sites to be equally desirable. These extensions relate to ecological phenomena not represented by the Classical Problem. In each case, we discuss the effect on the duration or existence of the refractory period.
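    The ⌈n/e⌉ waiting rule is easy to check empirically. The short simulation below estimates the probability that the rule selects the overall best element; for moderate n it should land near the theoretical 1/e ≈ 0.368.

        import math
        import random

        def best_choice_trial(n):
            """One run of the classical best-choice (secretary) problem with the n/e rule."""
            ranks = list(range(n))            # 0 denotes the best element
            random.shuffle(ranks)
            k = math.ceil(n / math.e)         # refractory period: observe only, never accept
            threshold = min(ranks[:k])        # best relative rank seen while waiting
            for r in ranks[k:]:
                if r < threshold:             # first element better than everything seen so far
                    return r == 0
            return ranks[-1] == 0             # otherwise forced to take the last element

        n, trials = 100, 20000
        wins = sum(best_choice_trial(n) for _ in range(trials))
        print(f"P(select the best) ~ {wins / trials:.3f}  (theory: {1 / math.e:.3f})")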

  10. Optimal frequency selection of multi-channel O2-band differential absorption barometric radar for air pressure measurements

    NASA Astrophysics Data System (ADS)

    Lin, Bing; Min, Qilong

    2017-02-01

    Through theoretical analysis, an optimal selection of frequencies for O2 differential absorption radar systems for air pressure field measurements is achieved. The required differential absorption optical depth between a radar frequency pair is 0.5. With this required value and further considerations of water vapor absorption and contamination of radio wave transmission, frequency pairs for the radar system under consideration are obtained. The current results are expected to have a significant impact on the general design of differential absorption remote sensing systems.

  11. Selection of optimal welding condition for GTA pulse welding in root-pass of V-groove butt joint

    NASA Astrophysics Data System (ADS)

    Yun, Seok-Chul; Kim, Jae-Woong

    2010-12-01

    In the manufacture of high-quality welds or pipelines, a full-penetration weld has to be made along the weld joint. Root-pass welding is therefore very important, and its conditions have to be selected carefully. In this study, an experimental method for selecting optimal welding conditions is proposed for gas tungsten arc (GTA) pulse welding of the root pass along a V-grooved butt-weld joint. The method uses response surface analysis in which the width and height of the back bead are chosen as the quality variables of the weld. The overall desirability function, which combines the desirability functions of the two quality variables, is used as the objective function to obtain the optimal welding conditions. In our experiments, the target values of back bead width and height are 4 mm and zero, respectively, for a V-grooved butt-weld joint of a 7-mm-thick steel plate. The optimal welding conditions yielded a back bead profile (width and height) of 4.012 mm and 0.02 mm. A series of welding tests revealed that a uniform, full-penetration weld bead can be obtained by adopting the optimal welding conditions determined according to the proposed method.
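    The overall desirability used as the objective in such response-surface studies is the geometric mean of the individual desirabilities of the quality variables. A minimal sketch with illustrative (assumed) desirability shapes for back-bead width (target 4 mm) and height (target zero) is shown below; the exact desirability functions and tolerances used by the authors are not given in the abstract.

        def desirability_target(y, target, tol):
            # "Target is best": 1 at the target, falling linearly to 0 at +/- tol.
            return max(0.0, 1.0 - abs(y - target) / tol)

        def desirability_smaller(y, upper):
            # "Smaller is better": 1 at zero, falling linearly to 0 at the upper limit.
            return max(0.0, 1.0 - y / upper)

        def overall_desirability(width_mm, height_mm):
            d_width = desirability_target(width_mm, target=4.0, tol=1.0)   # assumed tolerance
            d_height = desirability_smaller(height_mm, upper=0.5)          # assumed limit
            return (d_width * d_height) ** 0.5   # geometric mean of the two desirabilities

        # Back-bead profile reported for the optimal conditions: 4.012 mm width, 0.02 mm height.
        print(f"overall desirability: {overall_desirability(4.012, 0.02):.3f}")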

  12. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 1. Screening of optimal extraction conditions using a D-optimal experimental design.

    PubMed

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A D-optimal design was constructed to optimize allergen extraction efficiency simultaneously from roasted, non-roasted, defatted, and non-defatted almond, hazelnut, peanut, and pistachio flours using three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various conditions of ionic strength, buffer-to-protein ratio, extraction temperature, and extraction duration. Statistical analysis showed that roasting and non-defatting significantly lowered protein recovery for all nuts. Increasing the temperature and the buffer-to-protein ratio during extraction significantly increased protein recovery, whereas increasing the extraction time had no significant impact. The impact of the three buffers on protein recovery varied significantly among the nuts. Depending on the extraction conditions, protein recovery varied from 19% to 95% for peanut, 31% to 73% for almond, 17% to 64% for pistachio, and 27% to 88% for hazelnut. The buffer type and ionic strength were shown to modulate the protein and immunoglobulin E binding profiles of the extracts, and high protein recovery levels did not always correlate with high immunoreactivity.

  13. Optimized fabrication of Ca-P/PHBV nanocomposite scaffolds via selective laser sintering for bone tissue engineering.

    PubMed

    Duan, Bin; Cheung, Wai Lam; Wang, Min

    2011-03-01

    Biomaterials for scaffolds and scaffold fabrication techniques are two key elements in scaffold-based tissue engineering. Nanocomposites that consist of biodegradable polymers and osteoconductive bioceramic nanoparticles and advanced scaffold manufacturing techniques, such as rapid prototyping (RP) technologies, have attracted much attention for developing new bone tissue engineering strategies. In the current study, poly(hydroxybutyrate-co-hydroxyvalerate) (PHBV) microspheres and calcium phosphate (Ca-P)/PHBV nanocomposite microspheres were fabricated using the oil-in-water (O/W) and solid-in-oil-in-water (S/O/W) emulsion solvent evaporation methods. The microspheres with suitable sizes were then used as raw materials for scaffold fabrication via selective laser sintering (SLS), which is a mature RP technique. A three-factor three-level complete factorial design was applied to investigate the effects of the three factors (laser power, scan spacing, and layer thickness) in SLS and to optimize SLS parameters for producing good-quality PHBV polymer scaffolds and Ca-P/PHBV nanocomposite scaffolds. The plots of the main effects of these three factors and the three-dimensional response surface were constructed and discussed. Based on the regression equation, optimized PHBV scaffolds and Ca-P/PHBV scaffolds were fabricated using the optimized values of SLS parameters. Characterization of optimized PHBV scaffolds and Ca-P/PHBV scaffolds verified the optimization process. It has also been demonstrated that SLS has the capability of constructing good-quality, sophisticated porous structures of complex shape, which some tissue engineering applications may require.
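    A three-factor, three-level complete factorial design of the SLS parameters simply enumerates all 27 level combinations; the sketch below uses placeholder levels for laser power, scan spacing, and layer thickness, not the values used in the study.

        from itertools import product

        # Placeholder factor levels (not the study's actual settings).
        laser_power_W = [10, 15, 20]
        scan_spacing_mm = [0.10, 0.15, 0.20]
        layer_thickness_mm = [0.08, 0.10, 0.12]

        design = list(product(laser_power_W, scan_spacing_mm, layer_thickness_mm))
        print(f"{len(design)} runs")   # 3^3 = 27 runs
        for run, (p, s, t) in enumerate(design, start=1):
            print(f"run {run:2d}: power={p} W, spacing={s} mm, layer={t} mm")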

  14. Design and optimization of a multi-element piezoelectric transducer for mode-selective generation of guided waves

    NASA Astrophysics Data System (ADS)

    Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice

    2016-07-01

    A novel multi-element piezoelectric transducer (MEPT) is designed, optimized, machined and experimentally tested to improve structural health monitoring systems through mode-selective generation of guided waves (GW) in an isotropic structure. GW generation using typical piezoceramics makes signal processing, and consequently damage detection, very complicated because at any driving frequency at least the two fundamental symmetric (S0) and antisymmetric (A0) modes are generated. To prevent this, a mode-selective transducer design based on the MEPT is proposed. A numerical method is first developed to extract the interfacial stress between a single piezoceramic element and a host structure; this stress is then used as the input to an analytical model that predicts GW propagation through the thickness of an isotropic plate. Two novel objective functions are proposed to optimize the interfacial shear stress, both suppressing the unwanted mode(s) and maximizing the desired mode. Simplicity and low manufacturing cost are the two main targets driving the design of the MEPT. A prototype MEPT is then manufactured using laser micro-machining. An experimental procedure is presented to validate the performance of the MEPT as a new solution for mode-selective GW generation. Experimental tests illustrate the high capability of the MEPT for mode-selective GW generation, with the unwanted mode suppressed by a factor of up to 170 compared with the results obtained with a single piezoceramic.

  15. Optimization of 2-phenylcyclopropylmethylamines as selective serotonin 2C receptor agonists and their evaluation as potential antipsychotic agents.

    PubMed

    Cheng, Jianjun; Giguère, Patrick M; Onajole, Oluseye K; Lv, Wei; Gaisin, Arsen; Gunosewoyo, Hendra; Schmerberg, Claire M; Pogorelov, Vladimir M; Rodriguiz, Ramona M; Vistoli, Giulio; Wetsel, William C; Roth, Bryan L; Kozikowski, Alan P

    2015-02-26

    The discovery of a new series of compounds that are potent, selective 5-HT2C receptor agonists is described herein as we continue our efforts to optimize the 2-phenylcyclopropylmethylamine scaffold. Modifications focused on the alkoxyl substituent present on the aromatic ring led to the identification of improved ligands with better potency at the 5-HT2C receptor and excellent selectivity against the 5-HT2A and 5-HT2B receptors. ADMET studies coupled with a behavioral test using the amphetamine-induced hyperactivity model identified four compounds possessing drug-like profiles and having antipsychotic properties. Compound (+)-16b, which displayed an EC50 of 4.2 nM at 5-HT2C, no activity at 5-HT2B, and an 89-fold selectivity against 5-HT2A, is one of the most potent and selective 5-HT2C agonists reported to date. The likely binding mode of this series of compounds to the 5-HT2C receptor was also investigated in a modeling study, using optimized models incorporating the structures of β2-adrenergic receptor and 5-HT2B receptor.

  16. A modified binary particle swarm optimization for selecting the small subset of informative genes from gene expression data.

    PubMed

    Mohamad, Mohd Saberi; Omatu, Sigeru; Deris, Safaai; Yoshioka, Michifumi

    2011-11-01

    Gene expression data are expected to be of significant help in the development of efficient cancer diagnosis and classification platforms. In order to select a small subset of informative genes from such data for cancer classification, many researchers have recently been analyzing gene expression data using various computational intelligence methods. However, due to the small number of samples compared with the huge number of genes (high dimensionality), as well as irrelevant and noisy genes, many computational methods have difficulty selecting such a small subset. We therefore propose an improved (modified) binary particle swarm optimization to select a small subset of informative genes relevant to cancer classification. In the proposed method, we introduce a particle speed that gives the rate at which a particle changes its position, and we propose a rule for updating particle positions. Through experiments on ten different gene expression datasets, we found that the performance of the proposed method is superior to previous related work, including the conventional version of binary particle swarm optimization (BPSO), in terms of classification accuracy and the number of selected genes. The proposed method also has lower running times than BPSO.
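    The essentials of a binary PSO for gene selection can be sketched as follows: each particle is a 0/1 mask over genes, velocities are squashed through a sigmoid to give bit-flip probabilities, and fitness trades classification accuracy against subset size. The sketch below uses synthetic data and a nearest-neighbor classifier as stand-ins and implements the conventional BPSO update, not the modified speed and position rules proposed in the paper.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=60, n_features=200, n_informative=10,
                                   random_state=0)   # stand-in for expression data

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            acc = cross_val_score(KNeighborsClassifier(3), X[:, mask == 1], y, cv=3).mean()
            return acc - 0.001 * mask.sum()          # prefer accurate, small gene subsets

        n_particles, n_genes, iters = 20, X.shape[1], 30
        pos = (rng.random((n_particles, n_genes)) < 0.1).astype(int)   # sparse initial masks
        vel = np.zeros((n_particles, n_genes))
        pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmax(pbest_fit)].copy()

        for _ in range(iters):
            r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid squashing of velocities
            pos = (rng.random(vel.shape) < prob).astype(int)
            fit = np.array([fitness(p) for p in pos])
            improved = fit > pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
            gbest = pbest[np.argmax(pbest_fit)].copy()

        print("genes selected:", int(gbest.sum()), "best fitness:", round(float(pbest_fit.max()), 3))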

  17. Regression metamodels of an optimal genomic testing strategy in dairy cattle when selection intensity is low

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic testing of dairy cattle increases reliability and can be used to select animals with superior genetic merit. Genomic testing is not free and not all candidates for selection should necessarily be tested. One common algorithm used to compare alternative decisions is time-consuming and not eas...

  18. A Conceptual Framework for Procurement Decision Making Model to Optimize Supplier Selection: The Case of Malaysian Construction Industry

    NASA Astrophysics Data System (ADS)

    Chuan, Ngam Min; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Yong, Lee Choon; Ghazali, Azrul; Ezanee Rusli, Mohd; Itam, Zarina Binti; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper examines the current state of the procurement system in Malaysia, specifically supplier selection in the construction industry. It proposes a comprehensive study of supplier selection metrics for infrastructure building, weighting the importance of each assigned metric and identifying the relationships between the metrics among initiators, decision makers, buyers, and users. With a hierarchy of criteria importance, a supplier selection process can be defined, repeated, and audited with fewer complications or difficulties. This will help the field of procurement improve, as the research develops and redefines the policies and procedures that govern supplier selection. Developing this systematic process will enable optimization of supplier selection and thus increase the value for every stakeholder, as the selection process is greatly simplified. A redefined policy and procedure would not only increase a company's effectiveness and profit, but also enable the company to reach greater heights in the advancement of procurement in Malaysia.

  19. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    PubMed

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems, which are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and of masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelets and the gray-level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the selected textural features and the classifier hyper-parameters. Furthermore, the obtained results indicate the promising performance of the proposed textural features, more specifically those based on the co-occurrence matrix of the wavelet image representation.

  20. Optimization of fermentation parameters to study the behavior of selected lactic cultures on soy solid state fermentation.

    PubMed

    Rodríguez de Olmos, A; Bru, E; Garro, M S

    2015-03-02

    Interest in solid state fermentation (SSF) has grown with the demand for natural and healthy products. Lactic acid bacteria and bifidobacteria play a leading role in the production of novel functional foods, yet their behavior in these systems is practically unknown. Soy is an excellent substrate for the production of functional foods because of its low cost and nutritional value. The aim of this work was to optimize different parameters involved in SSF using selected lactic cultures to improve the soybean substrate, as a possible strategy for the elaboration of new soy foods with enhanced functional and nutritional properties. Soy flour and selected lactic cultures were used under different conditions to optimize the soy SSF. The measured responses were bacterial growth, free amino acids and β-glucosidase activity, which were analyzed by applying response surface methodology. Based on the proposed statistical model, different fermentation conditions were set up by varying the moisture content (50-80%) of the soy substrate and the incubation temperature (31-43°C). The effect of inoculum amount was also investigated. These studies demonstrated the ability of the selected strains (Lactobacillus paracasei subsp. paracasei and Bifidobacterium longum) to grow, with strain-dependent behavior, in the SSF system. β-Glucosidase activity was evident in both strains, and L. paracasei subsp. paracasei was able to increase the free amino acid content at the end of fermentation under the assayed conditions. The statistical model allowed the optimization of fermentation parameters for soy SSF with the selected lactic strains. In addition, the possibility of working with lower initial bacterial amounts while obtaining results of significant technological impact was demonstrated.

  1. Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    DTIC Science & Technology

    2004-09-01

    This work considers optimization problems with stochastic objective functions and a mixture of design variable types, approached through the generalized pattern search (GPS) class of algorithms. Implementation alternatives include the use of modern ranking-and-selection (R&S) procedures designed to provide computational enhancements to the basic algorithm.

  2. Selecting Segmental Errors in Non-Native Dutch for Optimal Pronunciation Training

    ERIC Educational Resources Information Center

    Neri, Ambra; Cucchiarini, Catia; Strik, Helmer

    2006-01-01

    The current emphasis in second language teaching lies in the achievement of communicative effectiveness. In line with this approach, pronunciation training is nowadays geared towards helping learners avoid serious pronunciation errors, rather than eradicating the finest traces of foreign accent. However, to devise optimal pronunciation training…

  3. Optimal sensor selection for noisy binary detection in stochastic pooling networks

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Li, Feng; Amblard, P.-O.; Grant, Alex J.

    2013-08-01

    Stochastic Pooling Networks (SPNs) are a useful model for understanding and explaining how naturally occurring encoding of stochastic processes can occur in sensor systems ranging from macroscopic social networks to neuron populations and nanoscale electronics. Due to the interaction of nonlinearity, random noise, and redundancy, SPNs support various unexpected emergent features, such as suprathreshold stochastic resonance, but most existing mathematical results are restricted to the simplest case where all sensors in a network are identical. Nevertheless, numerical results on information transmission have shown that in the presence of independent noise, the optimal configuration of a SPN is such that there should be partial heterogeneity in sensor parameters, such that the optimal solution includes clusters of identical sensors, where each cluster has different parameter values. In this paper, we consider a SPN model of a binary hypothesis detection task and show mathematically that the optimal solution for a specific bound on detection performance is also given by clustered heterogeneity, such that measurements made by sensors with identical parameters either should all be excluded from the detection decision or all included. We also derive an algorithm for numerically finding the optimal solution and illustrate its utility with several examples, including a model of parallel sensory neurons with Poisson firing characteristics.

  4. Item Selection for the Development of Short Forms of Scales Using an Ant Colony Optimization Algorithm

    ERIC Educational Resources Information Center

    Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.

    2008-01-01

    This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…

  5. Optimizing the selective recognition of protein isoforms through tuning of nanoparticle hydrophobicity

    PubMed Central

    Moyano, Daniel F.; Xu, Yisheng; Rotello, Vincent M.

    2014-01-01

    We demonstrate that ligand hydrophobicity can be used to increase affinity and selectivity of binding between monolayer-protected cationic gold nanoparticles and β-lactoglobulin protein isoforms containing two amino acid mutations. PMID:24838611

  6. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
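    The iterative step of adding the most environmentally dissimilar site can be illustrated with a greedy max-min distance rule over standardized environmental variables; this conceptual sketch with random placeholder data is not the MaxEnt-based procedure itself.

        import numpy as np

        rng = np.random.default_rng(0)
        # Rows = candidate 20 km x 20 km sites, columns = standardized environmental
        # factors (e.g., temperature, precipitation, elevation, vegetation score).
        env = rng.normal(size=(500, 4))

        selected = [0]                                  # seed with an arbitrary first site
        while len(selected) < 8:
            d = np.linalg.norm(env[:, None, :] - env[None, selected, :], axis=2)
            nearest = d.min(axis=1)                     # distance to the closest already-chosen site
            nearest[selected] = -np.inf                 # never re-pick a chosen site
            selected.append(int(np.argmax(nearest)))    # most dissimilar remaining site

        print("selected site indices:", selected)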

  7. Stochastic optimization algorithm selection in hydrological model calibration based on fitness landscape characterization

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Brissette, François P.; Poulin, Annie; Côté, Pascal; Martel, Jean-Luc

    2014-05-01

    The process of hydrological model parameter calibration is routinely performed with the help of stochastic optimization algorithms. Many such algorithms have been created and they sometimes provide varying levels of performance (as measured by an efficiency metric such as Nash-Sutcliffe). This is because each algorithm is better suited to one type of optimization problem than another. This research project's aim was twofold. First, we sought to identify features of the calibration-problem fitness landscapes that map the encountered problem types to the best possible optimization algorithm. Second, the optimal number of model evaluations, minimizing resource usage while maximizing overall model quality, was investigated. A total of five stochastic optimization algorithms (SCE-UA, CMAES, DDS, PSO and ASA) were used to calibrate four lumped hydrological models (GR4J, HSAMI, HMETS and MOHYSE) on 421 basins from the US MOPEX database. Each of these combinations was performed using three objective functions (Log(RMSE), NSE, and a metric combining NSE, RMSE and BIAS) to add sufficient diversity to the fitness landscapes. Each run was performed 30 times for statistical analysis. For every parameter set tested during the calibration process, a validation value was computed on a separate period. It was then possible to outline the calibration skill versus the validation skill for the different algorithms. Fitness landscapes were characterized by various metrics, such as the dispersion metric, the mean distance between random points and their respective local minima (found through simple hill-climbing algorithms), and the mean distance between the local minima and the best local optimum found. These metrics were then compared to the calibration scores of the various optimization algorithms. Preliminary results tend to show that fitness landscapes presenting a globally convergent structure are more prevalent than other types of landscapes in this

  8. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for an unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. The off-line observer/Kalman filter identification method provides a good initial guess of the modified NARMAX model, which reduces the time needed for on-line system identification. Based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is then presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, which also has measurement and system noise and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion based on the innovation-process error estimated by the Kalman filter is suggested, and a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter, is used to re-estimate the parameters for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection.
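    For orientation, a NARMAX model with a direct input-to-output feed-through term can be written in generic form (the notation here is generic, not the paper's modified formulation) as

        y(k) = F\big(y(k-1),\ldots,y(k-n_y),\; u(k), u(k-1),\ldots,u(k-n_u),\; e(k-1),\ldots,e(k-n_e)\big) + e(k)

    where u is the exogenous input, e is the noise (innovation) sequence, and the appearance of the current input u(k) inside F reflects the direct transmission (feed-through) term from input to output.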

  9. Responding to home maintenance challenge scenarios: the role of selection, optimization, and compensation in aging-in-place.

    PubMed

    Kelly, Andrew John; Fausset, Cara Bailey; Rogers, Wendy; Fisk, Arthur D

    2014-12-01

    This study examined potential issues faced by older adults in managing their homes and their proposed solutions for overcoming hypothetical difficulties. Forty-four diverse, independently living older adults (66-85) participated in structured group interviews in which they discussed potential solutions to manage difficulties presented in four scenarios: perceptual, mobility, physical, and cognitive difficulties. The proposed solutions were classified using the Selection, Optimization, and Compensation (SOC) model. Participants indicated they would continue performing most tasks and reported a range of strategies to manage home maintenance challenges. Most participants reported that they would manage home maintenance challenges using compensation; the most frequently mentioned compensation strategy was using tools and technologies. There were also differences across the scenarios: Optimization was discussed most frequently with perceptual and cognitive difficulty scenarios. These results provide insights into supporting older adults' potential needs for aging-in-place and provide evidence of the value of the SOC model in applied research.

  10. CONDENSED MATTER: ELECTRONIC STRUCTURE, ELECTRICAL, MAGNETIC, AND OPTICAL PROPERTIES: Frequency selective surface structure optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Wang, Jian-Bo; Sun, Guan-Cheng

    2009-04-01

    Frequency selective surface (FSS) is a two-dimensional periodic structure which has prominent characteristics of bandpass or bandblock when interacting with electromagnetic waves. In this paper, the thickness, the dielectric constant, the element graph and the arrangement periodicity of an FSS medium are investigated by Genetic Algorithm (GA) when an electromagnetic wave is incident on the FSS at a wide angle, and an optimized FSS structure and transmission characteristics are obtained. The results show that the optimized structure has better stability in relation to incident angle of electromagnetic wave and preserves the stability of centre frequency even at an incident angle as large as 80°, thereby laying the foundation for the application of FSS to curved surfaces at wide angles.

  11. A chaos control optimal algorithm for QoS-based service composition selection in cloud manufacturing system

    NASA Astrophysics Data System (ADS)

    Huang, Biqing; Li, Chenghai; Tao, Fei

    2014-07-01

    This article investigates the problem of cloud service composition optimal-selection (CSCOS) in cloud manufacturing (CMfg). The categories of cloud services and their QoS (quality of service) indexes are established. From the perspective of QoS indexes, the relationship among QoS key factors for different kinds of cloud services are analysed and elaborated, and the corresponding objective functions and constraints of CSCOS are proposed. A new chaos control optimal algorithm (CCOA) is designed to address the CSCOS problem, and the simulation results demonstrate that the proposed algorithm can search better solutions with less time-consumption than widely used algorithms such as genetic algorithm (GA) and typical chaotic genetic algorithm (CGA).
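
    The following sketch illustrates the general idea of a chaotic (logistic-map) search over service selections scored by an aggregate QoS utility; the QoS table, weights, and map parameters are invented for illustration and do not reproduce the paper's CCOA.

```python
import numpy as np

# Toy QoS table: qos[task][service] = (time, cost, availability); assumed data
rng = np.random.default_rng(1)
N_TASKS, N_SERVICES = 5, 8
qos = rng.uniform([1, 1, 0.9], [10, 10, 1.0], size=(N_TASKS, N_SERVICES, 3))

def utility(selection):
    """Aggregate QoS: penalize total time and cost, reward the product of availability."""
    chosen = qos[np.arange(N_TASKS), selection]
    time, cost, avail = chosen[:, 0].sum(), chosen[:, 1].sum(), chosen[:, 2].prod()
    return -time - cost + 20.0 * avail              # weights are illustrative

def chaos_search(iterations=2000, mu=4.0):
    """Logistic-map driven search: the chaotic sequence x <- mu*x*(1-x) is mapped
    onto service indices, producing ergodic, non-repeating candidate compositions."""
    x = np.full(N_TASKS, 0.37)                      # initial chaotic state per task
    best_sel, best_val = None, -np.inf
    for _ in range(iterations):
        x = mu * x * (1 - x)
        selection = np.minimum((x * N_SERVICES).astype(int), N_SERVICES - 1)
        val = utility(selection)
        if val > best_val:
            best_sel, best_val = selection, val
    return best_sel, best_val
```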

  12. Using Future Value Analysis to Select an Optimal Portfolio of Force Protection Initiatives

    DTIC Science & Technology

    2003-03-01

    Bard, J.F. “A Comparison of the Analytic Hierarchy Process with Multiattribute Utility Theory: A Case Study,” IIE Transactions, 24: 111-121 (November ... FVA incorporates the ideals of multi-attribute utility theory, specifically using the VFT process, as well as linear programming optimization ... objective methodologies in use today are the analytic hierarchy process (AHP) and multi-attribute utility theory (MAUT). These are two distinct approaches

  13. Selecting Observation Platforms for Optimized Anomaly Detectability under Unreliable Partial Observations

    SciTech Connect

    Wen-Chiao Lin; Humberto E. Garcia; Tae-Sic Yoo

    2011-06-01

    Diagnosers for keeping track of the occurrences of special events in the framework of unreliable, partially observed discrete-event dynamical systems were developed in previous work. This paper considers observation platforms consisting of sensors that provide partial and unreliable observations and of diagnosers that analyze them. Diagnosers in observation platforms typically perform better as the sensors providing the observations become more costly or increase in number. This paper proposes a methodology for finding an observation platform that achieves an optimal balance between cost and performance, while satisfying given observability requirements and constraints. Since this problem is generally computationally hard in the framework considered, an observation platform optimization algorithm is utilized that uses two greedy heuristics, one myopic and another based on projected performances. These heuristics are sequentially executed in order to find the best observation platforms. The developed algorithm is then applied to an observation platform optimization problem for a multi-unit-operation system. Results show that improved observation platforms can be found that may significantly reduce the observation platform cost but still yield acceptable performance for correctly inferring the occurrences of special events.
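
    A rough sketch of the myopic greedy heuristic is shown below, assuming a user-supplied performance function that scores a candidate sensor set (standing in for the diagnoser evaluation); the cost/budget bookkeeping is an illustrative assumption.

```python
def greedy_platform_selection(sensors, budget, performance):
    """Myopic greedy heuristic: repeatedly add the sensor with the best performance
    gain per unit cost until the budget is exhausted.
    `sensors` maps sensor name -> cost; `performance(sensor_set)` must score any
    candidate platform, including the empty set."""
    selected, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, cost in sensors.items():
            if name in selected or spent + cost > budget:
                continue
            gain = performance(selected | {name}) - performance(selected)
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:                 # no affordable sensor improves performance
            return selected
        selected.add(best)
        spent += sensors[best]
```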

  14. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  15. Optimal reference sequence selection for genome assembly using minimum description length principle.

    PubMed

    Wajid, Bilal; Serpedin, Erchin; Nounou, Mohamed; Nounou, Hazem

    2012-11-27

    : Reference assisted assembly requires the use of a reference sequence, as a model, to assist in the assembly of the novel genome. The standard method for identifying the best reference sequence for the assembly of a novel genome aims at counting the number of reads that align to the reference sequence, and then choosing the reference sequence which has the highest number of reads aligning to it. This article explores the use of minimum description length (MDL) principle and its two variants, the two-part MDL and Sophisticated MDL, in identifying the optimal reference sequence for genome assembly. The article compares the MDL based proposed scheme with the standard method coming to the conclusion that "counting the number of reads of the novel genome present in the reference sequence" is not a sufficient condition. Therefore, the proposed MDL scheme includes within itself the standard method of "counting the number of reads that align to the reference sequence" and also moves forward towards looking at the model, the reference sequence, as well, in identifying the optimal reference sequence. The proposed MDL based scheme not only becomes the sufficient criterion for identifying the optimal reference sequence for genome assembly but also improves the reference sequence so that it becomes more suitable for the assembly of the novel genome.
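
    A toy two-part MDL comparison might look like the following sketch, in which the description length is the cost of the reference (the model) plus the cost of the reads given that model; the encoding costs and the naive exact-match alignment are illustrative assumptions only.

```python
import math

def description_length(reference, reads, mismatch_cost=6.0):
    """Two-part MDL score (in bits): model cost (2 bits per reference base) plus
    the cost of the reads given the model -- cheap where a read is explained by
    the reference, expensive where it must be spelled out."""
    model_bits = 2.0 * len(reference)
    data_bits = 0.0
    for read in reads:
        if read in reference:                        # read explained by the model
            data_bits += math.log2(len(reference))   # only its position is encoded
        else:                                        # unexplained read: encode verbatim
            data_bits += 2.0 * len(read) + mismatch_cost
    return model_bits + data_bits

def pick_reference(candidates, reads):
    """Choose the candidate reference minimizing total description length,
    rather than simply counting aligned reads."""
    return min(candidates, key=lambda ref: description_length(ref, reads))
```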

  16. Mapping carbon flux uncertainty and selecting optimal locations for future flux towers in the Great Plains

    USGS Publications Warehouse

    Gu, Y.; Howard, D.M.; Wylie, B.K.; Zhang, L.

    2012-01-01

    Flux tower networks (e.g., AmeriFlux, Agriflux) provide continuous observations of ecosystem exchanges of carbon (e.g., net ecosystem exchange), water vapor (e.g., evapotranspiration), and energy between terrestrial ecosystems and the atmosphere. The long-term time series of flux tower data are essential for studying and understanding terrestrial carbon cycles, ecosystem services, and climate changes. Currently, there are 13 flux towers located within the Great Plains (GP). The towers are sparsely distributed and do not adequately represent the varieties of vegetation cover types, climate conditions, and geophysical and biophysical conditions in the GP. This study assessed how well the available flux towers represent the environmental conditions or "ecological envelopes" across the GP and identified optimal locations for future flux towers in the GP. Regression-based remote sensing and weather-driven net ecosystem production (NEP) models derived from different extrapolation ranges (10 and 50%) were used to identify areas where ecological conditions were poorly represented by the flux tower sites and years previously used for mapping grassland fluxes. The optimal lands suitable for future flux towers within the GP were mapped. Results from this study provide information to optimize the usefulness of future flux towers in the GP and serve as a proxy for the uncertainty of the NEP map.

  17. Optimized selection of anti-tumor recombinant antibodies from phage libraries on intact cells.

    PubMed

    Pavoni, Emiliano; Vaccaro, Paola; Anastasi, Anna Maria; Minenkova, Olga

    2014-02-01

    Generation of human recombinant antibody libraries displayed on the surface of the filamentous phage and selection of specific antibodies against desirable targets allows production of fully human antibodies usable for repeated administration in humans. Various lymphoid tissues from immunized donors, such as lymph nodes, peripheral blood lymphocytes from individuals with tumors, or lymphocytes infiltrating tumor masses, may serve as a source of a specific anti-tumor antibody repertoire for generation of tumor-focused phage display libraries. When tumor-associated antigens are not available in purified form, high-affinity anti-tumor antibodies can be isolated through library panning on whole cells expressing these antigens. However, affinity selection against cell-surface-specific antigens within a highly heterogeneous population of molecules is not a very efficient process and often results in the selection of unspecific antibodies or antibodies against intracellular antigens that are generally useless for targeted immunotherapy. In this work, we developed a new cell-based antibody selection protocol that, by eliminating the contamination of dead cells from the cell suspension, dramatically improves the selection frequency of anti-tumor antibodies recognizing cell surface antigens.

  18. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  19. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies

    SciTech Connect

    Rada, E.C.; Ragazzi, M.; Fedrizzi, P.

    2013-04-15

    Highlights: Web-GIS as an appropriate solution for MSW management in developed and transient countries; as an option to increase the efficiency of MSW selective collection; as an opportunity to integrate MSW management needs and services inventories; as a tool to develop Urban Mining actions. Abstract: Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where landfill-based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically analyzed with reference to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy: the obtained efficiency is very high, with 80% of waste source separated for recycling purposes. In the second reference case, the local administration is facing the optimization of waste collection through Web-GIS oriented technologies for the first time, and the starting scenario is far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed with reference to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web-oriented tools for MSW management, but this opportunity is not yet well exploited in the sector.

  20. Decision method for optimal selection of warehouse material handling strategies by production companies

    NASA Astrophysics Data System (ADS)

    Dobos, P.; Tamás, P.; Illés, B.

    2016-11-01

    Adequate establishment and operation of warehouse logistics significantly determines a company's competitiveness, because it greatly affects the quality and the selling price of the goods that production companies produce. In order to implement and manage an adequate warehouse system, an adequate warehouse position, stock management model, warehouse technology, a motivated workforce committed to process improvement, and a material handling strategy are necessary. In practice, companies have paid little attention to selecting the warehouse strategy properly, although it has a major influence on production (in the case of material warehouses) and on smooth customer service (in the case of finished-goods warehouses), since a poor choice can cause large material handling losses. Due to dynamically changing production structures, frequent reorganization of warehouse activities is needed, to which the majority of companies essentially do not react. This work presents a simulation test framework for selecting an eligible warehouse material handling strategy, as well as the decision method for the selection.

  1. Spacecraft flight control with the new phase space control law and optimal linear jet select

    NASA Technical Reports Server (NTRS)

    Bergmann, E. V.; Croopnick, S. R.; Turkovich, J. J.; Work, C. C.

    1977-01-01

    An autopilot designed for rotation and translation control of a rigid spacecraft is described. The autopilot uses reaction control jets as control effectors and incorporates a six-dimensional phase space control law as well as a linear programming algorithm for jet selection. The interaction of the control law and jet selection was investigated and a recommended configuration proposed. By means of a simulation procedure the new autopilot was compared with an existing system and was found to be superior in terms of core memory, central processing unit time, firings, and propellant consumption. But it is thought that the cycle time required to perform the jet selection computations might render the new autopilot unsuitable for existing flight computer applications, without modifications. The new autopilot is capable of maintaining attitude control in the presence of a large number of jet failures.
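
    The linear-programming jet-selection step can be sketched as follows, assuming a known jet torque table and minimizing total firing time as a propellant proxy; this is a simplified stand-in, not the flight algorithm described above.

```python
import numpy as np
from scipy.optimize import linprog

def select_jets(jet_torques, commanded_torque):
    """Linear-programming jet selection: find non-negative firing durations t that
    reproduce the commanded body torque while minimizing total on-time
    (a proxy for propellant).  The jet torque table is an assumed input."""
    n_jets = jet_torques.shape[1]
    result = linprog(c=np.ones(n_jets),              # minimize sum of firing times
                     A_eq=jet_torques,               # 3 x n_jets torque matrix
                     b_eq=commanded_torque,          # desired body torque (3-vector)
                     bounds=[(0, None)] * n_jets)
    return result.x if result.success else None

# Example with a hypothetical 4-jet layout:
# torques = np.array([[1, -1, 0, 0], [0, 0, 1, -1], [0.2, 0.2, -0.2, -0.2]])
# firings = select_jets(torques, np.array([0.5, -0.3, 0.1]))
```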

  2. Optimizing the selective recognition of protein isoforms through tuning of nanoparticle hydrophobicity

    NASA Astrophysics Data System (ADS)

    Chen, Kaimin; Rana, Subinoy; Moyano, Daniel F.; Xu, Yisheng; Guo, Xuhong; Rotello, Vincent M.

    2014-05-01

    We demonstrate that ligand hydrophobicity can be used to increase affinity and selectivity of binding between monolayer-protected cationic gold nanoparticles and β-lactoglobulin protein isoforms containing two amino acid mutations. Electronic supplementary information (ESI) available: experimental details, ITC, and DLS analyses. See DOI: 10.1039/c4nr01085j

  3. A methodology for selecting an optimal experimental design for the computer analysis of a complex system

    SciTech Connect

    RUTHERFORD,BRIAN M.

    2000-02-03

    Investigation and evaluation of a complex system is often accomplished through the use of performance measures based on system response models. The response models are constructed using computer-generated responses supported where possible by physical test results. The general problem considered is one where resources and system complexity together restrict the number of simulations that can be performed. The levels of input variables used in defining environmental scenarios, initial and boundary conditions and for setting system parameters must be selected in an efficient way. This report describes an algorithmic approach for performing this selection.
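
    The report's specific selection algorithm is not reproduced here, but a Latin hypercube design is one common way to spread a limited number of simulation runs over the input space; the sketch below is a generic illustration with assumed variable bounds.

```python
import numpy as np

def latin_hypercube(n_runs, bounds, seed=0):
    """Space-filling selection of input levels for a limited simulation budget.
    Generic Latin hypercube sketch (not the report's algorithm): each variable's
    range is split into n_runs strata and each stratum is sampled exactly once."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)             # shape (n_vars, 2): [low, high]
    n_vars = bounds.shape[0]
    strata = np.tile(np.arange(n_runs), (n_vars, 1))     # one row of strata per variable
    u = (rng.permuted(strata, axis=1).T + rng.random((n_runs, n_vars))) / n_runs
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# e.g. 20 simulation runs over three scenario variables (bounds are hypothetical)
# design = latin_hypercube(20, [(0.0, 1.0), (10.0, 50.0), (-5.0, 5.0)])
```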

  4. Sulfonamides as Selective NaV1.7 Inhibitors: Optimizing Potency and Pharmacokinetics to Enable in Vivo Target Engagement.

    PubMed

    Marx, Isaac E; Dineen, Thomas A; Able, Jessica; Bode, Christiane; Bregman, Howard; Chu-Moyer, Margaret; DiMauro, Erin F; Du, Bingfan; Foti, Robert S; Fremeau, Robert T; Gao, Hua; Gunaydin, Hakan; Hall, Brian E; Huang, Liyue; Kornecook, Thomas; Kreiman, Charles R; La, Daniel S; Ligutti, Joseph; Lin, Min-Hwa Jasmine; Liu, Dong; McDermott, Jeff S; Moyer, Bryan D; Peterson, Emily A; Roberts, Jonathan T; Rose, Paul; Wang, Jean; Youngblood, Beth D; Yu, Violeta; Weiss, Matthew M

    2016-12-08

    Human genetic evidence has identified the voltage-gated sodium channel NaV1.7 as an attractive target for the treatment of pain. We initially identified naphthalene sulfonamide 3 as a potent and selective inhibitor of NaV1.7. Optimization to reduce biliary clearance by balancing hydrophilicity and hydrophobicity (Log D) while maintaining NaV1.7 potency led to the identification of quinazoline 16 (AM-2099). Compound 16 demonstrated a favorable pharmacokinetic profile in rat and dog and demonstrated dose-dependent reduction of histamine-induced scratching bouts in a mouse behavioral model following oral dosing.

  5. Methodology of research for qualitative composition of municipal solid waste to select an optimal method of recycling

    NASA Astrophysics Data System (ADS)

    Kravtsova, M. V.; Volkov, D. A.

    2015-09-01

    The article offers a research methodology for determining the qualitative composition of municipal solid waste in order to select an optimal method of recycling. The resource potential of waste depends directly on its composition and determines the effectiveness of various techniques, including separation and separate collection of refuse. The decision to re-equip a waste-separating enterprise, which decreases the supply of waste to the burial site and saves nonrenewable energy sources, is well grounded because it reduces the anthropogenic load on the environment.

  6. Encapsulation of a Decision-Making Model to Optimize Supplier Selection via Structural Equation Modeling (SEM)

    NASA Astrophysics Data System (ADS)

    Sahul Hameed, Ruzanna; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Ezanee Rusli, Mohd; Yong, Lee Choon; Ghazali, Azrul; Itam, Zarina; Hakimie, Hazlinda; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper proposes a conceptual framework to compare the criteria/factors that influence supplier selection. A mixed-methods approach comprising qualitative and quantitative surveys will be used. The study intends to identify and define the metrics that key stakeholders at the Public Works Department (PWD) believe should be used for supplier selection. The outcomes foresee possible initiatives to bring procurement in PWD to a strategic level. The results will provide a deeper understanding of the drivers of supplier selection in the construction industry. The obtained output will benefit the many parties involved in supplier selection decision-making. The findings provide useful information and a greater understanding of the perceptions that PWD executives hold regarding supplier selection and the extent to which these perceptions are consistent with findings from prior studies. The findings from this paper can be utilized as input for policy makers to outline any changes in the current procurement code of practice in order to enhance the degree of transparency and integrity in decision-making.

  7. Selecting optimal hyperspectral bands to discriminate nitrogen status in durum wheat: a comparison of statistical approaches.

    PubMed

    Stellacci, A M; Castrignanò, A; Troccoli, A; Basso, B; Buttafuoco, G

    2016-03-01

    Hyperspectral data can provide prediction of physical and chemical vegetation properties, but data handling, analysis, and interpretation still limit their use. In this study, different methods for selecting variables were compared for the analysis of on-the-ground hyperspectral signatures of wheat grown under a wide range of nitrogen supplies. Spectral signatures were recorded at the end of stem elongation, booting, and heading stages in 100 georeferenced locations, using a 512-channel portable spectroradiometer operating in the 325-1075-nm range. The following procedures were compared: (i) a heuristic combined approach including lambda-lambda R(2) (LL R(2)) model, principal component analysis (PCA), and stepwise discriminant analysis (SDA); (ii) variable importance for projection (VIP) statistics derived from partial least square (PLS) regression (PLS-VIP); and (iii) multiple linear regression (MLR) analysis through maximum R-square improvement (MAXR) and stepwise algorithms. The discriminating capability of selected wavelengths was evaluated by canonical discriminant analysis. Leaf-nitrogen concentration was quantified on samples collected at the same locations and dates and used as response variable in regressive methods. The different methods resulted in differences in the number and position of the selected wavebands. Bands extracted through regressive methods were mostly related to response variable, as shown by the importance of the visible region for PLS and stepwise. Band selection techniques can be extremely useful not only to improve the power of predictive models but also for data interpretation or sensor design.
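
    As an illustration of the PLS-VIP step, the sketch below computes variable importance in projection (VIP) scores from a fitted scikit-learn PLS model and keeps bands above the conventional VIP > 1 threshold; the data names and threshold are assumptions, not the study's exact settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(X, y, n_components=5):
    """Variable importance in projection (VIP) from a fitted PLS regression model."""
    pls = PLSRegression(n_components=n_components).fit(X, y)
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = W.shape[0]                                     # number of wavebands
    ssy = np.sum(T ** 2, axis=0) * Q[0] ** 2           # y-variance explained per component
    weights = (W / np.linalg.norm(W, axis=0)) ** 2     # normalized squared weights
    return np.sqrt(p * (weights @ ssy) / ssy.sum())

# Rule of thumb: retain wavebands with VIP > 1 (variable names are hypothetical)
# selected_bands = np.where(vip_scores(spectra, leaf_nitrogen) > 1.0)[0]
```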

  8. Potential and optimization of genomic selection for fusarium head blight resistance in six-row barley

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fusarium head blight (FHB) is a devastating disease of barley, causing reductions in yield and quality. Marker-based selection for resistance to FHB and lowered deoxynivalenol (DON) grain concentration would save considerable costs and time associated with phenotyping. A comprehensive marker-based s...

  9. Metal–organic framework with optimally selective xenon adsorption and separation

    SciTech Connect

    Banerjee, Debasis; Simon, Cory M.; Plonka, Anna M.; Motkuri, Radha K.; Liu, Jian; Chen, Xianyin; Smit, Berend; Parise, John B.; Haranczyk, Maciej; Thallapally, Praveen K.

    2016-06-13

    Nuclear energy is considered among the most viable alternatives to our current fossil fuel based energy economy [1]. The mass-deployment of nuclear energy as an emissions-free source requires the reprocessing of used nuclear fuel to mitigate the waste [2]. One of the major concerns with reprocessing used nuclear fuel is the release of volatile radionuclides such as Xe and Kr. The most mature process for removing these radionuclides is energy- and capital-intensive cryogenic distillation. Alternatively, porous materials such as metal-organic frameworks (MOFs) have demonstrated the ability to selectively adsorb Xe and Kr at ambient conditions [3-8]. High-throughput computational screening of large databases of porous materials has identified a calcium-based nanoporous MOF, SBMOF-1, as the most selective for Xe over Kr [9,10]. Here, we affirm this prediction and report that SBMOF-1 exhibits by far the highest Xe adsorption capacity and a remarkable Xe/Kr selectivity under relevant nuclear reprocessing conditions. The exceptional selectivity of SBMOF-1 is attributed to its pore size tailored to Xe and its dense wall of atoms that constructs a binding site with a high affinity for Xe, as evident by single crystal X-ray diffraction and molecular simulation.

  10. Surface stability and the selection rules of substrate orientation for optimal growth of epitaxial II-VI semiconductors

    SciTech Connect

    Yin, Wan-Jian; Yang, Ji-Hui; Zaunbrecher, Katherine; Gessert, Tim; Barnes, Teresa; Wei, Su-Huai; Yan, Yanfa

    2015-10-05

    The surface structures of ionic zinc-blende CdTe (001), (110), (111), and (211) surfaces are systematically studied by first-principles density functional calculations. Based on the surface structures and surface energies, we identify the detrimental twinning appearing in molecular beam epitaxy (MBE) growth of II-VI compounds as the (111) lamellar twin boundaries. To avoid the appearance of twinning in MBE growth, we propose the following selection rules for choosing optimal substrate orientations: (1) the surface should be nonpolar, so that there are no large surface reconstructions that could act as a nucleation center and promote the formation of twins; (2) the surface structure should have low symmetry, so that there are no multiple equivalent directions for growth. These straightforward rules, consistent with experimental observations, provide guidelines for selecting proper substrates for high-quality MBE growth of II-VI compounds.

  11. Topology optimization design of a lightweight ultra-broadband wide-angle resistance frequency selective surface absorber

    NASA Astrophysics Data System (ADS)

    Sui, Sai; Ma, Hua; Wang, Jiafu; Pang, Yongqiang; Qu, Shaobo

    2015-06-01

    In this paper, the topology design of a lightweight ultra-broadband polarization-independent frequency selective surface absorber is proposed. Absorption over a wide frequency range of 6.68-26.08 GHz with reflection below -10 dB can be achieved by optimizing the topology and dimensions of the resistive frequency selective surface by means of a genetic algorithm. This ultra-broadband absorption is maintained for incident angles up to 55 degrees and is independent of the incident wave polarization. The experimental results agree well with the numerical simulations. The density of our ultra-broadband absorber is only 0.35 g cm-3, and thus the absorber may find potential applications in microwave engineering, such as electromagnetic interference and stealth technology.

  12. Life-management strategies of selection, optimization, and compensation: measurement by self-report and construct validity.

    PubMed

    Freund, Alexandra M; Baltes, Paul B

    2002-04-01

    The authors examined the usefulness of a self-report measure for elective selection, loss-based selection, optimization, and compensation (SOC) as strategies of life management. The expected 4-factor solution was obtained in 2 independent samples (N = 218, 14-87 years; N = 181, 18-89 years), exhibiting high retest stability across 4 weeks (r(tt) = .74-.82). As expected, middle-aged adults showed higher endorsement of SOC than younger and older adults. Moreover, SOC showed meaningful convergent and divergent associations with other psychological constructs (e.g., thinking styles, NEO) and evinced positive correlations with measures of well-being, which were maintained after other personality and motivational constructs were controlled for. Initial evidence on behavioral associations involving SOC obtained in other studies is summarized.

  13. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features that improve the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the most effective yet smallest feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy.
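
    The Fisher-criterion filter stage can be sketched as below; the CLA/ACO wrapper of the paper is replaced here by a plain cross-validated k-NN evaluation purely for brevity, so this is an illustration of the filter idea rather than the full method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fisher_scores(X, y):
    """Fisher criterion per gene: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = sum(X[y == c].shape[0] * (X[y == c].mean(axis=0) - overall_mean) ** 2
              for c in classes)
    den = sum(X[y == c].shape[0] * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

def filter_then_evaluate(X, y, n_genes=50):
    """Keep the top-scoring genes (filter stage) and report cross-validated k-NN
    accuracy; the wrapper search of the paper would refine this subset further."""
    top = np.argsort(fisher_scores(X, y))[-n_genes:]
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), X[:, top], y, cv=5)
    return top, acc.mean()
```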

  14. Application of drug delivery technologies in lead candidate selection and optimization.

    PubMed

    Chaubal, Mahesh V

    2004-07-15

    Formulation development during early drug discovery and lead optimization involves several challenges, including limited drug supply, the need for rapid turnaround, and limited development time. It is also desirable to develop initial formulations that will be representative of final commercial formulations. Nanoparticles offer a unique platform for the formulation of poorly soluble drugs: such formulations can be injected (intravenous, subcutaneous, intramuscular) as well as administered through other routes, such as oral, ocular, and inhalation. Thus, a single formulation can be used to test and eventually develop multiple dosage forms. Furthermore, nanoparticles offer the opportunity for high drug loading of low-potency compounds, and thus support toxicological evaluation of such compounds.

  15. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with lowest direct operating cost for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi typical mission. Generally, DOC increased with increasing speed and decreasing field length but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  16. Review of SPECT collimator selection, optimization, and fabrication for clinical and preclinical imaging

    SciTech Connect

    Van Audenhaege, Karen Van Holen, Roel; Vandenberghe, Stefaan; Vanhove, Christian; Moore, Stephen C.

    2015-08-15

    In single photon emission computed tomography, the choice of the collimator has a major impact on the sensitivity and resolution of the system. Traditional parallel-hole and fan-beam collimators used in clinical practice, for example, have a relatively poor sensitivity and subcentimeter spatial resolution, while in small-animal imaging, pinhole collimators are used to obtain submillimeter resolution and multiple pinholes are often combined to increase sensitivity. This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications. New opportunities for improved collimation are now arising primarily because of (i) new collimator-production techniques and (ii) detectors with improved intrinsic spatial resolution that have recently become available. These new technologies are expected to impact the design of collimators in the future. The authors also discuss concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems, and the authors conclude with an example of an optimization study for a parallel-hole, fan-beam, cone-beam, and multiple-pinhole collimator for different applications.

  17. Review of SPECT collimator selection, optimization, and fabrication for clinical and preclinical imaging

    PubMed Central

    Van Audenhaege, Karen; Van Holen, Roel; Vandenberghe, Stefaan; Vanhove, Christian; Metzler, Scott D.; Moore, Stephen C.

    2015-01-01

    In single photon emission computed tomography, the choice of the collimator has a major impact on the sensitivity and resolution of the system. Traditional parallel-hole and fan-beam collimators used in clinical practice, for example, have a relatively poor sensitivity and subcentimeter spatial resolution, while in small-animal imaging, pinhole collimators are used to obtain submillimeter resolution and multiple pinholes are often combined to increase sensitivity. This paper reviews methods for production, sensitivity maximization, and task-based optimization of collimation for both clinical and preclinical imaging applications. New opportunities for improved collimation are now arising primarily because of (i) new collimator-production techniques and (ii) detectors with improved intrinsic spatial resolution that have recently become available. These new technologies are expected to impact the design of collimators in the future. The authors also discuss concepts like septal penetration, high-resolution applications, multiplexing, sampling completeness, and adaptive systems, and the authors conclude with an example of an optimization study for a parallel-hole, fan-beam, cone-beam, and multiple-pinhole collimator for different applications. PMID:26233207

  18. Leveraging information storage to select forecast-optimal parameters for delay-coordinate reconstructions

    NASA Astrophysics Data System (ADS)

    Garland, Joshua; James, Ryan G.; Bradley, Elizabeth

    2016-02-01

    Delay-coordinate reconstruction is a proven modeling strategy for building effective forecasts of nonlinear time series. The first step in this process is the estimation of good values for two parameters, the time delay and the embedding dimension. Many heuristics and strategies have been proposed in the literature for estimating these values. Few, if any, of these methods were developed with forecasting in mind, however, and their results are not optimal for that purpose. Even so, these heuristics—intended for other applications—are routinely used when building delay coordinate reconstruction-based forecast models. In this paper, we propose an alternate strategy for choosing optimal parameter values for forecast methods that are based on delay-coordinate reconstructions. The basic calculation involves maximizing the shared information between each delay vector and the future state of the system. We illustrate the effectiveness of this method on several synthetic and experimental systems, showing that this metric can be calculated quickly and reliably from a relatively short time series, and that it provides a direct indication of how well a near-neighbor based forecasting method will work on a given delay reconstruction of that time series. This allows a practitioner to choose reconstruction parameters that avoid any pathologies, regardless of the underlying mechanism, and maximize the predictive information contained in the reconstruction.
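
    A simplified illustration of the information-based criterion is the time-delayed mutual information between a sample and its future value, estimated from a histogram; the full method maximizes the shared information between entire delay vectors and the future state, so the sketch below is only a one-dimensional stand-in with assumed bin counts.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of the mutual information (in bits) between two series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def best_delay(series, max_delay=50):
    """Pick the delay maximizing the shared information between a sample and the
    value `delay` steps ahead -- a rough, one-dimensional proxy for the
    delay-vector criterion described in the abstract."""
    series = np.asarray(series)
    scores = [mutual_information(series[:-d], series[d:]) for d in range(1, max_delay + 1)]
    return int(np.argmax(scores)) + 1
```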

  19. Selection of optimal oligonucleotide probes for microarrays usingmultiple criteria, global alignment and parameter estimation.

    SciTech Connect

    Li, Xingyuan; He, Zhili; Zhou, Jizhong

    2005-10-30

    The oligonucleotide specificity for microarray hybridization can be predicted by its sequence identity to non-targets, continuous stretch to non-targets, and/or binding free energy to non-targets. Most currently available programs only use one or two of these criteria, which may choose 'false' specific oligonucleotides or miss 'true' optimal probes in a considerable proportion. We have developed a software tool, called CommOligo, using new algorithms and all three criteria for selection of optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3'-untranslated region (3'-UTR) and melting temperature (Tm), are used to check each possible oligonucleotide. A sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for identity, stretch and free energy filters are automatically determined from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request.
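
    A few of the simpler filters (GC content, melting temperature, continuous stretch to non-targets) can be sketched as below; the thresholds and the Wallace-rule Tm estimate are illustrative assumptions, not CommOligo's actual criteria.

```python
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting temperature (Wallace rule); adequate only for short oligos."""
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def passes_filters(probe, non_targets, gc_range=(0.40, 0.60),
                   tm_range=(55, 75), max_stretch=15):
    """Chain of simple specificity filters in the spirit of the description above:
    GC content, melting temperature, and the longest continuous stretch shared
    with any non-target sequence.  Thresholds are illustrative only."""
    if not (gc_range[0] <= gc_content(probe) <= gc_range[1]):
        return False
    if not (tm_range[0] <= wallace_tm(probe) <= tm_range[1]):
        return False
    for non_target in non_targets:
        for i in range(len(probe) - max_stretch):
            if probe[i:i + max_stretch + 1] in non_target:   # continuous-stretch filter
                return False
    return True
```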

  20. Issues in the Optimal Selection of a Cranial Nerve Monitoring System

    PubMed Central

    Selesnick, Samuel H.; Goldsmith, Daniel F.

    1993-01-01

    Intraoperative nerve monitoring (IONM) is a safe technique that is of clear clinical value in the preservation of cranial nerves in skull base surgery and is rapidly becoming the standard of care. Available nerve monitoring systems vary widely in capabilities and costs. A well-informed surgeon may best decide on monitoring needs based on surgical case selection, experience, operating room space, availability of monitoring personnel, and cost. Key system characteristics that should be reviewed in the decision-making process include the monitoring technique (electromyography, pressure transducer, direct nerve monitoring, brainstem auditory evoked potential) and the stimulus technique (stimulating parameters, probe selection). In the past, IONM has been primarily employed in posterior fossa and temporal bone surgery, but the value of IONM is being recognized in more skull base and head and neck surgeries. Suggested IONM strategies for specific surgeries are presented. PMID:17170916

  1. [Potential role of patient-derived tumor xenografts (PDTXs) in the selection of optimal therapeutic strategy].

    PubMed

    Tóvári, József

    2015-12-01

    The rapid selection of an efficient anticancer therapy may decrease the unwanted burden on patients and has financial consequences. Tumor models, including xenografts in mice, were previously used mostly in the development of new anticancer drugs. Nowadays, xenografts from direct patient-derived tumor tissues (PDTT) in immune-deficient mice yield better models than experimental tumors originating from cell cultures. The new method enables researchers to observe heterogeneous tumor cells together with their surrounding tissue elements and matrices, representing the clinical situation in humans much better. The cells in PDTT tumors are alive and functionally active through several generations after serial transplantation. Therefore, using these models we may investigate tumor response to different therapies, the selection of resistant cell populations, and the formation of metastases, predicting outcomes for personalized therapy.

  2. Optimal site selection for a high-resolution ice core record in East Antarctica

    NASA Astrophysics Data System (ADS)

    Vance, Tessa R.; Roberts, Jason L.; Moy, Andrew D.; Curran, Mark A. J.; Tozer, Carly R.; Gallant, Ailie J. E.; Abram, Nerilie J.; van Ommen, Tas D.; Young, Duncan A.; Grima, Cyril; Blankenship, Don D.; Siegert, Martin J.

    2016-03-01

    Ice cores provide some of the best-dated and most comprehensive proxy records, as they yield a vast and growing array of proxy indicators. Selecting a site for ice core drilling is nonetheless challenging, as the assessment of potential new sites needs to consider a variety of factors. Here, we demonstrate a systematic approach to site selection for a new East Antarctic high-resolution ice core record. Specifically, seven criteria are considered: (1) 2000-year-old ice at 300 m depth; (2) above 1000 m elevation; (3) a minimum accumulation rate of 250 mm years-1 IE (ice equivalent); (4) minimal surface reworking to preserve the deposited climate signal; (5) a site with minimal displacement or elevation change in ice at 300 m depth; (6) a strong teleconnection to midlatitude climate; and (7) an appropriately complementary relationship to the existing Law Dome record (a high-resolution record in East Antarctica). Once assessment of these physical characteristics identified promising regions, logistical considerations (for site access and ice core retrieval) were briefly considered. We use Antarctic surface mass balance syntheses, along with ground-truthing of satellite data by airborne radar surveys to produce all-of-Antarctica maps of surface roughness, age at specified depth, elevation and displacement change, and surface air temperature correlations to pinpoint promising locations. We also use the European Centre for Medium-Range Weather Forecast ERA 20th Century reanalysis (ERA-20C) to ensure that a site complementary to the Law Dome record is selected. We find three promising sites in the Indian Ocean sector of East Antarctica in the coastal zone from Enderby Land to the Ingrid Christensen Coast (50-100° E). Although we focus on East Antarctica for a new ice core site, the methodology is more generally applicable, and we include key parameters for all of Antarctica which may be useful for ice core site selection elsewhere and/or for other purposes.

  3. Optimal site selection for a high resolution ice core record in East Antarctica

    NASA Astrophysics Data System (ADS)

    Vance, T.; Roberts, J.; Moy, A.; Curran, M.; Tozer, C.; Gallant, A.; Abram, N.; van Ommen, T.; Young, D.; Grima, C.; Blankenship, D.; Siegert, M.

    2015-11-01

    Ice cores provide some of the best dated and most comprehensive proxy records, as they yield a vast and growing array of proxy indicators. Selecting a site for ice core drilling is nonetheless challenging, as the assessment of potential new sites needs to consider a variety of factors. Here, we demonstrate a systematic approach to site selection for a new East Antarctic high resolution ice core record. Specifically, seven criteria are considered: (1) 2000 year old ice at 300 m depth, (2) above 1000 m elevation, (3) a minimum accumulation rate of 250 mm yr-1 IE, (4) minimal surface re-working to preserve the deposited climate signal, (5) a site with minimal displacement or elevation change of ice at 300 m depth, (6) a strong teleconnection to mid-latitude climate and (7) an appropriately complementary relationship to the existing Law Dome record (a high resolution record in East Antarctica). Once assessment of these physical characteristics identified promising regions, logistical considerations (for site access and ice core retrieval) were briefly considered. We use Antarctic surface mass balance syntheses, along with ground-truthing of satellite data by airborne radar surveys to produce all-of-Antarctica maps of surface roughness, age at specified depth, elevation and displacement change and surface air temperature correlations to pinpoint promising locations. We also use the European Centre for Medium-Range Weather Forecast ERA 20th Century reanalysis (ERA-20C) to ensure a site complementary to the Law Dome record is selected. We find three promising sites in the Indian Ocean sector of East Antarctica in the coastal zone from Enderby Land to the Ingrid Christensen Coast (50-100° E). Although we focus on East Antarctica for a new ice core site, the methodology is more generally applicable and we include key parameters for all of Antarctica which may be useful for ice core site selection elsewhere and/or for other purposes.

  4. Optimization of biguanide derivatives as selective antitumor agents blocking adaptive stress responses in the tumor microenvironment

    PubMed Central

    Narise, Kosuke; Okuda, Kensuke; Enomoto, Yukihiro; Hirayama, Tasuku; Nagasawa, Hideko

    2014-01-01

    Adaptive cellular responses resulting from multiple microenvironmental stresses, such as hypoxia and nutrient deprivation, are potential novel drug targets for cancer treatment. Accordingly, we focused on developing anticancer agents targeting the tumor microenvironment (TME). In this study, to search for selective antitumor agents blocking adaptive responses in the TME, thirteen new compounds, designed and synthesized on the basis of the arylmethylbiguanide scaffold of phenformin, were used in structure activity relationship studies of inhibition of hypoxia inducible factor (HIF)-1 and unfolded protein response (UPR) activation and of selective cytotoxicity under glucose-deprived stress conditions, using HT29 cells. We conducted luciferase reporter assays using stable cell lines expressing either an HIF-1-responsive reporter gene or a glucose-regulated protein 78 promoter-reporter gene, which were induced by hypoxia and glucose deprivation stress, respectively, to screen for TME-targeting antitumor drugs. The guanidine analog (compound 2), obtained by bioisosteric replacement of the biguanide group, had activities comparable with those of phenformin (compound 1). Introduction of various substituents on the phenyl ring significantly affected the activities. In particular, the o-methylphenyl analog compound 7 and the o-chlorophenyl analog compound 12 showed considerably more potent inhibitory effects on HIF-1 and UPR activation than did phenformin, and excellent selective cytotoxicity under glucose deprivation. These compounds, therefore, represent an improvement over phenformin. They also suppressed HIF-1- and UPR-related protein expression and secretion of vascular endothelial growth factor-A. Moreover, these compounds exhibited significant antiangiogenic effects in the chick chorioallantoic membrane assay. Our structural development studies of biguanide derivatives provided promising candidates for a novel anticancer agent targeting the TME for selective cancer

  5. Sacrificing information for the greater good: how to select photometric bands for optimal accuracy

    NASA Astrophysics Data System (ADS)

    Stensbo-Smidt, Kristoffer; Gieseke, Fabian; Igel, Christian; Zirm, Andrew; Steenstrup Pedersen, Kim

    2017-01-01

    Large-scale surveys make huge amounts of photometric data available. Because of the sheer amount of objects, spectral data cannot be obtained for all of them. Therefore, it is important to devise techniques for reliably estimating physical properties of objects from photometric information alone. These estimates are needed to automatically identify interesting objects worth a follow-up investigation as well as to produce the required data for a statistical analysis of the space covered by a survey. We argue that machine learning techniques are suitable to compute these estimates accurately and efficiently. This study promotes a feature selection algorithm, which selects the most informative magnitudes and colours for a given task of estimating physical quantities from photometric data alone. Using k-nearest neighbours regression, a well-known non-parametric machine learning method, we show that using the found features significantly increases the accuracy of the estimations compared to using standard features and standard methods. We illustrate the usefulness of the approach by estimating specific star formation rates (sSFRs) and redshifts (photo-z's) using only the broad-band photometry from the Sloan Digital Sky Survey (SDSS). For estimating sSFRs, we demonstrate that our method produces better estimates than traditional spectral energy distribution fitting. For estimating photo-z's, we show that our method produces more accurate photo-z's than the method employed by SDSS. The study highlights the general importance of performing proper model selection to improve the results of machine learning systems and how feature selection can provide insights into the predictive relevance of particular input features.
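
    A greedy forward feature-selection loop with k-NN regression, in the spirit of the approach described above, might look like the following sketch; the scoring metric, fold count, neighbour count, and stopping rule are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def forward_select(X, y, feature_names, max_features=5):
    """Greedy forward selection of magnitudes/colours for k-NN regression: at each
    step keep the feature whose addition gives the best cross-validated R^2.
    A simplified stand-in for the feature-selection scheme in the paper."""
    selected, remaining = [], list(range(X.shape[1]))
    knn = KNeighborsRegressor(n_neighbors=10)
    while remaining and len(selected) < max_features:
        scores = {j: cross_val_score(knn, X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return [feature_names[j] for j in selected]
```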

  6. Metal-organic framework with optimally selective xenon adsorption and separation

    NASA Astrophysics Data System (ADS)

    Banerjee, Debasis; Simon, Cory M.; Plonka, Anna M.; Motkuri, Radha K.; Liu, Jian; Chen, Xianyin; Smit, Berend; Parise, John B.; Haranczyk, Maciej; Thallapally, Praveen K.

    2016-06-01

    Nuclear energy is among the most viable alternatives to our current fossil fuel-based energy economy. The mass deployment of nuclear energy as a low-emissions source requires the reprocessing of used nuclear fuel to recover fissile materials and mitigate radioactive waste. A major concern with reprocessing used nuclear fuel is the release of volatile radionuclides such as xenon and krypton that evolve into reprocessing facility off-gas in parts per million concentrations. The existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal-organic frameworks have demonstrated the ability to selectively adsorb xenon and krypton at ambient conditions. Here we carry out a high-throughput computational screening of large databases of metal-organic frameworks and identify SBMOF-1 as the most selective for xenon. We affirm this prediction and report that SBMOF-1 exhibits by far the highest reported xenon adsorption capacity and a remarkable Xe/Kr selectivity under conditions pertinent to nuclear fuel reprocessing.

  7. Metal-organic framework with optimally selective xenon adsorption and separation.

    PubMed

    Banerjee, Debasis; Simon, Cory M; Plonka, Anna M; Motkuri, Radha K; Liu, Jian; Chen, Xianyin; Smit, Berend; Parise, John B; Haranczyk, Maciej; Thallapally, Praveen K

    2016-06-13

    Nuclear energy is among the most viable alternatives to our current fossil fuel-based energy economy. The mass deployment of nuclear energy as a low-emissions source requires the reprocessing of used nuclear fuel to recover fissile materials and mitigate radioactive waste. A major concern with reprocessing used nuclear fuel is the release of volatile radionuclides such as xenon and krypton that evolve into reprocessing facility off-gas in parts per million concentrations. The existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal-organic frameworks have demonstrated the ability to selectively adsorb xenon and krypton at ambient conditions. Here we carry out a high-throughput computational screening of large databases of metal-organic frameworks and identify SBMOF-1 as the most selective for xenon. We affirm this prediction and report that SBMOF-1 exhibits by far the highest reported xenon adsorption capacity and a remarkable Xe/Kr selectivity under conditions pertinent to nuclear fuel reprocessing.

  8. Metal–organic framework with optimally selective xenon adsorption and separation

    DOE PAGES

    Banerjee, Debasis; Simon, Cory M.; Plonka, Anna M.; ...

    2016-06-13

    Nuclear energy is among the most viable alternatives to our current fossil fuel-based energy economy. The mass deployment of nuclear energy as a low-emissions source requires the reprocessing of used nuclear fuel to recover fissile materials and mitigate radioactive waste. A major concern with reprocessing used nuclear fuel is the release of volatile radionuclides such as xenon and krypton that evolve into reprocessing facility off-gas in parts per million concentrations. In addition, the existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal–organic frameworks have demonstrated the ability to selectively adsorb xenon and krypton at ambient conditions. Here we carry out a high-throughput computational screening of large databases of metal–organic frameworks and identify SBMOF-1 as the most selective for xenon. We affirm this prediction and report that SBMOF-1 exhibits by far the highest reported xenon adsorption capacity and a remarkable Xe/Kr selectivity under conditions pertinent to nuclear fuel reprocessing.

  9. A perspective on tritium versus carbon-14: ensuring optimal label selection in pharmaceutical research and development.

    PubMed

    Krauser, Joel A

    2013-01-01

    Tritium ((3)H) and carbon-14 ((14)C) labels applied in pharmaceutical research and development each offer their own distinctive advantages and disadvantages, coupled with benefits and risks. The advantages of (3)H include a higher specific activity, a shorter half-life that allows more manageable waste remediation, lower material costs, and often more direct synthetic routes. The advantages of (14)C include certain analytical benefits and less potential for label loss. Although (3)H labels offer several advantages, they might be overlooked as a viable option because of concerns about their drawbacks. A main drawback often raised is metabolic liability. These drawbacks, in some cases, might be overstated, leading to underutilization of a perfectly viable option. As a consequence, label selection may automatically default to (14)C, which is the more conservative approach. To challenge this '(14)C-by-default' approach, pharmaceutical agents with strategically selected (3)H-labeling positions, chosen on the basis of non-labeled metabolism data, have been successfully implemented and evaluated for (3)H loss. From in-house results, the long-term success of projects would clearly benefit from a thorough, objective, and balanced assessment regarding label selection ((3)H or (14)C). This assessment should be based on available project information and scientific knowledge. Important considerations are project applicability (preclinical and clinical phases), synthetic feasibility, costs, and timelines.

  10. Metal–organic framework with optimally selective xenon adsorption and separation

    SciTech Connect

    Banerjee, Debasis; Simon, Cory M.; Plonka, Anna M.; Motkuri, Radha K.; Liu, Jian; Chen, Xianyin; Smit, Berend; Parise, John B.; Haranczyk, Maciej

    2016-06-13

    Nuclear energy is among the most viable alternatives to our current fossil fuel-based energy economy. The mass deployment of nuclear energy as a low-emissions source requires the reprocessing of used nuclear fuel to recover fissile materials and mitigate radioactive waste. A major concern with reprocessing used nuclear fuel is the release of volatile radionuclides such as xenon and krypton that evolve into reprocessing facility off-gas in parts per million concentrations. In addition, the existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal–organic frameworks have demonstrated the ability to selectively adsorb xenon and krypton at ambient conditions. Here we carry out a high-throughput computational screening of large databases of metal–organic frameworks and identify SBMOF-1 as the most selective for xenon. We affirm this prediction and report that SBMOF-1 exhibits by far the highest reported xenon adsorption capacity and a remarkable Xe/Kr selectivity under conditions pertinent to nuclear fuel reprocessing.

  11. Metal–organic framework with optimally selective xenon adsorption and separation

    PubMed Central

    Banerjee, Debasis; Simon, Cory M.; Plonka, Anna M.; Motkuri, Radha K.; Liu, Jian; Chen, Xianyin; Smit, Berend; Parise, John B.; Haranczyk, Maciej; Thallapally, Praveen K.

    2016-01-01

    Nuclear energy is among the most viable alternatives to our current fossil fuel-based energy economy. The mass deployment of nuclear energy as a low-emissions source requires the reprocessing of used nuclear fuel to recover fissile materials and mitigate radioactive waste. A major concern with reprocessing used nuclear fuel is the release of volatile radionuclides such as xenon and krypton that evolve into reprocessing facility off-gas in parts per million concentrations. The existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal–organic frameworks have demonstrated the ability to selectively adsorb xenon and krypton at ambient conditions. Here we carry out a high-throughput computational screening of large databases of metal–organic frameworks and identify SBMOF-1 as the most selective for xenon. We affirm this prediction and report that SBMOF-1 exhibits by far the highest reported xenon adsorption capacity and a remarkable Xe/Kr selectivity under conditions pertinent to nuclear fuel reprocessing. PMID:27291101

  12. The adaptiveness of selection, optimization, and compensation as strategies of life management: evidence from a preference study on proverbs.

    PubMed

    Freund, Alexandra M; Baltes, Paul B

    2002-09-01

    Proverbs were used to examine whether laypeople's conceptions of or preferences for life-management strategies are consistent with the model of selection, optimization, and compensation (SOC model). The SOC model posits that there are three fundamental processes of life management: selection, optimization, and compensation. In two studies (N = 64; N = 131), young (19-32 years) and older adults (59-85 years) were asked to match proverbs to sentence stems indicative of life-management situations. Of the proverbs, half reflected one component of SOC and half alternative, non-SOC life-management strategies. SOC-related and alternative proverbs were matched on familiarity, understandability, and meaningfulness. Two main results were obtained: Young and older adults chose proverbs reflecting SOC (a) more frequently and (b) faster than alternative proverbs. Study 3 (N = 60, 19-32 year-old participants) ruled out that these results were due to an artifact resulting from a stronger, purely semantic relationship of the specific sentence stems with the SOC-related proverbs. Studies 4 (N = 48 younger and older adults) and 5 (N = 20 younger adults) were conducted to test discriminant validity. In contrast with tasks involving long-term goal orientation and success, there were no preferences for SOC-related proverbs for life contexts involving relaxation or leisure. Taken together, results of these studies indicate that individuals, when asked to choose between alternative proverbs characterizing ways of managing life, prefer SOC-related proverbs.

  13. On optimization of a composite bone plate using the selective stress shielding approach.

    PubMed

    Samiezadeh, Saeid; Tavakkoli Avval, Pouria; Fawaz, Zouheir; Bougherara, Habiba

    2015-02-01

    Bone fracture plates are used to stabilize fractures while allowing for adequate compressive force on the fracture ends. Yet the high stiffness of conventional bone plates significantly reduces compression at the fracture site, and can lead to subsequent bone loss upon healing. Fibre-reinforced composite bone plates have been introduced to address this drawback. However, no studies have optimized their configurations to fulfill the requirements of proper healing. In the present study, classical laminate theory and the finite element method were employed for optimization of a composite bone plate. A hybrid composite made of carbon fibre/epoxy with a flax/epoxy core, which was introduced previously, was optimized by varying the laminate stacking sequence and the contribution of each material, in order to minimize the axial stiffness and maximize the torsional stiffness for a given range of bending stiffness. The initial 14×4(14) possible configurations were reduced to 13 after applying various design criteria. A comprehensive finite element model, validated against a previous experimental study, was used to evaluate the mechanical performance of each composite configuration in terms of its fracture stability, load sharing, and strength in transverse and oblique Vancouver B1 fracture configurations at immediately post-operative, post-operative, and healed bone stages. It was found that a carbon fibre/epoxy plate with an axial stiffness of 4.6 MN, and bending and torsional stiffness of 13 and 14 N·m(2), respectively, showed an overall superiority compared with other laminate configurations. It increased the compressive force at the fracture site up to 14% when compared to a conventional metallic plate, and maintained fracture stability by ensuring the fracture fragments' relative motions were comparable to those found during metallic plate fixation. The healed stage results revealed that implantation of the titanium plate caused a 40.3% reduction in bone stiffness

  14. Optimal Technology Selection and Operation of Microgrids in Commercial Buildings

    SciTech Connect

    Marnay, Chris; Venkataramanan, Giri; Stadler, Michael; Siddiqui, Afzal; Firestone, Ryan; Chandran, Bala

    2007-01-15

    The deployment of small (<1-2 MW) clusters of generators, heat and electrical storage, efficiency investments, and combined heat and power (CHP) applications (particularly involving heat activated cooling) in commercial buildings promises significant benefits but poses many technical and financial challenges, both in system choice and its operation; if successful, such systems may be precursors to widespread microgrid deployment. The presented optimization approach to choosing such systems and their operating schedules uses Berkeley Lab's Distributed Energy Resources Customer Adoption Model [DER-CAM], extended to incorporate electrical storage options. DER-CAM chooses annual energy bill minimizing systems in a fully technology-neutral manner. An illustrative example for a San Francisco hotel is reported. The chosen system includes two engines and an absorption chiller, providing an estimated 11 percent cost savings and 10 percent carbon emission reductions, under idealized circumstances.
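
    DER-CAM itself is a detailed mixed-integer optimization model; purely as an illustration of the underlying idea of choosing the technology set that minimizes the annual energy bill, the sketch below enumerates candidate technology combinations against a simplified cost model. The technology names, costs, and load figures are hypothetical placeholders, not DER-CAM data.

        from itertools import combinations

        # Hypothetical candidate technologies: (name, annualized capital cost in $/yr,
        # fraction of the electric bill offset, fraction of the gas/heating bill offset)
        TECHS = [
            ("gas engine + heat recovery", 60000, 0.45, 0.30),
            ("absorption chiller",         25000, 0.10, 0.15),
            ("battery storage",            30000, 0.08, 0.00),
            ("efficiency retrofit",        15000, 0.12, 0.10),
        ]

        BASE_ELECTRIC_BILL = 400000.0  # $/yr, hypothetical hotel load
        BASE_GAS_BILL = 150000.0       # $/yr

        def annual_bill(selection):
            """Annualized capital plus residual utility bills for one technology set."""
            elec_offset = min(sum(t[2] for t in selection), 0.9)  # cap combined savings
            gas_offset = min(sum(t[3] for t in selection), 0.9)
            capital = sum(t[1] for t in selection)
            return (capital
                    + BASE_ELECTRIC_BILL * (1.0 - elec_offset)
                    + BASE_GAS_BILL * (1.0 - gas_offset))

        # Enumerate every technology subset (including "install nothing") and keep the cheapest.
        best = min(
            (subset for r in range(len(TECHS) + 1) for subset in combinations(TECHS, r)),
            key=annual_bill,
        )
        print("lowest-bill selection:", [t[0] for t in best])
        print("estimated annual bill: $%.0f" % annual_bill(best))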

  15. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    The relationships between field length and cruise speed and aircraft direct operating cost were determined. A gradient optimizing computer program was developed to minimize direct operating cost (DOC) as a function of airplane geometry. In this way, the best airplane operating under one set of constraints can be compared with the best operating under another. A constant 30-passenger fuselage and rubberized engines based on the General Electric CT-7 were used as a baseline. All aircraft had to have a 600 nautical mile maximum range and were designed to FAR part 25 structural integrity and climb gradient regulations. Direct operating cost was minimized for a typical design mission of 150 nautical miles. For purposes of C(Lmax) calculation, all aircraft had double-slotted flaps but with no Fowler action.
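
    As a rough sketch of gradient-based DOC minimization over design variables, the following uses a generic constrained minimizer on an invented cost surrogate; the geometry variables, cost coefficients, and range constraint are placeholders rather than the study's actual aerodynamic and cost models.

        import numpy as np
        from scipy.optimize import minimize

        def doc_per_trip(x):
            """Hypothetical smooth surrogate for direct operating cost ($ per trip)."""
            wing_area, aspect_ratio = x                    # m^2, dimensionless
            fuel = 900.0 + 4.0 * wing_area + 6000.0 / aspect_ratio
            airframe = 300.0 + 2.5 * wing_area + 15.0 * aspect_ratio
            return fuel + airframe

        def range_margin_nm(x):
            """Hypothetical range model minus the required 600 nmi (must be >= 0)."""
            wing_area, aspect_ratio = x
            return 350.0 + 3.0 * wing_area + 20.0 * aspect_ratio - 600.0

        result = minimize(
            doc_per_trip,
            x0=np.array([50.0, 10.0]),
            method="SLSQP",
            bounds=[(30.0, 80.0), (7.0, 14.0)],
            constraints=[{"type": "ineq", "fun": range_margin_nm}],
        )
        print("optimal wing area: %.1f m^2, aspect ratio: %.1f" % tuple(result.x))
        print("minimum DOC: $%.0f per trip" % result.fun)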

  16. Using Information Theory in Optimal Test Point Selection for Health Management in NASA's Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Mehr, Ali Farhang; Tumer, Irem

    2005-01-01

    In this paper, we will present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLVs that has been partially developed at the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be cast as a constrained optimization problem in which the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
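
    One common way to score the "worth" of a candidate test point is the mutual information between the system health state and the sensor reading at that point. A minimal sketch follows, assuming a toy discrete joint distribution; it is not the NASA Ames RLV model, and the probability tables are invented for illustration.

        import numpy as np

        def mutual_information(joint):
            """I(H; S) in bits for a joint probability table p(health, sensor_reading)."""
            joint = np.asarray(joint, dtype=float)
            joint = joint / joint.sum()
            p_h = joint.sum(axis=1, keepdims=True)   # marginal over health states
            p_s = joint.sum(axis=0, keepdims=True)   # marginal over sensor readings
            nz = joint > 0
            return float((joint[nz] * np.log2(joint[nz] / (p_h @ p_s)[nz])).sum())

        # Hypothetical joint tables p(health, reading) for two candidate test points.
        # Rows: health = {nominal, degraded}; columns: discretized sensor reading.
        candidates = {
            "test point A": [[0.40, 0.10], [0.05, 0.45]],   # reading tracks health well
            "test point B": [[0.28, 0.22], [0.24, 0.26]],   # reading nearly uninformative
        }

        worth = {name: mutual_information(tbl) for name, tbl in candidates.items()}
        for name, bits in sorted(worth.items(), key=lambda kv: -kv[1]):
            print("%s: %.3f bits of information about system health" % (name, bits))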

  17. Optimized conditions for selective gold flotation by ToF-SIMS and ToF-LIMS

    NASA Astrophysics Data System (ADS)

    Chryssoulis, S. L.; Dimov, S. S.

    2004-06-01

    This work describes a comprehensive characterization of the factors controlling the floatability of free gold from flotation tests using reagents (collectors) at plant concentration levels. A relationship between the collector loadings on gold particles and their surface composition has been established. The findings of this study show that silver activates gold flotation and that there is a strong correlation between the surface concentration of silver and the loading of certain collectors. The organic surface analysis was done by ToF-SIMS while the inorganic surface analysis was carried out by time-of-flight laser ionization mass spectrometry (ToF-LIMS). The developed testing protocol based on ToF-LIMS and ToF-SIMS complementary surface analysis allows for optimization of the flotation scheme and hence improved gold recovery.

  18. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    SciTech Connect

    Dumidu Wijayasekara; Milos Manic; Piyush Sabharwall; Vivek Utgikar

    2011-07-01

    Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically, published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or overlearning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present the EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over-training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on the mean square error (MSE) and the standard deviation of MSE. The method uses k-fold cross validation. Therefore, in order to select the optimal architecture for the problem, the dataset is divided into three parts, which are used to train, validate and test each network architecture. Each architecture is then evaluated according to its generalization capability and its capability to conform to the original data. The method proved to be a comprehensive tool for identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and is therefore not necessarily the optimal one. Using this method, the testing error achieved was on the order of 10{sup -5} - 10{sup -3}. It was also shown that the absolute error achieved by EBaLM-OTR was an order of magnitude better than the lowest error achieved by EBaLM-THP.
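
    A rough sketch of the selection idea, assuming synthetic data and scikit-learn in place of the authors' implementation: hold out a test set, score each candidate architecture by the mean and standard deviation of its k-fold cross-validation MSE, and pick the architecture that generalizes best. The candidate hidden-layer sizes are arbitrary.

        import numpy as np
        from sklearn.model_selection import train_test_split, KFold
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(400, 3))                      # stand-in inputs
        y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=400)

        X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

        candidates = [(8,), (16,), (16, 8), (32, 16)]              # hidden-layer layouts
        scores = {}
        for arch in candidates:
            fold_mse = []
            for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_dev):
                net = MLPRegressor(hidden_layer_sizes=arch, max_iter=2000, random_state=0)
                net.fit(X_dev[train_idx], y_dev[train_idx])
                fold_mse.append(mean_squared_error(y_dev[val_idx], net.predict(X_dev[val_idx])))
            scores[arch] = (np.mean(fold_mse), np.std(fold_mse))   # generalization + stability

        best = min(scores, key=lambda a: scores[a][0])
        print("cross-validation scores (mean MSE, std):", scores)
        final = MLPRegressor(hidden_layer_sizes=best, max_iter=2000, random_state=0).fit(X_dev, y_dev)
        print("selected architecture:", best,
              "test MSE: %.4g" % mean_squared_error(y_test, final.predict(X_test)))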

  19. Sensor selection and chemo-sensory optimization: toward an adaptable chemo-sensory system.

    PubMed

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to "adapt" in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve.

  20. Sensor Selection and Chemo-Sensory Optimization: Toward an Adaptable Chemo-Sensory System

    PubMed Central

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to “adapt” in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve. PMID

  1. Optimization of process configuration and strain selection for microalgae-based biodiesel production.

    PubMed

    Yu, Nan; Dieu, Linus Tao Jie; Harvey, Simon; Lee, Dong-Yup

    2015-10-01

    A mathematical model was developed for the design of a microalgae-based biodiesel production system by systematically integrating all the production stages and strain properties. Through a hypothetical case study, the model suggested the most economical system configuration for the selected microalgae strains from the processes available at each stage, resulting in the cheapest biodiesel production cost, S$2.66/kg, which is still higher than the current diesel price (S$1.05/kg). Interestingly, the microalgae strain properties, such as lipid content, effective diameter and productivity, were found to be among the major factors that significantly affect the production cost as well as the system configuration.
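
    The stage-wise configuration choice can be pictured as selecting one process option per production stage so that the total cost per kilogram of biodiesel is minimized. The stages, options, and costs in the sketch below are invented placeholders; the paper's model additionally couples strain properties (lipid content, cell diameter, productivity) into each stage.

        from itertools import product

        # Hypothetical cost contributions (S$ per kg biodiesel) of each process option.
        STAGES = {
            "cultivation": {"open pond": 0.90, "photobioreactor": 1.60},
            "harvesting":  {"flocculation": 0.35, "centrifugation": 0.80},
            "extraction":  {"wet solvent": 0.70, "dry solvent": 1.10},
            "conversion":  {"acid-catalyzed": 0.55, "base-catalyzed": 0.40},
        }

        def total_cost(choice):
            """Sum stage costs for one configuration (a dict stage -> option)."""
            return sum(STAGES[stage][option] for stage, option in choice.items())

        # Enumerate every combination of one option per stage and keep the cheapest.
        stage_names = list(STAGES)
        best = min(
            (dict(zip(stage_names, options))
             for options in product(*(STAGES[s] for s in stage_names))),
            key=total_cost,
        )
        print("cheapest configuration:", best)
        print("production cost: S$%.2f per kg" % total_cost(best))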

  2. Mechanism-Based Inhibitors of Serine Proteases with High Selectivity Through Optimization of S’ Subsite Binding

    PubMed Central

    Li, Yi; Dou, Dengfeng; He, Guijia; Lushington, Gerald H.; Groutas, William C.

    2009-01-01

    A series of mechanism-based inhibitors designed to interact with the S’ subsites of serine proteases was synthesized and their inhibitory activity toward the closely-related serine proteases human neutrophil elastase (HNE) and proteinase 3 (PR 3) was investigated. The compounds were found to be time-dependent inhibitors of HNE and were devoid of any inhibitory activity toward PR 3. The results suggest that highly selective inhibitors of serine proteases whose primary substrate specificity and active sites are similar can be identified by exploiting differences in their S’ subsites. The best inhibitor (compound 16) had a kinact/KI value of 4580 M−1 s−1. PMID:19394830

  3. Optimizing energy yields in black locust through genetic selection: final report

    SciTech Connect

    Bongarten, B.C.; Merkle, S.A.

    1996-10-01

    The purpose of this work was to assess the magnitude of improvement in biomass yield of black locust possible through breeding, and to determine methods for efficiently capturing the yield improvement achievable from selective breeding. To meet this overall objective, six tasks were undertaken to determine: (1) the amount and geographic pattern of natural genetic variation, (2) the mating system of the species, (3) quantitative genetic parameters of relevant traits, (4) the relationship between nitrogen fixation and growth in black locust, (5) the viability of mass vegetative propagation, and (6) the feasibility of improvement through genetic transformation.

  4. NESP: Nonlinear enhancement and selection of plane for optimal segmentation and recognition of scene word images

    NASA Astrophysics Data System (ADS)

    Kumar, Deepak; Anil Prasad, M. N.; Ramakrishnan, A. G.

    2013-01-01

    In this paper, we report a breakthrough result on the difficult task of segmentation and recognition of coloured text from the word image dataset of the ICDAR robust reading competition challenge 2: reading text in scene images. We split the word image into individual colour, gray and lightness planes and enhance the contrast of each of these planes independently by a power-law transform. The discrimination factor of each plane is computed as the maximum between-class variance used in Otsu thresholding. The plane that has the maximum discrimination factor is selected for segmentation. The trial version of Omnipage OCR is then used on the binarized words for recognition. Our recognition results on the ICDAR 2011 and ICDAR 2003 word datasets are compared with those reported in the literature. As a baseline, the images binarized by simple global and local thresholding techniques were also recognized. The word recognition rates obtained by our non-linear enhancement and selection of plane method are 72.8% and 66.2% for the ICDAR 2011 and 2003 word datasets, respectively. We have created ground-truth for each image at the pixel level to benchmark these datasets using a toolkit developed by us. The recognition rate of benchmarked images is 86.7% and 83.9% for the ICDAR 2011 and 2003 datasets, respectively.
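
    A minimal sketch of the plane-selection criterion, assuming synthetic planes: apply a power-law (gamma) transform to each plane, compute the maximum between-class variance that Otsu thresholding would achieve, and keep the plane with the largest discrimination factor. This illustrates the criterion only and is not the authors' NESP code.

        import numpy as np

        def otsu_between_class_variance(plane, bins=256):
            """Maximum between-class variance over all thresholds (Otsu's criterion)."""
            hist, _ = np.histogram(plane.ravel(), bins=bins, range=(0.0, 1.0))
            p = hist.astype(float) / hist.sum()
            omega = np.cumsum(p)                      # class-0 probability up to each bin
            mu = np.cumsum(p * np.arange(bins))       # cumulative mean
            mu_t = mu[-1]
            valid = (omega > 0) & (omega < 1)
            sigma_b2 = np.zeros(bins)
            sigma_b2[valid] = (mu_t * omega[valid] - mu[valid]) ** 2 / (
                omega[valid] * (1.0 - omega[valid]))
            return sigma_b2.max()

        def select_plane(planes, gamma=1.5):
            """Apply a power-law transform to each plane and pick the most discriminative."""
            scores = {name: otsu_between_class_variance(np.clip(p, 0, 1) ** gamma)
                      for name, p in planes.items()}
            return max(scores, key=scores.get), scores

        # Synthetic stand-ins for the colour, gray and lightness planes of a word image.
        rng = np.random.default_rng(1)
        fake_planes = {name: rng.beta(a, b, size=(32, 96))
                       for name, (a, b) in {"red": (2, 2), "gray": (0.5, 0.5),
                                            "lightness": (5, 1)}.items()}
        best_name, scores = select_plane(fake_planes)
        print("discrimination factors:", {k: round(v, 4) for k, v in scores.items()})
        print("plane selected for segmentation:", best_name)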

  5. Selection of Steady-State Process Simulation Software to Optimize Treatment of Radioactive and Hazardous Waste

    SciTech Connect

    Nichols, T. T.; Barnes, C. M.; Lauerhass, L.; Taylor, D. D.

    2001-06-01

    The process used for selecting a steady-state process simulator under conditions of high uncertainty and limited time is described. Multiple waste forms, treatment ambiguity, and the uniqueness of both the waste chemistries and alternative treatment technologies result in a large set of potential technical requirements that no commercial simulator can totally satisfy. The aim of the selection process was two-fold. First, determine the steady-state simulation software that best, albeit not completely, satisfies the requirements envelope. And second, determine if the best is good enough to justify the cost. Twelve simulators were investigated with varying degrees of scrutiny. The candidate list was narrowed to three final contenders: ASPEN Plus 10.2, PRO/II 5.11, and CHEMCAD 5.1.0. It was concluded from ''road tests'' that ASPEN Plus appears to satisfy the project's technical requirements the best and is worth acquiring. The final software decisions provide flexibility: they involve annual rather than multi-year licensing, and they include periodic re-assessment.

  6. Selection of Steady-State Process Simulation Software to Optimize Treatment of Radioactive and Hazardous Waste

    SciTech Connect

    Nichols, Todd Travis; Barnes, Charles Marshall; Lauerhass, Lance; Taylor, Dean Dalton

    2001-06-01

    The process used for selecting a steady-state process simulator under conditions of high uncertainty and limited time is described. Multiple waste forms, treatment ambiguity, and the uniqueness of both the waste chemistries and alternative treatment technologies result in a large set of potential technical requirements that no commercial simulator can totally satisfy. The aim of the selection process was two-fold. First, determine the steady-state simulation software that best, albeit not completely, satisfies the requirements envelope. And second, determine if the best is good enough to justify the cost. Twelve simulators were investigated with varying degrees of scrutiny. The candidate list was narrowed to three final contenders: ASPEN Plus 10.2, PRO/II 5.11, and CHEMCAD 5.1.0. It was concluded from "road tests" that ASPEN Plus appears to satisfy the project's technical requirements the best and is worth acquiring. The final software decisions provide flexibility: they involve annual rather than multi-year licensing, and they include periodic re-assessment.

  7. Optimal stapler cartridge selection according to the thickness of the pancreas in distal pancreatectomy

    PubMed Central

    Kim, Hongbeom; Jang, Jin-Young; Son, Donghee; Lee, Seungyeoun; Han, Youngmin; Shin, Yong Chan; Kim, Jae Ri; Kwon, Wooil; Kim, Sun-Whe

    2016-01-01

    Stapling is a popular method for stump closure in distal pancreatectomy (DP). However, research on which cartridges are suitable for different pancreatic thicknesses is lacking. The aim of this study was to identify the optimal stapler cartridge choice in DP according to pancreatic thickness. From November 2011 to April 2015, data were prospectively collected from 217 consecutive patients who underwent DP with 3-layer endoscopic staple closure in Seoul National University Hospital, Korea. Postoperative pancreatic fistula (POPF) was graded according to International Study Group on Pancreatic Fistula definitions. Staplers were grouped based on closed length (CL) (Group I: CL ≤ 1.5 mm, II: 1.5 mm < CL < 2 mm, III: CL ≥ 2 mm). Compression ratio (CR) was defined as pancreas thickness/CL. The distribution of pancreatic thickness was used to find the cut-off points of thickness that predict POPF according to stapler group. POPF developed in 130 (59.9%) patients (Grade A; n = 86 [66.1%], B; n = 44 [33.8%]). The numbers in each stapler group were 46, 101, and 70, respectively. Mean thickness was higher in POPF cases (15.2 mm vs 13.5 mm, P = 0.002). High body mass index (P = 0.003), thick pancreas (P = 0.011), and high CR (P = 0.024) were independent risk factors for POPF in multivariate analysis. Pancreatic thickness was grouped into <12 mm, 12 to 17 mm, and >17 mm. With pancreatic thickness <12 mm, the POPF rate was lowest with Group II (I: 50%, II: 27.6%, III: 69.2%, P = 0.035). The optimal stapler cartridges for pancreatic thickness <12 mm were those in Group II (Gold, CL: 1.8 mm). There was no suitable cartridge for thicker pancreases. Further studies are necessary to reduce POPF in thick pancreases. PMID:27583852

  8. A Study on the Selection of Optimal Probability Distributions for Analyzing of Extreme Precipitation Events over the Republic of Korea

    NASA Astrophysics Data System (ADS)

    Lee, Hansu; Choi, Youngeun

    2014-05-01

    This study determined the optimal statistical probability distributions for estimating maximum probability precipitation in the Republic of Korea and examined whether there were any distinct changes in distribution types and extreme precipitation characteristics. The generalized Pareto distribution and the three-parameter Burr distribution were the most frequently selected distributions for annual maximum series in the Republic of Korea. On a seasonal basis, the most frequently selected distributions were the three-parameter Dagum distribution for spring, the three-parameter Burr distribution for summer, the generalized Pareto distribution for autumn, and the three-parameter log-logistic, generalized Pareto and log-Pearson type III distributions for winter. Maximum probability precipitation was derived from the selected optimal probability distributions and compared with that from the Ministry of Land, Transport and Maritime Affairs (MOLTMA). Maximum probability precipitation in this study was greater than that of MOLTMA as the duration and return period increased, a difference that was statistically significant under the Wilcoxon signed-rank test. Because different distributions were selected, greater maximum probability precipitation values were estimated at longer return periods. Annual maximum series from 1973 to 2012 showed that the median was highest in the south coastal region, but as the duration increased, Seoul, Gyeonggido, and Gangwondo, located in the central part of Korea, had higher median values. The months of annual maximum series occurrence were concentrated between June and September, with typhoons affecting occurrence in September. Seasonal maximum probability precipitation was greater in most of the south coastal region, and Seoul, Gyeonggido and Gangwondo had greater maximum probability precipitation in summer. Gangwondo had greater maximum probability precipitation in autumn while Ulleung and Daegwallyeong had a greater one in
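
    A compact sketch of the selection idea, assuming a synthetic annual maximum series: fit several candidate distributions from scipy.stats, rank them by a goodness-of-fit statistic, and read the T-year maximum probability precipitation off the best fit's quantile function. The candidate list, fitting choices, and data are placeholders, not the study's station records or selection procedure.

        import numpy as np
        from scipy import stats

        # Synthetic annual maximum daily precipitation series (mm), stand-in for a station.
        ams = stats.genextreme.rvs(c=-0.1, loc=120, scale=35, size=40, random_state=7)
        ams = np.clip(ams, 1.0, None)                 # keep values positive for log fitting

        candidates = {
            "generalized Pareto": stats.genpareto,
            "GEV": stats.genextreme,
            "log-Pearson III (on log data)": stats.pearson3,
            "Burr": stats.burr,
        }

        results = {}
        for name, dist in candidates.items():
            data = np.log(ams) if "log" in name else ams
            params = dist.fit(data)                   # maximum-likelihood fit
            ks = stats.kstest(data, dist.cdf, args=params)
            results[name] = (params, ks.statistic, data is not ams)

        best = min(results, key=lambda n: results[n][1])   # smallest KS statistic
        print("selected distribution:", best)

        params, _, logged = results[best]
        for T in (10, 50, 100):                       # return periods in years
            q = candidates[best].ppf(1.0 - 1.0 / T, *params)
            q = np.exp(q) if logged else q
            print("%3d-year maximum probability precipitation: %.1f mm" % (T, q))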

  9. Optimization of selective emitter fabrication method for solar cells using a laser grooving.

    PubMed

    Jung, W W; Kim, S C; Jung, S W; Moon, I Y; Kumar, K; Lee, Y W; Kim, S Y; Ju, M K; Han, S K; Yi, J

    2011-05-01

    In this paper, a screen-printing laser grooved buried contact (LGBC) method was applied, which is compatible with existing screen-printed solar cell equipment and facilities. Experiments were performed in order to optimize the short circuit current (I(sc)), open circuit voltage (V(oc)) and fill factor of high efficiency solar cells. To enhance I(sc), V(oc) and efficiency, heavy doping was performed at low sheet resistance in the laser grooved region of the cell. In contrast, light doping was carried out at high sheet resistance in the non-laser grooved region. To increase the fill factor, the porous silicon found on the wafer after dipping in an HF solution to remove SiN(x) was cleared. The fabricated screen-printing LGBC solar cell, using a 125 mm x 125 mm single crystalline silicon wafer, exhibited an efficiency of 17.2%. The results show that the screen-printing LGBC method can be applied for high efficiency solar cells.

  10. Optimal crop selection and water allocation under limited water supply in irrigation

    NASA Astrophysics Data System (ADS)

    Stange, Peter; Grießbach, Ulrike; Schütze, Niels

    2015-04-01

    Due to climate change, extreme weather conditions such as droughts may have an increasing impact on irrigated agriculture. To cope with limited water resources in irrigation systems, a new decision support framework is developed which focuses on integrated management of irrigation water supply and demand. For modeling the regional water demand, local (and site-specific) water demand functions are used which are derived from the optimized agronomic response at the farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPF). These functions take into account different soil types, crops and stochastically generated climate scenarios. The SCWPFs are used to compute the water demand considering different conditions, e.g., variable and fixed costs. This generic approach enables the consideration of both multiple crops at the farm scale and the aggregated response to water pricing at a regional scale for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance IRrigation) project a prototype of a decision support system is developed which helps to evaluate combined water supply and demand management policies.
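
    The farm-scale building block, allocating limited water across crops to maximize expected profit, can be sketched as a small linear program. The crop figures and the water cap below are invented, and the actual framework uses stochastic crop water production functions rather than fixed coefficients.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical crops: expected profit (EUR/ha) and seasonal water need (m^3/ha).
        crops = ["winter wheat", "potatoes", "sugar beet"]
        profit = np.array([600.0, 2200.0, 1500.0])
        water_need = np.array([800.0, 2500.0, 1800.0])

        total_land = 100.0        # ha available
        total_water = 120000.0    # m^3 available for the season (limited supply)

        # Decision variables: hectares planted per crop. linprog minimizes, so negate profit.
        res = linprog(
            c=-profit,
            A_ub=np.vstack([water_need, np.ones(len(crops))]),
            b_ub=np.array([total_water, total_land]),
            bounds=[(0, None)] * len(crops),
            method="highs",
        )
        for name, area in zip(crops, res.x):
            print("%-12s %6.1f ha" % (name, area))
        print("expected profit: %.0f EUR, water used: %.0f m^3"
              % (-res.fun, float(water_need @ res.x)))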

  11. Binary particle swarm optimization for frequency band selection in motor imagery based brain-computer interfaces.

    PubMed

    Wei, Qingguo; Wei, Zhonghai

    2015-01-01

    A brain-computer interface (BCI) enables people suffering from affective neurological diseases to communicate with the external world. Common spatial pattern (CSP) is an effective algorithm for feature extraction in motor imagery based BCI systems. However, many studies have shown that the performance of CSP depends heavily on the frequency band of the EEG signals used for the construction of covariance matrices. The use of different frequency bands to extract signal features may lead to different classification performances, which are determined by the discriminative and complementary information they contain. In this study, the broad frequency band (8-30 Hz) is divided into 10 sub-bands with a bandwidth of 4 Hz and an overlap of 2 Hz. Binary particle swarm optimization (BPSO) is used to find the best sub-band set to improve the performance of CSP and subsequent classification. Experimental results demonstrate that the proposed method achieved an average improvement of 6.91% in cross-validation accuracy when compared to broad band CSP.
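
    A minimal binary PSO sketch for sub-band selection: each particle is a 10-bit mask over the 4 Hz sub-bands, velocities are passed through a sigmoid to give bit probabilities, and the fitness function below is a synthetic stand-in for the cross-validated CSP classification accuracy used in the study.

        import numpy as np

        rng = np.random.default_rng(42)
        N_BANDS, N_PARTICLES, N_ITER = 10, 20, 60

        # Synthetic stand-in for CV accuracy: reward masks close to a hidden "informative" set.
        TRUE_INFORMATIVE = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 0])

        def fitness(mask):
            """Fake accuracy in [0, 1]; replace with cross-validated CSP+classifier accuracy."""
            if mask.sum() == 0:
                return 0.0
            hits = int(np.sum(mask * TRUE_INFORMATIVE))
            false_picks = int(np.sum(mask * (1 - TRUE_INFORMATIVE)))
            return 0.6 + 0.1 * hits - 0.03 * false_picks

        def sigmoid(v):
            return 1.0 / (1.0 + np.exp(-v))

        pos = rng.integers(0, 2, size=(N_PARTICLES, N_BANDS))
        vel = rng.normal(0, 1, size=(N_PARTICLES, N_BANDS))
        pbest = pos.copy()
        pbest_fit = np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_fit.argmax()].copy()

        w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration constants
        for _ in range(N_ITER):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = (rng.random(pos.shape) < sigmoid(vel)).astype(int)   # probabilistic bits
            fit = np.array([fitness(p) for p in pos])
            improved = fit > pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
            gbest = pbest[pbest_fit.argmax()].copy()

        print("selected sub-band mask:", gbest, "fitness: %.3f" % pbest_fit.max())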

  12. Optimal central-place foraging by beavers: Tree-size selection in relation to defensive chemicals of quaking aspen.

    PubMed

    Basey, John M; Jenkins, Stephen H; Busher, Peter E

    1988-07-01

    At a newly occupied pond, beavers preferentially felled aspen smaller than 7.5 cm in diameter and selected against larger size classes. After one year of cutting, 10% of the aspen had been cut and 14% of the living aspen exhibited the juvenile growth form. A phenolic compound that may act as a deterrent to beavers was found in low concentrations in aspen bark, and there was no significant regression of the relative concentration of this compound on tree diameter. At a pond which had been intermittently occupied by beavers for over 20 years, beavers selected against aspen smaller than 4.5 cm in diameter, and selected in favor of aspen larger than 19.5 cm in diameter. After more than 28 years of cutting at this site, 51% of the aspen had been cut and 49% of the living aspen were of the juvenile form. The phenolic compound was found in significantly higher concentrations in aspen bark than at the newly occupied site, and there was a significant negative regression of relative concentration on tree diameter. The results of this study show that the responses of trees to browsing place constraints on the predictive value of standard energy-based optimal foraging models, and limitations on the use of such models. Future models should attempt to account for inducible responses of plants to damage and increases in concentrations of secondary metabolites through time.

  13. A new method for wavelength interval selection that intelligently optimizes the locations, widths and combinations of the intervals.

    PubMed

    Deng, Bai-Chuan; Yun, Yong-Huan; Ma, Pan; Lin, Chen-Chen; Ren, Da-Bing; Liang, Yi-Zeng

    2015-03-21

    In this study, a new algorithm for wavelength interval selection, known as interval variable iterative space shrinkage approach (iVISSA), is proposed based on the VISSA algorithm. It combines global and local searches to iteratively and intelligently optimize the locations, widths and combinations of the spectral intervals. In the global search procedure, it inherits the merit of soft shrinkage from VISSA to search the locations and combinations of informative wavelengths, whereas in the local search procedure, it utilizes the information of continuity in spectroscopic data to determine the widths of wavelength intervals. The global and local search procedures are carried out alternatively to realize wavelength interval selection. This method was tested using three near infrared (NIR) datasets. Some high-performing wavelength selection methods, such as synergy interval partial least squares (siPLS), moving window partial least squares (MW-PLS), competitive adaptive reweighted sampling (CARS), genetic algorithm PLS (GA-PLS) and interval random frog (iRF), were used for comparison. The results show that the proposed method is very promising with good results both on prediction capability and stability. The MATLAB codes for implementing iVISSA are freely available on the website: .
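
    The evaluation at the core of any interval-selection scheme is scoring a candidate (location, width) pair by cross-validated PLS error. The sketch below does exactly that on synthetic spectra with scikit-learn; it does not reproduce iVISSA's soft-shrinkage global search or its alternating local search, and all sizes and grids are arbitrary.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n_samples, n_wavelengths = 80, 200
        X = rng.normal(size=(n_samples, n_wavelengths))            # synthetic NIR spectra
        informative = slice(90, 110)                               # hidden informative band
        y = X[:, informative].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

        def interval_rmsecv(center, width, n_components=3):
            """Cross-validated RMSE of a PLS model restricted to one wavelength interval."""
            lo, hi = max(0, center - width // 2), min(n_wavelengths, center + width // 2)
            pls = PLSRegression(n_components=min(n_components, hi - lo))
            mse = -cross_val_score(pls, X[:, lo:hi], y, cv=5,
                                   scoring="neg_mean_squared_error").mean()
            return float(np.sqrt(mse))

        # Coarse grid over candidate interval locations and widths.
        grid = [(c, w) for c in range(10, 200, 10) for w in (10, 20, 40)]
        best = min(grid, key=lambda cw: interval_rmsecv(*cw))
        print("best interval: centre %d, width %d, RMSECV %.3f"
              % (best[0], best[1], interval_rmsecv(*best)))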

  14. Growth rate and shape as possible control mechanisms for the selection of mode development in optimal biological branching processes

    NASA Astrophysics Data System (ADS)

    Alarcón, Tomás; Castillo, Jorge; García-Ponce, Berenice; Herrero, Miguel Angel; Padilla, Pablo

    2016-11-01

    Recently, three branching modes were characterized during the formation of the lung in mice. These modes are highly stereotyped and correspond to domain formation, planar bifurcation and three-dimensional branching, respectively. At the same time, it has been shown that although genetic control mechanisms are presumably related to the selection of any of these modes, other external factors will most probably be involved in the branching process during development. In this paper we propose that the underlying controlling factors might be related to the rate at which the tubes that form the lung network grow. We present a mathematical model that allows us to formulate specific experimental predictions on these growth rates. Moreover, we show that, according to this formulation, there is an optimization criterion which governs the branching process during lung development, namely, efficient local space filling properties of the network. If there is no space limitation, the branches are allowed to grow freely and faster, selecting one branching mode, namely, domain formation. As soon as volume constraints appear, the growth rate decreases, triggering the selection of planar and orthogonal bifurcation.

  15. Selection of energy optimized pump concepts for multi core and multi mode erbium doped fiber amplifiers.

    PubMed

    Krummrich, Peter M; Akhtari, Simon

    2014-12-01

    The selection of an appropriate pump concept has a major impact on amplifier cost and power consumption. The energy efficiency of different pump concepts is compared for multi core and multi mode active fibers. In preamplifier stages, pump power density requirements derived from full C-band low noise WDM operation result in superior energy efficiency of direct pumping of individual cores in a multi core fiber with single mode pump lasers compared to cladding pumping with uncooled multi mode lasers. Even better energy efficiency is achieved by direct pumping of the core in multi mode active fibers. Complexity of pump signal combiners for direct pumping of multi core fibers can be reduced by deploying integrated components.

  16. Robust Depth Estimation and Image Fusion Based on Optimal Area Selection

    PubMed Central

    Lee, Ik-Hyun; Mahmood, Muhammad Tariq; Choi, Tae-Sun

    2013-01-01

    Mostly, 3D cameras having depth sensing capabilities employ active depth estimation techniques, such as stereo, the triangulation method or time-of-flight. However, these methods are expensive. The cost can be reduced by applying optical passive methods, as they are inexpensive and efficient. In this paper, we suggest the use of one of the passive optical methods named shape from focus (SFF) for 3D cameras. In the proposed scheme, first, an adaptive window is computed through an iterative process using a criterion. Then, the window is divided into four regions. In the next step, the best focused area among the four regions is selected based on variation in the data. The effectiveness of the proposed scheme is validated using image sequences of synthetic and real objects. Comparative analysis based on statistical metrics correlation, mean square error (MSE), universal image quality index (UIQI) and structural similarity (SSIM) shows the effectiveness of the proposed scheme. PMID:24008281
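
    The region-selection step can be illustrated with a simple focus measure: split the window into four quadrants, score each by the variance of its Laplacian response (a common sharpness proxy), and keep the best-focused quadrant. The image is synthetic and the focus measure is an assumption; this is not the authors' adaptive-window SFF pipeline.

        import numpy as np
        from scipy.ndimage import laplace, gaussian_filter

        def focus_measure(region):
            """Variance of the Laplacian: higher values indicate sharper focus."""
            return float(laplace(region.astype(float)).var())

        def best_focused_quadrant(window):
            """Split the window into four regions and return the sharpest one."""
            h, w = window.shape
            quads = {
                "top-left": window[:h // 2, :w // 2],
                "top-right": window[:h // 2, w // 2:],
                "bottom-left": window[h // 2:, :w // 2],
                "bottom-right": window[h // 2:, w // 2:],
            }
            scores = {name: focus_measure(q) for name, q in quads.items()}
            return max(scores, key=scores.get), scores

        # Synthetic window: sharp random texture, with three quadrants blurred (defocused).
        rng = np.random.default_rng(3)
        win = rng.random((64, 64))
        win[:32, :] = gaussian_filter(win[:32, :], sigma=2.0)      # blur the top half
        win[32:, 32:] = gaussian_filter(win[32:, 32:], sigma=2.0)  # blur bottom-right
        name, scores = best_focused_quadrant(win)
        print({k: round(v, 4) for k, v in scores.items()})
        print("best focused region:", name)   # expected: bottom-left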

  17. Precursor Selection for Property Optimization in Biomorphic SiC Ceramics

    NASA Technical Reports Server (NTRS)

    Varela-Feria, F. M.; Lopez-Robledo, M. J.; Martinez-Fernandez, J.; deArellano-Lopez, A. R.; Singh, M.; Gray, Hugh R. (Technical Monitor)

    2002-01-01

    Biomorphic SiC ceramics have been fabricated using different wood precursors. The evolution of the volume, density and microstructure of the woods, carbon preforms, and final SiC products is systematically studied in order to establish experimental guidelines that allow materials selection. The wood density is a critical characteristic, which determines the final SiC density and the level of anisotropy in mechanical properties in directions parallel (axial) and perpendicular (radial) to the growth of the wood. The purpose of this work is to explore experimental laws that can help choose a type of wood as precursor for a final SiC product with a given microstructure, density and level of anisotropy. Preliminary studies of physical properties suggest that not only the mechanical properties but also the electrical conductivity and gas permeability, which have great technological importance, are strongly anisotropic.

  18. Optimizing selection of microsatellite loci from 454 pyrosequencing via post-sequencing bioinformatic analyses.

    PubMed

    Fernandez-Silva, Iria; Toonen, Robert J

    2013-01-01

    The comparatively low cost of massive parallel sequencing technology, also known as next-generation sequencing (NGS), has transformed the isolation of microsatellite loci. The most common NGS approach consists of obtaining large amounts of sequence data from genomic DNA or enriched microsatellite libraries, which is then mined for the discovery of microsatellite repeats using bioinformatics analyses. Here, we describe a bioinformatics approach to isolate microsatellite loci, starting from the raw sequence data through a subset of microsatellite primer pairs. The primary difference to previously published approaches includes analyses to select the most accurate sequence data and to eliminate repetitive elements prior to the design of primers. These analyses aim to minimize the testing of primer pairs by identifying the most promising microsatellite loci.
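
    The discovery step, scanning reads for short tandem repeats, can be sketched with a regular expression that finds perfect di- and trinucleotide repeats above a minimum copy number. The thresholds and toy reads are arbitrary; the published pipeline adds read-quality filtering, repeat-element screening, and primer design on top of this.

        import re

        MIN_COPIES = {2: 6, 3: 5}   # minimum repeat copies for di- and trinucleotide motifs

        def find_microsatellites(seq):
            """Return (motif, copies, start) for perfect 2-3 bp tandem repeats in seq."""
            hits = []
            for unit in MIN_COPIES:
                # ([ACGT]{unit}) captures a motif; \1{n,} requires it to repeat in tandem.
                pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, MIN_COPIES[unit] - 1))
                for m in pattern.finditer(seq):
                    motif = m.group(1)
                    if len(set(motif)) == 1:          # skip homopolymer runs like AAAAAA
                        continue
                    copies = len(m.group(0)) // unit
                    hits.append((motif, copies, m.start()))
            return hits

        # Toy "reads" standing in for 454 pyrosequencing data.
        reads = [
            "TTGA" + "CA" * 8 + "GGT",               # (CA)n repeat
            "CC" + "GAT" * 6 + "AAA",                # (GAT)n repeat
            "ACGTACG" + "T" * 14 + "ACGT",           # homopolymer run, filtered out
        ]
        for read in reads:
            print(read, "->", find_microsatellites(read))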

  19. Optimizing Selection of Large Animals for Antibody Production by Screening Immune Response to Standard Vaccines

    PubMed Central

    Thompson, Mary K.; Fridy, Peter C.; Keegan, Sarah; Chait, Brian T.; Fenyö, David; Rout, Michael P.

    2016-01-01

    Antibodies made in large animals are integral to many biomedical research endeavors. Domesticated herd animals like goats, sheep, donkeys, horses and camelids all offer distinct advantages in antibody production. However, their cost of use is often prohibitive, especially where poor antigen response is commonplace; choosing a non-responsive animal can set a research program back or even prevent experiments from moving forward entirely. Over the course of production of antibodies from llamas, we found that some animals consistently produced a higher humoral antibody response than others, even to highly divergent antigens, as well as to their standard vaccines. Based on our initial data, we propose that these “high level responders” could be pre-selected by checking antibody titers against common vaccines given to domestic farm animals. Thus, time and money can be saved by reducing the chances of getting poor responding animals and minimizing the use of superfluous animals. PMID:26775851

  20. Optimizing selection of large animals for antibody production by screening immune response to standard vaccines.

    PubMed

    Thompson, Mary K; Fridy, Peter C; Keegan, Sarah; Chait, Brian T; Fenyö, David; Rout, Michael P

    2016-03-01

    Antibodies made in large animals are integral to many biomedical research endeavors. Domesticated herd animals like goats, sheep, donkeys, horses and camelids all offer distinct advantages in antibody production. However, their cost of use is often prohibitive, especially where poor antigen response is commonplace; choosing a non-responsive animal can set a research program back or even prevent experiments from moving forward entirely. Over the course of production of antibodies from llamas, we found that some animals consistently produced a higher humoral antibody response than others, even to highly divergent antigens, as well as to their standard vaccines. Based on our initial data, we propose that these "high level responders" could be pre-selected by checking antibody titers against common vaccines given to domestic farm animals. Thus, time and money can be saved by reducing the chances of getting poor responding animals and minimizing the use of superfluous animals.

  1. Optimized selective lactate excitation with a refocused multiple-quantum filter

    NASA Astrophysics Data System (ADS)

    Holbach, Mirjam; Lambert, Jörg; Johst, Sören; Ladd, Mark E.; Suter, Dieter

    2015-06-01

    Selective detection of lactate signals in in vivo MR spectroscopy with spectral editing techniques is necessary in situations where strong lipid signals or signals from other molecules overlap the desired lactate resonance in the spectrum. Several pulse sequences have been proposed for this task. The double-quantum filter SSel-MQC provides very good lipid and water signal suppression in a single scan. As a major drawback, it suffers from significant signal loss due to incomplete refocusing in situations where long evolution periods are required. Here we present a refocused version of the SSel-MQC technique that uses only one additional refocusing pulse and regains the full refocused lactate signal at the end of the sequence.

  2. Optimal Site Characterization and Selection Criteria for Oyster Restoration using Multicolinear Factorial Water Quality Approach

    NASA Astrophysics Data System (ADS)

    Yoon, J.

    2015-12-01

    Elevated levels of nutrient loadings have enriched the Chesapeake Bay estuaries and coastal waters via point and nonpoint sources and the atmosphere. Restoring oyster beds is considered a Best Management Practice (BMP) to improve the water quality as well as provide physical aquatic habitat and a healthier estuarine system. Efforts include declaring sanctuaries for brood-stocks, supplementing hard substrate on the bottom and aiding natural populations with the addition of hatchery-reared and disease-resistant stocks. An economic assessment suggests that restoring the ecological functions will improve water quality, stabilize shorelines, and establish a habitat for breeding grounds that outweighs the value of harvestable oyster production. Parametric factorial models were developed to investigate multicolinearities among in situ water quality and oyster restoration activities to evaluate posterior success rates upon multiple substrates, and physical, chemical, hydrological and biological site characteristics to systematically identify significant factors. Findings were then further utilized to identify the optimal sites for successful oyster restoration augmentable with Total Maximum Daily Loads (TMDLs) and BMPs. Factorial models evaluate the relationship among the dependent variable, oyster biomass, and treatments of temperature, salinity, total suspended solids, E. coli/Enterococci counts, depth, dissolved oxygen, chlorophyll a, nitrogen and phosphorus, and blocks consist of alternative substrates (oyster shells versus riprap, granite, cement, cinder blocks, limestone marl or combinations). Factorial model results were then compared to identify which combination of variables produces the highest posterior biomass of oysters. Developed Factorial model can facilitate maximizing the likelihood of successful oyster reef restoration in an effort to establish a healthier ecosystem and to improve overall estuarine water quality in the Chesapeake Bay estuaries.

  3. Optimizing nest survival and female survival: Consequences of nest site selection for Canada Geese

    USGS Publications Warehouse

    Miller, David A.; Grand, J.B.; Fondell, T.F.; Anthony, R. Michael

    2007-01-01

    We examined the relationship between attributes of nest sites used by Canada Geese (Branta canadensis) in the Copper River Delta, Alaska, and patterns in nest and female survival. We aimed to determine whether nest site attributes related to nest and female survival differed and whether nest site attributes related to nest survival changed within and among years. Nest site attributes that we examined included vegetation at and surrounding the nest, as well as associations with other nesting birds. Optimal nest site characteristics were different depending on whether nest survival or female survival was examined. Prior to 25 May, the odds of daily survival for nests in tall shrubs and on islands were 2.92 and 2.26 times greater, respectively, than for nests in short shrub sites. Bald Eagles (Halieaeetus leucocephalus) are the major predator during the early breeding season and their behavior was likely important in determining this pattern. After 25 May, when eagle predation is limited due to the availability of alternative prey, no differences in nest survival among the nest site types were found. In addition, nest survival was positively related to the density of other Canada Goose nests near the nest site. Although the number of detected mortalities for females was relatively low, a clear pattern was found, with mortality three times more likely at nest sites dominated by high shrub density within 50 m than at open sites dominated by low shrub density. The negative relationship of nest concealment and adult survival is consistent with that found in other studies of ground-nesting birds. Physical barriers that limited access to nest sites by predators and sites that allowed for early detection of predators were important characteristics of nest site quality for Canada Geese and nest site quality shifted within seasons, likely as a result of shifting predator-prey interactions.

  4. Aromatic catabolic pathway selection for optimal production of pyruvate and lactate from lignin.

    PubMed

    Johnson, Christopher W; Beckham, Gregg T

    2015-03-01

    Lignin represents an untapped feedstock for the production of fuels and chemicals, but its intrinsic heterogeneity makes lignin valorization a significant challenge. In nature, many aerobic organisms degrade lignin-derived aromatic molecules through conserved central intermediates including catechol and protocatechuate. Harnessing this microbial approach offers potential for lignin upgrading in modern biorefineries, but significant technical development is needed to achieve this end. Catechol and protocatechuate are subjected to aromatic ring cleavage by dioxygenase enzymes that, depending on the position, ortho or meta relative to adjacent hydroxyl groups, result in different products that are metabolized through parallel pathways for entry into the TCA cycle. These degradation pathways differ in the combination of succinate, acetyl-CoA, and pyruvate produced, the reducing equivalents regenerated, and the amount of carbon emitted as CO2-factors that will ultimately impact the yield of the targeted product. As shown here, the ring-cleavage pathways can be interchanged with one another, and such substitutions have a predictable and substantial impact on product yield. We demonstrate that replacement of the catechol ortho degradation pathway endogenous to Pseudomonas putida KT2440 with an exogenous meta-cleavage pathway from P. putida mt-2 increases yields of pyruvate produced from aromatic molecules in engineered strains. Even more dramatically, replacing the endogenous protocatechuate ortho pathway with a meta-cleavage pathway from Sphingobium sp. SYK-6 results in a nearly five-fold increase in pyruvate production. We further demonstrate the aerobic conversion of pyruvate to l-lactate with a yield of 41.1 ± 2.6% (wt/wt). Overall, this study illustrates how aromatic degradation pathways can be tuned to optimize the yield of a desired product in biological lignin upgrading.

  5. Loco-regional therapies for patients with hepatocellular carcinoma awaiting liver transplantation: Selecting an optimal therapy.

    PubMed

    Byrne, Thomas J; Rakela, Jorge

    2016-06-24

    Hepatocellular carcinoma (HCC) is a common, increasingly prevalent malignancy. For all but the smallest lesions, surgical removal of cancer via resection or liver transplantation (LT) is considered the most feasible pathway to cure. Resection - even with favorable survival - is associated with a fairly high rate of recurrence, perhaps since most HCCs occur in the setting of cirrhosis. LT offers the advantage of removing not only the cancer but the diseased liver from which the cancer has arisen, and LT outperforms resection for survival in selected patients. Since time waiting for LT is time during which HCC can progress, loco-regional therapy (LRT) is widely employed by transplant centers. The purpose of LRT is either to bridge patients to LT by preventing progression and waitlist dropout, or to downstage patients who slightly exceed standard eligibility criteria initially but can fall within them after treatment. Transarterial chemoembolization and radiofrequency ablation have been the most widely utilized LRTs to date, with favorable efficacy and safety as a bridge to LT (and for the former, as a downstaging modality). The list of potentially effective LRTs has expanded in recent years, and includes transarterial chemoembolization with drug-eluting beads, radioembolization and novel forms of extracorporeal therapy. Herein we appraise the various LRT modalities for HCC, and their potential roles in specific clinical scenarios in patients awaiting LT.

  6. Optimal multi-focus contourlet-based image fusion algorithm selection

    NASA Astrophysics Data System (ADS)

    Lutz, Adam; Giansiracusa, Michael; Messer, Neal; Ezekiel, Soundararajan; Blasch, Erik; Alford, Mark

    2016-05-01

    Multi-focus image fusion is becoming increasingly prevalent, as there is a strong initiative to maximize visual information in a single image by fusing the salient data from multiple images for visualization. This allows an analyst to make decisions based on a larger amount of information in a more efficient manner because multiple images need not be cross-referenced. The contourlet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to pick up the directional and anisotropic properties while being designed to decompose the discrete two-dimensional domain. Many studies have been done to develop and validate algorithms for wavelet image fusion, but the contourlet has not been as thoroughly studied. When contourlet coefficients are substituted for wavelet coefficients in image fusion algorithms, the result is contourlet image fusion. There are a multitude of methods for fusing these coefficients together, and the results demonstrate that there is an opportunity for fusing coefficients in the contourlet domain for multi-focus images. This paper compares the algorithms using a variety of no-reference image fusion metrics, including information-theory-based, image-feature-based and structural-similarity-based assessments, to select the best image fusion method.

  7. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
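
    The decision logic can be pictured as a small rule cascade from tectonic regime, hypocentral depth, and faulting style to a GMPE class. The regimes, depth cut-offs, and class names below are simplified placeholders, not the USGS ShakeMap production criteria.

        def select_gmpe_class(regime, depth_km, style_of_faulting=None, near_interface=False):
            """Map basic source descriptors to a ground-motion prediction equation class.
            Hypothetical thresholds, for illustration only."""
            if regime == "stable continental":
                return "stable-continental crustal GMPE"
            if regime == "active shallow crust":
                return "active shallow crustal GMPE"
            if regime == "subduction zone":
                if depth_km >= 50:
                    return "subduction intraslab GMPE"
                if near_interface:
                    return "subduction interface GMPE"
                # shallow events away from the interface: outer-rise or upper-plate
                if style_of_faulting == "normal":
                    return "outer-rise (crustal) GMPE"
                return "upper-plate crustal GMPE"
            return "global default GMPE"

        events = [
            ("subduction zone", 95, "reverse", False),
            ("subduction zone", 30, "reverse", True),
            ("active shallow crust", 10, "strike-slip", False),
        ]
        for ev in events:
            print(ev, "->", select_gmpe_class(*ev))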

  8. Optimization of chemical structure of Schottky-type selection diode for crossbar resistive memory.

    PubMed

    Kim, Gun Hwan; Lee, Jong Ho; Jeon, Woojin; Song, Seul Ji; Seok, Jun Yeong; Yoon, Jung Ho; Yoon, Kyung Jean; Park, Tae Joo; Hwang, Cheol Seong

    2012-10-24

    The electrical performance of the Pt/TiO(2)/Ti/Pt stacked Schottky-type diode (SD) was systematically examined, and this performance depends on the chemical structure of each layer and their interfaces. Ti layers containing a tolerable amount of oxygen showed metallic electrical conduction characteristics, which was confirmed by sheet resistance measurements at elevated temperature, transmission line measurement (TLM), and Auger electron spectroscopy (AES) analysis. However, the chemical structure of the SD stack and the resulting electrical properties were crucially affected by the dissolved oxygen concentration in the Ti layers. The lower oxidation potential of a Ti layer with an initially higher oxygen concentration suppressed the oxygen deficiency that would otherwise be induced in the overlying TiO(2) layer by consumption of oxygen from that layer. This structure results in a lower reverse current in SDs without significant degradation of the forward-state current. Conductive atomic force microscopy (CAFM) analysis showed current conduction through local conduction paths in the presented SDs, which guarantees a sufficient forward-current density for use as a selection device in highly integrated crossbar array resistive memory.

  9. Optimization of internals for Selective Catalytic Reduction (SCR) for NO removal.

    PubMed

    Lei, Zhigang; Wen, Cuiping; Chen, Biaohua

    2011-04-15

    This work tried to identify the relationship between the internals of a selective catalytic reduction (SCR) system and mixing performance for controlling ammonia (NH(3)) slip. In the SCR flow section, arranging the flow-guided internals can improve the uniformity of the velocity distribution but is unfavorable for the uniformity of the NH(3) concentration distribution. Ammonia injection grids (AIG) with four kinds of nozzle diameters (i.e., 1.0 mm, 1.5 mm, 2.0 mm, and mixed diameters) were investigated, and it was found that the AIG with mixed nozzle diameters, in which the A3, A4, B3, and B4 nozzle diameters are 1.0 mm and the other nozzle diameters are 1.5 mm, is the most favorable for the uniformity of the NH(3) concentration distribution. In the SCR reactor section, the appropriate space length between two catalyst layers, which serves as a gas-mixing zone to prevent maldistribution of gas concentrations entering the second catalyst layer, is under the investigated conditions about 100, 1000, and 12 mm for the honeycomb-like cordierite catalyst, the plate-type catalyst with a parallel channel arrangement, and the plate-type catalyst with a cross channel arrangement, respectively. Therefore, the cross channel arrangement is superior to the parallel channel arrangement in saving SCR reactor volume.

  10. Turbine cooling configuration selection and design optimization for the high-reliability gas turbine. Final report

    SciTech Connect

    Smith, M J; Suo, M

    1981-04-01

    The potential of advanced turbine convectively air-cooled concepts for application to the Department of Energy/Electric Power Research Institute (EPRI) Advanced Liquid/Gas-Fueled Engine Program was investigated. Cooling of turbine airfoils is a critical technology, and significant advances in cooling technology will permit higher efficiency coal-base-fuel gas turbine energy systems. Two new airfoil construction techniques, bonded and wafer, were the principal designs considered. In the bonded construction, two airfoil sections having intricate internal cooling configurations are bonded together to form a complete blade or vane. In the wafer construction, a larger number (50 or more) of wafers having intricate cooling flow passages are bonded together to form a complete blade or vane. Of these two construction techniques, the bonded airfoil is considered to be lower in risk and closer to production readiness. Bonded airfoils are being used in aircraft engines. A variety of industrial materials were evaluated for the turbine airfoils. A columnar grain nickel alloy was selected on the basis of strength and corrosion resistance. Also, cost of electricity and reliability were considered in the final concept evaluation. The bonded airfoil design yielded a 3.5% reduction in cost-of-electricity relative to a baseline Reliable Engine design. A significant conclusion of this study was that the bonded airfoil convectively air-cooled design offers potential for growth to turbine inlet temperatures above 2600°F with reasonable development risk.

  11. HLA-DO as the Optimizer of Epitope Selection for MHC Class II Antigen Presentation

    PubMed Central

    Poluektov, Yuri O.; Kim, AeRyon; Hartman, Isamu Z.; Sadegh-Nasseri, Scheherazade

    2013-01-01

    Processing of antigens for presentation to helper T cells by MHC class II involves HLA-DM (DM) and HLA-DO (DO) accessory molecules. A mechanistic understanding of DO in this process has been missing. The leading model on its function proposes that DO inhibits the effects of DM. To directly study DO functions, we designed a recombinant soluble DO and expressed it in insect cells. The kinetics of binding and dissociation of several peptides to HLA-DR1 (DR1) molecules in the presence of DM and DO were measured. We found that DO reduced binding of DR1 to some peptides, and enhanced the binding of some other peptides to DR1. Interestingly, these enhancing and reducing effects were observed in the presence, or absence, of DM. We found that peptides that were negatively affected by DO were DM-sensitive, whereas peptides that were enhanced by DO were DM-resistant. The positive and negative effects of DO could only be measured on binding kinetics as peptide dissociation kinetics were not affected by DO. Using Surface Plasmon Resonance, we demonstrate direct binding of DO to a peptide-receptive, but not a closed conformation of DR1. We propose that DO imposes another layer of control on epitope selection during antigen processing. PMID:23951115

  12. Information access in a dual-task context: testing a model of optimal strategy selection

    NASA Technical Reports Server (NTRS)

    Wickens, C. D.; Seidler, K. S.

    1997-01-01

    Pilots were required to access information from a hierarchical aviation database by navigating under single-task conditions (Experiment 1) and when this task was time-shared with an altitude-monitoring task of varying bandwidth and priority (Experiment 2). In dual-task conditions, pilots had 2 viewports available, 1 always used for the information task and the other to be allocated to either task. Dual-task strategy, inferred from the decision of which task to allocate to the 2nd viewport, revealed that allocation was generally biased in favor of the monitoring task and was only partly sensitive to the difficulty of the 2 tasks and their relative priorities. Some dominant sources of navigational difficulties failed to adaptively influence selection strategy. The implications of the results are to provide tools for jumping to the top of the database, to provide 2 viewports into the common database, and to provide training as to the optimum viewport management strategy in a multitask environment.

  13. Enhanced Magnetoresistance in Molecular Junctions by Geometrical Optimization of Spin-Selective Orbital Hybridization.

    PubMed

    Rakhmilevitch, David; Sarkar, Soumyajit; Bitton, Ora; Kronik, Leeor; Tal, Oren

    2016-03-09

    Molecular junctions based on ferromagnetic electrodes allow the study of electronic spin transport near the limit of spintronics miniaturization. However, these junctions reveal moderate magnetoresistance that is sensitive to the orbital structure at their ferromagnet-molecule interfaces. The key structural parameters that should be controlled in order to gain high magnetoresistance have not been established, despite their importance for efficient manipulation of spin transport at the nanoscale. Here, we show that single-molecule junctions based on nickel electrodes and benzene molecules can yield a significant anisotropic magnetoresistance of up to ∼200% near the conductance quantum G0. The measured magnetoresistance is mechanically tuned by changing the distance between the electrodes, revealing a nonmonotonic response to junction elongation. These findings are ascribed with the aid of first-principles calculations to variations in the metal-molecule orientation that can be adjusted to obtain highly spin-selective orbital hybridization. Our results demonstrate the important role of geometrical considerations in determining the spin transport properties of metal-molecule interfaces.

  14. Loco-regional therapies for patients with hepatocellular carcinoma awaiting liver transplantation: Selecting an optimal therapy

    PubMed Central

    Byrne, Thomas J; Rakela, Jorge

    2016-01-01

    Hepatocellular carcinoma (HCC) is a common, increasingly prevalent malignancy. For all but the smallest lesions, surgical removal of the cancer via resection or liver transplantation (LT) is considered the most feasible pathway to cure. Resection - even with favorable survival - is associated with a fairly high rate of recurrence, perhaps because most HCCs occur in the setting of cirrhosis. LT offers the advantage of removing not only the cancer but also the diseased liver from which the cancer has arisen, and LT outperforms resection for survival in selected patients. Since time spent waiting for LT is time during which HCC can progress, loco-regional therapy (LRT) is widely employed by transplant centers. The purpose of LRT is either to bridge patients to LT by preventing progression and waitlist dropout, or to downstage patients who initially exceed standard eligibility criteria slightly but can be brought within them after treatment. Transarterial chemoembolization and radiofrequency ablation have been the most widely utilized LRTs to date, with favorable efficacy and safety as a bridge to LT (and, for the former, as a downstaging modality). The list of potentially effective LRTs has expanded in recent years and includes transarterial chemoembolization with drug-eluting beads, radioembolization, and novel forms of extracorporeal therapy. Herein we appraise the various LRT modalities for HCC and their potential roles in specific clinical scenarios in patients awaiting LT. PMID:27358775

  15. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    PubMed

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
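
    As a concrete illustration of the averaged variable importance (AVI) idea described above, the sketch below selects predictors with a random forest by averaging importances over repeated fits and then refitting on the top-ranked subset. The synthetic data, the number of repeats, and the cutoff of ten retained predictors are assumptions for illustration only; they are not the study's data or settings.

```python
# Hedged sketch: feature selection by averaged variable importance (AVI) with
# a random forest. Synthetic data and all settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Average importances over several forests to reduce run-to-run noise.
n_repeats, importances = 20, np.zeros(X.shape[1])
for seed in range(n_repeats):
    rf = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X, y)
    importances += rf.feature_importances_
importances /= n_repeats

# Keep the ten predictors with the highest averaged importance and compare
# cross-validated accuracy of the reduced model against the full model.
top = np.argsort(importances)[::-1][:10]
full = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0), X, y, cv=5).mean()
avi = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0), X[:, top], y, cv=5).mean()
print(f"full model accuracy: {full:.3f}, AVI-selected model accuracy: {avi:.3f}")
```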

  16. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and

  17. Resources and life-management strategies as determinants of successful aging: on the protective effect of selection, optimization, and compensation.

    PubMed

    Jopp, Daniela; Smith, Jacqui

    2006-06-01

    In this research, the authors investigated the specific and shared impact of personal resources and selection, optimization, and compensation (SOC) life-management strategies (A. M. Freund & P. B. Baltes, 2002) on subjective well-being. Life-management strategies were expected to be most relevant when resources were constrained, particularly in very old age. In Study 1 (N=156, 71-91 years), age-differential predictive patterns supported this assumption: Young-old individuals' well-being was predicted independently by resources and SOC, whereas SOC buffered the effect of restricted resources in old-old individuals. Study 2 replicated the findings longitudinally with resource-poor and resource-rich older individuals (N=42). In both studies, specific SOC strategies were differentially adaptive. Results confirm that resources are important determinants of well-being but that life-management strategies have a considerable protective effect with limited resources.

  18. Connecting East and West by a developmental theory for older adults: Application of Baltes' Selection, Optimization, and Compensation model.

    PubMed

    Miller, Sally M

    2016-02-01

    Age-related changes in cognitive and physical function can affect safe driving performance. As the population of older adults increases in the United States, there will be a simultaneous rise in the number of older adult drivers. Tai Chi, a non-traditional form of exercise with both physical and cognitive benefits, may enhance driving performance. A lifespan developmental theory (Baltes' Selective Optimization with Compensation model) is used to study the relationship between Tai Chi exercise and driving performance in older adults. Application of this theory was pivotal in building a bridge between a non-traditional practice and Western-based research to study an intervention that can be used to promote and sustain well-being in older adults.

  19. Discovery and optimization of pyrrolo[1,2-a]pyrazinones leads to novel and selective inhibitors of PIM kinases.

    PubMed

    Casuscelli, Francesco; Ardini, Elena; Avanzi, Nilla; Casale, Elena; Cervi, Giovanni; D'Anello, Matteo; Donati, Daniele; Faiardi, Daniela; Ferguson, Ronald D; Fogliatto, Gianpaolo; Galvani, Arturo; Marsiglio, Aurelio; Mirizzi, Danilo G; Montemartini, Marisa; Orrenius, Christian; Papeo, Gianluca; Piutti, Claudia; Salom, Barbara; Felder, Eduard R

    2013-12-01

    A novel series of PIM inhibitors was derived from a combined effort in natural product-inspired library generation and screening. The initial pyrrolo[1,2-a]pyrazinone hits are inhibitors of the PIM isoforms with IC50 values in the low micromolar range. The application of a rational optimization strategy, guided by the determination of the crystal structure of the complex of compound 1 in the kinase domain of PIM1, led to the discovery of compound 15a, a potent PIM kinase inhibitor exhibiting excellent selectivity against a large panel of kinases representative of each family. The synthesis, structure-activity relationship studies, and pharmacokinetic data of compounds from this inhibitor class are presented herein. Furthermore, the cellular activities, including inhibition of cell growth and modulation of downstream targets, are also described.

  20. A New Combinatorial Optimization Approach for Integrated Feature Selection Using Different Datasets: A Prostate Cancer Transcriptomic Study

    PubMed Central

    Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background: The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics.
    Methods: We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone.
    Results: Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884

  1. Importance of double-pole CFS-PML for broadband seismic wave simulation and optimal parameters selection

    NASA Astrophysics Data System (ADS)

    Feng, Haike; Zhang, Wei; Zhang, Jie; Chen, Xiaofei

    2017-02-01

    The Perfectly Matched Layer (PML) is an efficient absorbing technique for numerical wave simulation. The complex frequency-shifted PML (CFS-PML) introduces two additional parameters in the stretching function to make the absorption frequency dependent. This can help to suppress converted evanescent waves from near-grazing incident waves, but it does not efficiently absorb low-frequency waves below the cutoff frequency. To absorb both the evanescent and the low-frequency waves, the double-pole CFS-PML, which has two poles in the coordinate-stretching function, was developed in computational electromagnetism. Several studies have investigated the performance of the double-pole CFS-PML for seismic wave simulations in the case of a narrowband seismic wavelet and did not find a significant difference compared to the CFS-PML. Another difficulty in applying the double-pole CFS-PML to real problems is that a practical strategy to set optimal parameter values has not been established. In this work, we study the performance of the double-pole CFS-PML for broadband seismic wave simulation. We find that when the maximum-to-minimum frequency ratio is larger than 16, the CFS-PML will either fail to suppress the converted evanescent waves for grazing incident waves or produce visible low-frequency reflections, depending on the value of α. In contrast, the double-pole CFS-PML can simultaneously suppress the converted evanescent waves and avoid low-frequency reflections with proper parameter values. We analyze the different roles of the double-pole CFS-PML parameters and propose optimal selections of these parameters. Numerical tests show that the double-pole CFS-PML with the optimal parameters generates satisfactory results for broadband seismic wave simulations.
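
    For reference, the coordinate-stretching functions at issue can be written compactly; the single-pole form below is the standard CFS-PML, and one common double-pole construction (borrowed from computational electromagnetics) multiplies two such factors. The symbols (κ for grid stretching, d for damping, α for the frequency shift) follow common usage, and the exact parameterization adopted in the paper may differ.

```latex
% Standard (single-pole) CFS-PML stretching function
s(x,\omega) \;=\; \kappa(x) \;+\; \frac{d(x)}{\alpha(x) + i\,\omega}

% One common double-pole form: the product of two CFS factors, so that one
% pole targets evanescent waves and the other the low-frequency waves
s(x,\omega) \;=\; \left(\kappa_1(x) + \frac{d_1(x)}{\alpha_1(x) + i\,\omega}\right)
                  \left(\kappa_2(x) + \frac{d_2(x)}{\alpha_2(x) + i\,\omega}\right)
```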

  2. Structural and mechanical evaluations of a topology optimized titanium interbody fusion cage fabricated by selective laser melting process.

    PubMed

    Lin, Chia-Ying; Wirtz, Tobias; LaMarca, Frank; Hollister, Scott J

    2007-11-01

    A topology-optimized lumbar interbody fusion cage was made of Ti-6Al-4V alloy by the rapid prototyping process of selective laser melting (SLM) to reproduce designed microstructure features. Radiographic characterization using micro-computed tomography (micro-CT) scanning and mechanical testing were performed to determine how well the structural characteristics of the fabricated cage reproduced the design. The mechanical modulus of the designed cage was also measured for comparison with tantalum, a widely used porous metal. The designed microstructures can be clearly seen in the micrographs of the micro-CT and scanning electron microscopy examinations, showing that the SLM process can reproduce intricate microscopic features from the original designs. No imaging artifacts from micro-CT were found. The average compressive modulus of the tested cages was 2.97 ± 0.90 GPa, which is comparable with the reported porous tantalum modulus of 3 GPa and falls between that of cortical bone (15 GPa) and trabecular bone (0.1-0.5 GPa). The new porous Ti-6Al-4V optimal-structure cage fabricated by the SLM process gave consistent mechanical properties without artifactual distortion in the imaging modalities, and thus it can be a promising alternative as a porous implant for spine fusion.

  3. Optimizing Surveillance Performance of Alpha-Fetoprotein by Selection of Proper Target Population in Chronic Hepatitis B

    PubMed Central

    Chung, Jung Wha; Kim, Beom Hee; Lee, Chung Seop; Kim, Gi Hyun; Sohn, Hyung Rae; Min, Bo Young; Song, Joon Chang; Park, Hyun Kyung; Jang, Eun Sun; Yoon, Hyuk; Kim, Jaihwan; Shin, Cheol Min; Park, Young Soo; Hwang, Jin-Hyeok; Jeong, Sook-Hyang; Kim, Nayoung; Lee, Dong Ho; Lee, Jaebong; Ahn, Soyeon

    2016-01-01

    Although alpha-fetoprotein (AFP) is the most widely used biomarker in hepatocellular carcinoma (HCC) surveillance, disease activity may also increase AFP levels in chronic hepatitis B (CHB). Since nucleos(t)ide analog (NA) therapy may reduce not only HBV viral loads and transaminase levels but also falsely elevated AFP levels in CHB, we tried to determine whether exposure to NA therapy influences AFP performance and whether selective application can optimize the performance of AFP testing in CHB during HCC surveillance. A retrospective cohort of 6,453 CHB patients who received HCC surveillance was constructed from the electronic clinical data warehouse. Covariates of AFP elevation were determined from 53,137 AFP measurements, and covariate-specific receiver operating characteristic regression analysis revealed that albumin levels and exposure to NA therapy were independent determinants of AFP performance. C statistics were largest in patients with albumin levels ≥ 3.7 g/dL who were followed without NA therapy during the study period, whereas AFP performance was poorest when tested in patients with NA therapy during the study period and albumin levels < 3.7 g/dL (difference in C statistics = 0.35, p < 0.0001). Contrary to expectation, CHB patients with current or recent exposure to NA therapy showed poorer performance of AFP during HCC surveillance. The combination of concomitant albumin levels and NA therapy status can identify the subgroup of CHB patients who will show optimized AFP performance. PMID:27997559

  4. Particle Swarm Optimization Based Feature Enhancement and Feature Selection for Improved Emotion Recognition in Speech and Glottal Signals

    PubMed Central

    Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali

    2015-01-01

    In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features, and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper-based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to gauge the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature. PMID:25799141
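
    A wrapper-based particle swarm optimization of a feature mask, in the spirit of the WPSO step above, can be sketched as follows. The synthetic data, a k-nearest-neighbour stand-in for the paper's extreme learning machine classifier, and all swarm hyperparameters are illustrative assumptions.

```python
# Hedged sketch: binary PSO wrapper feature selection. Fitness is the
# cross-validated accuracy of a simple classifier on the selected features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=10, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(5), X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_iter, dim = 20, 30, X.shape[1]
pos = (rng.random((n_particles, dim)) > 0.5).astype(float)   # binary positions (feature masks)
vel = rng.normal(0.0, 1.0, (n_particles, dim))                # real-valued velocities
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, dim)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)  # sigmoid transfer
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "CV accuracy:", round(fitness(gbest), 3))
```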

  5. Particle swarm optimization based feature enhancement and feature selection for improved emotion recognition in speech and glottal signals.

    PubMed

    Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali

    2015-01-01

    In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features, and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper-based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to gauge the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature.

  6. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  7. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    PubMed

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images.
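
    The core of the label-fusion step described above, a locally similarity-weighted vote over warped atlas labels, can be illustrated with a toy example. The random "warped atlas" images, the labels, and the Gaussian weighting of intensity differences are illustrative assumptions; this is not the MUSE code itself and it omits the boundary-modulation term.

```python
# Hedged sketch: similarity-weighted multi-atlas label fusion on toy 2-D data.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((32, 32))                                             # target image
n_atlases, n_labels = 5, 3
atlas_imgs = target[None] + 0.1 * rng.normal(size=(n_atlases, 32, 32))   # "warped" atlases
atlas_labels = rng.integers(0, n_labels, size=(n_atlases, 32, 32))

# Local similarity: Gaussian of the squared intensity difference at each voxel.
sim = np.exp(-(atlas_imgs - target[None]) ** 2 / 0.01)

# Weighted vote: accumulate each atlas's local similarity into its proposed label.
votes = np.zeros((n_labels,) + target.shape)
for a in range(n_atlases):
    for lab in range(n_labels):
        votes[lab] += sim[a] * (atlas_labels[a] == lab)

fused = votes.argmax(axis=0)                                              # consensus segmentation
print(fused.shape, np.bincount(fused.ravel()))
```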

  8. Selecting an optimal number of binding site waters to improve virtual screening enrichments against the adenosine A2A receptor.

    PubMed

    Lenselink, Eelke B; Beuming, Thijs; Sherman, Woody; van Vlijmen, Herman W T; IJzerman, Adriaan P

    2014-06-23

    A major challenge in structure-based virtual screening (VS) involves the treatment of explicit water molecules during docking in order to improve the enrichment of active compounds over decoys. Here we have investigated this in the context of the adenosine A2A receptor, where water molecules have previously been shown to be important for achieving high enrichment rates with docking, and where the positions of some binding site waters are known from a high-resolution crystal structure. The effect of these waters (both their presence and orientations) on VS enrichment was assessed using a carefully curated set of 299 high-affinity A2A antagonists and 17,337 decoys. We show that including certain crystal waters greatly improves VS enrichment and that optimization of water hydrogen positions is needed in order to achieve the best results. We also show that waters derived from a molecular dynamics simulation - without any knowledge of crystallographic waters - can improve enrichments to a similar degree as the crystallographic waters, which makes this strategy applicable to structures without experimental knowledge of water positions. Finally, we used decision trees to select an ensemble of structures with different water molecule positions and orientations that outperforms any single structure with water molecules. The approach presented here is validated against independent test sets of A2A receptor antagonists and decoys from the literature. In general, this water optimization strategy could be applied to any target with water-mediated protein-ligand interactions.

  9. Optimization of Cu/activated carbon catalyst in low temperature selective catalytic reduction of NO process using response surface methodology.

    PubMed

    Amanpour, Javad; Salari, Dariush; Niaei, Aligholi; Mousavi, Seyed Mahdi; Panahi, Parvaneh Nakhostin

    2013-01-01

    Preparation of a Cu/activated carbon (Cu/AC) catalyst was optimized for low-temperature selective catalytic reduction of NO using response surface methodology. A central composite design (CCD) was used to investigate the effects of three independent variables, namely pre-oxidization degree (HNO3 %), Cu loading (wt.%), and calcination temperature, on NO conversion efficiency. The CCD consisted of 20 different preparation conditions of Cu/AC catalysts. The prepared catalysts were characterized by XRD and SEM techniques. NO conversion was predicted using a second-order model obtained from the designed experiments and the statistical software Minitab 14. Regression and Pareto graphic analysis showed that all of the chosen parameters and some interactions had a significant effect on NO conversion. The optimal values were pre-oxidization in 10.2% HNO3, 6.1 wt.% Cu loading, and a calcination temperature of 480°C. Under the optimum conditions, the NO conversion (94.3%) was in good agreement with the predicted value (96.12%).
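
    The response-surface step above, fitting a second-order model to the designed experiments and reading off an optimum, can be sketched as follows. The coded design points, the synthetic responses, and the grid search for the optimum are illustrative assumptions; the paper's actual data and Minitab analysis are not reproduced.

```python
# Hedged sketch: fit a quadratic response-surface model to CCD-style data and
# locate the predicted optimum on a grid of coded factor settings.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)
# Coded factors: x1 = HNO3 %, x2 = Cu loading, x3 = calcination temperature.
X = rng.uniform(-1, 1, size=(20, 3))
true_opt = np.array([0.2, 0.5, -0.3])                          # synthetic "true" optimum
y = 95 - 10 * np.sum((X - true_opt) ** 2, axis=1) + rng.normal(0, 0.5, 20)

def design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]                                                 # linear terms
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(3), 2)]  # squares + interactions
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)    # second-order model fit

grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 41)] * 3)).reshape(3, -1).T
pred = design_matrix(grid) @ beta
print("predicted optimum (coded units):", grid[pred.argmax()], "value:", round(pred.max(), 2))
```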

  10. Conservative Extensions of Linkage Disequilibrium Measures from Pairwise to Multi-loci and Algorithms for Optimal Tagging SNP Selection

    NASA Astrophysics Data System (ADS)

    Tarpine, Ryan; Lam, Fumei; Istrail, Sorin

    We present results on two classes of problems. The first result addresses the long-standing open problem of finding unifying principles for Linkage Disequilibrium (LD) measures in population genetics (Lewontin 1964 [10], Hedrick 1987 [8], Devlin and Risch 1995 [5]). Two desirable properties have been proposed in the extensive literature on this topic, and the mutual consistency between these properties has remained at the heart of statistical and algorithmic difficulties with haplotype and genome-wide association study analysis. The first axiom is (1) the ability to extend LD measures to multiple loci as a conservative extension of pairwise LD. All widely used LD measures are pairwise measures. Despite significant attempts, it is not clear how to naturally extend these measures to multiple loci, leading to a "curse of the pairwise". The second axiom is (2) the interpretability of intermediate values. In this paper, we resolve this mutual consistency problem by introducing a new LD measure, directed informativeness $\overrightarrow{I}$ (the directed graph-theoretic counterpart of the informativeness measure introduced by Halldorsson et al. [6]), and show that it satisfies both of the above axioms. We also show that the maximum informative subset of tagging SNPs based on $\overrightarrow{I}$ can be computed exactly in polynomial time for realistic genome-wide data. Furthermore, we present polynomial time algorithms for optimal genome-wide tagging SNP selection for a number of commonly used LD measures, under the bounded neighborhood assumption for linked pairs of SNPs. One problem in the area is the search for a quality measure for tagging SNP selection that unifies the LD-based methods such as LD-select (implemented in Tagger, de Bakker et al. 2005 [4], Carlson et al. 2004 [3]) and the information-theoretic ones such as informativeness. We show that the objective function of the LD-select algorithm is the Minimal Dominating Set (MDS) on $r^2$-SNP graphs and show that we can

  11. Selection of HyspIRI optimal band positions for the earth compositional mapping using HyTES data

    NASA Astrophysics Data System (ADS)

    Ullah, Saleem; Khalid, Noora; Iqbal, Arshad

    2016-07-01

    In the near future, NASA/JPL will put into orbit a new space-borne sensor called HyspIRI (Hyperspectral and Infrared Imager), which will cover the spectral range from 0.4 to 14 μm. Two instruments will be mounted on the HyspIRI platform: a hyperspectral instrument that senses the earth surface between 0.4 and 2.5 μm at 10 nm intervals, and a multispectral TIR sensor that acquires images between 3 and 14 μm in 8 spectral bands (1 in the MIR and 7 in the TIR). The TIR spectral wavebands will be positioned based on their importance in various applications. This study aimed to find the optimal HyspIRI TIR waveband positions for earth compositional mapping. Genetic algorithms coupled with the Spectral Angle Mapper (GA-SAM) were used as the spectral band selector. High-dimensional HyTES (Hyperspectral Thermal Emission Spectrometer) data comprising 256 spectral bands over the Cuprite and Death Valley regions were used to select meaningful subsets of bands for earth compositional mapping. The GA-SAM was trained for eight mineral classes, and the algorithm was run 40 times. High calibration (>98%) and validation (>96%) accuracies were achieved with a limited number (seven) of spectral bands selected by GA-SAM. Knowing the important band positions will help scientists of the HyspIRI group to place spectral bands in regions where the accuracy of earth compositional mapping can be enhanced.
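
    The GA-SAM loop above, a genetic algorithm whose fitness is the accuracy of a spectral angle mapper classifier on the candidate band subset, can be sketched as follows. The synthetic spectra, the seven-band subset size, and the GA settings are illustrative assumptions rather than the study's configuration.

```python
# Hedged sketch: genetic-algorithm band selection scored by a spectral angle
# mapper (SAM) classifier on synthetic mineral spectra.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_classes, n_pix = 64, 8, 400
endmembers = rng.random((n_classes, n_bands))                 # class reference spectra
labels = rng.integers(0, n_classes, n_pix)
spectra = endmembers[labels] + 0.05 * rng.normal(size=(n_pix, n_bands))

def sam_accuracy(bands):
    """Classify each pixel by the smallest spectral angle to a class reference."""
    E, S = endmembers[:, bands], spectra[:, bands]
    cos = (S @ E.T) / (np.linalg.norm(S, axis=1)[:, None] * np.linalg.norm(E, axis=1)[None, :])
    return np.mean(np.argmin(np.arccos(np.clip(cos, -1, 1)), axis=1) == labels)

def ga_select(k=7, pop_size=30, n_gen=40):
    pop = [rng.choice(n_bands, k, replace=False) for _ in range(pop_size)]
    for _ in range(n_gen):
        fit = np.array([sam_accuracy(ind) for ind in pop])
        parents = [pop[i] for i in np.argsort(fit)[-pop_size // 2:]]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = rng.choice(np.union1d(parents[a], parents[b]), k, replace=False)  # crossover
            if rng.random() < 0.2:                                                    # mutation: swap in a new band
                child[rng.integers(k)] = rng.integers(n_bands)
            children.append(child)
        pop = parents + children
    fit = np.array([sam_accuracy(ind) for ind in pop])
    return pop[int(fit.argmax())], fit.max()

bands, acc = ga_select()
print("selected bands:", sorted(bands.tolist()), "SAM accuracy:", round(float(acc), 3))
```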

  12. Combining Crowding Estimation in Objective and Decision Space With Multiple Selection and Search Strategies for Multi-Objective Evolutionary Optimization.

    PubMed

    Xia, Hu; Zhuang, Jian; Yu, Dehong

    2014-03-01

    Many multi-objective evolutionary algorithms (MOEAs) have been successful in approximating the Pareto front. However, well-distributed solutions in both the objective and the decision spaces are still required in many real-life applications. In this paper, a novel MOEA is proposed to address this problem. Distinct from other MOEAs, the proposed algorithm provides a framework, which includes two crowding estimation methods, multiple selection methods for mating, and search strategies for variation, to improve the MOEA's searching ability and the diversity of its solutions. The algorithm emphasizes the importance of using both the decision space and the objective space diversities. The objective space and decision space crowding distances are designed using different ideas. To produce new individuals, three different types of mating selection and their respective search strategies are constructed for the main population and the two sparse populations, with the help of the two crowding measurements. Finally, based on experimental tests on 17 unconstrained multi-objective optimization problems, the proposed algorithm is demonstrated to have better results compared to several state-of-the-art MOEAs. A detailed analysis of the effectiveness and robustness of the framework is also presented.
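
    The two crowding measurements discussed above can be illustrated with generic formulas: the classic NSGA-II crowding distance in objective space, and a nearest-neighbour crowding estimate in decision space. Both the random population and the exact formulas below are generic illustrations, not the paper's definitions.

```python
# Hedged sketch: objective-space and decision-space crowding estimates.
import numpy as np

rng = np.random.default_rng(0)
decisions = rng.random((50, 10))        # candidate solutions in decision space
objectives = rng.random((50, 2))        # their objective values

def objective_crowding(F):
    """NSGA-II-style crowding distance: sum of normalized neighbour gaps per objective."""
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = (F[order[-1], j] - F[order[0], j]) or 1.0
        dist[order[[0, -1]]] = np.inf                  # boundary solutions are always kept
        dist[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return dist

def decision_crowding(X):
    """Distance to the nearest other solution in decision space (larger = less crowded)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

print(objective_crowding(objectives)[:5])
print(decision_crowding(decisions)[:5])
```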

  13. The backtracking search optimization algorithm for frequency band and time segment selection in motor imagery-based brain-computer interfaces.

    PubMed

    Wei, Zhonghai; Wei, Qingguo

    2016-09-01

    Common spatial pattern (CSP) is a powerful algorithm for extracting discriminative brain patterns in motor imagery-based brain-computer interfaces (BCIs). However, its performance depends largely on the subject-specific frequency band and time segment, and accurate selection of the most responsive frequency band and time segment remains a crucial problem. A novel evolutionary algorithm, the backtracking search optimization algorithm, is used to find the optimal frequency band and the optimal combination of frequency band and time segment. The former is searched by a frequency window of changing width whose starting and ending points are selected by the backtracking search optimization algorithm; the latter is searched by the same frequency window and an additional time window of fixed width. The three parameters, the starting and ending points of the frequency window and the starting point of the time window, are jointly optimized by the backtracking search optimization algorithm. Based on the chosen frequency band and the fixed or chosen time segment, feature extraction is conducted by CSP and subsequent classification is carried out by Fisher discriminant analysis. The classification error rate is used as the objective function of the backtracking search optimization algorithm. The two methods, named BSA-F CSP and BSA-FT CSP, were evaluated on data sets of the BCI competition and compared with traditional wideband (8-30 Hz) CSP. The classification results showed that the backtracking search optimization algorithm can find a much more effective frequency band for EEG preprocessing than the traditional broadband, substantially enhancing CSP performance in terms of classification accuracy. Moreover, the backtracking search optimization algorithm for joint selection of frequency band and time segment can find their optimal combination and thus further improve classification rates.

  14. Design of laser pulses for selective vibrational excitation of the N6-H bond of adenine and adenine-thymine base pair using optimal control theory.

    PubMed

    Sharma, Sitansh; Sharma, Purshotam; Singh, Harjinder; Balint-Kurti, Gabriel G

    2009-06-01

    Time-dependent quantum dynamics and optimal control theory are used for selective vibrational excitation of the N6-H (amino N-H) bond in free adenine and in the adenine-thymine (A-T) base pair. For the N6-H bond in free adenine we have used a one-dimensional model, while for the hydrogen bond N6-H(A)...O4(T) present in the A-T base pair, a two-dimensional mathematical model is employed. The conjugate gradient method is used for the optimization of the field-dependent cost functional. Optimal laser fields are obtained for selective population transfer in both model systems, giving virtually 100% excitation probability to preselected vibrational levels. The effect of the optimized laser field on the other hydrogen bond, N1(A)...H-N3(T), present in the A-T base pair is also investigated.

  15. H-DROP: an SVM based helical domain linker predictor trained with features optimized by combining random forest and stepwise selection

    NASA Astrophysics Data System (ADS)

    Ebina, Teppei; Suzuki, Ryosuke; Tsuji, Ryotaro; Kuroda, Yutaka

    2014-08-01

    Domain linker prediction is attracting much interest as it can help in identifying novel domains suitable for high-throughput proteomics analysis. Here, we report H-DROP, an SVM-based Helical Domain linker pRediction using OPtimal features. H-DROP is, to the best of our knowledge, the first predictor for specifically and effectively identifying helical linkers. This was made possible first because a large training dataset became available from IS-Dom, and second because we selected a small number of optimal features from a huge number of potential ones. The training helical linker dataset, which included 261 helical linkers, was constructed by detecting helical residues at the boundary regions of two independent structural domains listed in our previously reported IS-Dom dataset. 45 optimal feature candidates were selected from 3,000 features by random forest, and these were further reduced to 26 optimal features by stepwise selection. The prediction sensitivity and precision of H-DROP were 35.2% and 38.8%, respectively. These values were over 10.7% higher than those of control methods, including our previously developed DROP, which is a coil linker predictor, and PPRODO, which is trained with undifferentiated domain boundary sequences. Overall, these results indicate that helical linkers can be predicted from sequence information alone by using a strictly curated training dataset for helical linkers and a carefully selected set of optimal features. H-DROP is available at http://domserv.lab.tuat.ac.jp.

  16. Reducing residual stresses and deformations in selective laser melting through multi-level multi-scale optimization of cellular scanning strategy

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2016-04-01

    Residual stresses and deformations remain one of the primary challenges to expanding the scope of selective laser melting as an industrial-scale manufacturing process. While process monitoring and feedback-based process control have shown significant potential, there is still a dearth of techniques to tackle the issue. Numerical modelling of the selective laser melting process has thus been an active area of research in the last few years. However, large computational resource requirements have slowed the use of these models for optimizing the process. In this paper, a calibrated, fast, multiscale thermal model coupled with a 3D finite element mechanical model is used to simulate residual stress formation and deformations during selective laser melting. The resulting reduction in thermal model computation time allows evolutionary algorithm-based optimization of the process. A multilevel optimization strategy is adopted using a customized genetic algorithm developed for optimizing the cellular scanning strategy for selective laser melting, with the objective of reducing residual stresses and deformations. The resulting thermo-mechanically optimized cellular scanning strategies are compared with standard scanning strategies and have been used to manufacture standard samples.

  17. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling

    PubMed Central

    Zhou, Fuqun; Zhang, Aining

    2016-01-01

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that, using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152

  18. Optimization of Cat's Whiskers Tea (Orthosiphon stamineus) Using Supercritical Carbon Dioxide and Selective Chemotherapeutic Potential against Prostate Cancer Cells

    PubMed Central

    Al-Suede, Fouad Saleih R.; Khadeer Ahamed, Mohamed B.; Abdul Majid, Aman S.; Baharetha, Hussin M.; Hassan, Loiy E. A.; Kadir, Mohd Omar A.; Nassar, Zeyad D.; Abdul Majid, Amin M. S.

    2014-01-01

    Cat's whiskers (Orthosiphon stamineus) leaf extracts were prepared using supercritical CO2 (SC-CO2) with a full factorial design to determine the optimum extraction parameters. Nine extracts were obtained by varying pressure, temperature, and time. The extracts were analysed using FTIR, UV-Vis, and GC-MS. Cytotoxicity of the extracts was evaluated on human (colorectal, breast, and prostate) cancer cells and normal fibroblast cells. Moderate pressure (31.1 MPa) and temperature (60°C) were recorded as the optimum extraction conditions, with a high yield (1.74%) of the extract (B2) at 60 min extraction time. The optimized extract (B2) displayed selective cytotoxicity against prostate cancer (PC3) cells (IC50 28 µg/mL) and significant antioxidant activity (IC50 42.8 µg/mL). Elevated levels of caspases 3/7 and 9 in B2-treated PC3 cells suggest the induction of apoptosis through nuclear and mitochondrial pathways. Hoechst and rhodamine assays confirmed the nuclear condensation and disruption of mitochondrial membrane potential in the cells. B2 also demonstrated inhibitory effects on the motility and colony formation of PC3 cells at subcytotoxic concentrations. It is noteworthy that B2 displayed negligible toxicity against the normal cells. Chemometric analysis revealed a high content of essential oils, hydrocarbons, fatty acids, esters, and aromatic sesquiterpenes in B2. This study highlights the therapeutic potential of the SC-CO2 extract of cat's whiskers in targeting prostate carcinoma. PMID:25276215

  19. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    PubMed

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that, using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.

  20. Optimization of 5-pyridazin-3-one phenoxypropylamines as potent, selective histamine H₃ receptor antagonists with potent cognition enhancing activity.

    PubMed

    Tao, Ming; Aimone, Lisa D; Huang, Zeqi; Mathiasen, Joanne; Raddatz, Rita; Lyons, Jacquelyn; Hudkins, Robert L

    2012-01-12

    Previous studies have shown that (5-{4-[3-(R)-2-methylpyrrolin-1-yl-propoxy]phenyl}-2H-pyridazin-3-one) 2 had high affinity for both the human (hH(3)R K(i) = 2.8 nM) and rat H(3)Rs (rH(3)R K(i) = 8.5 nM) but displayed low oral bioavailability in the rat. Optimization of the 5-pyridazin-3-one R(2) and R(6) positions to improve the pharmacokinetic properties over 2 led to the identification of 5-{4-[3-(R)-2-methylpyrrolidin-1-yl)propoxy]phenyl}-2-pyridin-2-yl-2H-pyridazin-3-one 29. Compound 29 displayed high affinity for both human and rat H(3)Rs (hH(3)R K(i) = 1.7 nM, rH(3)R K(i) = 3.7 nM) with a greater than 1000-fold selectivity over the other histamine receptor subtypes and favorable pharmacokinetic properties across species (F = 78% rat, 92% dog, 96% monkey). It showed low binding to human plasma proteins, weakly inhibited cytochrome P450 isoforms, and displayed an excellent safety profile for a CNS-active compound. 29 displayed potent H(3)R antagonist activity in the brain in a rat dipsogenia model and demonstrated enhancement of cognitive function in a rat social recognition model at low doses. However, the development of compound 29 was discontinued because of genotoxicity.

  1. Mathematical optimization techniques for managing selective catalytic reduction for a fleet of coal-fired power plants

    NASA Astrophysics Data System (ADS)

    Alanis Pena, Antonio Alejandro

    A substantial share of commercial electricity worldwide is generated by burning fossil fuels, and coal-fired power plants account for a large fraction of it. The United States has large reserves of coal, and it is cheaply available, making it a good choice for the generation of electricity on a large scale. However, one major problem associated with burning coal is that it produces a group of pollutants known as nitrogen oxides (NOx). NOx are strong oxidizers and contribute to ozone formation and respiratory illness. The Environmental Protection Agency (EPA) regulates the quantity of NOx emitted to the atmosphere in the United States. One technique coal-fired power plants use to reduce NOx emissions is Selective Catalytic Reduction (SCR). SCR uses layers of catalyst that need to be added or changed to maintain the required performance. Power plants add or change catalyst layers during temporary shutdowns, but this is expensive. Many companies, however, operate not a single plant but a fleet of coal-fired power plants, and a fleet can use EPA cap-and-trade programs to keep its total outlet NOx emissions below its allowances. For that reason, the main aim of this research is to develop SCR-management mathematical optimization methods that, given a set of scheduled outages for a fleet of power plants, minimize the total cost of the entire fleet while keeping outlet NOx below the desired target for the fleet. We use a multi-commodity network flow problem (MCFP) whose edges represent all the SCR catalyst layers for each plant. This MCFP is a relaxation because it does not consider the average daily NOx constraint, and it is solved by a binary integer program. After that, we add the average daily NOx constraint to the model with a schedule elimination constraint (MCFPwSEC). The MCFPwSEC eliminates, one by one, the solutions that do not satisfy the average daily

  2. Self-Regulation among Youth in Four Western Cultures: Is There an Adolescence-Specific Structure of the Selection-Optimization-Compensation (SOC) Model?

    ERIC Educational Resources Information Center

    Gestsdottir, Steinunn; Geldhof, G. John; Paus, Tomáš; Freund, Alexandra M.; Adalbjarnardottir, Sigrun; Lerner, Jacqueline V.; Lerner, Richard M.

    2015-01-01

    We address how to conceptualize and measure intentional self-regulation (ISR) among adolescents from four cultures by assessing whether ISR (conceptualized by the SOC model of Selection, Optimization, and Compensation) is represented by three factors (as with adult samples) or as one "adolescence-specific" factor. A total of 4,057 14-…

  3. Selection, Optimization, and Compensation: The Structure, Reliability, and Validity of Forced-Choice versus Likert-Type Measures in a Sample of Late Adolescents

    ERIC Educational Resources Information Center

    Geldhof, G. John; Gestsdottir, Steinunn; Stefansson, Kristjan; Johnson, Sara K.; Bowers, Edmond P.; Lerner, Richard M.

    2015-01-01

    Intentional self-regulation (ISR) undergoes significant development across the life span. However, our understanding of ISR's development and function remains incomplete, in part because the field's conceptualization and measurement of ISR vary greatly. A key sample case involves how Baltes and colleagues' Selection, Optimization,…

  4. Genome-wide characterization and selection of expressed sequence tag simple sequence repeat primers for optimized marker distribution and reliability in peach

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Expressed sequence tag (EST) simple sequence repeats (SSRs) in Prunus were mined, and flanking primers designed and used for genome-wide characterization and selection of primers to optimize marker distribution and reliability. A total of 12,618 contigs were assembled from 84,727 ESTs, along with 34...

  5. Maximizing the Reliability of Genomic Selection by Optimizing the Calibration Set of Reference Individuals: Comparison of Methods in Two Diverse Groups of Maize Inbreds (Zea mays L.)

    PubMed Central

    Rincent, R.; Laloë, D.; Nicolas, S.; Altmann, T.; Brunel, D.; Revilla, P.; Rodríguez, V.M.; Moreno-Gonzalez, J.; Melchinger, A.; Bauer, E.; Schoen, C-C.; Meyer, N.; Giauffret, C.; Bauland, C.; Jamin, P.; Laborde, J.; Monod, H.; Flament, P.; Charcosset, A.; Moreau, L.

    2012-01-01

    Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix–best linear unbiased predictions model (RA–BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNP array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request. PMID:22865733
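
    A simplified version of the calibration-set optimization above can be sketched by maximizing a mean reliability (CD-type) criterion computed from a GBLUP model with a simple exchange search. The simulated relationship matrix, the heritability-derived lambda, and the reliability formula used here are illustrative assumptions; the published CDmean criterion is defined on contrasts with the population mean and may differ in detail.

```python
# Hedged sketch: pick a calibration set that maximizes the mean expected
# reliability (CD) of the non-phenotyped candidates under a toy GBLUP model.
import numpy as np

rng = np.random.default_rng(0)
n, m, h2 = 120, 30, 0.5                             # panel size, calibration size, heritability
lam = (1 - h2) / h2                                 # lambda = sigma_e^2 / sigma_u^2
markers = rng.normal(size=(n, 300))
A = markers @ markers.T / 300 + 1e-3 * np.eye(n)    # toy genomic relationship matrix
A_inv = np.linalg.inv(A)

def mean_cd(calib):
    z = np.zeros(n); z[list(calib)] = 1.0            # Z'Z is diagonal for this simple design
    C_inv = np.linalg.inv(np.diag(z) + lam * A_inv)
    cd = 1.0 - lam * np.diag(C_inv) / np.diag(A)     # approximate reliability per individual
    rest = [i for i in range(n) if i not in calib]
    return cd[rest].mean()

# Exchange search: swap one individual in/out whenever the criterion improves.
calib = set(rng.choice(n, m, replace=False).tolist())
best = mean_cd(calib)
for _ in range(500):
    i = rng.choice(list(calib))
    j = rng.choice([k for k in range(n) if k not in calib])
    trial = (calib - {i}) | {int(j)}
    score = mean_cd(trial)
    if score > best:
        calib, best = trial, score
print("mean CD of optimized calibration set:", round(best, 3))
```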

  6. Faraday anomalous dispersion optical tuners

    NASA Technical Reports Server (NTRS)

    Wanninger, P.; Valdez, E. C.; Shay, T. M.

    1992-01-01

    Common methods for frequency stabilizing diode laser systems employ gratings, etalons, optical electric double feedback, atomic resonance, and a Faraday cell with low magnetic field. Our method, the Faraday Anomalous Dispersion Optical Transmitter (FADOT) laser locking, is much simpler than other schemes. The FADOT uses commercial laser diodes with no antireflection coatings, an atomic Faraday cell with a single polarizer, and an output coupler to form a compound cavity. This method is insensitive to vibration, thermal expansion effects are minimal, and the system has a frequency pull-in range of 443.2 GHz (9 Å). Our technique is based on the Faraday anomalous dispersion optical filter. This method has potential applications in optical communication, remote sensing, and pumping of laser-excited optical filters. We present the first theoretical model for the FADOT and compare the calculations to our experimental results.

  7. A computational strategy to select optimized protein targets for drug development toward the control of cancer diseases.

    PubMed

    Carels, Nicolas; Tilli, Tatiana; Tuszynski, Jack A

    2015-01-01

    In this report, we describe a strategy for the optimized selection of protein targets suitable for drug development against neoplastic diseases, taking the particular case of breast cancer as an example. We combined human interactome and transcriptome data from malignant and control cell lines because highly connected proteins that are up-regulated in malignant cell lines are expected to be suitable protein targets for chemotherapy with a lower rate of undesirable side effects. We normalized transcriptome data and applied a statistical treatment to objectively extract the sub-networks of down- and up-regulated genes whose proteins effectively interact. We chose the most connected ones that act as protein hubs, most being in the signaling network. We show that the protein targets effectively identified by the combination of protein connectivity and differential expression are known as suitable targets for the successful chemotherapy of breast cancer. Interestingly, we found additional proteins, not generally targeted by drug treatments, which might justify the extension of existing formulations by the addition of inhibitors designed against these proteins, with the consequence of improving therapeutic outcomes. The molecular alterations observed in breast cancer cell lines represent driver events and/or driver pathways that are necessary for breast cancer development or progression. However, it is clear that the signaling mechanisms of the luminal A, B and triple negative subtypes are different. Furthermore, the up- and down-regulated networks predicted subtype-specific drug targets and possible compensation circuits between up- and down-regulated genes. We believe these results may have significant clinical implications in the personalized treatment of cancer patients, allowing an objective approach to the recycling of the arsenal of available drugs to the specific case of each breast cancer, given their distinct qualitative and quantitative molecular traits.

  8. Codon optimization of genes for efficient protein expression in mammalian cells by selection of only preferred human codons.

    PubMed

    Inouye, Satoshi; Sahara-Miura, Yuiko; Sato, Jun-ichi; Suzuki, Takahiro

    2015-05-01

    A simple design method for codon optimization of genes to express a heterologous protein in mammalian cells is described. Codon optimization was performed by choosing only codons preferentially used in humans and with over 60% GC content, and the method was named the "preferred human codon-optimized method." To test our simple rule for codon optimization, the preferred human codon-optimized genes for six proteins, comprising the photoproteins (aequorin and clytin II) and the luciferases (Gaussia luciferase, Renilla luciferase, and firefly luciferases from Photinus pyralis and Luciola cruciata), were chemically synthesized and transiently expressed in Chinese hamster ovary-K1 cells. All preferred human codon-optimized genes showed higher luminescence activity than the corresponding wild-type genes. Our simple design method could be used to efficiently improve protein expression in mammalian cells.
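
    The design rule described here, choosing for each amino acid the most frequent human codon and requiring over 60% GC, can be expressed in a few lines. The sketch below takes a codon-usage table as input rather than hard-coding the authors' table; the fallback to the overall most frequent codon for amino acids whose codons cannot exceed 60% GC (e.g. Lys, Asn, Phe) is an assumption of this sketch, and the example frequencies are placeholders.

      def preferred_codon_optimize(protein, codon_usage, gc_min=0.60):
          # Back-translate a protein using, per amino acid, the most frequent codon
          # whose GC content exceeds gc_min; codon_usage maps codon -> (aa, frequency).
          gc = lambda c: sum(b in "GC" for b in c) / 3.0
          best, fallback = {}, {}
          for codon, (aa, freq) in codon_usage.items():
              if aa not in fallback or freq > fallback[aa][1]:
                  fallback[aa] = (codon, freq)              # assumed fallback rule
              if gc(codon) > gc_min and (aa not in best or freq > best[aa][1]):
                  best[aa] = (codon, freq)
          return "".join((best.get(aa) or fallback[aa])[0] for aa in protein)

      # Illustrative mini-table (frequencies are placeholders, not the human table)
      usage = {"GCC": ("A", 0.40), "GCT": ("A", 0.27), "AAG": ("K", 0.57), "AAA": ("K", 0.43)}
      print(preferred_codon_optimize("AKA", usage))   # -> GCCAAGGCC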

  9. Selection of the optimal combination of water vapor absorption lines for detection of temperature in combustion zones of mixing supersonic gas flows by diode laser absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Mironenko, V. R.; Kuritsyn, Yu. A.; Bolshov, M. A.; Liger, V. V.

    2016-12-01

    Determination of a gas medium temperature by diode laser absorption spectrometry (DLAS) is based on the measurement of the integral intensities of the absorption lines of a test molecule (generally the water vapor molecule). In the case of local thermodynamic equilibrium, temperature is inferred from the ratio of the integral intensities of two lines with different lower energy levels. For total gas pressures above 1 atm, the absorption lines are broadened, and isolated, well-resolved water vapor absorption lines cannot be found within the relatively narrow spectral interval of a fast diode laser (DL) tuning range (about 3 cm⁻¹). For diagnostics of a gas object at high temperature and pressure, the DLAS technique can be realized with two diode lasers working in different spectral regions with strong absorption lines. In such a situation, the criteria for optimal line selection differ significantly from those for the narrow-line case. These criteria are discussed in this work. Software for selecting the optimal spectral regions using the HITRAN-2012 and HITEMP databases was developed. The program selects spectral regions of DL tuning that minimize the error of temperature determination δT/T, based on the attainable experimental error of line intensity measurement δS. Two combinations of optimal spectral regions were selected: (1.392 & 1.343 μm) and (1.392 & 1.339 μm). Different algorithms of experimental data processing are discussed.
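
    The retrieval step mentioned above, taking the ratio of the integrated intensities of two lines with different lower-state energies, reduces to a closed-form expression once the line strengths at a reference temperature are known from HITRAN/HITEMP. The sketch below neglects the stimulated-emission correction and uses hypothetical line parameters; it is not the selection software described in the abstract.

      import numpy as np

      C2 = 1.4388  # second radiation constant hc/k, cm*K

      def two_line_temperature(ratio, S1_ref, S2_ref, E1, E2, T_ref=296.0):
          # ratio          : measured A1/A2 of integrated absorbances (same species, same path)
          # S1_ref, S2_ref : line strengths at T_ref (e.g. from HITRAN-2012)
          # E1, E2         : lower-state energies in cm^-1
          # From S1(T)/S2(T) = (S1_ref/S2_ref) * exp(-C2*(E1 - E2)*(1/T - 1/T_ref)):
          return C2 * (E1 - E2) / (C2 * (E1 - E2) / T_ref - np.log(ratio * S2_ref / S1_ref))

      # Relative sensitivity: |dT/T| ~ (T / (C2*|E1 - E2|)) * |dR/R|, so line pairs with a
      # large lower-state energy difference keep dT/T small for a given intensity error dS.
      print(two_line_temperature(ratio=0.8, S1_ref=1.2e-21, S2_ref=3.5e-22, E1=1045.1, E2=2552.9))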

  10. Pulse-fluence-specified optimal control simulation with applications to molecular orientation and spin-isomer-selective molecular alignment

    SciTech Connect

    Yoshida, Masataka; Nakashima, Kaoru; Ohtsuki, Yukiyoshi

    2015-12-31

    We propose an optimal control simulation with specified pulse fluence and amplitude. The simulation is applied to the orientation control of CO molecules to examine the optimal combination of THz and laser pulses, and to discriminate nuclear-spin isomers of ¹⁴N₂ as spatially anisotropic distributions.

  11. A hybrid approach identifies metabolic signatures of high-producers for chinese hamster ovary clone selection and process optimization.

    PubMed

    Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola

    2016-09-01

    In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic

  12. Optimization of Novel Indazoles as Highly Potent and Selective Inhibitors of Phosphoinositide 3-Kinase δ for the Treatment of Respiratory Disease.

    PubMed

    Down, Kenneth; Amour, Augustin; Baldwin, Ian R; Cooper, Anthony W J; Deakin, Angela M; Felton, Leigh M; Guntrip, Stephen B; Hardy, Charlotte; Harrison, Zoë A; Jones, Katherine L; Jones, Paul; Keeling, Suzanne E; Le, Joelle; Livia, Stefano; Lucas, Fiona; Lunniss, Christopher J; Parr, Nigel J; Robinson, Ed; Rowland, Paul; Smith, Sarah; Thomas, Daniel A; Vitulli, Giovanni; Washio, Yoshiaki; Hamblin, J Nicole

    2015-09-24

    Optimization of lead compound 1, through extensive use of structure-based design and a focus on PI3Kδ potency, isoform selectivity, and inhaled PK properties, led to the discovery of clinical candidates 2 (GSK2269557) and 3 (GSK2292767) for the treatment of respiratory indications via inhalation. Compounds 2 and 3 are both highly selective for PI3Kδ over the closely related isoforms and are active in a disease relevant brown Norway rat acute OVA model of Th2-driven lung inflammation.

  13. Reciprocal Recurrent Selection Compared to within-Strain Selection for Increasing Rate of Egg Lay of Tribolium under Optimal and Stress Conditions

    PubMed Central

    Orozco, Fernando; Bell, A. E.

    1974-01-01

    A replicated comparison of reciprocal recurrent selection (rrs) based on crossbred performance and within-strain selection (wss) based on purebred performance was made in three diverse environments over ten generations for the improvement of a heterotic trait, 4-day virgin egg lay of Tribolium castaneum. A selection intensity of 20% based on performance in either an optimum (33°), a mild stress (38°), or a severe stress (28°) environment was applied uniformly. Periodically, the performance of each population was measured in all three environments to provide both direct and correlated responses.—Heritability of egg lay in the base population ranged from 0.36 ± 0.03 in optimum to 0.26 ± 0.03 in severe stress. Estimates of dominance effects assumed significant proportions in severe stress only. Genetic correlations for egg lay in diverse environments were large and positive (0.6 to 0.8).—Only in severe stress did the rrs response significantly exceed that for wss. Quadratic adjustments fitted to response curves revealed that small initial genetic gains under rrs were followed by significantly increasing rates of gain in late generations of selection. The reverse was true for wss. This and evidence from realized heritabilities and genetic correlations suggested that rrs had utilized both additive and dominance effects, but wss response was limited to additive effects.—These results agree with selection theory in demonstrating that purebred selection is more efficient than crossbred selection in utilizing additive gene effects. The latter method has merit when non-additive effects assume significant proportions, and this is the more probable case for severe stress conditions. PMID:17248651

  14. Demonstration optimization analyses of pumping from selected Arapahoe aquifer municipal wells in the west-central Denver Basin, Colorado, 2010–2109

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.

    2012-01-01

    Declining water levels caused by withdrawals of water from wells in the west-central part of the Denver Basin bedrock-aquifer system have raised concerns with respect to the ability of the aquifer system to sustain production. The Arapahoe aquifer in particular is heavily used in this area. Two optimization analyses were conducted to demonstrate approaches that could be used to evaluate possible future pumping scenarios intended to prolong the productivity of the aquifer and to delay excessive loss of saturated thickness. These analyses were designed as demonstrations only, and were not intended as a comprehensive optimization study. Optimization analyses were based on a groundwater-flow model of the Denver Basin developed as part of a recently published U.S. Geological Survey groundwater-availability study. For each analysis an optimization problem was set up to maximize total withdrawal rate, subject to withdrawal-rate and hydraulic-head constraints, for 119 selected municipal water-supply wells located in 96 model cells. The optimization analyses were based on 50- and 100-year simulations of groundwater withdrawals. The optimized total withdrawal rate for all selected wells for a 50-year simulation time was about 58.8 cubic feet per second. For an analysis in which the simulation time and head-constraint time were extended to 100 years, the optimized total withdrawal rate for all selected wells was about 53.0 cubic feet per second, demonstrating that a reduction in withdrawal rate of about 10 percent may extend the time before the hydraulic-head constraints are violated by 50 years, provided that pumping rates are optimally distributed. Analysis of simulation results showed that initially, the pumping produces water primarily by release of water from storage in the Arapahoe aquifer. However, because confining layers between the Denver and Arapahoe aquifers are thin, in less than 5 years, most of the water removed by managed-flows pumping likely would be supplied

  15. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J.; Kapteyn, Henry C.

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  16. Optimized precursor ion selection for labile ions in a linear ion trap mass spectrometer and its impact on quantification using selected reaction monitoring.

    PubMed

    Lee, Hyun-Seok; Shin, Kyong-Oh; Jo, Sung-Chan; Lee, Yong-Moon; Yim, Yong-Hyeon

    2014-12-01

    The fragmentation of fragile ions during the application of an isolation waveform for precursor ion selection and the resulting loss of isolated ion intensity is well-known in ion trap mass spectrometry (ITMS). To obtain adequate ion intensity in the selected reaction monitoring (SRM) of fragile precursor ions, a wider ion isolation width is required. However, the increased isolation width significantly diminishes the selectivity of the channels chosen for SRM, which is a serious problem for samples with complex matrices. The sensitive and selective quantification of many lipid molecules, including ceramides from real biological samples, using a linear ion trap mass spectrometer is also hindered by the same problem because of the ease of water loss from protonated ceramide ions. In this study, a method for the reliable quantification of ceramides using SRM with near unity precursor ion isolation has been developed for ITMS by utilizing alternative precursor ions generated by in-source dissociation. The selected precursor ions allow the isolation of ions with unit mass width and the selective analysis of ceramides using SRM with negligible loss of sensitivity. The quantification of C18:0-, C24:0- and C24:1-ceramides using the present method shows excellent linearity over the concentration ranges from 6 to 100, 25 to 1000 and 25 to 1000 nM, respectively. The limits of detection of C18:0-, C24:0- and C24:1-ceramides were 0.25, 0.25 and 5 fmol, respectively. The developed method was successfully applied to quantify ceramides in fetal bovine serum.

  17. Optimal location selection for the installation of urban green roofs considering honeybee habitats along with socio-economic and environmental effects.

    PubMed

    Gwak, Jae Ha; Lee, Bo Kyeong; Lee, Won Kyung; Sohn, So Young

    2017-03-15

    This study proposes a new framework for the selection of optimal locations for green roofs to achieve a sustainable urban ecosystem. The proposed framework selects building sites that can maximize the benefits of green roofs, based not only on the socio-economic and environmental benefits to urban residents, but also on the provision of urban foraging sites for honeybees. The framework comprises three steps. First, building candidates for green roofs are selected considering the building type. Second, the selected building candidates are ranked in terms of their expected socio-economic and environmental effects. The benefits of green roofs are improved energy efficiency and air quality, reduction of urban flood risk and infrastructure improvement costs, reuse of storm water, and creation of space for education and leisure. Furthermore, the estimated cost of installing green roofs is also considered. We employ spatial data to determine the expected effects of green roofs on each building unit, because the benefits and costs may vary depending on the location of the building. This is due to the heterogeneous spatial conditions. In the third step, the final building sites are proposed by solving the maximal covering location problem (MCLP) to determine the optimal locations for green roofs as urban honeybee foraging sites. As an illustrative example, we apply the proposed framework in Seoul, Korea. This new framework is expected to contribute to sustainable urban ecosystems.
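
    The third step above is a maximal covering location problem (MCLP). The abstract does not state how the MCLP is solved; a common heuristic, shown in the sketch below, greedily picks the candidate roof covering the largest uncovered demand weight. Here the demand points would be potential honeybee foraging sites within flight range of a roof; the data structures and the greedy strategy are assumptions, not the authors' formulation or solver.

      def greedy_mclp(candidates, cover, weights, p):
          # candidates : iterable of candidate roof ids
          # cover[c]   : set of demand points covered by candidate c (assumed precomputed
          #              from the foraging radius)
          # weights[d] : weight of demand point d
          # p          : number of green roofs to install
          chosen, covered = [], set()
          for _ in range(p):
              best = max((c for c in candidates if c not in chosen),
                         key=lambda c: sum(weights[d] for d in cover[c] - covered),
                         default=None)
              if best is None or not (cover[best] - covered):
                  break
              chosen.append(best)
              covered |= cover[best]
          return chosen, covered

      # Tiny hypothetical example: three candidate roofs, four demand points
      roofs = {"r1": {"a", "b"}, "r2": {"b", "c", "d"}, "r3": {"d"}}
      print(greedy_mclp(roofs.keys(), roofs, {"a": 2, "b": 1, "c": 3, "d": 1}, p=2))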

  18. Structural optimization of an aptamer generated from Ligand-Guided Selection (LIGS) resulted in high affinity variant toward mIgM expressed on Burkitt's lymphoma cell lines.

    PubMed

    Zümrüt, Hazan E; Batool, Sana; Van, Nabeela; George, Shanell; Bhandari, Sanam; Mallikaratchy, Prabodhika

    2017-03-28

    Aptamers are synthetic, short nucleic acid molecules capable of specific target recognition. Aptamers are selected using a screening method termed Systematic Evolution of Ligands by Exponential enrichment (SELEX). We have recently introduced a variant of SELEX called "Ligand-Guided-Selection" (LIGS) that allows the identification of specific aptamers against known cell-surface proteins. Utilizing LIGS, we introduced three specific aptamers against membrane-bound IgM (mIgM), which is the hallmark of B cells. Of the three aptamers selected against mIgM, an aptamer termed R1 was found to be particularly interesting because of its ability to recognize mIgM on target cells and then block anti-IgM antibodies from binding their antigen. We systematically truncated the parent aptamer R1 to design shorter variants with enhanced affinity. Importantly, herein we show that the specificity of the most optimized variant of the R1 aptamer is similar to that of the anti-IgM antibody, indicating that the specificity of the ligand utilized in selective elution of the aptamer determines the specificity of the LIGS-generated aptamer. Furthermore, we report that truncated variants of R1 are able to recognize mIgM-positive human B lymphoma BJAB cells at physiological temperature, demonstrating that LIGS-generated aptamers can be re-optimized into higher-affinity variants. Collectively, these findings show the significance of LIGS in generating highly specific aptamers with potential applications in biomedicine.

  19. A multi-objective model for closed-loop supply chain optimization and efficient supplier selection in a competitive environment considering quantity discount policy

    NASA Astrophysics Data System (ADS)

    Jahangoshai Rezaee, Mustafa; Yousefi, Samuel; Hayati, Jamileh

    2016-11-01

    Supplier selection and the allocation of optimal order quantities are two of the most important processes in closed-loop supply chains (CLSC) and reverse logistics (RL), because providing high-quality raw material is a basic requirement for a manufacturer to produce popular products and achieve greater market share. At the same time, in a competitive environment suppliers have to offer customers incentives such as discounts and enhance the quality of their products to compete with other manufacturers. Therefore, in this study, a model is presented for CLSC optimization, efficient supplier selection and order allocation considering a quantity discount policy. It is formulated as a multi-objective program based on an integrated simultaneous data envelopment analysis-Nash bargaining game. Maximizing profit and efficiency and minimizing the defective-item and delivery-delay rate functions are taken into account. Besides supplier selection, the suggested model selects refurbishing sites and determines the number of products and parts in each sector of the network. The model is solved using the global criteria method. Furthermore, based on related studies, a numerical example is examined to validate it.

  20. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of dose response is also assumed. The optimum spacing and number of points necessary for the minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
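
    The conclusion above, concentrate measurements near zero added dose plus a single high dose, can be checked with simple error propagation for the additive-dose estimate D_e = a/b obtained from a linear fit of intensity against added dose. The formula and the numbers below are a first-order illustration under the stated assumptions (equal, normally distributed errors and a linear response), not the authors' derivation.

      import numpy as np

      def dose_estimate_variance(doses, sigma, a, b):
          # Variance of D_e = a/b (extrapolated accrued dose) for a least-squares fit
          # of intensity = a + b*dose, with equal measurement errors sigma at 'doses'.
          D = np.asarray(doses, float)
          n, Sxx = len(D), np.sum((D - D.mean()) ** 2)
          var_b = sigma ** 2 / Sxx
          var_a = sigma ** 2 * (1.0 / n + D.mean() ** 2 / Sxx)
          cov_ab = -sigma ** 2 * D.mean() / Sxx
          # first-order propagation for a ratio of correlated estimates
          return var_a / b ** 2 + (a ** 2 / b ** 4) * var_b - 2 * (a / b ** 3) * cov_ab

      clustered = [0] * 8 + [50, 50]                 # replicates at zero dose + one high dose
      even = list(np.linspace(0, 50, 10))            # evenly spaced added doses
      print(dose_estimate_variance(clustered, sigma=1.0, a=5.0, b=1.0))   # smaller variance
      print(dose_estimate_variance(even, sigma=1.0, a=5.0, b=1.0))        # larger variance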

  1. Salicornia as a crop plant in temperate regions: selection of genetically characterized ecotypes and optimization of their cultivation conditions

    PubMed Central

    Singh, Devesh; Buhmann, Anne K.; Flowers, Tim J.; Seal, Charlotte E.; Papenbrock, Jutta

    2014-01-01

    Rising sea levels and salinization of groundwater due to global climate change result in fast-dwindling sources of freshwater. Therefore, it is important to find alternatives to grow food crops and vegetables. Halophytes are naturally evolved salt-tolerant plants that are adapted to grow in environments that inhibit the growth of most glycophytic crop plants substantially. Members of the Salicornioideae are promising candidates for saline agriculture due to their high tolerance to salinity. Our aim was to develop genetically characterized lines of Salicornia and Sarcocornia for further breeding and to determine optimal cultivation conditions. To obtain a large and diverse genetic pool, seeds were collected from different countries and ecological conditions. The external transcribed spacer (ETS) sequence of 62 Salicornia and Sarcocornia accessions was analysed: ETS sequence data showed a clear distinction between the two genera and between different Salicornia taxa. However, in some cases the ETS was not sufficiently variable to resolve morphologically distinct species. For the determination of optimal cultivation conditions, experiments on germination, seedling establishment and growth to a harvestable size were performed using different accessions of Salicornia spp. Experiments revealed that the percentage germination was greatest at lower salinities and with temperatures of 20/10 °C (day/night). Salicornia spp. produced more harvestable biomass in hydroponic culture than in sand culture, but the nutrient concentration requires optimization as hydroponically grown plants showed symptoms of stress. Salicornia ramosissima produced more harvestable biomass than Salicornia dolichostachya in artificial sea water containing 257 mM NaCl. Based on preliminary tests on ease of cultivation, gain in biomass, morphology and taste, S. dolichostachya was investigated in more detail, and the optimal salinity for seedling establishment was found to be 100 mM. Harvesting of S

  2. Salicornia as a crop plant in temperate regions: selection of genetically characterized ecotypes and optimization of their cultivation conditions.

    PubMed

    Singh, Devesh; Buhmann, Anne K; Flowers, Tim J; Seal, Charlotte E; Papenbrock, Jutta

    2014-11-10

    Rising sea levels and salinization of groundwater due to global climate change result in fast-dwindling sources of freshwater. Therefore, it is important to find alternatives to grow food crops and vegetables. Halophytes are naturally evolved salt-tolerant plants that are adapted to grow in environments that inhibit the growth of most glycophytic crop plants substantially. Members of the Salicornioideae are promising candidates for saline agriculture due to their high tolerance to salinity. Our aim was to develop genetically characterized lines of Salicornia and Sarcocornia for further breeding and to determine optimal cultivation conditions. To obtain a large and diverse genetic pool, seeds were collected from different countries and ecological conditions. The external transcribed spacer (ETS) sequence of 62 Salicornia and Sarcocornia accessions was analysed: ETS sequence data showed a clear distinction between the two genera and between different Salicornia taxa. However, in some cases the ETS was not sufficiently variable to resolve morphologically distinct species. For the determination of optimal cultivation conditions, experiments on germination, seedling establishment and growth to a harvestable size were performed using different accessions of Salicornia spp. Experiments revealed that the percentage germination was greatest at lower salinities and with temperatures of 20/10 °C (day/night). Salicornia spp. produced more harvestable biomass in hydroponic culture than in sand culture, but the nutrient concentration requires optimization as hydroponically grown plants showed symptoms of stress. Salicornia ramosissima produced more harvestable biomass than Salicornia dolichostachya in artificial sea water containing 257 mM NaCl. Based on preliminary tests on ease of cultivation, gain in biomass, morphology and taste, S. dolichostachya was investigated in more detail, and the optimal salinity for seedling establishment was found to be 100 mM. Harvesting of S

  3. Optimization of dispersive liquid-liquid microextraction for the selective determination of trace amounts of palladium by flame atomic absorption spectroscopy.

    PubMed

    Kokya, Taher Ahmadzadeh; Farhadi, Khalil

    2009-09-30

    A new, simple and reliable method for the rapid and selective extraction and determination of trace levels of Pd²⁺ was developed based on dispersive liquid-liquid microextraction preconcentration and flame atomic absorption spectrometric detection. In the proposed approach, thioridazine HCl (TRH) was used as a Pd²⁺-selective complexing agent. The parameters affecting the extraction recovery were studied and optimized using two optimization methods: factorial design and central composite design (CCD). The factorial design showed that the best extraction efficiency was obtained using ethanol and chloroform as the dispersive and extraction solvents, respectively. CCD optimization resulted in 1.50 mL of dispersive solvent, 0.15 mL of extraction solvent, 0.45 mg of TRH and 250 mg of potassium chloride per 5 mL of sample solution. Under the optimum conditions the calibration graph was linear over the range 100-2000 μg L⁻¹. The average relative standard deviation was 0.7% for five repeated determinations. The limit of detection was 90 μg L⁻¹. The average enrichment factor and recovery were 45.7 and 74.2%, respectively. The method was successfully applied to the determination of trace amounts of palladium in real water samples.

  4. Decision Support Model to Select the Optimal Municipal Solid Waste Management Strategy at United States Air Force Installations

    DTIC Science & Technology

    2007-11-02

    The United States Air Force has recently defined three objectives in developing strategies regarding the management of municipal solid waste at the...insight concerning the selection and implementation of a municipal solid waste management policy.

  5. Decision Support Model to Select the Optimal Municipal Solid Waste Management Policy at United States Air Force Installations.

    DTIC Science & Technology

    1997-03-03

    The United States Air Force has recently defined three objectives in developing strategies regarding the management of municipal solid waste at the...insight concerning the selection and implementation of a municipal solid waste management policy.

  6. Optimal Wavelengths Selection Using Hierarchical Evolutionary Algorithm for Prediction of Firmness and Soluble Solids Content in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...

  7. Selective CB2 receptor agonists. Part 2: Structure-activity relationship studies and optimization of proline-based compounds.

    PubMed

    Riether, Doris; Zindell, Renee; Wu, Lifen; Betageri, Raj; Jenkins, James E; Khor, Someina; Berry, Angela K; Hickey, Eugene R; Ermann, Monika; Albrecht, Claudia; Ceci, Angelo; Gemkow, Mark J; Nagaraja, Nelamangala V; Romig, Helmut; Sauer, Achim; Thomson, David S

    2015-02-01

    Through a ligand-based pharmacophore model, (S)-proline-based compounds were identified as potent cannabinoid receptor 2 (CB2) agonists with high selectivity over the cannabinoid receptor 1 (CB1). Structure-activity relationship investigations for this compound class led to the oxo-proline compounds 21 and 22, which combine an impressive CB1 selectivity profile with good pharmacokinetic properties. In a streptozotocin-induced diabetic neuropathy model, 22 demonstrated a dose-dependent reversal of mechanical hyperalgesia.

  8. Discovery and optimization of new benzimidazole- and benzoxazole-pyrimidone selective PI3Kβ inhibitors for the treatment of phosphatase and TENsin homologue (PTEN)-deficient cancers.

    PubMed

    Certal, Victor; Halley, Frank; Virone-Oddos, Angela; Delorme, Cécile; Karlsson, Andreas; Rak, Alexey; Thompson, Fabienne; Filoche-Rommé, Bruno; El-Ahmad, Youssef; Carry, Jean-Christophe; Abecassis, Pierre-Yves; Lejeune, Pascale; Vincent, Loic; Bonnevaux, Hélène; Nicolas, Jean-Paul; Bertrand, Thomas; Marquette, Jean-Pierre; Michot, Nadine; Benard, Tsiala; Below, Peter; Vade, Isabelle; Chatreaux, Fabienne; Lebourg, Gilles; Pilorge, Fabienne; Angouillant-Boniface, Odile; Louboutin, Audrey; Lengauer, Christoph; Schio, Laurent

    2012-05-24

    Most of the phosphoinositide-3 kinase (PI3K) kinase inhibitors currently in clinical trials for cancer treatment exhibit pan PI3K isoform profiles. Single PI3K isoforms differentially control tumorigenesis, and PI3Kβ has emerged as the isoform involved in the tumorigenicity of PTEN-deficient tumors. Herein we describe the discovery and optimization of a new series of benzimidazole- and benzoxazole-pyrimidones as small molecular mass PI3Kβ-selective inhibitors. Starting with compound 5 obtained from a one-pot reaction via a novel intermediate 1, medicinal chemistry optimization led to the discovery of compound 8, which showed a significant activity and selectivity for PI3Kβ and adequate in vitro pharmacokinetic properties. The X-ray costructure of compound 8 in PI3Kδ showed key interactions and structural features supporting the observed PI3Kβ isoform selectivity. Compound 8 achieved sustained target modulation and tumor growth delay at well tolerated doses when administered orally to SCID mice implanted with PTEN-deficient human tumor xenografts.

  9. Optimization of passively mode-locked Nd:GdVO4 laser with the selectable pulse duration 15-70 ps

    NASA Astrophysics Data System (ADS)

    Frank, Milan; Jelínek, Michal; Vyhlídal, David; Kubeček, Václav

    2016-12-01

    In this paper, the optimization of a continuously diode-pumped Nd:GdVO4 laser oscillator in bounce geometry, passively mode-locked using a semiconductor saturable absorber mirror, is presented. Previously, a Nd:GdVO4 laser system generating 30 ps pulses with an average output power of 6.9 W at a repetition rate of 200 MHz and a wavelength of 1063 nm was reported. Here we demonstrate up to a three-fold increase in peak power due to the optimization of mode matching in the laser resonator. Depending on the oscillator configuration, we obtained stable continuous mode-locked operation with pulse durations selectable from 15 ps to 70 ps, an average output power of 7 W and a repetition rate of 150 MHz.

  10. Discovery and optimization of potent and selective functional antagonists of the human adenosine A2B receptor.

    PubMed

    Bedford, Simon T; Benwell, Karen R; Brooks, Teresa; Chen, Ijen; Comer, Mike; Dugdale, Sarah; Haymes, Tim; Jordan, Allan M; Kennett, Guy A; Knight, Anthony R; Klenke, Burkhard; LeStrat, Loic; Merrett, Angela; Misra, Anil; Lightowler, Sean; Padfield, Anthony; Poullennec, Karine; Reece, Mark; Simmonite, Heather; Wong, Melanie; Yule, Ian A

    2009-10-15

    We herein report the discovery of a novel class of antagonists of the human adenosine A2B receptor. This low molecular weight scaffold has been optimized to offer derivatives with potential utility for the alleviation of conditions associated with this receptor subtype, such as nociception, diabetes, asthma and COPD. Furthermore, preliminary pharmacokinetic analysis has revealed compounds with profiles suitable for either inhaled or systemic routes of administration.

  11. Selection of Optimal Hyper-Parameters for Estimation of Uncertainty in MRI-TRUS Registration of the Prostate

    PubMed Central

    Janoos, Firdaus; Pursley, Jennifer; Fedorov, Andriy; Tempany, Clare; Cormack, Robert A.; Wells, William M.

    2013-01-01

    Transrectal ultrasound (TRUS) facilitates intra-treatment delineation of the prostate gland (PG) to guide insertion of brachytherapy seeds, but the prostate substructure and apex are not always visible which may make the seed placement sub-optimal. Based on an elastic model of the prostate created from MRI, where the prostate substructure and apex are clearly visible, we use a Bayesian approach to estimate the posterior distribution on deformations that aligns the pre-treatment MRI with intra-treatment TRUS. Without apex information in TRUS, the posterior prediction of the location of the prostate boundary, and the prostate apex boundary in particular, is mainly determined by the pseudo stiffness hyper-parameter of the prior distribution. We estimate the optimal value of the stiffness through likelihood maximization that is sensitive to the accuracy as well as the precision of the posterior prediction at the apex boundary. From a data-set of 10 pre- and intra-treatment prostate images with ground truth delineation of the total PG, 4 cases were used to establish an optimal stiffness hyper-parameter when 15% of the prostate delineation was removed to simulate lack of apex information in TRUS, while the remaining 6 cases were used to cross-validate the registration accuracy and uncertainty over the PG and in the apex. PMID:23286120

  12. Highly Selective Bioconversion of Ginsenoside Rb1 to Compound K by the Mycelium of Cordyceps sinensis under Optimized Conditions.

    PubMed

    Wang, Wei-Nan; Yan, Bing-Xiong; Xu, Wen-Di; Qiu, Ye; Guo, Yun-Long; Qiu, Zhi-Dong

    2015-10-23

    Compound K (CK), a highly active and bioavailable derivative obtained from protopanaxadiol ginsenosides, displays a wide variety of pharmacological properties, especially antitumor activity. However, the inadequacy of natural sources limits its application in the pharmaceutical industry. In this study, we first discovered that Cordyceps sinensis is a potent biocatalyst for the biotransformation of ginsenoside Rb1 into CK. After a series of investigations of the biotransformation parameters, the optimal composition of the biotransformation culture was found to be lactose, soybean powder and MgSO₄ without pH control. An optimum temperature of 30 °C for the biotransformation process was also identified within the tested range of 25-50 °C. A biotransformation pathway of Rb1→Rd→F2→CK was then established using high performance liquid chromatography/quadrupole time-of-flight mass spectrometry (HPLC-Q-TOF-MS). Our results demonstrated that the molar bioconversion rate of Rb1 to CK was more than 82% and that the purity of CK produced by C. sinensis under the optimized conditions was more than 91%. In conclusion, the combination of C. sinensis and the optimized conditions is applicable to the industrial preparation of CK for medicinal purposes.

  13. Differential-Evolution algorithm based optimization for the site selection of groundwater production wells with the consideration of the vulnerability concept

    NASA Astrophysics Data System (ADS)

    Elçi, Alper; Ayvaz, M. Tamer

    2014-04-01

    The objective of this study is to present an optimization approach for determining the locations of new groundwater production wells where groundwater is relatively less susceptible to contamination (i.e., clean groundwater is more likely to be obtained) and where, for a prescribed set of constraints, the pumping rate is maximized or the cost of well installation and operation is minimized. The approach also finds locations that lie in areas suitable for new groundwater exploration with respect to land use. A regional-scale groundwater flow model is coupled with a hybrid optimization model that uses the Differential Evolution (DE) algorithm and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method as the global and local optimizers, respectively. Several constraints, such as the depth to the water table, total well length and the restriction of seawater intrusion, are considered in the optimization process. The optimization problem can be formulated either as the maximization of the pumping rate or as the minimization of the total cost of well installation and pumping operation from existing and new wells. Pumping rates of existing wells that are prone to seawater intrusion are optimized to prevent groundwater flux from the shoreline towards these wells. The proposed simulation-optimization model is demonstrated on an existing groundwater flow model for the Tahtalı watershed in Izmir, Turkey. For the demonstration study, the model identifies locations and pumping rates for up to four new wells in the cost-minimization problem and one new well in the maximization problem. All new well locations in the optimized solution coincide with areas of relatively low groundwater vulnerability. Considering all solutions of the demonstration study, groundwater vulnerability indices for new well locations range from 29.64 to 40.48 (on a scale of 0-100, where 100 indicates high vulnerability). All identified wells are located relatively close to each other. This implies that the method pinpoints the
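
    As a minimal illustration of the simulation-optimization coupling described above, the decision variables can be taken as the pumping rates of candidate wells, with the groundwater-flow model replaced by a toy linear drawdown response and the head and vulnerability constraints folded into a penalty term. SciPy's differential_evolution with polishing is used here as a stand-in for the DE/BFGS hybrid (its polish step uses L-BFGS-B rather than BFGS); the response matrix, bounds and limits are invented for the example.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(0)
      n_wells = 4
      response = rng.uniform(0.01, 0.05, (n_wells, n_wells))   # toy drawdown per unit pumping
      max_drawdown = 2.0                                        # stand-in head constraint

      def objective(q):
          drawdown = response @ q
          penalty = 1e4 * np.sum(np.maximum(drawdown - max_drawdown, 0.0) ** 2)
          return -np.sum(q) + penalty                           # maximize total pumping rate

      result = differential_evolution(objective, bounds=[(0.0, 50.0)] * n_wells,
                                      polish=True, seed=1)
      print(-result.fun, result.x)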

  14. A Study of the Relationship between Cognitive Emotion Regulation, Optimism, and Perceived Stress among Selected Teachers in Lutheran Schools

    ERIC Educational Resources Information Center

    Gliebe, Sudi Kate

    2012-01-01

    Problem: The problem of this study was to determine the relationship between perceived stress, as measured by the Perceived Stress Scale (PSS), and a specific set of predictor variables among selected teachers in Lutheran schools in the United States. These variables were cognitive emotion regulation strategies (positive reappraisal and…

  15. Selection of Learning Tasks Based on Performance and Cognitive Load Scores as a Way To Optimize the Learning Process.

    ERIC Educational Resources Information Center

    Salden, Ron J. C. M.; Paas, Fred; van Merrienboer, Jeroen J. G.

    To attain highly efficient instructional conditions, it is important to adapt instruction to the individual trainee. This so-called personalization of training by dynamic/automatic task selection is the focus of this paper. Recently, cognitive load measures have been proposed as a useful addition to conventional performance measures like speed and…

  16. Development and Optimization of Piperidyl-1,2,3-Triazole Ureas as Selective Chemical Probes of Endocannabinoid Biosynthesis

    PubMed Central

    Hsu, Ku-Lung; Tsuboi, Katsunori; Whitby, Landon R.; Speers, Anna E.; Pugh, Holly; Inloes, Jordon; Cravatt, Benjamin F.

    2014-01-01

    We have previously shown that 1,2,3-triazole ureas (1,2,3-TUs) act as a versatile class of irreversible serine hydrolase inhibitors that can be tuned to create selective probes for diverse members of this large enzyme class, including diacylglycerol lipase-β (DAGLβ), a principal biosynthetic enzyme for the endocannabinoid 2-arachidonoylglycerol (2-AG). Here, we provide a detailed account of the discovery, synthesis, and structure-activity relationship (SAR) of (2-substituted)-piperidyl-1,2,3-TUs that selectively inactivate DAGLβ in living systems. Key to success was the use of activity-based protein profiling (ABPP) with broad-spectrum and tailored activity-based probes to guide our medicinal chemistry efforts. We also describe an expanded repertoire of DAGL-tailored activity-based probes that includes biotinylated and alkyne agents for enzyme enrichment coupled with mass spectrometry-based proteomics and assessment of proteome-wide selectivity. Our findings highlight the broad utility of 1,2,3-TUs for serine hydrolase inhibitor development and their application to create selective probes of endocannabinoid biosynthetic pathways. PMID:24152245

  17. iVAX: An integrated toolkit for the selection and optimization of antigens and the design of epitope-driven vaccines

    PubMed Central

    Moise, Leonard; Gutierrez, Andres; Kibria, Farzana; Martin, Rebecca; Tassone, Ryan; Liu, Rui; Terry, Frances; Martin, Bill; De Groot, Anne S

    2015-01-01

    Computational vaccine design, also known as computational vaccinology, encompasses epitope mapping, antigen selection and immunogen design using computational tools. The iVAX toolkit is an integrated set of tools that has been in development since 1998 by De Groot and Martin. It comprises a suite of immunoinformatics algorithms for triaging candidate antigens, selecting immunogenic and conserved T cell epitopes, eliminating regulatory T cell epitopes, and optimizing antigens for immunogenicity and protection against disease. iVAX has been applied to vaccine development programs for emerging infectious diseases, cancer antigens and biodefense targets. Several iVAX vaccine design projects have had success in pre-clinical studies in animal models and are progressing toward clinical studies. The toolkit now incorporates a range of immunoinformatics tools for infectious disease and cancer immunotherapy vaccine design. This article will provide a guide to the iVAX approach to computational vaccinology. PMID:26155959

  18. End-to-end sensor simulation for spectral band selection and optimization with application to the Sentinel-2 mission.

    PubMed

    Segl, Karl; Richter, Rudolf; Küster, Theres; Kaufmann, Hermann

    2012-02-01

    An end-to-end sensor simulation is a proper tool for the prediction of the sensor's performance over a range of conditions that cannot be easily measured. In this study, such a tool has been developed that enables the assessment of the optimum spectral resolution configuration of a sensor based on key applications. It employs the spectral molecular absorption and scattering properties of materials that are used for the identification and determination of the abundances of surface and atmospheric constituents and their interdependence on spatial resolution and signal-to-noise ratio as a basis for the detailed design and consolidation of spectral bands for the future Sentinel-2 sensor. The developed tools allow the computation of synthetic Sentinel-2 spectra that form the frame for the subsequent twofold analysis of bands in the atmospheric absorption and window regions. One part of the study comprises the assessment of optimal spatial and spectral resolution configurations for those bands used for atmospheric correction, optimized with regard to the retrieval of aerosols, water vapor, and the detection of cirrus clouds. The second part of the study presents the optimization of thematic bands, mainly driven by the spectral characteristics of vegetation constituents and minerals. The investigation is performed for different wavelength ranges because most remote sensing applications require the use of specific band combinations rather than single bands. The results from the important "red-edge" and the "short-wave infrared" domains are presented. The recommended optimum spectral design predominantly confirms the sensor parameters given by the European Space Agency. The system is capable of retrieving atmospheric and geobiophysical parameters with enhanced quality compared to existing multispectral sensors. Minor spectral changes of single bands are discussed in the context of typical remote sensing applications, supplemented by the recommendation of a few new bands for

  19. Optimization of an extraction protocol for organic matter from soils and sediments using high resolution mass spectrometry: selectivity and biases

    NASA Astrophysics Data System (ADS)

    Chu, R. K.; Tfaily, M. M.; Tolic, N.; Kyle, J. E.; Robinson, E. R.; Hess, N. J.; Paša-Tolić, L.

    2015-12-01

    Soil organic matter (SOM) is a complex mixture of above and belowground plant litter and microbial residues, and is a key reservoir for carbon (C) and nutrient biogeochemical cycling in different ecosystems. A limited understanding of the molecular composition of SOM limits our ability to routinely decipher chemical processes within soil and to predict how terrestrial C fluxes will respond to changing climatic conditions. Here, we show that the choice of solvent can be used to selectively extract different compositional fractions from SOM, either to target a specific class of compounds or to gain a better understanding of the entire composition of the soil sample, using 12 T Fourier transform ion cyclotron resonance mass spectrometry. Specifically, we found that hexane and chloroform were selective for lipid-like compounds with very low O:C ratios; water was selective for carbohydrates with high O:C ratios; acetonitrile preferentially extracted lignin, condensed structures, and tannin polyphenolic compounds with O:C > 0.5; and methanol showed higher selectivity towards lignin and lipid compounds characterized by relatively low O:C (< 0.5). Together, hexane, chloroform, methanol, acetonitrile and water increase the number and types of organic molecules extracted from soil across a broad range of chemically diverse soil types. Since each solvent extracts a selective group of compounds, using a suite of solvents with varying polarity results in a more comprehensive representation of the diversity of organic molecules present in soil and a better representation of the whole spectrum of substrates available to microorganisms. Moreover, we have developed a sequential extraction protocol that permits sampling diverse classes of organic compounds while minimizing ionization competition during ESI, increasing sample throughput and decreasing sample volume. This allowed us to hypothesize about possible chemical reactions relating classes of organic molecules that reflect abiotic

  20. Optimization of cell line development in the GS-CHO expression system using a high-throughput, single cell-based clone selection system.

    PubMed

    Nakamura, Tsuyoshi; Omasa, Takeshi

    2015-09-01

    Therapeutic antibodies are commonly produced by high-expressing, clonal and recombinant Chinese hamster ovary (CHO) cell lines. Currently, CHO cells dominate as a commercial production host because of their ease of use, established regulatory track record, and safety profile. CHO-K1SV is a suspension, protein-free-adapted CHO-K1-derived cell line employing the glutamine synthetase (GS) gene expression system (GS-CHO expression system). The selection of high-producing mammalian cell lines is a crucial step in process development for the production of therapeutic antibodies. In general, cloning by the limiting dilution method is used to isolate high-producing monoclonal CHO cells. However, the limiting dilution method is time consuming and has a low probability of monoclonality. To minimize the duration and increase the probability of obtaining high-producing clones with high monoclonality, an automated single cell-based clone selector, the ClonePix FL system, is available. In this study, we applied the high-throughput ClonePix FL system for cell line development using CHO-K1SV cells and investigated efficient conditions for single cell-based clone selection. CHO-K1SV cell growth at the pre-picking stage was improved by optimizing the formulation of semi-solid medium. The efficiency of picking and cell growth at the post-picking stage was improved by optimization of the plating time without decreasing the diversity of clones. The conditions for selection, including the medium formulation, were the most important factors for the single cell-based clone selection system to construct a high-producing CHO cell line.

  1. A distributed multichannel demand-adaptive P2P VoD system with optimized caching and neighbor-selection

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Chen, Minghua; Parekh, Abhay; Ramchandran, Kannan

    2011-09-01

    We design a distributed multi-channel P2P Video-on-Demand (VoD) system using "plug-and-play" helpers. Helpers are heterogeneous "micro-servers" with limited storage, bandwidth and number of users they can serve simultaneously. Our proposed system has the following salient features: (1) it jointly optimizes over helper-user connection topology, video storage distribution and transmission bandwidth allocation; (2) it minimizes server load, and is adaptable to varying supply and demand patterns across multiple video channels irrespective of video popularity; and (3) it is fully distributed and requires little or no maintenance overhead. The combinatorial nature of the problem and the system demand for distributed algorithms make the problem uniquely challenging. By utilizing Lagrangian decomposition and Markov chain approximation based arguments, we address this challenge by designing two distributed algorithms running in tandem: a primal-dual storage and bandwidth allocation algorithm and a "soft-worst-neighbor-choking" topology-building algorithm. Our scheme provably converges to a near-optimal solution, and is easy to implement in practice. Packet-level simulation results show that the proposed scheme achieves minimum server load under highly heterogeneous combinations of supply and demand patterns, and is robust to system dynamics of user/helper churn, user/helper asynchrony, and random delays in the network.

  2. Organometallic approach to polymer-protected antibacterial silver nanoparticles: optimal nanoparticle size-selection for bacteria interaction

    NASA Astrophysics Data System (ADS)

    Crespo, Julian; García-Barrasa, Jorge; López-de-Luzuriaga, José M.; Monge, Miguel; Olmos, M. Elena; Sáenz, Yolanda; Torres, Carmen

    2012-12-01

    The optimal size-specific affinity of silver nanoparticles (Ag NPs) towards E. coli bacteria has been studied. For this purpose, Ag NPs coated with polyvinylpyrrolidone (PVP) and cellulose acetate (CA) have been prepared using an organometallic approach. The complex NBu4[Ag(C6F5)2] has been treated with AgClO4 in a 1:1 molar ratio giving rise to the nanoparticle precursor [Ag(C6F5)] in solution. Addition of an excess of PVP (1) or CA (2) and 5 h of reflux in tetrahydrofuran (THF) at 66 °C leads to Ag NPs of small size (4.8 ± 3.0 nm for PVP-Ag NPs and 3.0 ± 1.2 nm for CA-Ag NPs) that coexist in both cases with larger nanoparticles between 7 and 25 nm. Both nanomaterials display a high antibacterial effectiveness against E. coli. The TEM analysis of the nanoparticle-bacterial cell membrane interaction shows an optimal size-specific affinity for PVP-Ag NPs of 5.4 ± 0.7 nm in the presence of larger size silver nanoparticles.

  3. Using multi-criteria decision making for selection of the optimal strategy for municipal solid waste management.

    PubMed

    Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica

    2016-09-01

    Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, i.e. SAW (simple additive weighting method) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies. The contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management strategies, and they were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. Then, the proposed strategies were ranked in the form of tables and diagrams obtained based on both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy.
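
    Both ranking steps are standard and short to implement. The sketch below shows generic SAW and TOPSIS scoring over a decision matrix of strategies by criteria; the weights, the eight IWM2 parameters and their values for Kragujevac are not reproduced here, so the numbers in the example are placeholders.

      import numpy as np

      def saw_scores(X, w, benefit):
          # Simple additive weighting: linear max/min normalization, then weighted sum.
          X = np.asarray(X, float)
          N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
          return N @ w

      def topsis_scores(X, w, benefit):
          # TOPSIS: relative closeness to the ideal solution.
          X = np.asarray(X, float)
          V = X / np.linalg.norm(X, axis=0) * w                 # vector-normalized, weighted
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_pos = np.linalg.norm(V - ideal, axis=1)
          d_neg = np.linalg.norm(V - anti, axis=1)
          return d_neg / (d_pos + d_neg)

      # Placeholder matrix: 3 strategies x 4 criteria (first two benefit, last two cost)
      X = [[3.1, 0.8, 120, 45], [2.7, 0.9, 100, 60], [3.4, 0.7, 140, 40]]
      w, benefit = np.full(4, 0.25), np.array([True, True, False, False])
      print(saw_scores(X, w, benefit), topsis_scores(X, w, benefit))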

  4. A graph-theoretic approach on optimizing informed-node selection in multi-agent tracking control

    NASA Astrophysics Data System (ADS)

    Shi, Guodong; Sou, Kin Cheong; Sandberg, Henrik; Johansson, Karl Henrik

    2014-01-01

    A graph optimization problem for a multi-agent leader-follower problem is considered. In a multi-agent system with n followers and one leader, each agent’s goal is to track the leader using the information obtained from its neighbors. The neighborhood relationship is defined by a directed communication graph where k agents, designated as informed agents, can become neighbors of the leader. This paper establishes that, for any given strongly connected communication graph with k informed agents, all agents will converge to the leader. In addition, an upper bound and a lower bound on the convergence rate are obtained. These bounds are shown to depend explicitly on the maximal distance from the leader to the followers. The dependence between this distance and the exact convergence rate is verified by empirical studies. We then show that minimizing this maximal distance is the metric k-center problem from classical combinatorial optimization, which can be solved approximately. Numerical examples are given to illustrate the properties of the approximate solutions.
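
    The k-center reduction mentioned above can be approximately solved with the classic farthest-first traversal, a 2-approximation. The sketch below works on a precomputed shortest-path distance matrix of the communication graph; treating every node as a feasible informed node and the specific tie-breaking are assumptions of this sketch, not the authors' algorithm.

      def greedy_k_center(dist, k):
          # dist[i][j]: shortest-path distance between nodes i and j; returns k centers
          # (informed nodes) chosen to keep the maximal center-to-node distance small.
          n = len(dist)
          centers = [0]                                    # arbitrary first center
          d_to_center = [dist[0][j] for j in range(n)]
          while len(centers) < k:
              nxt = max(range(n), key=lambda i: d_to_center[i])
              centers.append(nxt)
              d_to_center = [min(d_to_center[i], dist[nxt][i]) for i in range(n)]
          return centers, max(d_to_center)

      # 5-node path graph: greedy picks the endpoints and achieves radius 2
      # (the optimum is 1 with centers 1 and 3, within the factor-2 guarantee).
      path = [[abs(i - j) for j in range(5)] for i in range(5)]
      print(greedy_k_center(path, k=2))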

  5. Development of functional beverages from blends of Hibiscus sabdariffa extract and selected fruit juices for optimal antioxidant properties.

    PubMed

    Ogundele, Oluwatoyin M A; Awolu, Olugbenga O; Badejo, Adebanjo A; Nwachukwu, Ifeanyi D; Fagbemi, Tayo N

    2016-09-01

    The demand for functional foods and drinks with health benefits is on the increase. The synergistic effect of mixing two or more of such drinks cannot be overemphasized. This study formulated blends of two or more of pineapple juice, orange juice, carrot juice, and Hibiscus sabdariffa extract (HSE) and investigated their antioxidant properties in order to obtain a combination with optimal antioxidant properties. The experimental design used the optimal mixture model of response surface methodology, which generated twenty experimental runs with antioxidant properties as the responses. The DPPH (1,1-diphenyl-2-picrylhydrazyl) and ABTS [2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid)] radical scavenging abilities, ferric reducing antioxidant potential (FRAP), vitamin C, total phenolic, and total carotenoid contents of the formulations were evaluated as measures of antioxidant capacity. In all the mixtures, formulations having HSE as part of the mixture showed the highest antioxidant potential. The statistical analyses showed that the formulation containing pineapple, carrot, orange, and HSE at 40.00, 16.49, 17.20, and 26.30%, respectively, produced the optimum antioxidant potential and was acceptable to a research laboratory guidance panel, making such blends viable ingredients for the production of functional beverages possessing important antioxidant properties with potential health benefits.
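
    As a rough illustration of the mixture-design step, the sketch below fits a Scheffé quadratic mixture model to hypothetical antioxidant responses and maximizes it over the four-component simplex. The data, the single response, and the open bounds are assumptions for illustration, not the study's twenty-run optimal mixture design.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: mixture proportions (pineapple, carrot, orange, HSE)
# and a single antioxidant response (e.g., % radical scavenging).
X = np.array([
    [1.00, 0.00, 0.00, 0.00],
    [0.00, 1.00, 0.00, 0.00],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
    [0.50, 0.50, 0.00, 0.00],
    [0.50, 0.00, 0.50, 0.00],
    [0.50, 0.00, 0.00, 0.50],
    [0.00, 0.50, 0.50, 0.00],
    [0.00, 0.50, 0.00, 0.50],
    [0.00, 0.00, 0.50, 0.50],
    [0.25, 0.25, 0.25, 0.25],
])
y = np.array([42., 38., 40., 65., 41., 43., 60., 40., 58., 59., 55.])

def scheffe_terms(x):
    """Scheffé quadratic terms: all x_i plus all pairwise products x_i*x_j."""
    x = np.atleast_2d(x)
    pairs = [x[:, i] * x[:, j] for i in range(4) for j in range(i + 1, 4)]
    return np.column_stack([x] + pairs)

# Least-squares fit of the mixture model (no intercept in Scheffé form).
beta, *_ = np.linalg.lstsq(scheffe_terms(X), y, rcond=None)

# Maximize the predicted response over the simplex (proportions sum to 1).
res = minimize(lambda x: -(scheffe_terms(x) @ beta)[0],
               x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4,
               constraints={"type": "eq", "fun": lambda x: x.sum() - 1.0})
print("optimal proportions (pineapple, carrot, orange, HSE):", np.round(res.x, 3))
```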

  6. Optimizing liquid waste treatment processing in PWRs: focus on modeling of the variation of ion-exchange resins selectivity coefficients

    SciTech Connect

    Gressier, Frederic; Van der Lee, Jan; Schneider, Helene; Bachet, Martin; Catalette, Hubert

    2007-07-01

    A bibliographic survey has highlighted the essential role of selectivity in resin efficiency, especially the variation of selectivity coefficients as a function of the resin saturation state and the operating conditions. This phenomenon has been confirmed experimentally but had not yet been implemented in an ion-exchange model specific to resins. This paper reviews the state of the art in predicting the sorption capacity of ion-exchange resins. Different models accounting for ion activities inside the resin phase are available, and the values found in the literature are compared with our results. The results of sorption experiments of cobalt chloride on a strong cationic gel-type resin used in French PWRs are presented. The variation of the selectivity coefficient with the cobalt equivalent fraction is plotted, and the parameters determined from this curve are incorporated into a new physico-chemical law. Implementation of this model in the chemical speciation simulation code CHESS makes it possible to study the overall effect of this approach on sorption in a batch system. (authors)
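
    A minimal numerical illustration of the selectivity-coefficient variation discussed above is sketched below: apparent selectivity coefficients for a binary Co2+/Na+ exchange are computed from hypothetical batch equilibrium data and fitted as a function of the cobalt equivalent fraction. All values, the Na-form resin, and the linear-in-fraction law are assumptions for illustration, not the CHESS implementation.

```python
import numpy as np

# Hypothetical batch equilibrium data for Co2+/Na+ exchange on a cation resin.
# y_Co : equivalent fraction of cobalt on the resin
# c_Co, c_Na : solution concentrations at equilibrium (eq/L)
y_Co = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.85])
c_Co = np.array([1e-4, 4e-4, 1.2e-3, 3.0e-3, 7.0e-3, 1.5e-2])
c_Na = np.full_like(y_Co, 0.02)

y_Na = 1.0 - y_Co

# Apparent selectivity coefficient for 2 RNa + Co2+ <-> R2Co + 2 Na+,
# written in equivalent fractions (concentration-based, activities ignored).
K_app = (y_Co * c_Na**2) / (y_Na**2 * c_Co)

# Empirical law: log K as a linear function of the cobalt equivalent fraction,
# analogous to fitting the selectivity/loading curve discussed above.
slope, intercept = np.polyfit(y_Co, np.log10(K_app), 1)
print(f"log10 K_app = {intercept:.3f} + {slope:.3f} * y_Co")
```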

  7. Solute-solvent interactions in micellar electrokinetic chromatography. 6. Optimization of the selectivity of lithium dodecyl sulfate-lithium perfluorooctanesulfonate mixed micellar buffers.

    PubMed

    Fuguet, Elisabet; Ràfols, Clara; Torres-Lapasió, José Ramón; García-Alvarez-Coque, María Celia; Bosch, Elisabeth; Rosés, Martí

    2002-09-01

    The optimization of the composition of mixed surfactants used as micellar electrokinetic chromatography (MEKC) pseudostationary phases is proposed as an effective method for the separation of complex mixtures of analytes. The solvation parameter model is used to select two surfactants (lithium dodecyl sulfate, LDS, and lithium perfluorooctanesulfonate, LPFOS) with contrasting solvation properties. Combining these two surfactants allows the solvation properties of the MEKC pseudostationary phase to be varied over a wide range; thus, suitably varying the proportion of the two surfactants provides effective control of selectivity in such systems. An algorithm that predicts the overall resolution of a given mixture of compounds is described and applied to optimize the composition of the mixed surfactant for the separation of the mixture. The algorithm is based on the calculation of peak purities on simulated chromatograms as a function of the composition of the mixed LDS/LPFOS micellar buffer, using data at several micellar buffer compositions. Successful separations were achieved for mixtures containing up to 20 compounds, in less than 12 min.
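
    The composition-optimization idea can be sketched with a deliberately simplified stand-in: instead of peak purities on simulated chromatograms, the fragment below linearly interpolates migration times between the two pure-surfactant systems and grid-searches the LDS fraction that maximizes the worst-case resolution. The migration times and peak width are hypothetical, and the criterion is not the one used in the paper.

```python
import numpy as np

# Hypothetical migration times (min) of four analytes in the two pure systems.
t_lds   = np.array([5.2, 6.0, 6.4, 9.1])     # 100% LDS buffer
t_lpfos = np.array([6.1, 5.5, 7.9, 8.3])     # 100% LPFOS buffer
peak_width = 0.12                            # assumed common base peak width (min)

def predicted_times(x_lds):
    """Crude prediction step: migration times interpolated linearly between
    the two pure-surfactant systems as the mixed-micelle composition varies."""
    return x_lds * t_lds + (1.0 - x_lds) * t_lpfos

def worst_resolution(times):
    """Minimum resolution between adjacent peaks, Rs = delta_t / width."""
    t_sorted = np.sort(times)
    return np.min(np.diff(t_sorted)) / peak_width

# Grid search over the mixed-micelle composition for the best worst-case Rs.
grid = np.linspace(0.0, 1.0, 101)
scores = [worst_resolution(predicted_times(x)) for x in grid]
best = grid[int(np.argmax(scores))]
print(f"best LDS fraction ~ {best:.2f}, worst-case Rs = {max(scores):.2f}")
```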

  8. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites in the current constellation can transmit signals on three frequencies (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One of the research areas is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing studies select the signals through either purely theoretical analysis or testing with simulated data, which might be biased because real observation conditions can differ from theoretical predictions or simulations. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted to show how the AR performance changes as the baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition. PMID:27854324
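
    The theoretical screening step of such signal selection can be sketched as follows: for integer coefficients (i, j, k) applied to the L1/L2/L5 carrier phases, the combined wavelength, first-order ionospheric scale factor (relative to L1), and noise amplification are computed, and candidates are ranked. The coefficient range, wavelength threshold, assumed phase noise, and ranking criterion below are illustrative assumptions, not the selection rules or the empirical refinement of the paper.

```python
import itertools
import numpy as np

C = 299792458.0                                  # speed of light (m/s)
f1, f2, f5 = 1575.42e6, 1227.60e6, 1176.45e6     # GPS L1, L2, L5 (Hz)
SIGMA_PHASE_CYC = 0.01                           # assumed raw phase noise (cycles)

def combination(i, j, k):
    """Properties of the triple-frequency phase combination i*L1 + j*L2 + k*L5."""
    fc = i * f1 + j * f2 + k * f5
    if fc <= 0:
        return None
    lam = C / fc                                          # combined wavelength (m)
    iono = f1**2 * (i / f1 + j / f2 + k / f5) / fc        # ionosphere scale vs. L1
    noise = lam * SIGMA_PHASE_CYC * np.sqrt(i*i + j*j + k*k)  # combined noise (m)
    return lam, iono, noise

# Screen small integer coefficients for long-wavelength, low-noise,
# weakly ionosphere-affected candidates (the theoretical step of the method).
candidates = []
for i, j, k in itertools.product(range(-5, 6), repeat=3):
    if (i, j, k) == (0, 0, 0):
        continue
    props = combination(i, j, k)
    if props and props[0] > 2.0:                          # wavelength > 2 m
        candidates.append(((i, j, k), *props))

candidates.sort(key=lambda t: abs(t[2]) + t[3])           # crude illustrative ranking
for coeffs, lam, iono, noise in candidates[:5]:
    print(coeffs, f"lambda={lam:.2f} m  iono={iono:+.2f}  noise={noise:.3f} m")
```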

  9. Seasonal Variation in Frequency of Isolation of Ophiosphaerella korrae from Bermudagrass Roots in Mississippi and Pathogenicity and Optimal Growth of Selected Isolates.

    PubMed

    Perry, D Hunter; Tomaso-Peterson, Maria; Baird, Richard

    2010-05-01

    Isolation frequency of Ophiosphaerella korrae (spring dead spot pathogen) from Cynodon dactylon (bermudagrass) roots at a golf course near West Point, Mississippi, was monitored over a 3-year investigation. Laboratory and greenhouse experiments were conducted to determine optimal temperatures for the growth of selected O. korrae isolates collected from the field study and to evaluate those isolates for pathogenicity potential. Isolation frequencies of the pathogen from naturally infested root samples were significantly higher in the winter and spring and lowest in the fall, regardless of cultural, nutrient, and chemical treatments. Annual soil temperatures ranged between 8 and 29 degrees C, and no correlation was observed between temperature and percent isolation of O. korrae. Optimal in vitro growth of selected O. korrae isolates occurred from 21 to 25 degrees C. Root discoloration was significantly greater in the presence of O. korrae compared to non-inoculated roots in greenhouse studies. This study is the first to document that O. korrae naturally infests roots throughout the bermudagrass growth cycle, and its results suggest that factors other than temperature and management practices may influence O. korrae in situ.

  10. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites in the current constellation can transmit signals on three frequencies (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One of the research areas is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing studies select the signals through either purely theoretical analysis or testing with simulated data, which might be biased because real observation conditions can differ from theoretical predictions or simulations. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted to show how the AR performance changes as the baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.

  11. Discovery and optimization of 1,7-disubstituted-2,2-dimethyl-2,3-dihydroquinazolin-4(1H)-ones as potent and selective PKCθ inhibitors.

    PubMed

    Katoh, Taisuke; Takai, Takafumi; Yukawa, Takafumi; Tsukamoto, Tetsuya; Watanabe, Etsurou; Mototani, Hideyuki; Arita, Takeo; Hayashi, Hiroki; Nakagawa, Hideyuki; Klein, Michael G; Zou, Hua; Sang, Bi-Ching; Snell, Gyorgy; Nakada, Yoshihisa

    2016-06-01

    A high-throughput screening campaign helped us to identify an initial lead compound (1) as a protein kinase C-θ (PKCθ) inhibitor. Using the docking model of compound 1 bound to PKCθ, structure-based drug design was employed, and two regions that could be explored for further optimization were identified: (a) a hydrophilic region around Thr442, unique to the PKC family, in the inner part of the hinge region, and (b) a lipophilic region at the forefront of the ethyl moiety. Optimization of the hinge binder led us to find 1,3-dihydro-2H-imidazo[4,5-b]pyridin-2-one as a potent and selective hinge binder, which resulted in the discovery of compound 5. Filling the lipophilic region with a suitable lipophilic substituent boosted PKCθ inhibitory activity and led to the identification of compound 10. The co-crystal structure of compound 10 bound to PKCθ confirmed that both the hydrophilic and lipophilic regions were fully utilized. Further optimization of compound 10 led us to compound 14, which demonstrated an improved pharmacokinetic profile and inhibition of IL-2 production in a mouse.

  12. Fast numerical design of spatial-selective rf pulses in MRI using Krotov and quasi-Newton based optimal control methods.

    PubMed

    Vinding, Mads S; Maximov, Ivan I; Tošner, Zdenĕk; Nielsen, Niels Chr

    2012-08-07

    The use of increasingly strong magnetic fields in magnetic resonance imaging (MRI) improves sensitivity, susceptibility contrast, and spatial or spectral resolution for functional and localized spectroscopic imaging applications. However, along with these benefits come the challenges of increasing static field (B(0)) and rf field (B(1)) inhomogeneities induced by radial field susceptibility differences and poorer dielectric properties of objects in the scanner. Increasing fields also impose the need for rf irradiation at higher frequencies, which may lead to elevated patient energy absorption, eventually posing a safety risk. These reasons have motivated the use of multidimensional rf pulses and parallel rf transmission, and their combination with tailoring of rf pulses for fast and low-power rf performance. For the latter application, analytical and approximate solutions are well established in linear regimes; however, with increasing nonlinearities and constraints on the rf pulses, numerical iterative methods become attractive. Among such procedures, optimal control methods have recently demonstrated great potential. Here, we present a Krotov-based optimal control approach which, compared to earlier approaches, provides very fast, monotonic convergence even without educated initial guesses. This is essential for in vivo MRI applications. The method is compared to a second-order gradient ascent method relying on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, and a hybrid Krotov-BFGS scheme is also introduced in this study. These optimal control approaches are demonstrated by the design of a 2D spatially selective rf pulse exciting the letters "JCP" in a water phantom.
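
    As a toy illustration of the quasi-Newton branch of such pulse design, the sketch below optimizes a piecewise-constant RF waveform under the small-tip-angle approximation (excitation profile roughly the Fourier transform of the waveform) with SciPy's L-BFGS-B, targeting a central slab. The k-space trajectory, target, regularization, and real-valued pulse are assumptions; the Krotov update and the full Bloch-equation problem of the paper are not shown.

```python
import numpy as np
from scipy.optimize import minimize

n_t, n_x = 64, 128                                # time samples, spatial samples
x = np.linspace(-1, 1, n_x)
k = np.linspace(-np.pi * 8, np.pi * 8, n_t)       # excitation k-space trajectory
A = np.exp(1j * np.outer(x, k))                   # system matrix: profile = A @ b

target = (np.abs(x) < 0.25).astype(float)         # excite a central slab
lam = 1e-2                                        # RF power regularization

def cost(b):
    """Profile error plus RF energy penalty for a real piecewise-constant pulse."""
    profile = A @ b
    err = np.abs(profile) - target
    return np.sum(err**2) + lam * np.sum(b**2)

res = minimize(cost, x0=np.zeros(n_t), method="L-BFGS-B")
print("converged:", res.success, " final cost:", round(float(res.fun), 4))
rf_pulse = res.x                                   # optimized RF samples
```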

  13. Alcohol from whey permeate: strain selection, temperature, and medium optimization. [Candida pseudotropicalis, Kluyveromyces fragilis, and K. lactis

    SciTech Connect

    Vienne, P.; Von Stockar, U.

    1983-01-01

    A comparative study of shaken flask cultures of some yeast strains capable of fermenting lactose showed no significant differences in alcohol yield among the four best strains. Use of whey permeate concentrated three times did not affect the yields. An optimal growth temperature of 38 °C was determined for K. fragilis NRRL 665. Elemental analysis of both the permeate and the dry cell mass of two strains indicated the possibility of a stoichiometric limitation by nitrogen. Batch cultures in laboratory fermentors confirmed this finding and revealed in addition the presence of a limitation due to growth factors. Both types of limitations could be overcome by adding yeast extract. The maximum productivity of continuous cultures could thus be improved to 5.1 g/(l·h). The maximum specific growth rate was of the order of 0.310 h⁻¹. 15 references, 10 figures, 9 tables.

  14. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    NASA Astrophysics Data System (ADS)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of the metal end products. Controlling manufacturing process induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and to improve the surface integrity of the manufactured parts by utilizing physics-based process simulations and other computational methods. Two different manufacturing processes, one conventional and one advanced, are studied: machining of titanium and nickel-based alloys, and selective laser melting of nickel-based powder alloys. 3D Finite Element (FE) process simulations are developed, and experimental data that validate these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining-induced microstructure, including: i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, and iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of
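
    The JMAK kinetics referenced above have a compact closed form, X(t) = 1 - exp(-k t^n), and the Avrami constants are commonly recovered by linearization; the sketch below does exactly that with hypothetical recrystallized-fraction data (the times and the fitted k, n are not the FE-calibrated parameters of this work).

```python
import numpy as np

def jmak_fraction(t, k, n):
    """Johnson-Mehl-Avrami-Kolmogorov recrystallized volume fraction:
    X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * np.power(t, n))

# Hypothetical measured recrystallized fractions at several times (s).
t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
X_obs = np.array([0.04, 0.12, 0.35, 0.75, 0.98])

# Classical Avrami linearization: ln(-ln(1 - X)) = ln(k) + n * ln(t),
# so k and n follow from a straight-line fit.
n_fit, ln_k = np.polyfit(np.log(t_obs), np.log(-np.log(1.0 - X_obs)), 1)
k_fit = np.exp(ln_k)
print(f"Avrami exponent n = {n_fit:.2f}, rate constant k = {k_fit:.3f}")
print("model check:", np.round(jmak_fraction(t_obs, k_fit, n_fit), 3))
```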

  15. Choosing between good and better: optimal oviposition drives host plant selection when parents and offspring agree on best resources.

    PubMed

    Videla, Martín; Valladares, Graciela R; Salvo, Adriana

    2012-07-01

    Insect preferences for particular plant species might be subject to trade-offs among several selective forces. Here, we evaluated, through laboratory and field experiments, the feeding and ovipositing preferences of the polyphagous leafminer Liriomyza huidobrensis (Diptera: Agromyzidae) in relation to adult and offspring performance and enemy-free space. Female leafminers preferred laying their eggs on Vicia faba (Fabaceae) over Beta vulgaris var. cicla (Chenopodiaceae), in both laboratory and field choice experiments, although no oviposition preference was observed in no-choice tests. Females fed more often on B. v. var. cicla (no-choice test) or showed no feeding preference (choice test), even though their realized fecundity was remarkably higher on V. faba. Offspring developed faster, tended to survive better, and attained bigger adult size on the preferred host plant. Also, a field experiment showed higher overall parasitism rates for leafminers developing on B. v. var. cicla, with a similar but nonsignificant tendency in field surveys. According to these results, host plant selection by L. huidobrensis appears to be driven mainly by variation in host quality. Moreover, the consistent oviposition choices for the best host and the labile feeding preferences observed here suggest that host plant selection might be driven by maximization of offspring fitness even without a conflict of interest between parents and offspring. Overall, these results highlight the complexity of decisions performed by phytophagous insects regarding their host plants, and the importance of simultaneously evaluating the various driving forces involved in order to unravel the adaptive significance of female choices.

  16. Small-Molecule Ligands of Methyl-Lysine Binding Proteins: Optimization of Selectivity for L3MBTL3

    PubMed Central

    James, Lindsey I.; Korboukh, Victoria K.; Krichevsky, Liubov; Baughman, Brandi M.; Herold, J. Martin; Norris, Jacqueline L.; Jin, Jian; Kireev, Dmitri B.; Janzen, William P.; Arrowsmith, Cheryl H.; Frye, Stephen V.

    2013-01-01

    Lysine methylation is a key epigenetic mark, the dysregulation of which is linked to many diseases. Small molecule antagonism of methyl-lysine (Kme) binding proteins that recognize such epigenetic marks can improve our understanding of these regulatory mechanisms and potentially validate Kme binding proteins as drug discovery targets. We previously reported the discovery of 1 (UNC1215), the first potent and selective small molecule chemical probe of a methyl-lysine reader protein, L3MBTL3, which antagonizes the mono- and dimethyl-lysine reading function of L3MBTL3. The design, synthesis, and structure-activity relationship studies that led to the discovery of 1 are described herein. These efforts established the requirements for potent L3MBTL3 binding and enabled the design of novel antagonists, such as compound 2 (UNC1679), that maintain in vitro and cellular potency with improved selectivity against other MBT-containing proteins. The antagonists described were also found to effectively interact with unlabeled endogenous L3MBTL3 in cells. PMID:24040942

  17. Small-molecule ligands of methyl-lysine binding proteins: optimization of selectivity for L3MBTL3.

    PubMed

    James, Lindsey I; Korboukh, Victoria K; Krichevsky, Liubov; Baughman, Brandi M; Herold, J Martin; Norris, Jacqueline L; Jin, Jian; Kireev, Dmitri B; Janzen, William P; Arrowsmith, Cheryl H; Frye, Stephen V

    2013-09-26

    Lysine methylation is a key epigenetic mark, the dysregulation of which is linked to many diseases. Small-molecule antagonism of methyl-lysine (Kme) binding proteins that recognize such epigenetic marks can improve our understanding of these regulatory mechanisms and potentially validate Kme binding proteins as drug-discovery targets. We previously reported the discovery of 1 (UNC1215), the first potent and selective small-molecule chemical probe of a methyl-lysine reader protein, L3MBTL3, which antagonizes the mono- and dimethyl-lysine reading function of L3MBTL3. The design, synthesis, and structure-activity relationship studies that led to the discovery of 1 are described herein. These efforts established the requirements for potent L3MBTL3 binding and enabled the design of novel antagonists, such as compound 2 (UNC1679), that maintain in vitro and cellular potency with improved selectivity against other MBT-containing proteins. The antagonists described were also found to effectively interact with unlabeled endogenous L3MBTL3 in cells.

  18. Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities

    PubMed Central

    Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun

    2016-01-01

    This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones. PMID:27918424
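
    The cluster-head rotation and channel assignment steps can be sketched as follows; the greedy allocator stands in for the ILP formulation described above, and the device energies, channel names, and quality scores are hypothetical.

```python
import random

# Hypothetical IoT devices: id -> residual energy (J).
devices = {"d1": 3.2, "d2": 5.6, "d3": 4.1, "d4": 5.6}

def select_cluster_head(residual_energy):
    """Pick the device with the highest residual energy as cluster head (CH).
    Ties are broken at random, which over rounds rotates the CH role."""
    top = max(residual_energy.values())
    candidates = [d for d, e in residual_energy.items() if e == top]
    return random.choice(candidates)

def allocate_channels(members, channels, channel_quality):
    """Greedy stand-in for the ILP channel allocation described above:
    assign each member the best still-free channel it can use."""
    assignment, free = {}, set(channels)
    for dev in members:
        usable = [c for c in free if channel_quality.get((dev, c), 0.0) > 0.0]
        if not usable:
            continue                                 # no QoS-satisfying channel left
        best = max(usable, key=lambda c: channel_quality[(dev, c)])
        assignment[dev] = best
        free.remove(best)
    return assignment

ch = select_cluster_head(devices)
members = [d for d in devices if d != ch]
quality = {(d, c): random.random() for d in members for c in ("ch1", "ch2", "ch3")}
print("cluster head:", ch)
print("channel assignment:", allocate_channels(members, ["ch1", "ch2", "ch3"], quality))
```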

  19. Relay Selection Based Double-Differential Transmission for Cooperative Networks with Multiple Carrier Frequency Offsets: Model, Analysis, and Optimization

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Zhang, Bangning; Pan, Kegang; Liu, Aijun; Guo, Daoxing

    2014-07-01

    Due to their distributed nature, cooperative networks are generally subject to multiple carrier frequency offsets (MCFOs), which make the channels time-varying and drastically degrade the system performance. In this paper, to address the MCFOs problem in detect-and-forward (DetF) multi-relay cooperative networks, a robust relay selection (RS) based double-differential (DD) transmission scheme, termed RSDDT, is proposed, where the best relay is selected to forward the source's double-differentially modulated signals to the destination with the DetF protocol. The proposed RSDDT scheme can achieve excellent performance over fading channels in the presence of unknown MCFOs. Assuming double-differential multiple phase-shift keying (DDMPSK) is applied, we first derive exact expressions for the outage probability and average bit error rate (BER) of the RSDDT scheme. Then, we look into the high signal-to-noise ratio (SNR) regime and present simple and informative asymptotic outage probability and average BER expressions, which reveal that the proposed scheme can achieve full diversity. Moreover, to further improve the BER performance of the RSDDT scheme, we investigate the optimum power allocation strategy among the source and the relay nodes, and simple analytical solutions are obtained. Numerical results are provided to corroborate the derived analytical expressions, and it is demonstrated that the proposed optimum power allocation strategy offers substantial BER performance improvement over the equal power allocation strategy.
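
    The diversity benefit of best-relay selection noted above can be sanity-checked with a simplified model in which each relay's effective end-to-end SNR is exponentially distributed (Rayleigh fading, high-SNR approximation). This is not the exact DDMPSK analysis of the paper, only a Monte Carlo check against the corresponding closed-form outage probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified assumption: effective end-to-end SNR through relay i ~ Exponential.
gamma_bar = np.array([10.0, 15.0, 20.0])   # average SNR per relay (linear scale)
gamma_th = 5.0                             # outage threshold SNR (linear scale)
n_trials = 200_000

# Monte Carlo: outage occurs when even the best relay's SNR is below threshold.
snr = rng.exponential(scale=gamma_bar, size=(n_trials, gamma_bar.size))
p_out_mc = np.mean(snr.max(axis=1) < gamma_th)

# Closed form for independent exponential links:
# P_out = prod_i (1 - exp(-gamma_th / gamma_bar_i))
p_out_theory = np.prod(1.0 - np.exp(-gamma_th / gamma_bar))

print(f"Monte Carlo outage: {p_out_mc:.4f}   analytic: {p_out_theory:.4f}")
```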

  20. Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities.

    PubMed

    Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun

    2016-12-02

    This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones.