Science.gov

Sample records for optimal tuner selection

  1. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
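
    The quantity minimized by the tuner selection routine above is the theoretical mean-squared estimation error of a steady-state Kalman filter. The following is a minimal sketch of that computation for one candidate tuner choice, assuming a linearized, discrete-time engine model already augmented with the candidate tuning parameters; it captures only the variance portion of the error (the bias term arising from the unestimated health parameters, which the paper also derives, is omitted), and all matrix names and numbers are illustrative placeholders rather than the paper's engine model.

      # Hedged sketch: variance part of the theoretical steady-state Kalman
      # estimation error for one candidate tuner selection.  A, C describe the
      # tuner-augmented discrete-time model, Q and R the process and measurement
      # noise covariances, and G maps the estimated state to the performance
      # parameters of interest.  All values below are placeholders.
      import numpy as np
      from scipy.linalg import solve_discrete_are

      def steady_state_mse(A, C, Q, R, G):
          # Filtering/control duality: the Kalman (a priori) error covariance P
          # solves the control-form DARE with (A, C) replaced by (A.T, C.T).
          P = solve_discrete_are(A.T, C.T, Q, R)
          return float(np.trace(G @ P @ G.T))

      # Small random stable system, purely for illustration.
      rng = np.random.default_rng(0)
      n, m = 4, 2                               # states (incl. tuners), sensors
      A = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
      C = rng.standard_normal((m, n))
      Q = 1e-4 * np.eye(n)
      R = 1e-3 * np.eye(m)
      G = np.eye(n)                             # all states "of interest" here
      print(steady_state_mse(A, C, Q, R, G))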

  2. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  3. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  4. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
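
    As a complement to the abstract above, the sketch below shows the outer selection loop in its simplest, subset-type form: every sensor-sized subset of health parameters is scored with a theoretical mean-squared-error function (for example the steady_state_mse() sketched under record 1) and the best-scoring subset is kept. The build_model() helper is hypothetical, and the paper's actual routine performs a multi-variable iterative search over a general tuning-parameter transformation rather than this brute-force enumeration.

      # Hedged sketch of subset-type tuner selection.  build_model(subset) is a
      # hypothetical helper returning the tuner-augmented matrices (A, C, Q, R, G)
      # for a given subset of health parameters; score() returns the theoretical
      # mean-squared estimation error for that model.
      from itertools import combinations

      def select_tuners(health_params, n_sensors, build_model, score):
          best_subset, best_mse = None, float("inf")
          for subset in combinations(health_params, n_sensors):
              mse = score(*build_model(subset))    # theoretical MSE of this choice
              if mse < best_mse:
                  best_subset, best_mse = subset, mse
          return best_subset, best_mse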

  5. Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay

    2012-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.

  6. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.
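
    The central MBEC idea in records 5 and 6 is closing the loop directly on an estimated, unmeasured output such as thrust. The sketch below illustrates that idea with a plain PI loop driven by a thrust estimate (for example from an OTKF-updated on-board model); the gains, the estimator interface, and the reduction to a simple PI law are illustrative assumptions, not the CMAPSS40k control design.

      # Hedged sketch: a PI controller acting on an *estimated* thrust signal
      # rather than a measured surrogate (fan speed or engine pressure ratio).
      # Gains and the estimator interface are placeholders.
      class ThrustPIController:
          def __init__(self, kp, ki, dt):
              self.kp, self.ki, self.dt = kp, ki, dt
              self.integral = 0.0

          def step(self, thrust_setpoint, thrust_estimate):
              err = thrust_setpoint - thrust_estimate
              self.integral += err * self.dt
              # Fuel-flow command increment; in practice limit logic (e.g. the
              # stall-margin protection discussed above) would clip this output.
              return self.kp * err + self.ki * self.integral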

  7. Test of a coaxial blade tuner at HTS FNAL

    SciTech Connect

    Pischalnikov, Y.; Barbanotti, S.; Harms, E.; Hocker, A.; Khabiboulline, T.; Schappert, W.; Bosotti, A.; Pagani, C.; Paparella, R.; /LASA, Segrate

    2011-03-01

    A coaxial blade tuner has been selected for the 1.3GHz SRF cavities of the Fermilab SRF Accelerator Test Facility. Results from tuner cold tests in the Fermilab Horizontal Test Stand are presented. Fermilab is constructing the SRF Accelerator Test Facility, a facility for accelerator physics research and development. This facility will contain a total of six cryomodules, each containing eight 1.3 GHz nine-cell elliptical cavities. Each cavity will be equipped with a Slim Blade Tuner designed by INFN Milan. The blade tuner incorporates both a stepper motor and piezo actuators to allow for both slow and fast cavity tuning. The stepper motor allows the cavity frequency to be statically tuned over a range of 500 kHz with an accuracy of several Hz. The piezos provide up to 2 kHz of dynamic tuning for compensation of Lorentz force detuning and variations in the He bath pressure. The first eight blade tuners were built at INFN Milan, but the remainder are being manufactured commercially following the INFN design. To date, more than 40 of the commercial tuners have been delivered.

  8. Electromagnetic SCRF Cavity Tuner

    SciTech Connect

    Kashikhin, V.; Borissov, E.; Foster, G.W.; Makulski, A.; Pischalnikov, Y.; Khabiboulline, T.; /Fermilab

    2009-05-01

    A novel prototype of SCRF cavity tuner is being designed and tested at Fermilab. This is a superconducting C-type iron dominated magnet having a 10 mm gap, axial symmetry, and a 1 Tesla field. Inside the gap is mounted a superconducting coil capable of moving ±1 mm and producing a longitudinal force up to ±1.5 kN. The static force applied to the RF cavity flanges provides long-term tuning of the cavity geometry to a nominal frequency. The same coil, powered by a fast AC current pulse, delivers a mechanical perturbation for fast cavity tuning. This fast mechanical perturbation could be used to compensate dynamic RF cavity detuning caused by cavity Lorentz forces and microphonics. A special configuration of the magnet system was designed and tested.

  9. LEB tuner made out of titanium alloy

    SciTech Connect

    Goren, Y.; Campbell, B.

    1991-09-01

    A proposed design of a closed shell tuner for the LEB cavity is presented. The tuner is made out of Ti alloy which has a high electrical resistivity as well as very good mechanical strength. Using this alloy results in a substantial reduction in the eddy current heating as well as allowing for faster frequency control. 9 figs.

  10. Inductive tuners for microwave driven discharge lamps

    DOEpatents

    Simpson, James E.

    1999-01-01

    An RF powered electrodeless lamp utilizing an inductive tuner in the waveguide which couples the RF power to the lamp cavity, for reducing reflected RF power and causing the lamp to operate efficiently.

  11. Inductive tuners for microwave driven discharge lamps

    SciTech Connect

    Simpson, J.E.

    1999-11-02

    An RF powered electrodeless lamp utilizing an inductive tuner in the waveguide which couples the RF power to the lamp cavity, for reducing reflected RF power and causing the lamp to operate efficiently.

  12. Enhanced production of electron cyclotron resonance plasma by exciting selective microwave mode on a large-bore electron cyclotron resonance ion source with permanent magnet

    SciTech Connect

    Kimura, Daiju; Kurisu, Yosuke; Nozaki, Dai; Yano, Keisuke; Imai, Youta; Kumakura, Sho; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2014-02-15

    We are constructing a tandem-type ECRIS. The first stage is large-bore with a cylindrically comb-shaped magnet. We optimize the ion beam current and ion saturation current with a mobile plate tuner. Both vary with the position of the plate tuner for 2.45 GHz, 11–13 GHz, and multi-frequency operation. Their peaks occur close to the positions where the microwave mode forms a standing wave between the plate tuner and the extractor. The absorbed powers are estimated for each mode. We propose a new guiding principle: the efficient microwave mode should be selected so that its mode number matches the multipole number of the comb-shaped magnets. Using the new mobile plate tuner, we excited the selected modes and enhanced the ECR efficiency.

  13. Methods to optimize selective hyperthermia

    NASA Astrophysics Data System (ADS)

    Cowan, Thomas M.; Bailey, Christopher A.; Liu, Hong; Chen, Wei R.

    2003-07-01

    Laser immunotherapy, a novel therapy for breast cancer, utilizes selective photothermal interaction to raise the temperature of tumor tissue above the cell damage threshold. Photothermal interaction is achieved with intratumoral injection of a laser absorbing dye followed by non-invasive laser irradiation. When tumor heating is used in combination with immunoadjuvant to stimulate an immune response, anti-tumor immunity can be achieved. In our study, gelatin phantom simulations were used to optimize therapy parameters such as laser power, laser beam radius, and dye concentration to achieve maximum heating of target tissue with the minimum heating of non-targeted tissue. An 805-nm diode laser and indocyanine green (ICG) were used to achieve selective photothermal interactions in a gelatin phantom. Spherical gelatin phantoms containing ICG were used to simulate the absorption-enhanced target tumors, which were embedded inside gelatin without ICG to simulate surrounding non-targeted tissue. Different laser powers and dye concentrations were used to treat the gelatin phantoms. The temperature distributions in the phantoms were measured, and the data were used to determine the optimal parameters used in selective hyperthermia (laser power and dye concentration for this case). The method involves an optimization coefficient, which is proportional to the difference between temperatures measured in targeted and non-targeted gel. The coefficient is also normalized by the difference between the most heated region of the target gel and the least heated region. A positive optimization coefficient signifies a greater temperature increase in targeted gelatin when compared to non-targeted gelatin, and therefore, greater selectivity. Comparisons were made between the optimization coefficients for varying laser powers in order to demonstrate the effectiveness of this method in finding an optimal parameter set. Our experimental results support the proposed use of an optimization
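
    The abstract defines its optimization coefficient only qualitatively: proportional to the temperature difference between targeted and non-targeted gel, and normalized by the spread between the most and least heated regions of the target. The snippet below is one hedged reading of that description, not the paper's exact formula.

      # Hedged reading of the optimization coefficient described above.
      import numpy as np

      def optimization_coefficient(t_target, t_nontarget):
          # t_target, t_nontarget: measured temperature rises (deg C) in the
          # ICG-loaded target gelatin and in the surrounding gelatin.
          selectivity = np.mean(t_target) - np.mean(t_nontarget)
          spread = np.max(t_target) - np.min(t_target)   # most vs. least heated
          return selectivity / spread if spread > 0 else np.inf

      # A positive coefficient indicates greater heating of the target than of
      # the surrounding gel, i.e. greater selectivity.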

  14. Superconducting cavity tuner performance at CEBAF

    SciTech Connect

    Marshall, J.; Preble, J.; Schneider, W.

    1993-06-01

    At the Continuous Electron Beam Accelerator Facility (CEBAF), a 4 GeV, multipass CW electron beam is to be accelerated by 338 SRF, 5-cell niobium cavities operating at a resonant frequency of 1497 MHz. Eight cavities arranged as four pairs comprise a cryomodule, a cryogenically isolated linac subdivision. The frequency is controlled by a mechanical tuner attached to the first and fifth cell of the cavity which elastically deforms the cavity and thereby alters its resonant frequency. The tuner is driven by a stepper motor mounted external to the cryomodule that transfers torque through two rotary feedthroughs. A linear variable differential transducer (LVDT) mounted on the tuner monitors the displacement, and two limit switches interlock the movement beyond a 400 kHz bandwidth. Since the cavity has a loaded Q of 6.6 × 10^6, the control system must maintain the frequency of the cavity to within ±50 Hz of the drive frequency for efficient coupling. This requirement is somewhat difficult to achieve since the difference in thermal contractions of the cavity and the tuner creates a frequency hysteresis of approximately 10 kHz. The cavity is also subject to frequency shifts due to pressure fluctuations of the helium bath as well as radiation pressure. This requires that each cavity be characterized in terms of frequency change as a function of applied motor steps to allow proper tuning operations. This paper describes the electrical and mechanical performance of the cavity tuner during the commissioning and operation of the cryomodules manufactured to date.

  15. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source.

    PubMed

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-01

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). Its first stage can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I_FC with a mobile plate tuner. I_FC depends on the position of the mobile plate tuner in the chamber, which behaves like a circular cavity resonator. We aim to clarify how I_FC and the ion saturation current in the ECRIS depend on the position of the mobile plate tuner. We found that the variation of the plasma density contributes largely to the variation of I_FC when the position of the mobile plate tuner is changed. PMID:22380157

  16. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source

    SciTech Connect

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-15

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). Its first stage can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I_FC with a mobile plate tuner. I_FC depends on the position of the mobile plate tuner in the chamber, which behaves like a circular cavity resonator. We aim to clarify how I_FC and the ion saturation current in the ECRIS depend on the position of the mobile plate tuner. We found that the variation of the plasma density contributes largely to the variation of I_FC when the position of the mobile plate tuner is changed.

  17. Fast Tuner R&D for RIA

    SciTech Connect

    Rusnak, B; Shen, S

    2003-08-19

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Where most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. System studies indicate that for the low-β driver linac alone, running the cavities with no fast tuner may cost 50% or more than an RF system employing a voltage controlled reactance (VCX) or other type of fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab is presented.

  18. Feedback controlled hybrid fast ferrite tuners

    SciTech Connect

    Remsen, D.B.; Phelps, D.A.; deGrassie, J.S.; Cary, W.P.; Pinsker, R.I.; Moeller, C.P.; Arnold, W.; Martin, S.; Pivit, E.

    1993-09-01

    A low power ANT-Bosch fast ferrite tuner (FFT) was successfully tested into (1) the lumped circuit equivalent of an antenna strap with dynamic plasma loading, and (2) a plasma loaded antenna strap in DIII-D. When the FFT accessible mismatch range was phase-shifted to encompass the plasma-induced variation in reflection coefficient, the 50 Ω source was matched (to within the desired 1.4:1 voltage standing wave ratio). The time required to achieve this match (i.e., the response time) was typically a few hundred milliseconds, mostly due to a relatively slow network analyzer-computer system. The response time for the active components of the FFT was 10 to 20 msec, or much faster than the present state-of-the-art for dynamic stub tuners. Future FFT tests are planned that will utilize the DIII-D computer (capable of submillisecond feedback control), as well as several upgrades to the active control circuit, to produce a FFT feedback control system with a response time approaching 1 msec.

  19. Characterization of CNRS Fizeau wedge laser tuner

    NASA Astrophysics Data System (ADS)

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  20. Characterization of CNRS Fizeau wedge laser tuner

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  1. Optimizing calcium selective fluorimetric nanospheres.

    PubMed

    Kisiel, Anna; Kłucińska, Katarzyna; Gniadek, Marianna; Maksymiuk, Krzysztof; Michalska, Agata

    2015-11-01

    Recently it was shown that optical nanosensors based on alternating polymers, e.g. poly(maleic anhydride-alt-1-octadecene), were characterized by a linear dependence of emission intensity on the logarithm of concentration over a range of a few orders of magnitude. In this work we focus on the material used to prepare calcium selective nanosensors. It is shown that alternating-polymer nanosensors offer competitive performance in the absence of calcium ionophore, due to interaction of the nanosphere building blocks with analyte ions. The emission increases with increasing calcium ion content in the sample within the range from 10^-4 to 10^-1 M. Further improvement in sensitivity (from 10^-6 to 10^-1 M) and selectivity can be achieved by incorporating calcium ionophore in the nanospheres. The optimal results were obtained for core-shell nanospheres, where the core was prepared from poly(styrene-co-maleic anhydride) and the outer layer from poly(maleic anhydride-alt-1-octadecene). The chemosensors thus obtained showed a linear dependence of emission on the logarithm of calcium ion concentration within the range from 10^-7 to 10^-1 M. PMID:26452839

  2. Quasi-optical equivalent of waveguide slide screw tuner

    NASA Technical Reports Server (NTRS)

    Kurpis, G. P.

    1970-01-01

    Tuner utilizes a metal plated dielectric grid inserted into the cross sectional plane of an oversized waveguide. It provides both variable susceptance and variable longitudinal position along the waveguide to provide a wide matching range.

  3. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2011-03-01

    Analysis and modeling is presented for a fast microwave tuner to operate at 700 MHz which incorporates ferroelectric elements whose dielectric permittivity can be rapidly altered by application of an external voltage. This tuner could be used to correct unavoidable fluctuations in the resonant frequency of superconducting cavities in accelerator structures, thereby greatly reducing the RF power needed to drive the cavities. A planar test version of the tuner has been tested at low levels of RF power, but at 1300 MHz to minimize the physical size of the test structure. This test version comprises one-third of the final version. The tests show performance in good agreement with simulations, but with losses in the ferroelectric elements that are too large for practical use, and with issues in bonding of ferroelectric elements to the metal walls of the tuner structure.

  4. A wideband RF amplifier for satellite tuners

    NASA Astrophysics Data System (ADS)

    Xueqing, Hu; Zheng, Gong; Yin, Shi; Foster, Dai Fa

    2011-11-01

    This paper presents the design and measured performance of a wideband amplifier for a direct conversion satellite tuner. It is composed of a wideband low noise amplifier (LNA) and a two-stage RF variable gain amplifier (VGA) with linear gain in dB and temperature compensation schemes. To meet the system linearity requirement, an improved distortion compensation technique and a bypass mode are applied on the LNA to deal with the large input signal. Wideband matching is achieved by resistive feedback and an off-chip LC-ladder matching network. A large gain control range (over 80 dB) is achieved by the VGA with process voltage and temperature compensation and dB linearization. In total, the amplifier consumes up to 26 mA current from a 3.3 V power supply. It is fabricated in a 0.35-μm SiGe BiCMOS technology and occupies a silicon area of 0.25 mm².

  5. Selected optimal shuttle entry computations

    NASA Technical Reports Server (NTRS)

    Sullivan, H. C.

    1974-01-01

    Parameterization and the Davidon-Fletcher-Powell method are used to study the characteristics of optimal shuttle entry trajectories. Two problems of thermal protective system weight minimization are considered: roll modulation and roll plus an angle-of-attack modulation. Both problems are targeted for the edges of the entry footprint. Results consistent with constraints on loads and control bounds are particularly well-behaved and strongly support 'energy' approximation results obtained for the case of symmetric flight by Kelley and Sullivan (1973). Furthermore, results indicate that optimal shuttle entry trajectories should be easy to duplicate and to analyze by using simple techniques.

  6. Fast Ferroelectric L-Band Tuner for ILC Cavities

    SciTech Connect

    Hirshfield, Jay L

    2010-03-15

    Design, analysis, and low-power tests are described on a 1.3 GHz ferroelectric tuner that could find application in the International Linear Collider or in Project X at Fermi National Accelerator Laboratory. The tuner configuration utilizes a three-deck sandwich imbedded in a WR-650 waveguide, in which ferroelectric bars are clamped between conducting plates that allow the tuning bias voltage to be applied. Use of a reduced one-third structure allowed tests of critical parameters of the configuration, including phase shift, loss, and switching speed. Issues that were revealed that require improvement include reducing loss tangent in the ferroelectric material, development of a reliable means of brazing ferroelectric elements to copper parts of the tuner, and simplification of the mechanical design of the configuration.

  7. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-07-03

    Design, analysis, and low-power tests are described on a ferroelectric tuner concept that could be used for controlling external coupling to RF cavities for the superconducting Energy Recovery Linac (ERL) in the electron cooler of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner configuration utilizes several small donut-shaped ferroelectric assemblies, which allow the design to be simpler and more flexible, as compared to previous designs. Design parameters for 704 and 1300 MHz versions of the tuner are given. Simulation results point to efficient performance that could reduce by a factor-of-ten the RF power levels required for driving superconducting cavities in the BNL ERL.

  8. Fast 704 MHz Ferroelectric Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-04-12

    The Omega-P SBIR project described in this report has as its goal the development, test, and evaluation of a fast electrically-controlled L-band tuner for the BNL Energy Recovery Linac (ERL) in the Electron Ion Collider (EIC) upgrade of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner, which employs an electrically-controlled ferroelectric component, is intended to allow fast compensation of cavity resonance changes. In ERLs, there are several factors which significantly affect the amount of power required from the wall plug to provide the RF power level necessary for operation. When beam loading is small, the power requirements are determined by (i) ohmic losses in cavity walls, (ii) fluctuations in amplitude and/or phase of the beam current, and (iii) microphonics. These factors typically require a substantial change in the coupling between the cavity and the feeding line, which results in an intentional broadening of the cavity bandwidth, which in turn demands a significant amount of additional RF power. If beam loading is not small, there is a variety of beam-driven phase instabilities to be managed, and microphonics will still remain an issue, so there remain requirements for additional power. Moreover, ERL performance is sensitive to changes in beam arrival time, since any such change is equivalent to a phase instability with its vigorous demands for additional power. In this report, we describe the new modular coaxial tuner, with specifications suitable for the 704 MHz ERL application. The device would allow changing the RF coupling during the cavity filling process in order to effect significant RF power savings, and will also provide rapid compensation for beam imbalance and allow for fast stabilization against phase fluctuations caused by microphonics, beam-driven instabilities, etc. The tuner is predicted to allow a reduction of about ten times in the required power from the RF source, as compared to a compensation system

  9. Broadband power amplifier tube: Klystron tube 5K70SK-WBT and step tuner VA-1470S

    NASA Technical Reports Server (NTRS)

    Cox, H. R.; Johnson, J. O.

    1974-01-01

    The design concept, the fabrication, and the acceptance testing of a wideband Klystron tube and a remotely controlled step tuner for channel selection are discussed. The equipment was developed for the modification of an existing 20 kW Power Amplifier System which was provided to the contractor as GFE. The replacement Klystron covers a total frequency range of 2025 to 2120 MHz and is tuneable to six (6) channels, each with a bandwidth of 22 MHz or greater. A 5 MHz overlap is provided between channels. Channels are selected at the control panel located in the front of the Klystron magnet or from one of three remote control stations connected in parallel with the step tuner. Included in this final report are the results of acceptance tests conducted at the vendor's plant and of the integrated system tests.

  10. Optimizing secondary tailgate support selection

    SciTech Connect

    Harwood, C.; Karmis, M.; Haycocks, C.; Luo, J.

    1996-12-01

    A model was developed to facilitate secondary tailgate support selection based on analysis of over 100 case studies, compiled from two different surveys of operating longwall coal mines in the United States. The ALPS (Analysis of Longwall Pillar Stability) program was used to determine adequacy of pillar design for the successful longwall case histories. A relationship was developed between the secondary support density necessary to maintain a stable tailgate entry during mining and the CMRR (Coal Mine Roof Rating). This relationship defines the lower bound of secondary support density currently used in longwall mines. The model used only successful tailgate case history data, with adequate ALPS SF according to the CMRR for each case. This model facilitates mine design by predicting secondary support density required for a tailgate entry depending on the ALPS SF and CMRR, which can result in significant economic benefits.

  11. Waveguide Stub Tuner Analysis for CEBAF Machine Application

    SciTech Connect

    Haipeng Wang; Michael Tiefenback

    2004-08-01

    Three-stub WR650 waveguide tuners have been used on the CEBAF superconducting cavities for two changes of the external quality factors (Qext): increasing the Qext from 3.4-7.6 × 10^6 to 8 × 10^6 on 5-cell cavities to reduce klystron power at operating gradients and decreasing the Qext from 1.7-2.4 × 10^7 to 8 × 10^6 on 7-cell cavities to simplify control of Lorentz force detuning. To understand the reactive tuning effects in machine operations with beam current and mechanical tuning, a network analysis model was developed. The S parameters of the stub tuner were simulated by MAFIA and measured on the bench. We used this stub tuner model to study tuning range, sensitivity, and frequency pulling, as well as cold waveguide (WG) and window heating problems. Detailed experimental results are compared against this model. Pros and cons of this stub tuner application are summarized.

  12. Self-extinction through optimizing selection

    PubMed Central

    Parvinen, Kalle; Dieckmann, Ulf

    2013-01-01

    Evolutionary suicide is a process in which selection drives a viable population to extinction. So far, such selection-driven self-extinction has been demonstrated in models with frequency-dependent selection. This is not surprising, since frequency-dependent selection can disconnect individual-level and population-level interests through environmental feedback. Hence it can lead to situations akin to the tragedy of the commons, with adaptations that serve the selfish interests of individuals ultimately ruining a population. For frequency-dependent selection to play such a role, it must not be optimizing. Together, all published studies of evolutionary suicide have created the impression that evolutionary suicide is not possible with optimizing selection. Here we disprove this misconception by presenting and analyzing an example in which optimizing selection causes self-extinction. We then take this line of argument one step further by showing, in a further example, that selection-driven self-extinction can occur even under frequency-independent selection. PMID:23583808

  13. Feature Selection via Modified Gravitational Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2015-03-01

    Feature selection is the process of selecting a subset of relevant and most informative features, which efficiently represents the input data. We propose a feature selection algorithm based on an n-dimensional gravitational optimization algorithm (NGOA), which is based on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables, which are called masses and are defined based on the extracted features. The forces between the masses, as well as their new locations, are calculated using the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods to FLAIR and T1-weighted MR brain images. There are two classes of normal and abnormal tissues. The extracted features are divided into groups of five features. The best feature in each group is selected using the n-dimensional gravitational optimization algorithm and a support vector machine classifier. The selected features from each group then form new groups of five, and the process repeats until the desired number of features is selected. The advantage of the NGOA algorithm is that the possibility of being drawn into a local optimal solution is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real data and simulated brain tumor data.
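
    The group-of-five scheme described above can be summarized in a few lines. In the sketch below a cross-validated SVM score stands in for the NGOA step that the paper uses to pick the winner inside each group; the group size, the repeated regrouping, and the SVM classifier follow the abstract, while the stopping rule is simplified.

      # Hedged sketch of the hierarchical group-of-five selection scheme; a plain
      # per-feature SVM cross-validation score replaces the NGOA inner step.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def best_in_group(X, y, group):
          scores = [cross_val_score(SVC(), X[:, [f]], y, cv=3).mean() for f in group]
          return group[int(np.argmax(scores))]

      def hierarchical_select(X, y, n_features, group_size=5):
          remaining = list(range(X.shape[1]))
          while len(remaining) > n_features:
              winners = [best_in_group(X, y, remaining[i:i + group_size])
                         for i in range(0, len(remaining), group_size)]
              if len(winners) < n_features:          # simplified stopping rule
                  break
              remaining = winners
          return remaining[:n_features]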

  14. Optimizing Site Selection for HEDS

    NASA Astrophysics Data System (ADS)

    Marshall, J. R.

    1999-01-01

    MSP 2001 will be conducting environmental assessment for the Human exploration and Development of Space (HEDS) Program in order to safeguard future human exploration of the planet, in addition to geological studies being addressed by the APEX payload. In particular, the MECA experiment (see other abstracts, this volume), will address chemical toxicity of the soil, the presence of adhesive or abrasive soil dust components, and the geoelectrical-triboelectrical character of the surface environment. The attempt will be to quantify hazards to humans and machinery structures deriving from compounds that poison, corrode, abrade, invade (lungs or machinery), contaminate, or electrically interfere with the human presence. The DART experiment, will also address the size and electrical nature of airborne dust. Photo-imaging of the local scene with RAC and Pancam will be able to assess dust raising events such as local thermal vorticity-driven dust devils. The need to introduce discussion of HEDS landing site requirements stems from potential conflict, but also potential synergism with other '01 site requirements. In-situ Resource Utilization (ISRU) mission components desire as much solar radiation as possible, with some very limited amount of dust available; the planetary-astrobiology mission component desires sufficient rock abundance without inhibiting rover activities (and an interesting geological niche if available), the radiation component may again have special requirements, as will the engineers concerned with mission safety and mission longevity. The '01 mission affords an excellent opportunity to emphasize HEDS landing site requirements, given the constraint that both recent missions (Pathfinder, Mars '98) and future missions (MSP '03 & '05) have had or will have strong geological science drivers in the site selection process. What type of landing site best facilitates investigation of the physical, chemical, and behavioral properties of soil and dust? There are

  15. Feature Selection via Chaotic Antlion Optimization

    PubMed Central

    Zawbaa, Hossam M.; Emary, E.; Grosan, Crina

    2016-01-01

    Background Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine learning task. In biology, for instance, the advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the more informative markers along with performing a high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate the feature selection problem as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the data training fitting) while minimizing the number of features used. Results We propose an optimization approach for the feature selection problem that considers a “chaotic” version of the antlion optimizer method, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I that limits the random walk range of the ants/prey. This variable is increased iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. The quasi-linear decrease in the variable I may lead to immature convergence in some cases and trapping in local minima in other cases. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics. PMID:26963715
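
    The abstract contrasts a quasi-linear schedule for the shrink parameter I with a chaos-modulated one. The snippet below illustrates that contrast with a logistic map; the constants, the particular map, and the way the chaotic value is blended with the linear ramp are illustrative, and the paper evaluates several different chaotic maps.

      # Hedged illustration: a logistic chaotic map perturbing the otherwise
      # quasi-linear growth of the random-walk shrink parameter I.
      def chaotic_I_schedule(n_iter, i_min=1.0, i_max=1e6, x0=0.7, r=4.0):
          x = x0
          schedule = []
          for t in range(n_iter):
              x = r * x * (1.0 - x)                          # logistic map in (0, 1)
              linear = i_min + (i_max - i_min) * t / max(n_iter - 1, 1)
              schedule.append(linear * x)                    # chaos-modulated I
          return schedule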

  16. Optimizing Clinical Research Participant Selection with Informatics

    PubMed Central

    Weng, Chunhua

    2015-01-01

    Clinical research participants are often not reflective of the real-world patients due to overly restrictive eligibility criteria. Meanwhile, unselected participants introduce confounding factors and reduce research efficiency. Biomedical Informatics, especially Big Data increasingly made available from electronic health records, offers promising aids to optimize research participant selection through data-driven transparency. PMID:26549161

  17. A SiGe BiCMOS multi-band tuner for mobile TV applications

    NASA Astrophysics Data System (ADS)

    Xueqing, Hu; Zheng, Gong; Jinxin, Zhao; Lei, Wang; Peng, Yu; Yin, Shi

    2012-04-01

    This paper presents the circuit design and measured performance of a multi-band tuner for mobile TV applications. The tuner RFIC is composed of a wideband front-end, an analog baseband, a fully integrated fractional-N synthesizer and an I2C digital interface. To meet the stringent adjacent channel rejection (ACR) requirements of mobile TV standards while keeping low power consumption and low cost, direct conversion architecture with a local AGC scheme is adopted in this design. Eighth-order elliptic active-RC filters with large stop-band attenuation and a sharp transition band are chosen as the channel select filter to further improve the ACR performance. The chip is fabricated in a 0.35-μm SiGe BiCMOS technology and occupies a silicon area of 5.5 mm². It draws 50 mA current from a 3.0 V power supply. In the CMMB application, it achieves a sensitivity of -97 dBm with 1/2 coding QPSK signal input and over 40 dB ACR.

  18. A hydrogen maser with cavity auto-tuner for timekeeping

    NASA Technical Reports Server (NTRS)

    Lin, C. F.; He, J. W.; Zhai, Z. C.

    1992-01-01

    A hydrogen maser frequency standard for timekeeping has been under development at the Shanghai Observatory. The maser employs a fast cavity auto-tuner, which can detect and compensate the frequency drift of the high-Q resonant cavity with a short time constant by means of a signal injection method, so that the long-term frequency stability of the maser standard is greatly improved. The cavity auto-tuning system and some maser data obtained from the atomic time comparison are described.

  19. Optimal probe selection in diagnostic search

    NASA Technical Reports Server (NTRS)

    Bhandari, Inderpal S.; Simon, Herbert A.; Siewiorek, Daniel P.

    1990-01-01

    Probe selection (PS) in machine diagnosis is viewed as a collection of models that apply under specific conditions. This makes it possible for three polynomial-time optimal algorithms to be developed for simplified PS models that allow different probes to have different costs. The work is compared with the research of Simon and Kadane (1975), who developed a collection of models for optimal problem-solving search. The relationship between these models and the three newly developed algorithms for PS is explored. Two of the algorithms are unlike the ones discussed by Simon and Kadane. The third cannot be related to the problem-solving models.

  20. Testing of the new tuner design for the CEBAF 12 GeV upgrade SRF cavities

    SciTech Connect

    Edward Daly; G. Davis; William Hicks

    2005-05-01

    The new tuner design for the 12 GeV Upgrade SRF cavities consists of a coarse mechanical tuner and a fine piezoelectric tuner. The mechanism provides a 30:1 mechanical advantage, is pre-loaded at room temperature and tunes the cavities in tension only. All of the components are located in the insulating vacuum space and attached to the helium vessel, including the motor, harmonic drive and piezoelectric actuators. The requirements and detailed design are presented. Measurements of range and resolution of the coarse tuner are presented and discussed.

  1. Optimal selection theory for superconcurrency. Technical document

    SciTech Connect

    Freund, R.F.

    1989-10-01

    This paper describes a mathematical programming approach to finding an optimal, heterogeneous suite of processors to solve supercomputing problems. This technique, called superconcurrency, works best when the computational requirements are diverse and significant portions of the code are not tightly-coupled. It is also dependent on new methods of benchmarking and code profiling, as well as eventual use of AI techniques for intelligent management of the selected superconcurrent suite.

  2. Optimal remediation policy selection under general conditions

    SciTech Connect

    Wang, M.; Zheng, C.

    1997-09-01

    A new simulation-optimization model has been developed for the optimal design of ground-water remediation systems under a variety of field conditions. The model couples genetic algorithm (GA), a global search technique inspired by biological evolution, with MODFLOW and MT3D, two commonly used ground-water flow and solute transport codes. The model allows for multiple management periods in which optimal pumping/injection rates vary with time to reflect the changes in the flow and transport conditions during the remediation process. The objective function of the model incorporates multiple cost terms including the drilling cost, the installation cost, and the costs to extract and treat the contaminated ground water. The simulation-optimization model is first applied to a typical two-dimensional pump-and-treat example with one and three management periods to demonstrate the effectiveness and robustness of the new model. The model is then applied to a large-scale three-dimensional field problem to determine the minimum pumping needed to contain an existing contaminant plume. The optimal solution as determined in this study is compared with a previous solution based on trial-and-error selection.
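
    The coupling described above can be pictured as a genetic algorithm proposing per-period pumping rates and a flow-and-transport simulation pricing each proposal. The sketch below uses a placeholder simulate() in place of a MODFLOW/MT3D run, a simplified cost function (fixed well cost plus a pumping-volume term plus a constraint penalty), and truncation selection with Gaussian mutation in place of the paper's GA operators; all numbers are illustrative.

      # Hedged sketch of GA-based remediation design.  simulate(rates) is a
      # placeholder for a MODFLOW/MT3D run returning the peak concentration at
      # the compliance boundary; cost terms and GA settings are illustrative.
      import numpy as np
      rng = np.random.default_rng(1)

      def total_cost(rates, simulate, c_limit=0.05, fixed_cost=1e4,
                     unit_cost=2.0, penalty=1e6):
          # rates: (n_periods, n_wells) pumping rates for each management period.
          c_peak = simulate(rates)
          wells_used = int(np.count_nonzero(np.any(rates != 0, axis=0)))
          cost = fixed_cost * wells_used + unit_cost * float(np.abs(rates).sum())
          return cost + penalty * max(c_peak - c_limit, 0.0)   # violation penalty

      def ga_optimize(simulate, n_periods, n_wells, pop=30, gens=50, sigma=5.0):
          population = rng.uniform(0, 100, size=(pop, n_periods, n_wells))
          for _ in range(gens):
              fitness = np.array([total_cost(ind, simulate) for ind in population])
              parents = population[np.argsort(fitness)[: pop // 2]]   # selection
              children = np.clip(parents + sigma * rng.standard_normal(parents.shape),
                                 0, 100)                              # mutation
              population = np.concatenate([parents, children])
          fitness = np.array([total_cost(ind, simulate) for ind in population])
          return population[int(np.argmin(fitness))]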

  3. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  4. Separator profile selection for optimal battery performance

    NASA Astrophysics Data System (ADS)

    Whear, J. Kevin

    Battery performance, depending on the application, is normally defined by power delivery, electrical capacity, cycling regime and life in service. In order to meet the various performance goals, the battery design engineer can vary grid alloys, paste formulations, the number of plates and methods of construction. Another design option available to optimize battery performance is the separator profile. The goal of this paper is to demonstrate how separator profile selection can be utilized to optimize battery performance and manufacturing efficiencies. Time will also be given to exploring novel separator profiles which may bring even greater benefits in the future. All major lead-acid applications will be considered, including automotive, motive power and stationary.

  5. Optimal Portfolio Selection Under Concave Price Impact

    SciTech Connect

    Ma Jin; Song Qingshuo; Xu Jing; Zhang Jianfeng

    2013-06-15

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  6. DESIGN CONSIDERATIONS FOR THE MECHANICAL TUNER OF THE RHIC ELECTRON COOLER RF CAVITY.

    SciTech Connect

    Rank, J.; Ben-Zvi, I.; Hahn, G.; McIntyre, G.; Daly, E.; Preble, J.

    2005-05-16

    The ECX Project, Brookhaven Lab's predecessor to the RHIC e-Cooler, includes a prototype RF tuner mechanism capable of both coarse and fast tuning. This tuner concept, adapted originally from a DESY design, has longer stroke and significantly higher loads attributable to the very stiff ECX cavity shape. Structural design, kinematics, controls, thermal and RF issues are discussed and certain improvements are proposed.

  7. Selectively-informed particle swarm optimization.

    PubMed

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
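
    The degree-dependent update rule described above can be sketched as follows: a hub particle (degree at or above some threshold) averages the attraction toward all of its neighbors' personal bests, while a non-hub particle is attracted only to its single best-performing neighbor. The coefficients, the hub threshold, and the retention of a standard self (cognitive) term are illustrative assumptions that may differ from the published SIPSO formulation.

      # Hedged sketch of a SIPSO-style velocity update for one particle
      # (minimization).  Coefficients and the hub threshold are placeholders.
      import numpy as np
      rng = np.random.default_rng(2)

      def sipso_velocity(v, x, pbest_self, neighbor_pbests, neighbor_fitness,
                         hub_degree=6, w=0.729, c=1.494):
          neighbor_pbests = np.asarray(neighbor_pbests)
          cognitive = c * rng.random(x.shape) * (pbest_self - x)
          if len(neighbor_pbests) >= hub_degree:        # hub: fully informed
              pulls = [c * rng.random(x.shape) * (p - x) for p in neighbor_pbests]
              social = np.mean(pulls, axis=0)
          else:                                         # non-hub: best neighbor only
              best = neighbor_pbests[int(np.argmin(neighbor_fitness))]
              social = c * rng.random(x.shape) * (best - x)
          return w * v + cognitive + social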

  8. Selectively-informed particle swarm optimization

    PubMed Central

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315

  9. Selectively-informed particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-03-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors.

  10. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  11. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.
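
    As a schematic illustration of the kind of cut described above (not the actual MaNGA selection code, and with invented coefficients A and B), a luminosity-dependent redshift ceiling can be applied to a catalogue of absolute magnitudes and redshifts as follows; the real survey tuned such cuts so that the resulting sample is approximately flat in stellar mass and reaches 1.5 or 2.5 effective radii within the available IFU sizes.

      import numpy as np

      # Hypothetical linear cut z_max(M_i) = A + B * (M_i + 21); the real MaNGA
      # coefficients differ and were tuned for a flat stellar-mass distribution.
      A, B = 0.03, -0.011

      def primary_like_cut(abs_mag_i, redshift):
          """Keep galaxies below a luminosity-dependent redshift ceiling."""
          z_max = A + B * (abs_mag_i + 21.0)       # brighter (more negative M) -> higher z_max
          return (redshift > 0.01) & (redshift < z_max)

      # toy catalogue: absolute i-band magnitude and redshift
      abs_mag_i = np.array([-22.5, -21.0, -19.5, -18.0])
      redshift  = np.array([0.045, 0.025, 0.012, 0.035])
      print(primary_like_cut(abs_mag_i, redshift))   # -> [ True  True  True False]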

  12. Tests of a tuner for a 325 MHz SRF spoke resonator

    SciTech Connect

    Pishchalnikov, Y.; Borissov, E.; Khabiboulline, T.; Madrak, R.; Pilipenko, R.; Ristori, L.; Schappert, W.; /Fermilab

    2011-03-01

    Fermilab is developing 325 MHz SRF spoke cavities for the proposed Project X. A compact fast/slow tuner has been developed for final tuning of the resonance frequency of the cavity after cooling down to operating temperature and for compensating microphonics and Lorentz force detuning [2]. The modified tuner design and results of 4.5 K tests of the first prototype are presented. The performance of lever tuners for the SSR1 spoke resonator prototype has been measured during recent CW and pulsed tests in the Fermilab SCTF. The tuner met or exceeded all design goals and has been used to successfully: (1) bring the cold cavity to the operating frequency; (2) compensate for dynamic Lorentz force detuning; and (3) compensate for frequency detuning of the cavity due to changes in the He bath pressure.

  13. Proof-of-principle Experiment of a Ferroelectric Tuner for the 1.3 GHz Cavity

    SciTech Connect

    Choi,E.M.; Hahn, H.; Shchelkunov, S. V.; Hirshfield, J.; Kazakov, S.

    2009-01-01

    A novel tuner has been developed by the Omega-P company to achieve fast control of the accelerator RF cavity frequency. The tuner is based on the ferroelectric property of having a dielectric constant that varies as a function of the applied voltage. Tests using a Brookhaven National Laboratory (BNL) 1.3 GHz electron gun cavity have been carried out as a proof-of-principle experiment for the ferroelectric tuner. Two different methods were used to determine the frequency change achieved with the ferroelectric tuner (FT). The first method is based on an S11 measurement at the tuner port to find the reactive impedance change when the voltage is applied. The reactive impedance change is then used to estimate the cavity frequency shift. The second method is a direct S21 measurement of the frequency shift in the cavity with the tuner connected. The estimated frequency change from the reactive impedance measurement due to 5 kV is in the range between 3.2 kHz and 14 kHz, while 9 kHz is the result from the direct measurement. The two methods are in reasonable agreement. A detailed description of the experiment and the analysis is given in the paper.

  14. Optimized source selection for intracavitary low dose rate brachytherapy

    SciTech Connect

    Nurushev, T.; Kim, Jinkoo

    2005-05-01

    A procedure has been developed for automating the optimal selection of sources from an available inventory for low dose rate brachytherapy, as a replacement for the conventional trial-and-error approach. The method of optimized constrained ratios was applied to clinical source selection for intracavitary Cs-137 implants using Varian BRACHYVISION software as the initial interface. However, this method can be easily extended to other systems with isodose scaling and shaping capabilities. Our procedure provides optimal source selection results independent of user experience and in a short amount of time. This method also generates statistics on frequently requested ideal source strengths, aiding in the ordering of clinically relevant sources.

  15. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  16. Design and test of frequency tuner for a CAEP high power THz free-electron laser

    NASA Astrophysics Data System (ADS)

    Mi, Zheng-Hui; Zhao, Dan-Yang; Sun, Yi; Pan, Wei-Min; Lin, Hai-Ying; Lu, Xiang-Yang; Quan, Sheng-Wen; Luo, Xing; Li, Ming; Yang, Xing-Fan; Wang, Guang-Wei; Dai, Jian-Ping; Li, Zhong-Quan; Ma, Qiang; Sha, Peng

    2015-02-01

    Peking University is developing a 1.3 GHz superconducting accelerating section for a high-power THz free-electron laser for the China Academy of Engineering Physics (CAEP). A compact fast/slow tuner has been developed by the Institute of High Energy Physics (IHEP) for the accelerating section to control Lorentz detuning and to compensate for beam loading effects, microphonics, and liquid helium pressure fluctuations. The tuner design and the warm and cold tests of the first prototype are presented, which provide guidance for the manufacture of the final tuner and for cryomodule assembly. Supported by the 500 MHz superconducting cavity electromechanical tuning system (Y190KFEOHD), NSAF (11176003), and the National Major Scientific Instrument and Equipment Development projects (2011YQ130018).

  17. Perpendicularly Biased YIG Tuners for the Fermilab Recycler 52.809 MHz Cavities

    SciTech Connect

    Madrak, R.; Kashikhin, V.; Makarov, A.; Wildman, D.

    2013-09-13

    For NOvA and future experiments requiring high intensity proton beams, Fermilab is in the process of upgrading the existing accelerator complex for increased proton production. One such improvement is to reduce the Main Injector cycle time, by performing slip stacking, previously done in the Main Injector, in the now repurposed Recycler Ring. Recycler slip stacking requires new tuneable RF cavities, discussed separately in these proceedings. These are quarter wave cavities resonant at 52.809 MHz with a 10 kHz tuning range. The 10 kHz range is achieved by use of a tuner which has an electrical length of approximately one half wavelength at 52.809 MHz. The tuner is constructed from 3⅛″ diameter rigid coaxial line, with 5 inches of its length containing perpendicularly biased, Al doped Yttrium Iron Garnet (YIG). The tuner design, measurements, and high power test results are presented.

  18. Optimization of ultrasonic transducers for selective guided wave actuation

    NASA Astrophysics Data System (ADS)

    Miszczynski, Mateusz; Packo, Pawel; Zbyrad, Paulina; Stepinski, Tadeusz; Uhl, Tadeusz; Lis, Jerzy; Wiatr, Kazimierz

    2016-04-01

    The application of guided waves using surface-bonded piezoceramic transducers for nondestructive testing (NDT) and Structural Health Monitoring (SHM) has shown great potential. However, due to the difficulty of identifying individual wave modes resulting from their dispersive and multi-modal nature, selective mode excitation methods are highly desired. The presented work focuses on an optimization-based approach to the design of a piezoelectric transducer for selective guided wave generation. The concept of the presented framework involves a Finite Element Method (FEM) model in the optimization process. The material of the transducer is optimized in the topological sense with the aim of tuning its piezoelectric properties for actuation of specific guided wave modes.

  19. Optimizing of selective laser sintering method

    SciTech Connect

    Guo Suiyan

    1996-12-31

    In an SLS process, a computer-controlled laser scanner moves the laser beam spot over a flat powder bed, and the laser beam heats the powder to cause sintering in the specified area. A series of such flat planes is linked together to construct a 3D object. SLS is a complex process that involves many process parameters. The laser beam properties, such as beam profile, intensity, and wavelength, as well as its scanning speed and scanning path, are very important parameters. Laser properties, powder properties, and the sintering environment work together in an SLS process to determine whether SLS is successful. The objective of SLS is to make a part that has the same size as the CAD data. The accuracy of the final part from SLS is affected by the parameters mentioned above, and controlling them is key to producing an acceptable final part. Laser parameters, powder material properties, and the processing environment can all affect the quality of an SLS part. Much effort has been devoted to parametric analysis, material properties, and the processing environment for SLS by other researchers. The focus of this paper is to optimize the laser parameters and scanning path to improve the quality of the SLS part and the processing speed. A scanning method that improves quality and speed together is discussed.

  20. Digital logic optimization using selection operators

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor); Cameron, Eric G. (Inventor); Gambles, Jody W. (Inventor)

    2004-01-01

    According to the invention, a digital design method for manipulating a digital circuit netlist is disclosed. In one step, a first netlist is loaded. The first netlist is comprised of first basic cells that are comprised of first kernel cells. The first netlist is manipulated to create a second netlist. The second netlist is comprised of second basic cells that are comprised of second kernel cells. A percentage of the first and second kernel cells are selection circuits. There is less chip area consumed in the second basic cells than in the first basic cells. The second netlist is stored. In various embodiments, the percentage could be 2% or more, 5% or more, 10% or more, 20% or more, 30% or more, or 40% or more.

  1. Optimized Selective Coatings for Solar Collectors

    NASA Technical Reports Server (NTRS)

    Mcdonald, G.; Curtis, H. B.

    1967-01-01

    The spectral reflectance properties of black nickel electroplated over stainless steel and of black copper produced by oxidation of copper sheet were measured for various plating times of black nickel and for various lengths of time of oxidation of the copper sheet, and compared to black chrome over nickel and to converted zinc. It was determined that there was an optimum time both for plating of the black nickel and for oxidation of the black copper. At this optimum time the solar selective properties show high absorptance in the solar spectrum and low emittance in the infrared. The conditions are compared for production of optimum optical properties for black nickel, black copper, black chrome, and two black zinc conversions, which under the same conditions had absorptances of 0.84, 0.90, 0.95, 0.84, and 0.92, respectively, and emittances of 0.18, 0.08, 0.09, 0.10, and 0.08, respectively.

  2. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
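
    For context, the classic single-best OCBA allocation rule from the same optimal computing budget allocation framework can be written in a few lines. The sketch below implements that textbook rule under assumed sample means and standard deviations; it is not the top-m subset extension derived in the paper.

      import numpy as np

      def ocba_allocation(means, stds, total_budget):
          """Classic OCBA ratios for selecting the single best design (minimization).
          This is the standard single-best rule, not the subset (top-m) extension."""
          means, stds = np.asarray(means, float), np.asarray(stds, float)
          b = int(np.argmin(means))                  # current sample-best design
          delta = means - means[b]                   # optimality gaps
          ratio = np.ones_like(means)
          nonbest = np.arange(len(means)) != b
          ref = np.flatnonzero(nonbest)[0]           # reference non-best design
          ratio[nonbest] = (stds[nonbest] / delta[nonbest])**2 / \
                           (stds[ref] / delta[ref])**2
          ratio[b] = stds[b] * np.sqrt(np.sum((ratio[nonbest] / stds[nonbest])**2))
          alloc = total_budget * ratio / ratio.sum()
          return np.round(alloc).astype(int)

      # invented example: four designs, a budget of 1000 replications
      print(ocba_allocation(means=[1.0, 1.2, 1.5, 2.0],
                            stds=[0.3, 0.3, 0.4, 0.5],
                            total_budget=1000))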

  3. Opposing selection and environmental variation modify optimal timing of breeding.

    PubMed

    Tarwater, Corey E; Beissinger, Steven R

    2013-09-17

    Studies of evolution in wild populations often find that the heritable phenotypic traits of individuals producing the most offspring do not increase proportionally in the population. This paradox may arise when phenotypic traits influence both fecundity and viability and when there is a tradeoff between these fitness components, leading to opposing selection. Such tradeoffs are the foundation of life history theory, but they are rarely investigated in selection studies. Timing of breeding is a classic example of a heritable trait under directional selection that does not result in an evolutionary response. Using a 22-y study of a tropical parrot, we show that opposing viability and fecundity selection on the timing of breeding is common and affects optimal breeding date, defined by maximization of fitness. After accounting for sampling error, the directions of viability (positive) and fecundity (negative) selection were consistent, but the magnitude of selection fluctuated among years. Environmental conditions (rainfall and breeding density) primarily and breeding experience secondarily modified selection, shifting optimal timing among individuals and years. In contrast to other studies, viability selection was as strong as fecundity selection, late-born juveniles had greater survival than early-born juveniles, and breeding later in the year increased fitness under opposing selection. Our findings provide support for life history tradeoffs influencing selection on phenotypic traits, highlight the need to unify selection and life history theory, and illustrate the importance of monitoring survival as well as reproduction for understanding phenological responses to climate change. PMID:24003118

  4. Selecting optimal partitioning schemes for phylogenomic datasets

    PubMed Central

    2014-01-01

    Background Partitioning involves estimating independent models of molecular evolution for different subsets of sites in a sequence alignment, and has been shown to improve phylogenetic inference. Current methods for estimating best-fit partitioning schemes, however, are only computationally feasible with datasets of fewer than 100 loci. This is a problem because datasets with thousands of loci are increasingly common in phylogenetics. Methods We develop two novel methods for estimating best-fit partitioning schemes on large phylogenomic datasets: strict and relaxed hierarchical clustering. These methods use information from the underlying data to cluster together similar subsets of sites in an alignment, and build on clustering approaches that have been proposed elsewhere. Results We compare the performance of our methods to each other, and to existing methods for selecting partitioning schemes. We demonstrate that while strict hierarchical clustering has the best computational efficiency on very large datasets, relaxed hierarchical clustering provides scalable efficiency and returns dramatically better partitioning schemes as assessed by common criteria such as AICc and BIC scores. Conclusions These two methods provide the best current approaches to inferring partitioning schemes for very large datasets. We provide free open-source implementations of the methods in the PartitionFinder software. We hope that the use of these methods will help to improve the inferences made from large phylogenomic datasets. PMID:24742000

  5. High-power RF testing of a 352-MHZ fast-ferrite RF cavity tuner at the Advanced Photon Source.

    SciTech Connect

    Horan, D.; Cherbak, E.; Accelerator Systems Division

    2006-01-01

    A 352-MHz fast-ferrite rf cavity tuner, manufactured by Advanced Ferrite Technology, was high-power tested on a single-cell copper rf cavity at the Advanced Photon Source. These tests measured the fast-ferrite tuner performance in terms of power handling capability, tuning bandwidth, tuning speed, stability, and rf losses. The test system comprises a single-cell copper rf cavity fitted with two identical coupling loops, one for input rf power and the other for coupling the fast-ferrite tuner to the cavity fields. The fast-ferrite tuner rf circuit consists of a cavity coupling loop, a 6-1/8-inch EIA coaxial line system with directional couplers, and an adjustable 360° mechanical phase shifter in series with the fast-ferrite tuner. A bipolar DC bias supply, controlled by a low-level rf cavity tuning loop consisting of an rf phase detector and a PID amplifier, is used to provide a variable bias current to the tuner ferrite material to maintain the test cavity at resonance. Losses in the fast-ferrite tuner are calculated from cooling water calorimetry. Test data will be presented.

  6. Optimal Selection of Parameters for Nonuniform Embedding of Chaotic Time Series Using Ant Colony Optimization.

    PubMed

    Shen, Meie; Chen, Wei-Neng; Zhang, Jun; Chung, Henry Shu-Hung; Kaynak, Okyay

    2013-04-01

    The optimal selection of parameters for time-delay embedding is crucial to the analysis and the forecasting of chaotic time series. Although various parameter selection techniques have been developed for conventional uniform embedding methods, the study of parameter selection for nonuniform embedding has progressed at a slow pace. In nonuniform embedding, which enables different dimensions to have different time delays, the selection of time delays for different dimensions presents a difficult optimization problem with combinatorial explosion. To solve this problem efficiently, this paper proposes an ant colony optimization (ACO) approach. Taking advantage of the characteristic of incremental solution construction of the ACO, the proposed ACO for nonuniform embedding (ACO-NE) divides the solution construction procedure into two phases, i.e., selection of embedding dimension and selection of time delays. In this way, both the embedding dimension and the time delays can be optimized, along with the search process of the algorithm. To accelerate search speed, we extract useful information from the original time series to define heuristics to guide the search direction of ants. Three geometry- or model-based criteria are used to test the performance of the algorithm. The optimal embeddings found by the algorithm are also applied in time-series forecasting. Experimental results show that the ACO-NE is able to yield good embedding solutions from both the viewpoints of optimization performance and prediction accuracy. PMID:23144038
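
    A much-simplified sketch of the two-phase construction idea is given below; it is not the authors' ACO-NE. Ants first pick an embedding dimension and then a set of distinct delays by roulette-wheel selection on pheromone values (the heuristic information described in the abstract is omitted), and solution quality is scored here with a plain least-squares one-step prediction error on a synthetic series.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic noisy periodic series standing in for a chaotic time series.
      x = np.sin(0.25 * np.arange(2000)) + 0.1 * rng.normal(size=2000)
      max_dim, max_lag, n_ants, n_iter, rho = 5, 20, 20, 30, 0.1

      tau_dim = np.ones(max_dim)      # pheromone over embedding dimensions 1..max_dim
      tau_lag = np.ones(max_lag)      # pheromone over candidate delays 1..max_lag

      def fitness(lags):
          """One-step-ahead linear prediction error of the candidate embedding."""
          lags = np.asarray(lags)
          start = int(lags.max())
          X = np.column_stack([x[start - l: len(x) - 1 - l] for l in lags] +
                              [np.ones(len(x) - 1 - start)])
          y = x[start + 1:]
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return float(np.mean((y - X @ coef) ** 2))

      best_sol, best_err = None, np.inf
      for _ in range(n_iter):
          solutions = []
          for _ in range(n_ants):
              # phase 1: choose the embedding dimension
              d = 1 + rng.choice(max_dim, p=tau_dim / tau_dim.sum())
              # phase 2: choose d distinct delays
              lags, avail = [], np.ones(max_lag, bool)
              for _ in range(d):
                  p = tau_lag * avail
                  idx = rng.choice(max_lag, p=p / p.sum())
                  avail[idx] = False
                  lags.append(idx + 1)
              err = fitness(lags)
              solutions.append((err, d, lags))
              if err < best_err:
                  best_err, best_sol = err, (d, sorted(lags))
          # evaporate, then deposit pheromone in proportion to solution quality
          tau_dim *= (1 - rho)
          tau_lag *= (1 - rho)
          for err, d, lags in solutions:
              tau_dim[d - 1] += 1.0 / (1.0 + err)
              for l in lags:
                  tau_lag[l - 1] += 1.0 / (1.0 + err)

      print("best embedding (dimension, delays):", best_sol, " mse:", round(best_err, 5))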

  7. Development of a Movable Plunger Tuner for the High Power RF Cavity for the PEP II B Factory

    SciTech Connect

    Schwarz, H.D.; Fant, K.; Neubauer, Mark Stephen; Rimmer, R.A.; /LBL, Berkeley

    2011-08-26

    A 10 cm diameter by 5 cm travel plunger tuner was developed for the PEP-II RF copper cavity system. The single cell cavity including the tuner is designed to operate up to 150 kW of dissipated RF power. Spring finger contacts to protect the bellows from RF power are specially placed 8.5 cm away from the inside wall of the cavity to avoid fundamental and higher order mode resonances. The spring fingers are made of dispersion-strengthened copper to accommodate relatively high heating. The design, alignment, testing and performance of the tuner is described.

  8. Training set optimization under population structure in genomic selection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...

  9. Optimal Financial Aid Policies for a Selective University.

    ERIC Educational Resources Information Center

    Ehrenberg, Ronald G.; Sherman, Daniel R.

    1984-01-01

    This paper provides a model of optimal financial aid policies for a selective university. The model implies that the financial aid package to be offered to each category of admitted applicants depends on the elasticity of the fraction who accept offers of admission with respect to the financial aid package offered them. (Author/SSH)

  10. Multidimensional Adaptive Testing with Optimal Design Criteria for Item Selection

    ERIC Educational Resources Information Center

    Mulder, Joris; van der Linden, Wim J.

    2009-01-01

    Several criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the…

  11. Training set optimization under population structure in genomic selection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determ...

  12. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  13. Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs).

    PubMed

    Handlon, Anthony L; Schaller, Lee T; Leesnitzer, Lisa M; Merrihew, Raymond V; Poole, Chuck; Ulrich, John C; Wilson, Joseph W; Cadilla, Rodolfo; Turnbull, Philip

    2016-01-14

    A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%). PMID:26819671
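
    Lipophilic ligand efficiency is conventionally computed as LLE = pIC50 (or pKi) minus cLogP, so the roughly three-log-unit improvement quoted above corresponds to a simple difference of two such values. A small illustration with invented potency and lipophilicity numbers:

      import math

      def lle(ic50_nM, clogp):
          """Lipophilic ligand efficiency: pIC50 - cLogP (inputs below are illustrative)."""
          pic50 = -math.log10(ic50_nM * 1e-9)   # convert nM to M, then take -log10
          return pic50 - clogp

      # a hypothetical early lead vs. an optimized analogue: ~3 log-unit LLE gain
      print(lle(ic50_nM=250.0, clogp=4.5))   # ~2.1
      print(lle(ic50_nM=5.0,   clogp=3.0))   # ~5.3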

  14. Improved Clonal Selection Algorithm Combined with Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Wang, Wei; Dai, Hongwei; Li, Fangjia; Tang, Zheng

    Both the clonal selection algorithm (CSA) and the ant colony optimization (ACO) are inspired by natural phenomena and are effective tools for solving complex problems. CSA can exploit and explore the solution space in parallel and effectively. However, it cannot use enough environmental feedback information and thus has to perform a large amount of redundant repetition during the search. On the other hand, ACO is based on the concept of an indirect cooperative foraging process via secreted pheromones. Its positive feedback ability is good, but its convergence speed is slow because of the small amount of initial pheromone. In this paper, we propose a pheromone-linker to combine these two algorithms. The proposed hybrid clonal selection and ant colony optimization (CSA-ACO) reasonably utilizes the superiorities of both algorithms and also overcomes their inherent disadvantages. Simulation results based on traveling salesman problems have demonstrated the merit of the proposed algorithm over some traditional techniques.

  15. Monte Carlo optimization for site selection of new chemical plants.

    PubMed

    Cai, Tianxing; Wang, Sujing; Xu, Qiang

    2015-11-01

    Geographic distribution of chemical manufacturing sites has a significant impact on the business sustainability of industrial development and on regional environmental sustainability as well. Common site selection rules include evaluating the air quality impact of a newly constructed chemical manufacturing site on surrounding communities. Achieving this requires simultaneous consideration of the regional background air-quality information, the emissions of the new manufacturing site, and the statistical pattern of local meteorological conditions. With this information, a risk assessment can be conducted for the potential air-quality impacts from candidate locations of a new chemical manufacturing site, and the final site selection can then be optimized by minimizing its air-quality impacts. This paper provides a systematic methodology for this purpose. There are two stages of modeling and optimization work: i) Monte Carlo simulation to identify the background pollutant concentration based on currently existing emission sources and regional statistical meteorological conditions; and ii) multi-objective (simultaneous minimization of both the peak pollutant concentration and the standard deviation of the pollutant concentration spatial distribution at air-quality concern regions) Monte Carlo optimization for optimal location selection of new chemical manufacturing sites according to their design data of potential emission. This study can help both to determine the potential air-quality impact of the geographic distribution of multiple chemical plants with respect to regional statistical meteorological conditions, and to identify an optimal site for each new chemical manufacturing site with minimal environmental impact on surrounding communities. The efficacy of the developed methodology has been demonstrated through case studies. PMID:26283263
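
    A toy version of the two-stage idea, with a deliberately crude Gaussian "plume" standing in for the real air-quality model and invented site, emission, and weather parameters, might look like the following sketch:

      import numpy as np

      rng = np.random.default_rng(1)

      # 1-D line of receptor communities and two candidate plant sites; the
      # dispersion kernel below is only a stand-in used to illustrate the
      # two-stage Monte Carlo idea, not a real air-quality model.
      receptors = np.linspace(0.0, 10.0, 51)          # km along a transect
      candidate_sites = [2.0, 7.5]                     # km
      background = 10.0 + 2.0 * np.sin(receptors)      # stage-1 output (assumed known here)
      emission, sigma = 40.0, 1.2

      def plume(site, wind_shift):
          centre = site + wind_shift
          return emission * np.exp(-0.5 * ((receptors - centre) / sigma) ** 2)

      scores = []
      for site in candidate_sites:
          peaks, spreads = [], []
          for _ in range(2000):                        # stage-2 Monte Carlo over weather
              wind_shift = rng.normal(0.0, 1.5)        # sampled meteorological condition
              conc = background + plume(site, wind_shift)
              peaks.append(conc.max())
              spreads.append(conc.std())
          # simple weighted scalarization of the two objectives
          scores.append(0.5 * np.mean(peaks) + 0.5 * np.mean(spreads))

      best = candidate_sites[int(np.argmin(scores))]
      print("scores:", np.round(scores, 2), "-> preferred site at", best, "km")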

  16. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
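
    The basic calling pattern of the library is short; a minimal example (not taken from the paper) that minimizes a one-dimensional quadratic with the TPE algorithm:

      from hyperopt import fmin, tpe, hp, Trials

      # Minimize a simple quadratic over a one-dimensional search space.
      space = hp.uniform('x', -10, 10)
      trials = Trials()

      best = fmin(fn=lambda x: (x - 3) ** 2,   # objective to minimize
                  space=space,
                  algo=tpe.suggest,            # Tree-structured Parzen Estimator
                  max_evals=100,
                  trials=trials)

      print(best)   # e.g. {'x': 2.98...}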

  17. A technique for monitoring fast tuner piezoactuator preload forces for superconducting rf cavities

    SciTech Connect

    Pischalnikov, Y.; Branlard, J.; Carcagno, R.; Chase, B.; Edwards, H.; Orris, D.; Makulski, A.; McGee, M.; Nehring, R.; Poloubotko, V.; Sylvester, C.; /Fermilab

    2007-06-01

    The technology for mechanically compensating Lorentz Force detuning in superconducting RF cavities has already been developed at DESY. One technique is based on commercial piezoelectric actuators and was successfully demonstrated on TESLA cavities [1]. Piezo actuators for fast tuners can operate in a frequency range up to several kHz; however, it is very important to maintain a constant static force (preload) on the piezo actuator in the range of 10 to 50% of its specified blocking force. Determining the preload force during cool-down, warm-up, or re-tuning of the cavity is difficult without instrumentation, and exceeding the specified range can permanently damage the piezo stack. A technique based on strain gauge technology for superconducting magnets has been applied to fast tuners for monitoring the preload on the piezoelectric assembly. The design and testing of piezo actuator preload sensor technology is discussed. Results from measurements of preload sensors installed on the tuner of the Capture Cavity II (CCII)[2] tested at FNAL are presented. These results include measurements during cool-down, warmup, and cavity tuning along with dynamic Lorentz force compensation.

  18. State-Selective Excitation of Quantum Systems via Geometrical Optimization.

    PubMed

    Chang, Bo Y; Shin, Seokmin; Sola, Ignacio R

    2015-09-01

    We lay out the foundations of a general method of quantum control via geometrical optimization. We apply the method to state-selective population transfer using ultrashort transform-limited pulses between manifolds of levels that may represent, e.g., state-selective transitions in molecules. Assuming that certain states can be prepared, we develop three implementations: (i) preoptimization, which implies engineering the initial state within the ground manifold or electronic state before the pulse is applied; (ii) postoptimization, which implies engineering the final state within the excited manifold or target electronic state, after the pulse; and (iii) double-time optimization, which uses both types of time-ordered manipulations. We apply the schemes to two important dynamical problems: To prepare arbitrary vibrational superposition states on the target electronic state and to select weakly coupled vibrational states. Whereas full population inversion between the electronic states only requires control at initial time in all of the ground vibrational levels, only very specific superposition states can be prepared with high fidelity by either pre- or postoptimization mechanisms. Full state-selective population inversion requires manipulating the vibrational coherences in the ground electronic state before the optical pulse is applied and in the excited electronic state afterward, but not during all times. PMID:26575896

  19. PDZ Domain Binding Selectivity Is Optimized Across the Mouse Proteome

    PubMed Central

    Stiffler, Michael A.; Chen, Jiunn R.; Grantcharova, Viara P.; Lei, Ying; Fuchs, Daniel; Allen, John E.; Zaslavskaia, Lioudmila A.; MacBeath, Gavin

    2009-01-01

    PDZ domains have long been thought to cluster into discrete functional classes defined by their peptide-binding preferences. We used protein microarrays and quantitative fluorescence polarization to characterize the binding selectivity of 157 mouse PDZ domains with respect to 217 genome-encoded peptides. We then trained a multidomain selectivity model to predict PDZ domain–peptide interactions across the mouse proteome with an accuracy that exceeds many large-scale, experimental investigations of protein-protein interactions. Contrary to the current paradigm, PDZ domains do not fall into discrete classes; instead, they are evenly distributed throughout selectivity space, which suggests that they have been optimized across the proteome to minimize cross-reactivity. We predict that focusing on families of interaction domains, which facilitates the integration of experimentation and modeling, will play an increasingly important role in future investigations of protein function. PMID:17641200

  20. Field of view selection for optimal airborne imaging sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Barnard, P. Werner; Fildis, Halidun; Erbudak, Mustafa; Senger, Tolga; Alpman, Mehmet E.

    2014-05-01

    The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has a major impact on the overall performance of the system. A market survey of published data on sensors used in stabilized airborne targeting systems shows a trend of ever narrowing FOVs housed in smaller and lighter volumes. This approach promotes the ever increasing geometric resolution provided by narrower FOVs, while it seemingly ignores the influence of FOV selection on the sensor's sensitivity, the effects of diffraction, the influence of sight-line jitter, and, collectively, the overall system performance. This paper presents a trade-off methodology to select the optimal FOV for an imaging sensor that is limited in aperture diameter by mechanical constraints (such as the space/volume available and window size) by balancing the influences FOV has on sensitivity and resolution and thereby optimizing the system's performance. The methodology may be applied to staring-array-based imaging sensors across all wavebands, from visible/day cameras through to long wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given that highlight the performance advantages that can be gained by maximizing the aperture diameter and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.
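
    One element of that trade-off can be made concrete with a short calculation: for a fixed aperture diameter the diffraction blur angle is fixed, so narrowing the FOV shrinks the detector-limited IFOV only until diffraction takes over, while the f-number grows and sensitivity drops. The sketch below uses assumed mid-wave infrared parameters, not values from the paper.

      import math

      # Assumed example parameters, not taken from the paper.
      aperture_d = 0.10          # m, fixed by the gimbal window/volume
      wavelength = 4.0e-6        # m, mid-wave IR
      pixel_pitch = 15e-6        # m
      n_pixels = 640             # detector width

      diff_blur = 2.44 * wavelength / aperture_d      # Airy-disc angular diameter, rad

      for fov_deg in (10.0, 5.0, 2.0, 1.0):
          fov = math.radians(fov_deg)
          focal_length = n_pixels * pixel_pitch / (2.0 * math.tan(fov / 2.0))
          ifov = pixel_pitch / focal_length           # detector-limited angular resolution
          f_number = focal_length / aperture_d        # narrower FOV -> slower optics -> less signal
          limited_by = "diffraction" if diff_blur > ifov else "detector"
          print(f"FOV {fov_deg:4.1f} deg: f={focal_length*1e3:6.1f} mm, F/{f_number:4.1f}, "
                f"IFOV={ifov*1e6:6.1f} urad, Airy={diff_blur*1e6:5.1f} urad -> {limited_by}-limited")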

  1. Some useful upper bounds for the selection of optimal profiles

    NASA Astrophysics Data System (ADS)

    Daripa, Prabir

    2012-08-01

    In enhanced oil recovery by chemical flooding within tertiary oil recovery, it is often necessary to choose optimal viscous profiles of the injected displacing fluids that reduce the growth rates of hydrodynamic instabilities the most, thereby substantially reducing the well-known fingering problem and improving oil recovery. Within the three-layer Hele-Shaw model, we show in this paper that selection of the optimal monotonic viscous profile of the middle-layer fluid based on the well known theoretical upper bound formula [P. Daripa, G. Pasa, A simple derivation of an upper bound in the presence of a viscosity gradient in three-layer Hele-Shaw flows, Journal of Statistical Mechanics (2006) 11. http://dx.doi.org/10.1088/1742-5468/2006/01/P01014] agrees very well with that based on the computation of the maximum growth rate of instabilities from solving the linearized stability problem. Thus, this paper proposes a very simple, fast method for selection of the optimal monotonic viscous profiles of the displacing fluids in multi-layer flows.

  2. Implementing stationary-phase optimized selectivity in supercritical fluid chromatography.

    PubMed

    Delahaye, Sander; Lynen, Frédéric

    2014-12-16

    The performance of stationary-phase optimized selectivity liquid chromatography (SOS-LC) for improved separation of complex mixtures has been demonstrated before. A dedicated kit containing column segments of different lengths and packed with different stationary phases is commercially available together with algorithms capable of predicting and ranking isocratic and gradient separations over vast amounts of possible column combinations. Implementation in chromatographic separations involving compressible fluids, as is the case in supercritical fluid chromatography, had thus far not been attempted. The challenge of this approach is the dependency of solute retention with the mobile-phase density, complicating linear extrapolation of retention over longer or shorter columns segments, as is the case in conventional SOS-LC. In this study, the possibilities of performing stationary-phase optimized selectivity supercritical fluid chromatography (SOS-SFC) are demonstrated with typical low density mobile phases (94% CO2). The procedure is optimized with the commercially available column kit and with the classical isocratic SOS-LC algorithm. SOS-SFC appears possible without any density correction, although optimal correspondence between prediction and experiment is obtained when isopycnic conditions are maintained. As also the influence of the segment order appears significantly less relevant than expected, the use of the approach in SFC appears as promising as is the case in HPLC. Next to the classical use of SOS for faster baseline separation of all solutes in a mixture, the benefits of the approach for predicting as wide as possible separation windows around to-be-purified solutes in semipreparative SFC are illustrated, leading to significant production rate improvements in (semi)preparative SFC. PMID:25393519

  3. Optimal Selection of Threshold Value 'r' for Refined Multiscale Entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2015-12-01

    The refined multiscale entropy (RMSE) technique was introduced to evaluate the complexity of a time series over multiple scale factors 't'. Here the threshold value 'r' is updated as 0.15 times the SD of the filtered, scaled time series. The use of a fixed threshold value 'r' in RMSE sometimes assigns very similar entropy values to certain time series at certain temporal scale factors and is unable to distinguish different time series optimally. The present study evaluates the RMSE technique by varying the threshold value 'r' from 0.05 to 0.25 times the SD of the filtered, scaled time series and finding the optimal 'r' values for each scale factor at which different time series can be distinguished most effectively. The proposed RMSE was evaluated over HRV time series of normal sinus rhythm subjects, patients suffering from sudden cardiac death, congestive heart failure, healthy adult male, healthy adult female and mid-aged female groups, as well as over a synthetic simulated database for different data lengths 'N' of 3000, 3500 and 4000. The proposed RMSE results in improved discrimination among different time series. To enhance the computational capability, empirical mathematical equations have been formulated for optimal selection of the threshold value 'r' as a function of the SD of the filtered, scaled time series and the data length 'N' for each scale factor 't'. PMID:26577486
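
    The scan over threshold values can be illustrated with plain single-scale sample entropy (the refined multiscale version additionally filters and coarse-grains the series, which is omitted here); the series, parameter values, and discrimination measure below are illustrative assumptions only.

      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          """Plain single-scale sample entropy; RMSE additionally low-pass filters
          and coarse-grains the series at each scale factor."""
          x = np.asarray(x, float)
          n = len(x)
          def count_matches(mm):
              templates = np.array([x[i:i + mm] for i in range(n - mm)])
              c = 0
              for i in range(len(templates)):
                  d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                  c += np.sum(d <= r)
              return c
          b, a = count_matches(m), count_matches(m + 1)
          return np.inf if a == 0 or b == 0 else -np.log(a / b)

      rng = np.random.default_rng(2)
      series_a = rng.normal(size=1500)                              # e.g. one subject group
      series_b = np.sin(np.arange(1500) * 0.3) + 0.3 * rng.normal(size=1500)

      # scan the threshold as a fraction of each series' SD and keep the value
      # that separates the two series the most (the "optimal r" idea, per scale)
      best = max(np.arange(0.05, 0.26, 0.05),
                 key=lambda k: abs(sample_entropy(series_a, r=k * series_a.std())
                                   - sample_entropy(series_b, r=k * series_b.std())))
      print("most discriminative r multiplier:", round(best, 2))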

  4. Optimizing Hammermill Performance Through Screen Selection and Hammer Design

    SciTech Connect

    Neal A. Yancey; Tyler L. Westover; Christopher T. Wright

    2013-01-01

    Background: Mechanical preprocessing, which includes particle size reduction and mechanical separation, is one of the primary operations in the feedstock supply system for a lignocellulosic biorefinery. It is the means by which raw biomass from the field or forest is mechanically transformed into an on-spec feedstock with characteristics better suited for the fuel conversion process. Results: This work provides a general overview of the objectives and methodologies of mechanical preprocessing and then presents experimental results illustrating (1) improved size reduction via optimization of hammer mill configuration, (2) improved size reduction via pneumatic-assisted hammer milling, and (3) improved control of particle size and particle size distribution through proper selection of grinder process parameters. Conclusion: Optimal grinder configuration for maximal process throughput and efficiency is strongly dependent on feedstock type and properties, such as moisture content. Tests conducted using a HG200 hammer grinder indicate that increasing the tip speed, optimizing hammer geometry, and adding pneumatic assist can increase grinder throughput as much as 400%.

  5. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGESBeta

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  6. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  7. Tuner and radiation shield for planar electron paramagnetic resonance microresonators

    SciTech Connect

    Narkowicz, Ryszard; Suter, Dieter

    2015-02-15

    Planar microresonators provide a large boost of sensitivity for small samples. They can be manufactured lithographically to a wide range of target parameters. The coupler between the resonator and the microwave feedline can be integrated into this design. To optimize the coupling and to compensate for manufacturing tolerances, it is sometimes desirable to have a tuning element available that can be adjusted when the resonator is connected to the spectrometer. This paper presents a simple design that allows one to bring undercoupled resonators into the condition for critical coupling. In addition, it also reduces radiation losses and thereby increases the quality factor and the sensitivity of the resonator.

  8. Optimizing Site Selection in Urban Areas in Northern Switzerland

    NASA Astrophysics Data System (ADS)

    Plenkers, K.; Kraft, T.; Bethmann, F.; Husen, S.; Schnellmann, M.

    2012-04-01

    There is a need to observe weak seismic events (M<2) in areas close to potential nuclear-waste repositories or nuclear power plants, in order to analyze the underlying seismo-tectonic processes and estimate their seismic hazard. We are therefore densifying the existing Swiss Digital Seismic Network in northern Switzerland with 20 additional stations. The new network, which will be in operation by the end of 2012, aims at observing seismicity in northern Switzerland with a completeness of M_c=1.0 and a location error < 0.5 km in epicenter and < 2 km in focal depth. Monitoring of weak seismic events in this region is challenging, because the area of interest is densely populated and the geology is dominated by the Swiss molasse basin. An optimal network design and a thoughtful choice of station sites are, therefore, mandatory. To help with decision making we developed a step-wise approach to find the optimum network configuration. Our approach is based on standard network optimization techniques regarding the localization error. As a new feature, our approach uses an ambient noise model to compute expected signal-to-noise ratios for a given site. The ambient noise model uses information on land use and major infrastructure such as highways and train lines. We ran a series of network optimizations with an increasing number of stations until the requirements regarding localization error and magnitude of completeness were reached. The resulting network geometry serves as input for the site selection. Site selection is done using a newly developed multi-step assessment scheme that takes into account local noise level, geology, infrastructure, and the costs necessary to realize the station. The assessment scheme weights the different parameters and the most promising sites are identified. In a first step, all potential sites are classified based on information from topographic maps and site inspection. In a second step, local noise conditions are measured at selected sites. We

  9. Multiobjective optimization for model selection in kernel methods in regression.

    PubMed

    You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M

    2014-10-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740

  10. Optimized bioregenerative space diet selection with crew choice

    NASA Technical Reports Server (NTRS)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
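
    The underlying diet problem is a linear program; a heavily simplified sketch using scipy.optimize.linprog, with invented foods, nutrient contents, and ESM costs rather than the study's actual model, is shown below.

      import numpy as np
      from scipy.optimize import linprog

      # Tiny illustrative diet problem: choose servings of 4 foods to minimize
      # equivalent-system-mass (ESM) cost subject to nutrient floors and a sodium cap.
      # All numbers are invented for illustration.
      foods = ["wheat", "soy", "potato", "salad"]
      esm_cost = np.array([2.0, 3.5, 1.8, 4.0])            # kg ESM per serving
      protein  = np.array([4.0, 9.0, 2.0, 1.0])            # g per serving
      energy   = np.array([120., 150., 90., 25.])          # kcal per serving
      sodium   = np.array([5., 10., 3., 15.])              # mg per serving

      # linprog minimizes c @ x subject to A_ub @ x <= b_ub
      A_ub = np.vstack([-protein, -energy, sodium])        # >= floors become <= via sign flip
      b_ub = np.array([-60.0, -2000.0, 300.0])
      res = linprog(c=esm_cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * 4)

      print("feasible:", res.success)
      print({f: round(x, 2) for f, x in zip(foods, res.x)})
      print("ESM cost per person-day:", round(res.fun, 2))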

  11. Multiobjective Optimization for Model Selection in Kernel Methods in Regression

    PubMed Central

    You, Di; Benitez-Quiroz, C. Fabian; Martinez, Aleix M.

    2016-01-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-vs-variance trade-off. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared to methods in the state of the art. PMID:25291740

  12. Theoretical Analysis of Triple Liquid Stub Tuner Impedance Matching for ICRH on Tokamaks

    NASA Astrophysics Data System (ADS)

    Du, Dan; Gong, Xueyu; Yin, Lan; Xiang, Dong; Li, Jingchun

    2015-12-01

    Impedance matching is crucial for continuous wave operation of ion cyclotron resonance heating (ICRH) antennae with high power injection into plasmas. A sudden increase in the reflected radio frequency power due to an impedance mismatch of the ICRH system is an issue which must be solved for present-day and future fusion reactors. This paper presents a method for the theoretical analysis of ICRH system impedance matching for a triple liquid stub tuner under plasma operational conditions. The relationship of the antenna input impedance with the plasma parameters and operating frequency is first obtained using a global solution. Then, the relations of the plasma parameters and operating frequency with the matching liquid heights are obtained indirectly through numerical simulation according to transmission line theory and the matching conditions. The method provides an alternative theoretical approach, rather than measurements, to study triple liquid stub tuner impedance matching for ICRH, which may be beneficial for the design of ICRH systems on tokamaks. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2014GB108002, 2013GB107001), the National Natural Science Foundation of China (Nos. 11205086, 11205053, 11375085, and 11405082), the Construct Program of Fusion and Plasma Physics Innovation Team in Hunan Province, China (No. NHXTD03), and the Natural Science Foundation of Hunan Province, China (No. 2015JJ4044).
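
    The transmission-line relations behind such stub matching are standard; the sketch below, which assumes an idealized lossless 50-ohm line, an invented antenna impedance, and a single short-circuited stub, transforms the load impedance along a line section and scans the stub length for the lowest reflection coefficient.

      import numpy as np

      Z0 = 50.0                                   # ohm, line characteristic impedance
      freq = 50e6                                 # Hz, illustrative ICRF-range frequency
      beta = 2 * np.pi * freq / 3e8               # rad/m, lossless air line assumed

      def line_transform(z_load, length):
          """Input impedance of a lossless line of given length terminated in z_load."""
          t = np.tan(beta * length)
          return Z0 * (z_load + 1j * Z0 * t) / (Z0 + 1j * z_load * t)

      def with_shorted_stub(z_in, stub_len):
          """Parallel-connect a short-circuited stub (admittances add)."""
          z_stub = 1j * Z0 * np.tan(beta * stub_len)
          return 1.0 / (1.0 / z_in + 1.0 / z_stub)

      z_antenna = 4.0 + 30.0j                     # invented loaded-antenna impedance
      z_at_tee = line_transform(z_antenna, length=2.7)      # metres of line to the stub tee

      lengths = np.linspace(0.01, 3.0, 3000)
      gamma = np.array([abs((with_shorted_stub(z_at_tee, L) - Z0) /
                            (with_shorted_stub(z_at_tee, L) + Z0)) for L in lengths])
      best = lengths[gamma.argmin()]
      print(f"best single-stub length {best:.3f} m, |reflection| {gamma.min():.3f}")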

  13. ICRF antenna matching system with ferrite tuners for the Alcator C-Mod tokamak

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Binus, A.; Wukitch, S. J.; Koert, P.; Murray, R.; Pfeiffer, A.

    2015-12-01

    Real-time fast ferrite tuning (FFT) has been successfully implemented on the ICRF antennas on Alcator C-Mod. The former prototypical FFT system on the E-port 2-strap antenna has been upgraded using new ferrite tuners that have been designed specifically for the operational parameters of the Alcator C-Mod ICRF system (~80 MHz). Another similar FFT system, with two ferrite tuners and one fixed-length stub, has been installed on the transmission line of the D-port 2-strap antenna. These two systems share a Linux-server-based real-time controller. These FFT systems are able to achieve and maintain the reflected power to the transmitters at less than 1% in real time during the plasma discharges under almost all plasma conditions, and help ensure reliable high power operation of the antennas. The innovative field-aligned (FA) 4-strap antenna on J-port has been found to have an interesting feature of loading insensitivity vs. plasma conditions. This feature allows us to significantly improve the matching for the FA J-port antenna by installing carefully designed stubs on the two transmission lines. The reduction of the RF voltages in the transmission lines has enabled the FA J-port antenna to deliver 3.7 MW of RF power to plasmas out of the 4 MW source power in high performance I-mode plasmas.

  14. A proof-of-principle experiment of the ferroelectric tuner for the 1.3 GHz gun cavity

    SciTech Connect

    Hahn,H.; Choi, E.; Shchelkunov, S. V.; Hirshfield, J.; Kazakov, S.; Shschelkunov, S.

    2009-05-04

    A novel ferroelectric frequency tuner was developed by the Omega-P company and was tested at the Brookhaven National Laboratory on a 1.3 GHz RF cavity at room temperature. The tuner is based on the ferroelectric property of having a permittivity variable with an applied electric field. The achievable frequency tuning range can be estimated from the reactive impedance change due to an applied voltage via an S11 measurement at the tuner port. The frequency shift can be measured directly with an S21 measurement across the gun cavity with the tuner connected and activated. The frequency change due to an applied 5 kV obtained from the two methods is in reasonable agreement. The reactive impedance measurement yields a value in the range between 3.2 kHz and 14 kHz, while 9 kHz is the result from the direct measurement. A detailed description of the experiment and the analysis is given in the paper.

  15. Applications of Optimal Building Energy System Selection and Operation

    SciTech Connect

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy

    2011-04-01

    Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimization problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
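    As a rough illustration of the "pick the equipment fleet and its operating schedule with one mixed integer linear program" idea, the sketch below builds a toy model with binary purchase decisions and continuous per-period dispatch variables. It assumes the open-source PuLP package and uses invented costs, capacities, and a four-period load profile; it is not DER-CAM.

```python
# Toy illustration of the "select the equipment fleet and schedule it" MILP
# idea, in the spirit of (but far simpler than) DER-CAM. Assumes the PuLP
# package; all costs, capacities and the 4-period load profile are invented.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

load = [30.0, 50.0, 80.0, 60.0]          # kW demand in each period of a typical day
days = 365                               # number of typical days the schedule repeats
techs = {                                # name: (capital cost $, $/kWh, capacity kW)
    "microturbine": (5000.0, 0.12, 60.0),
    "pv":           (8000.0, 0.00, 40.0),
}
grid_price = 0.20                        # $/kWh purchased from the utility

prob = LpProblem("fleet_and_dispatch", LpMinimize)
buy = {k: LpVariable("buy_" + k, cat=LpBinary) for k in techs}
gen = {(k, t): LpVariable(f"gen_{k}_{t}", lowBound=0)
       for k in techs for t in range(len(load))}
imp = [LpVariable(f"import_{t}", lowBound=0) for t in range(len(load))]

# Objective: capital cost of purchased units plus annualized operating cost.
prob += (lpSum(techs[k][0] * buy[k] for k in techs)
         + days * lpSum(techs[k][1] * gen[k, t]
                        for k in techs for t in range(len(load)))
         + days * lpSum(grid_price * imp[t] for t in range(len(load))))

for t in range(len(load)):
    # Meet the load in every period.
    prob += lpSum(gen[k, t] for k in techs) + imp[t] >= load[t]
    # A unit can only run if it was purchased, and only up to its capacity.
    for k in techs:
        prob += gen[k, t] <= techs[k][2] * buy[k]

prob.solve()
print({k: int(value(buy[k])) for k in techs}, "total cost:", value(prob.objective))
```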

  16. Making the optimal decision in selecting protective clothing

    SciTech Connect

    Price, J. Mark

    2007-07-01

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered, including protection, personnel contamination, cost, radwaste, comfort, convenience, logistics/rad material considerations, reject rate of laundered clothing, durability, security, personnel safety (including heat stress), and disposition of gloves and booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials or switching to Single Use Protective Clothing (SUPC) from traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch. In other cases, after switching to SUPC for a period of time, some plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelating factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  17. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) have to be handled in the main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore, we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, while larger cases take ~4 s. BAS runs including FOs for 1000 different beam ensembles take ~15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
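    The essential computational pattern described above (split the dose-influence matrix into blocks, keep each block local to one worker, and sum the partial objective and gradient contributions every iteration) can be sketched in a few lines. The example below is a deliberately simplified, hypothetical illustration of that decomposition with random data and an ad hoc gradient step; it does not reproduce the paper's NUMA node binding, thread pinning, or DID sorting.

```python
# Simplified sketch of the data-decomposition idea behind fast fluence
# optimization: split the dose-influence matrix D (voxels x beamlets) into
# row blocks, keep each block resident with one worker, and sum the partial
# objective and gradient contributions each iteration.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

_block = {}                                   # per-process storage for one chunk

def _init(D_chunk, presc_chunk):
    _block["D"] = D_chunk
    _block["p"] = presc_chunk

def _partial(w):
    """Partial quadratic objective and gradient for this worker's voxel block."""
    D, p = _block["D"], _block["p"]
    r = D @ w - p                             # residual dose in this block
    return float(r @ r), 2.0 * (D.T @ r)      # (objective part, gradient part)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_vox, n_beamlets, n_workers = 8000, 400, 4
    D = rng.random((n_vox, n_beamlets))
    presc = rng.random(n_vox)
    D_chunks = np.array_split(D, n_workers)
    p_chunks = np.array_split(presc, n_workers)

    w = np.ones(n_beamlets)
    # One single-worker executor per chunk, so each worker keeps its block resident.
    pools = [ProcessPoolExecutor(1, initializer=_init, initargs=(Dc, pc))
             for Dc, pc in zip(D_chunks, p_chunks)]
    for it in range(20):                      # plain projected gradient descent
        parts = [pool.submit(_partial, w) for pool in pools]
        objs, grads = zip(*(f.result() for f in parts))
        w = np.maximum(w - 1e-6 * np.sum(grads, axis=0), 0.0)
    print("objective:", sum(objs))
    for pool in pools:
        pool.shutdown()
```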

  18. A dual molecular analogue tuner for dissecting protein function in mammalian cells

    PubMed Central

    Brosh, Ran; Hrynyk, Iryna; Shen, Jessalyn; Waghray, Avinash; Zheng, Ning; Lemischka, Ihor R.

    2016-01-01

    Loss-of-function studies are fundamental for dissecting gene function. Yet, methods to rapidly and effectively perturb genes in mammalian cells, and particularly in stem cells, are scarce. Here we present a system for simultaneous conditional regulation of two different proteins in the same mammalian cell. This system harnesses the plant auxin and jasmonate hormone-induced degradation pathways, and is deliverable with only two lentiviral vectors. It combines RNAi-mediated silencing of two endogenous proteins with the expression of two exogenous proteins whose degradation is induced by external ligands in a rapid, reversible, titratable and independent manner. By engineering molecular tuners for NANOG, CHK1, p53 and NOTCH1 in mammalian stem cells, we have validated the applicability of the system and demonstrated its potential to unravel complex biological processes. PMID:27230261

  19. A dual molecular analogue tuner for dissecting protein function in mammalian cells.

    PubMed

    Brosh, Ran; Hrynyk, Iryna; Shen, Jessalyn; Waghray, Avinash; Zheng, Ning; Lemischka, Ihor R

    2016-01-01

    Loss-of-function studies are fundamental for dissecting gene function. Yet, methods to rapidly and effectively perturb genes in mammalian cells, and particularly in stem cells, are scarce. Here we present a system for simultaneous conditional regulation of two different proteins in the same mammalian cell. This system harnesses the plant auxin and jasmonate hormone-induced degradation pathways, and is deliverable with only two lentiviral vectors. It combines RNAi-mediated silencing of two endogenous proteins with the expression of two exogenous proteins whose degradation is induced by external ligands in a rapid, reversible, titratable and independent manner. By engineering molecular tuners for NANOG, CHK1, p53 and NOTCH1 in mammalian stem cells, we have validated the applicability of the system and demonstrated its potential to unravel complex biological processes. PMID:27230261

  20. A quadratic weight selection algorithm. [for optimal flight control

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.

    1981-01-01

    A new numerical algorithm is presented which determines a positive semi-definite state weighting matrix in the linear-quadratic optimal control design problem. The algorithm chooses the weighting matrix by placing closed-loop eigenvalues and eigenvectors near desired locations using optimal feedback gains. A simplified flight control design example is used to illustrate the algorithm's capabilities.
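    The mechanism such an algorithm exploits is that the state-weighting matrix Q fixes the optimal LQR gain (through the algebraic Riccati equation) and therefore the closed-loop eigenvalues. The sketch below only illustrates that Q-to-eigenvalue mapping for an arbitrary second-order example; it does not implement the weight-selection algorithm of the abstract.

```python
# Illustration of how the LQR state weight Q shapes the closed-loop poles.
# The system matrices and weight scalings below are arbitrary examples.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])       # example plant dynamics
B = np.array([[0.0],
              [1.0]])
R = np.array([[1.0]])

for q in (1.0, 10.0, 100.0):       # progressively heavier state weighting
    Q = q * np.eye(2)
    P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain u = -Kx
    eigs = np.linalg.eigvals(A - B @ K)       # closed-loop eigenvalues
    print(f"q = {q:6.1f}  closed-loop eigenvalues: {np.round(eigs, 3)}")
```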

  1. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed on the classifier by using many features. Selection of an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem: for n features, the total number of possible subsets is 2^n, so selection of an optimal subset of features belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCCs features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCCs samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCCs samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
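    For readers unfamiliar with how a wrapper-style search over the 2^n subsets is set up, the sketch below shows a bare-bones genetic algorithm over binary feature masks with cross-validated SVM accuracy as the fitness function, mirroring the GA variant compared above. The data are synthetic and the population size, generation count, and mutation rate are arbitrary; none of this reflects the DDSM features or the settings used in the paper.

```python
# Bare-bones genetic algorithm for binary feature-subset selection with a
# cross-validated SVM accuracy as fitness. Synthetic data; parameters arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_gen, p_mut = 30, 25, 0.02
pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]             # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < p_mut         # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "fitness:", fitness(best))
```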

  2. 75 FR 39437 - Optimizing the Security of Biological Select Agents and Toxins in the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    Executive Order 13546 of July 2, 2010 -- Optimizing the Security of Biological Select Agents and Toxins in the United States. [FR Doc. 2010-16864; Filed 7-7-10; 11:15 am; Billing code 3195-W0-P]

  3. To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom

    ERIC Educational Resources Information Center

    Ray, Darrell L.

    2010-01-01

    Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…

  4. Polyhedral Interpolation for Optimal Reaction Control System Jet Selection

    NASA Technical Reports Server (NTRS)

    Gefert, Leon P.; Wright, Theodore

    2014-01-01

    An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.

  5. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    ERIC Educational Resources Information Center

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  6. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  7. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-02-01

    Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
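    A generic way to see why a small subset can (or cannot) reproduce the spread of a large ensemble is to select simulations greedily so that each new pick is as far as possible, in a standardized climate-signal space, from those already chosen. The sketch below illustrates that max-min idea on random data; it is not either of the two optimal selection methods evaluated in the paper.

```python
# Generic sketch: pick a small subset of simulations that spans the spread of
# the full ensemble via greedy max-min (farthest-point) selection on
# standardized climate-change signals. Illustration only; random data.
import numpy as np

rng = np.random.default_rng(0)
# Rows: simulations; columns: change signals (e.g. delta-T, delta-P, ...).
signals = rng.normal(size=(100, 4))
z = (signals - signals.mean(0)) / signals.std(0)      # standardize each variable

def greedy_maxmin(points, k):
    # Start from the simulation farthest from the ensemble mean.
    chosen = [int(np.argmax(np.linalg.norm(points - points.mean(0), axis=1)))]
    while len(chosen) < k:
        d = np.min(np.linalg.norm(points[:, None] - points[chosen], axis=2), axis=1)
        chosen.append(int(np.argmax(d)))               # farthest from current subset
    return chosen

subset = greedy_maxmin(z, k=10)
print("selected simulations:", subset)
print("subset range / full range per variable:")
print(np.ptp(z[subset], axis=0) / np.ptp(z, axis=0))
```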

  8. Optimization of Swine Breeding Programs Using Genomic Selection with ZPLAN+.

    PubMed

    Lopez, B M; Kang, H S; Kim, T H; Viterbo, V S; Kim, H S; Na, C S; Seo, K S

    2016-05-01

    The objective of this study was to evaluate the present conventional selection program of a swine nucleus farm and compare it with a new selection strategy employing genomic enhanced breeding value (GEBV) as the selection criteria. The ZPLAN+ software was employed to calculate and compare the genetic gain, total cost, return and profit of each selection strategy. The first strategy reflected the current conventional breeding program, which was a progeny test system (CS). The second strategy was a selection scheme based strictly on genomic information (GS1). The third scenario was the same as GS1, but the selection by GEBV was further supplemented by the performance test (GS2). The last scenario was a mixture of genomic information and progeny tests (GS3). The results showed that the accuracy of the selection index of young boars of GS1 was 26% higher than that of CS. On the other hand, both GS2 and GS3 gave 31% higher accuracy than CS for young boars. The annual monetary genetic gain of GS1, GS2 and GS3 was 10%, 12%, and 11% higher, respectively, than that of CS. As expected, the discounted costs of genomic selection strategies were higher than those of CS. The costs of GS1, GS2 and GS3 were 35%, 73%, and 89% higher than those of CS, respectively, assuming a genotyping cost of $120. As a result, the discounted profit per animal of GS1 and GS2 was 8% and 2% higher, respectively, than that of CS while GS3 was 6% lower. Comparison among genomic breeding scenarios revealed that GS1 was more profitable than GS2 and GS3. The genomic selection schemes, especially GS1 and GS2, were clearly superior to the conventional scheme in terms of monetary genetic gain and profit. PMID:26954222

  9. Optimization of Swine Breeding Programs Using Genomic Selection with ZPLAN+

    PubMed Central

    Lopez, B. M.; Kang, H. S.; Kim, T. H.; Viterbo, V. S.; Kim, H. S.; Na, C. S.; Seo, K. S.

    2016-01-01

    The objective of this study was to evaluate the present conventional selection program of a swine nucleus farm and compare it with a new selection strategy employing genomic enhanced breeding value (GEBV) as the selection criteria. The ZPLAN+ software was employed to calculate and compare the genetic gain, total cost, return and profit of each selection strategy. The first strategy reflected the current conventional breeding program, which was a progeny test system (CS). The second strategy was a selection scheme based strictly on genomic information (GS1). The third scenario was the same as GS1, but the selection by GEBV was further supplemented by the performance test (GS2). The last scenario was a mixture of genomic information and progeny tests (GS3). The results showed that the accuracy of the selection index of young boars of GS1 was 26% higher than that of CS. On the other hand, both GS2 and GS3 gave 31% higher accuracy than CS for young boars. The annual monetary genetic gain of GS1, GS2 and GS3 was 10%, 12%, and 11% higher, respectively, than that of CS. As expected, the discounted costs of genomic selection strategies were higher than those of CS. The costs of GS1, GS2 and GS3 were 35%, 73%, and 89% higher than those of CS, respectively, assuming a genotyping cost of $120. As a result, the discounted profit per animal of GS1 and GS2 was 8% and 2% higher, respectively, than that of CS while GS3 was 6% lower. Comparison among genomic breeding scenarios revealed that GS1 was more profitable than GS2 and GS3. The genomic selection schemes, especially GS1 and GS2, were clearly superior to the conventional scheme in terms of monetary genetic gain and profit. PMID:26954222

  10. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Lang, V. T.; Choi, S. B.

    2015-06-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass.
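    The optimization setup described above (minimize mass subject to a minimum braking-torque requirement) can be expressed directly with a general-purpose constrained optimizer. The sketch below does exactly that, with placeholder analytic mass and torque functions standing in for the paper's finite-element, Herschel-Bulkley-based model; the numbers have no physical meaning.

```python
# Generic constrained-design sketch: minimize brake mass subject to a minimum
# braking-torque requirement. The placeholder mass() and torque() functions
# stand in for the paper's FE/Herschel-Bulkley model and have no physical
# validity; they only show the optimization setup.
import numpy as np
from scipy.optimize import minimize

T_REQUIRED = 10.0     # required braking torque (arbitrary units)

def mass(x):
    r, w = x          # design variables: disc radius, housing width (placeholders)
    return 7800.0 * np.pi * r**2 * w           # steel-like disc mass proxy

def torque(x):
    r, w = x
    return 4.0e3 * r**3 + 1.5e3 * r**2 * w     # made-up monotone torque proxy

res = minimize(
    mass,
    x0=np.array([0.1, 0.05]),
    method="SLSQP",
    bounds=[(0.02, 0.25), (0.01, 0.10)],
    constraints=[{"type": "ineq", "fun": lambda x: torque(x) - T_REQUIRED}],
)
print("optimal design:", res.x, "mass:", mass(res.x), "torque:", torque(res.x))
```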

  11. Self-Selection, Optimal Income Taxation, and Redistribution

    ERIC Educational Resources Information Center

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  12. Optimizing drilling performance using a selected drilling fluid

    DOEpatents

    Judzis, Arnis; Black, Alan D.; Green, Sidney J.; Robertson, Homer A.; Bland, Ronald G.; Curry, David Alexander; Ledgerwood, III, Leroy W.

    2011-04-19

    To improve drilling performance, a drilling fluid is selected based on one or more criteria and to have at least one target characteristic. Drilling equipment is used to drill a wellbore, and the selected drilling fluid is provided into the wellbore during drilling with the drilling equipment. The at least one target characteristic of the drilling fluid includes an ability of the drilling fluid to penetrate into formation cuttings during drilling to weaken the formation cuttings.

  13. Optimizing selection with several constraints in poultry breeding.

    PubMed

    Chapuis, H; Pincent, C; Colleau, J J

    2016-02-01

    Poultry breeding schemes permanently face the need to control the evolution of coancestry and some critical traits, while selecting for a main breeding objective. The main aims of this article are, first, to present an efficient selection algorithm adapted to this situation and, then, to measure how the severity of the constraints impacted the degree of loss for the main trait, compared to BLUP selection on the main trait without any constraint. Broiler dam and sire line schemes were mimicked by simulation over 10 generations, and selection was carried out on the main trait under constraints for coancestry and for another trait, antagonistic with the main trait. The selection algorithm was a special form of simulated annealing (adaptive simulated annealing, ASA). It was found to be rapid and able to meet constraints very accurately. A constraint on the second trait was found to induce an impact similar to or even greater than the impact of the constraint on coancestry. The family structure of selected poultry populations made it easy to control the evolution of coancestry at a reasonable cost but was not as useful for reducing the cost of controlling the evolution of the antagonistic traits. Multiple constraints impacted the genetic gain for the main trait almost additively. Adding constraints for several traits would therefore be justified in real-life breeding schemes, possibly after evaluating their impact through simulated annealing. PMID:26220593

  14. Optimization of Metamaterial Selective Emitters for Use in Thermophotovoltaic Applications

    NASA Astrophysics Data System (ADS)

    Pfiester, Nicole A.

    The increasing costs of fossil fuels, both financial and environmental, have motivated many to look into sustainable energy sources. Thermophotovoltaics (TPVs), specialized photovoltaic cells focused on the infrared range, offer an opportunity to achieve both primary energy capture, similar to traditional photovoltaics, and secondary energy capture in the form of waste heat. However, to become a feasible energy source, TPV systems must become more efficient. One way to do this is through the development of selective emitters tailored to the bandgap of the TPV diode in question. This thesis proposes the use of metamaterial emitters as engineerable, highly selective emitters that can withstand the temperatures required to collect waste heat. Metamaterial devices made of platinum and a dielectric such as alumina or silicon nitride were initially designed and tested as perfect absorbers. High temperature robustness testing demonstrates the device's ability to withstand the rigors of operating as a selective emitter.

  15. Optimal search-based gene subset selection for gene array cancer classification.

    PubMed

    Li, Jiexun; Su, Hua; Chen, Hsinchun; Futscher, Bernard W

    2007-07-01

    High dimensionality has been a major problem for gene array-based cancer classification. It is critical to identify marker genes for cancer diagnoses. We developed a framework of gene selection methods based on previous studies. This paper focuses on optimal search-based subset selection methods because they evaluate the group performance of genes and help to pinpoint global optimal set of marker genes. Notably, this paper is the first to introduce tabu search (TS) to gene selection from high-dimensional gene array data. Our comparative study of gene selection methods demonstrated the effectiveness of optimal search-based gene subset selection to identify cancer marker genes. TS was shown to be a promising tool for gene subset selection. PMID:17674622

  16. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang was combined to explore the coordinate relationship among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute error of BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute error of PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129

  17. Selection of optimal auxiliary soil nutrient variables for Cokriging interpolation.

    PubMed

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang was combined to explore the coordinate relationship among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute error of BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute error of PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
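    The variable-screening step described in these two records (keep as auxiliary variables only those candidates whose correlation with the target nutrient is strong) is straightforward to prototype. The sketch below ranks synthetic candidate variables by absolute correlation with a synthetic target and keeps the strongest ones; the 0.3 cutoff is arbitrary and the Cokriging interpolation itself is not shown.

```python
# Simple illustration of the BAV idea: rank candidate auxiliary variables
# (terrain parameters, trace elements, other nutrients) by the strength of
# their correlation with the target nutrient and keep the top-ranked ones.
# Data are synthetic; the Cokriging step is not shown.
import numpy as np

rng = np.random.default_rng(0)
n = 670                                        # sample count, as in the study
elevation = rng.normal(200, 30, n)
slope = rng.normal(10, 3, n)
zinc = rng.normal(60, 15, n)
organic_matter = 0.02 * elevation - 0.1 * slope + rng.normal(0, 1, n)

candidates = {"elevation": elevation, "slope": slope, "zinc": zinc}
corr = {name: abs(np.corrcoef(v, organic_matter)[0, 1])
        for name, v in candidates.items()}
ranked = sorted(corr.items(), key=lambda kv: kv[1], reverse=True)
print("ranked candidates:", ranked)
print("BAVs (|r| above 0.3, say):", [k for k, r in ranked if r > 0.3])
```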

  18. Selecting radiotherapy dose distributions by means of constrained optimization problems.

    PubMed

    Alfonso, J C L; Buttazzo, G; García-Archilla, B; Herrero, M A; Núñez, L

    2014-05-01

    The main steps in planning radiotherapy consist in selecting, for any patient diagnosed with a solid tumor, (i) a prescribed radiation dose on the tumor, (ii) bounds on the radiation side effects on nearby organs at risk and (iii) a fractionation scheme specifying the number and frequency of therapeutic sessions during treatment. The goal of any radiotherapy treatment is to deliver on the tumor a radiation dose as close as possible to that selected in (i), while at the same time conforming to the constraints prescribed in (ii). To this day, considerable uncertainties remain concerning the best manner in which such issues should be addressed. In particular, the choice of a prescription radiation dose is mostly based on clinical experience accumulated on the particular type of tumor considered, without any direct reference to quantitative radiobiological assessment. Interestingly, mathematical models for the effect of radiation on biological matter have existed for quite some time, and are widely acknowledged by clinicians. However, the difficulty of obtaining accurate in vivo measurements of the radiobiological parameters involved has severely restricted their direct application in current clinical practice. In this work, we first propose a mathematical model to select radiation dose distributions as solutions (minimizers) of suitable variational problems, under the assumption that the key radiobiological parameters for the tumors and organs at risk involved are known. Second, by analyzing the dependence of such solutions on the parameters involved, we then discuss the manner in which the use of those minimizers can improve current decision-making processes to select clinical dosimetries when (as is generally the case) only partial information on model radiosensitivity parameters is available. A comparison of the proposed radiation dose distributions with those actually delivered in a number of clinical cases strongly suggests that solutions of our mathematical model can be

  19. Optimizing selection of decentralized stormwater management strategies in urbanized regions

    NASA Astrophysics Data System (ADS)

    Yu, Z.; Montalto, F.

    2011-12-01

    A variety of decentralized stormwater options are available for implementation in urbanized regions. These strategies, which include bio-retention, porous pavement, green roofs, etc., vary in terms of cost, ability to reduce runoff, and site applicability. This paper explores the trade-offs between different types of stormwater control measures that could be applied in a typical urban study area. A nested optimization strategy first identifies the most cost-effective (e.g., runoff reduction per life-cycle cost invested) options for individual land parcel typologies, and then scales up the results with detailed attention paid to uncertainty in adoption rates, life-cycle costs, and hydrologic performance. The study is performed with a custom-built stochastic rainfall-runoff model (Monte Carlo techniques are used to quantify uncertainties associated with phased implementation of different strategies and different land parcel typologies under synthetic precipitation ensembles). The results are presented as a comparison of cost-effectiveness over a time span of 30 years, together with an optimized strategy based on cumulative cost-effectiveness over that period.

  20. A high-speed mixed-signal down-scaling circuit for DAB tuners

    NASA Astrophysics Data System (ADS)

    Lu, Tang; Zhigong, Wang; Jiahui, Xuan; Yang, Yang; Jian, Xu; Yong, Xu

    2012-07-01

    A high-speed mixed-signal down-scaling circuit with low power consumption and low phase noise for use in digital audio broadcasting tuners has been realized and characterized. Several new circuit techniques are adopted to improve its performance. A dual-modulus prescaler (DMP) with low phase noise is realized with an improved source-coupled logic (SCL) D-flip-flop (DFF) in the synchronous divider and an improved complementary metal oxide semiconductor master-slave (CMOS MS) DFF in the asynchronous divider. A new, more accurate wire-load model is used to realize the pulse-swallow counter (PS counter). Fabricated in a 0.18-μm CMOS process, the total chip size is 0.6 × 0.2 mm2. The DMP in the proposed down-scaling circuit exhibits a low phase noise of -118.2 dBc/Hz at 10 kHz offset from the carrier frequency. At a supply voltage of 1.8 V, the power consumption of the down-scaling circuit's core part is only 2.7 mW.

  1. Update on RF System Studies and VCX Fast Tuner Work for the RIA Drive Linac

    SciTech Connect

    Rusnak, B; Shen, S

    2003-05-06

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Whereas most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. While simply overcoupling the coupler to the cavity can mitigate this problem to some degree, system studies indicate that for the low-β driver linac alone, this approach may cost 50% or more compared to an RF system employing a voltage-controlled reactance (VCX) fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab, is presented here.

  2. Optimization of gene sequences under constant mutational pressure and selection

    NASA Astrophysics Data System (ADS)

    Kowalczuk, M.; Gierlik, A.; Mackiewicz, P.; Cebrat, S.; Dudek, M. R.

    1999-12-01

    We have analyzed the influence of constant mutational pressure and selection on the nucleotide composition of DNA sequences of various sizes, represented by the genes of the Borrelia burgdorferi genome. With the help of Monte Carlo simulations we have found that longer DNA sequences accumulate far fewer base substitutions per unit sequence length than short sequences. This leads us to the conclusion that the accuracy of replication may determine the size of the genome.

  3. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2008-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, defendable sensor suite to address system health assessment requirements.

  4. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2007-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.

  5. Automated selection of appropriate pheromone representations in ant colony optimization.

    PubMed

    Montgomery, James; Randall, Marcus; Hendtlass, Tim

    2005-01-01

    Ant colony optimization (ACO) is a constructive metaheuristic that uses an analogue of ant trail pheromones to learn about good features of solutions. Critically, the pheromone representation for a particular problem is usually chosen intuitively rather than by following any systematic process. In some representations, distinct solutions appear multiple times, increasing the effective size of the search space and potentially misleading ants as to the true learned value of those solutions. In this article, we present a novel system for automatically generating appropriate pheromone representations, based on the characteristics of the problem model that ensures unique pheromone representation of solutions. This is the first stage in the development of a generalized ACO system that could be applied to a wide range of problems with little or no modification. However, the system we propose may be used in the development of any problem-specific ACO algorithm. PMID:16053571

  6. Selection of optimal composition-control parameters for friable materials

    SciTech Connect

    Pak, Yu.N.; Vdovkin, A.V.

    1988-05-01

    A method for composition analysis of coal and minerals is proposed which uses scattered gamma radiation and does away with preliminary sample preparation to ensure homogeneous particle density, surface area, and size. Reduction of the error induced by material heterogeneity has previously been achieved by rotation of the control object during analysis. A further refinement is proposed which addresses the necessity that the contribution of the radiation scattered from each individual surface to the total intensity be the same. This is achieved by providing a constant linear rate of travel for the irradiated spot through back-and-forth motion of the sensor. An analytical expression is given for the laws of motion for the sensor and test tube which provides for uniform irradiated area movement along a path analogous to the Archimedes spiral. The relationships obtained permit optimization of measurement parameters in analyzing friable materials which are not uniform in grain size.

  7. A method to optimize selection on multiple identified quantitative trait loci

    PubMed Central

    Chakraborty, Reena; Moreau, Laurence; Dekkers, Jack CM

    2002-01-01

    A mathematical approach was developed to model and optimize selection on multiple known quantitative trait loci (QTL) and polygenic estimated breeding values in order to maximize a weighted sum of responses to selection over multiple generations. The model allows for linkage between QTL with multiple alleles and arbitrary genetic effects, including dominance, epistasis, and gametic imprinting. Gametic phase disequilibrium between the QTL and between the QTL and polygenes is modeled but polygenic variance is assumed constant. Breeding programs with discrete generations, differential selection of males and females and random mating of selected parents are modeled. Polygenic EBV obtained from best linear unbiased prediction models can be accommodated. The problem was formulated as a multiple-stage optimal control problem and an iterative approach was developed for its solution. The method can be used to develop and evaluate optimal strategies for selection on multiple QTL for a wide range of situations and genetic models. PMID:12081805

  8. Determination of an Optimal Recruiting-Selection Strategy to Fill a Specified Quota of Satisfactory Personnel.

    ERIC Educational Resources Information Center

    Sands, William A.

    Managers of military and civilian personnel systems justifiably demand an estimate of the payoff in dollars and cents, which can be expected to result from the implementation of a proposed selection program. The Cost of Attaining Personnel Requirements (CAPER) Model provides an optimal recruiting-selection strategy for personnel decisions which…

  9. SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

    PubMed Central

    Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.

    2015-01-01

    We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to $\min_{b \in \mathbb{R}^p} \tfrac{1}{2}\|y - Xb\|_{\ell_2}^2 + \lambda_1 |b|_{(1)} + \lambda_2 |b|_{(2)} + \cdots + \lambda_p |b|_{(p)}$, where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p \geq 0$ and $|b|_{(1)} \geq |b|_{(2)} \geq \cdots \geq |b|_{(p)}$ are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence $\{\lambda_i\}$ is given by the BH critical values $\lambda_{\mathrm{BH}}(i) = z(1 - i \cdot q / (2p))$, where q ∈ (0, 1) and z(α) is the α-quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with $\lambda_{\mathrm{BH}}$ provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
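    To make the λ sequence concrete, the short sketch below computes the BH-style critical values λ_BH(i) = z(1 - i·q/(2p)) with the standard normal quantile function and evaluates the resulting sorted-ℓ1 penalty for a random coefficient vector. It covers only these two ingredients; the full SLOPE solver (the proximal step for the sorted-ℓ1 norm) is not reproduced.

```python
# Sketch of two ingredients of SLOPE described above: the BH-style lambda
# sequence lambda_BH(i) = z(1 - i*q/(2p)) and the sorted-l1 penalty it defines.
# The full SLOPE solver is not reproduced here.
import numpy as np
from scipy.stats import norm

p, q = 20, 0.10
i = np.arange(1, p + 1)
lam = norm.ppf(1 - i * q / (2 * p))       # decreasing sequence lambda_1 >= ... >= lambda_p

b = np.random.default_rng(0).normal(size=p)
sorted_abs = np.sort(np.abs(b))[::-1]     # |b|_(1) >= |b|_(2) >= ...
penalty = float(lam @ sorted_abs)         # sum_i lambda_i * |b|_(i)

print("lambda_BH:", np.round(lam, 3))
print("sorted-l1 penalty:", round(penalty, 3))
```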

  10. Selection for optimal crew performance - Relative impact of selection and training

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.

    1987-01-01

    An empirical study supporting Helmreich's (1986) theoretical work on the distinct manner in which training and selection impact crew coordination is presented. Training is capable of changing attitudes, while selection screens for stable personality characteristics. Training appears least effective for leadership, an area strongly influenced by personality. Selection is least effective for influencing attitudes about personal vulnerability to stress, which appear to be trained in resource management programs. Because personality correlates with attitudes before and after training, it is felt that selection may be necessary even with a leadership-oriented training curriculum.

  11. Selection of optimal muscle set for 16-channel standing neuroprosthesis

    PubMed Central

    Gartman, Steven J.; Audu, Musa L.; Kirsch, Robert F.; Triolo, Ronald J.

    2009-01-01

    The Case Western Reserve University/Department of Veterans Affairs 8-channel lower-limb neuroprosthesis can restore standing to selected individuals with paraplegia by application of functional electrical stimulation. The second generation of this system will include 16 channels of stimulation and a closed-loop control scheme to provide automatic postural corrections. This study used a musculoskeletal model of the legs and trunk to determine which muscles to target with the new system in order to maximize the range of postures that can be statically maintained, which should increase the system’s ability to provide adequate support to maintain standing when the user’s posture moves away from a neutral stance, either by an external disturbance or a volitional change in posture by the user. The results show that the prime muscle targets should be the medial gastrocnemius, tibialis anterior, vastus lateralis, semimembranosus, gluteus maximus, gluteus medius, adductor magnus, and erector spinae. This set of 16 muscles supports 42 percent of the standing postures that are attainable by the nondisabled model. Coactivation of the lateral gastrocnemius and peroneus longus with the medial gastrocnemius and of the peroneus tertius with the tibialis anterior increased the percentage of feasible postures to 71 percent. PMID:16847793

  12. Optimizing landfill site selection by using land classification maps.

    PubMed

    Eskandari, M; Homaee, M; Mahmoodi, S; Pazira, E; Van Genuchten, M Th

    2015-05-01

    Municipal solid waste disposal is a major environmental concern throughout the world. Proper landfill siting involves many environmental, economic, technical, and sociocultural challenges. In this study, a new quantitative method for landfill siting that reduces the number of evaluation criteria, simplifies siting procedures, and enhances the utility of available land evaluation maps was proposed. The method is demonstrated by selecting a suitable landfill site near the city of Marvdasht in Iran. The approach involves two separate stages. First, necessary criteria for preliminary landfill siting using four constraints and eight factors were obtained from a land classification map initially prepared for irrigation purposes. Thereafter, the criteria were standardized using a rating approach and then weighted to obtain a suitability map for landfill siting, with ratings in a 0-1 domain and divided into five suitability classes. Results were almost identical to those obtained with a more traditional environmental landfill siting approach. Because of far fewer evaluation criteria, the proposed weighting method was much easier to implement while producing a more convincing database for landfill siting. The classification map also considered land productivity. In the second stage, the six best alternative sites were evaluated for final landfill siting using four additional criteria. Sensitivity analyses were furthermore conducted to assess the stability of the obtained ranking. Results indicate that the method provides a precise siting procedure that should convince all pertinent stakeholders. PMID:25666474

  13. Plastic scintillation dosimetry: Optimal selection of scintillating fibers and scintillators

    SciTech Connect

    Archambault, Louis; Arsenault, Jean; Gingras, Luc; Sam Beddar, A.; Roy, Rene; Beaulieu, Luc

    2005-07-15

    Scintillation dosimetry is a promising avenue for evaluating dose patterns delivered by intensity-modulated radiation therapy plans or for the small fields involved in stereotactic radiosurgery. However, increasing the signal intensity has been the goal of many authors. In this paper, a comparison is made between plastic scintillating fibers and plastic scintillators. The collection of scintillation light was measured experimentally for four commercial models of scintillating fibers (BCF-12, BCF-60, SCSF-78, SCSF-3HF) and two models of plastic scintillators (BC-400, BC-408). The emission spectra of all six scintillators were obtained by using an optical spectrum analyzer and they were compared with theoretical behavior. For scintillation in the blue region, the signal intensity of a singly clad scintillating fiber (BCF-12) was 120% of that of the plastic scintillator (BC-400). For the multiclad fiber (SCSF-78), the signal reached 144% of that of the plastic scintillator. The intensity of the green scintillating fibers was lower than that of the plastic scintillator: 47% for the singly clad fiber (BCF-60) and 77% for the multiclad fiber (SCSF-3HF). The collected light was studied as a function of the scintillator length and radius for a cylindrical probe. We found that symmetric detectors with nearly the same spatial resolution in each direction (2 mm in diameter by 3 mm in length) could be made with a signal equivalent to those of the more commonly used asymmetric scintillators. With the signal-to-noise ratio in mind, this paper presents a series of comparisons that should provide insight into the selection of a scintillator type and volume for the development of a medical dosimeter.

  14. Biomass selection for optimal anaerobic treatment of olive mill wastewater.

    PubMed

    Sabbah, I; Yazbak, A; Haj, J; Saliba, A; Basheer, S

    2005-01-01

    This research was conducted to identify the most efficient biomass out of five different types of biomass sources for anaerobic treatment of Olive Mill Wastewater (OMW). This study was first focused on examining the selected biomass in anaerobic batch systems with sodium acetate solutions (control study). Then, the different types of biomass were tested with raw OMW (water-diluted) and with pretreated OMW by coagulation-flocculation using Poly Aluminum Chloride (PACl) combined with hydrated lime (Ca(OH)2). Two types of biomass from wastewater treatment systems of a citrus juice producing company "PriGat" and from a citric acid manufacturing factory "Gadot", were found to be the most efficient sources of microorganisms to anaerobically treat both sodium acetate solution and OMW. Both types of biomass were examined under different concentration ranges (1-40 g l(-1)) of OMW in order to detect the maximal COD tolerance for the microorganisms. The results show that 70-85% of COD removal was reached using Gadot biomass after 8-10 days when the initial concentration of OMW was up to 5 g l(-1), while a similar removal efficiency was achieved using OMW of initial COD concentration of 10 g l(-1) in 2-4 days of contact time with the PriGat biomass. The physico-chemical pretreatment of OMW was found to enhance the anaerobic activity for the treatment of OMW with initial concentration of 20 g l(-1) using PriGat biomass. This finding is attributed to reducing the concentrations of polyphenols and other toxicants originally present in OMW upon the applied pretreatment process. PMID:15747599

  15. Dose selection for optimal treatment results and avoidance of complications.

    PubMed

    Nagano, Hisato; Nakayama, Satoshi; Shuto, Takashi; Asada, Hiroyuki; Inomori, Shigeo

    2009-01-01

    What is the optimal treatment for metastatic brain tumors (MBTs)? We present our experience with gamma knife (GK) treatments for patients with five or more MBTs. Our new formula for predicting patient survival time (ST), which was derived by combining tumor control probability (TCP) calculated by Colombo's formula and normal tissue complication probability (NTCP) estimated by Flickinger's integrated logistic formula, was also evaluated. ST=a*[(C-NTCP)*TCP]+b; a, b, C: const. Forty-one patients (23 male, 18 female) with more than five MBTs were treated between March 1992 and February 2000. The tumors originated in the lung in 15 cases, in the breast in 8. Four patients had previously undergone whole brain irradiation (WBI). Ten patients were given concomitant WBI. Thirteen patients had additional extracranial metastatic lesions. TCP and NTCP were calculated using Excel add-in software. Cox's proportional hazards model was used to evaluate correlations between certain variables and ST. The independent variables evaluated were patient factors (age in years and performance status), tumor factors (total volume and number of tumors in each patient), treatment factors (TCP, NTCP and marginal dose) and the values of (C-NTCP)*TCP. Total tumor number was 403 (median 7, range 5-56). The median total tumor volume was 9.8 cm3 (range 0.8-111.8 cm3). The marginal dose ranged from 8 to 22 Gy (median 16.0Gy), TCP from 0.0% to 83% (median 15%) and NTCP from 0.0% to 31% (median 6.0%). (0.39-NTCP)*TCP ranged from 0.0 to 0.21 (median 0.055). Follow-up was 0.2 to 26.2 months, with a median of 5.4 months. Multiple-sample tests revealed no differences in STs among patients with MBTs of different origins (p=0.50). The 50% STs of patients with MBTs originating from the breast, lung and other sites were 5.9, 7.8 and 3.5 months, respectively. Only TCP and (0.39-NTCP)*TCP were statistically significant covariates (p=0.014, 0.001, respectively), and the latter was a more important predictor of

  16. Optimal neural network architecture selection: effects on computer-aided detection of mammographic microcalcifications

    NASA Astrophysics Data System (ADS)

    Gurcan, Metin N.; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Petrick, Nicholas; Helvie, Mark A.

    2002-05-01

    We evaluated the effectiveness of an optimal convolution neural network (CNN) architecture selected by simulated annealing for improving the performance of a computer-aided diagnosis (CAD) system designed for the detection of microcalcification clusters on digitized mammograms. The performances of the CAD programs with manually and optimally selected CNNs were compared using an independent test set. This set included 472 mammograms and contained 253 biopsy-proven malignant clusters. Free-response receiver operating characteristic (FROC) analysis was used for evaluation of the detection accuracy. At a false positive (FP) rate of 0.7 per image, the film-based sensitivity was 84.6% with the optimized CNN, in comparison with 77.2% with the manually selected CNN. If clusters having images in both craniocaudal and mediolateral oblique views were analyzed together and a cluster was considered to be detected when it was detected in one or both views, at 0.7 FPs/image, the sensitivity was 93.3% with the optimized CNN and 87.0% with the manually selected CNN. This study indicates that classification of true positive and FP signals is an important step of the CAD program and that the detection accuracy of the program can be considerably improved by optimizing this step with an automated optimization algorithm.

  17. Modeling Network Intrusion Detection System Using Feature Selection and Parameters Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Dong Seong; Park, Jong Sou

    Previous approaches for modeling Intrusion Detection Systems (IDS) have been twofold: improving detection model(s) in terms of (i) feature selection of audit data through wrapper and filter methods and (ii) parameter optimization of the detection model design, based on classification, clustering algorithms, etc. In this paper, we present three approaches to model IDS in the context of feature selection and parameter optimization. First, we present the Fusion of Genetic Algorithm (GA) and Support Vector Machines (SVM) (FuGAS), which combines GA and SVM through genetic operations and is capable of building an optimal detection model with only the selected important features and optimal parameter values. Second, we present Correlation-based Hybrid Feature Selection (CoHyFS), which uses a filter method in conjunction with GA for feature selection in order to reduce the long training time. Third, we present Simultaneous Intrinsic Model Identification (SIMI), which adopts Random Forest (RF) and achieves better intrusion detection rates and feature selection results, with no additional computational overhead. We show experimental results and analysis of the three approaches on the KDD 1999 intrusion detection datasets.

  18. Debris Selection and Optimal Path Planning for Debris Removal on the SSO: Impulsive-Thrust Option

    NASA Astrophysics Data System (ADS)

    Olympio, J. T.; Frouvelle, N.

    2013-08-01

    The current paper deals with the mission design of a generic active space debris removal spacecraft. The debris considered are all on a sun-synchronous orbit. A perturbed Lambert's problem, modelling the transfer between two debris, is devised to take the J2 perturbation into account and to quickly evaluate mission scenarios. A robust approach, using global optimisation techniques, is followed to find the optimal debris sequence and mission strategy. Manoeuvre optimisation is then performed to refine the selected trajectory scenarios.

  19. selectSNP – An R package for selecting SNPs optimal for genetic evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    There has been a huge increase in the number of SNPs in the public repositories. This has made it a challenge to design low and medium density SNP panels, which requires careful selection of available SNPs considering many criteria, such as map position, allelic frequency, possible biological functi...

  20. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of data, and a band/feature selection method needs to be used for selecting an optimal subset of the original data bands. This study examined the efficiency of a genetic algorithm (GA) in band selection for remote sensing classification. A GA-based band selection algorithm was designed in which a Bhattacharyya distance index, indicating the separability between the classes of interest, is used as the fitness function. A binary string chromosome is used in which each gene location has a value of 1 if the corresponding band is included in the subset and 0 if it is not. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
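    A minimal sketch of the described GA, assuming Gaussian class statistics for the Bhattacharyya-distance fitness and synthetic two-class data in place of the Chocolate Mountain imagery; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class "hyperspectral" samples: 100 samples x 20 bands per class
# (stand-ins for the lithologic classes of interest).
n_bands = 20
X1 = rng.normal(0.0, 1.0, size=(100, n_bands))
X2 = rng.normal(0.5, 1.0, size=(100, n_bands))

def bhattacharyya(mask):
    """Bhattacharyya distance between the two classes on the selected bands,
    assuming Gaussian class statistics (used here as the GA fitness)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0
    a, b = X1[:, idx], X2[:, idx]
    m = a.mean(axis=0) - b.mean(axis=0)
    s1 = np.cov(a, rowvar=False).reshape(idx.size, idx.size)
    s2 = np.cov(b, rowvar=False).reshape(idx.size, idx.size)
    s = 0.5 * (s1 + s2)
    term1 = 0.125 * m @ np.linalg.solve(s, m)
    term2 = 0.5 * (np.linalg.slogdet(s)[1]
                   - 0.5 * (np.linalg.slogdet(s1)[1] + np.linalg.slogdet(s2)[1]))
    return float(term1 + term2)

def tournament(fit):
    """Binary tournament selection: return the fitter of two random individuals."""
    i, j = rng.choice(len(fit), size=2, replace=False)
    return i if fit[i] >= fit[j] else j

def ga_band_selection(pop_size=30, n_gen=40, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, n_bands))
    for _ in range(n_gen):
        fit = np.array([bhattacharyya(ind) for ind in pop])
        parents = pop[[tournament(fit) for _ in range(pop_size)]]
        children = parents.copy()
        # Single-point crossover on consecutive parent pairs.
        for i in range(0, pop_size - 1, 2):
            cut = int(rng.integers(1, n_bands))
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # Bit-flip mutation.
        children[rng.random(children.shape) < p_mut] ^= 1
        pop = children
    fit = np.array([bhattacharyya(ind) for ind in pop])
    best = int(np.argmax(fit))
    return pop[best], float(fit[best])

mask, score = ga_band_selection()
print("selected bands:", np.flatnonzero(mask), "fitness:", round(score, 3))
```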

  1. Ant Colony Optimization Based Feature Selection Method for QEEG Data Classification

    PubMed Central

    Ozekes, Serhat; Gultekin, Selahattin; Tarhan, Nevzat

    2014-01-01

    Objective: Many applications, such as biomedical signal processing, require selecting a subset of the input features that represents the whole feature set. A feature selection algorithm has recently been proposed as a new approach for feature subset selection. Methods: A feature selection process using ant colony optimization (ACO) on 6-channel pre-treatment electroencephalogram (EEG) data from the theta and delta frequency bands was combined with a back propagation neural network (BPNN) classification method for 147 major depressive disorder (MDD) subjects. Results: The BPNN classified the subjects with 91.83% overall accuracy and 95.55% subject detection sensitivity. The area under the ROC curve (AUC) after feature selection increased from 0.8531 to 0.911. The features selected by the optimization algorithm were Fp1, Fp2, F7, F8 and F3 for the theta frequency band, eliminating 7 features and reducing the 12-feature set to a 5-feature subset. Conclusion: The ACO feature selection algorithm improves the classification accuracy of the BPNN. Comparing the performance with other feature selection algorithms or classifiers is important to underline the validity and versatility of the designed combination. PMID:25110496

  2. Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection.

    PubMed

    Sun, Jun; Fang, Wei; Wu, Xiaojun; Palade, Vasile; Xu, Wenbo

    2012-01-01

    Quantum-behaved particle swarm optimization (QPSO), motivated by concepts from quantum mechanics and particle swarm optimization (PSO), is a probabilistic optimization algorithm belonging to the bare-bones PSO family. Although it has been shown to perform well in finding the optimal solutions for many optimization problems, there has so far been little analysis on how it works in detail. This paper presents a comprehensive analysis of the QPSO algorithm. In the theoretical analysis, we analyze the behavior of a single particle in QPSO in terms of probability measure. Since the particle's behavior is influenced by the contraction-expansion (CE) coefficient, which is the most important parameter of the algorithm, the goal of the theoretical analysis is to find out the upper bound of the CE coefficient, within which the value of the CE coefficient selected can guarantee the convergence or boundedness of the particle's position. In the experimental analysis, the theoretical results are first validated by stochastic simulations for the particle's behavior. Then, based on the derived upper bound of the CE coefficient, we perform empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient, in order to obtain generally good algorithmic performance in real world applications. Finally, a further performance comparison between QPSO and other variants of PSO on the benchmarks is made to show the efficiency of the QPSO algorithm with the proposed parameter control and selection methods. PMID:21905841
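    For readers unfamiliar with the update rule being analyzed, the following sketch implements the commonly published QPSO position update with a linearly decreasing contraction-expansion (CE) coefficient. The objective, bounds and parameter values are illustrative and not those of the paper's benchmark suite.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective (minimization); replace with the real benchmark function."""
    return float(np.sum(x ** 2))

def qpso(obj, dim=10, n_particles=20, n_iter=200, beta_max=1.0, beta_min=0.5):
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([obj(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for t in range(n_iter):
        # Contraction-expansion (CE) coefficient, decreased linearly over the run.
        beta = beta_max - (beta_max - beta_min) * t / n_iter
        mbest = pbest.mean(axis=0)                      # mean best position
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1.0 - phi) * gbest           # local attractors
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        # QPSO position update: X = p +/- beta * |mbest - X| * ln(1/u)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        vals = np.array([obj(xi) for xi in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

best, best_val = qpso(sphere)
print(round(best_val, 6))
```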

  3. [The Near Infrared Spectral Bands Optimal Selection in the Application of Liquor Fermented Grains Composition Analysis].

    PubMed

    Xiong, Ya-ting; Li, Zong-peng; Wang, Jian; Zhang, Ying; Wang, Shu-jun; Yin, Jian-jun; Song, Quan-hou

    2016-01-01

    In order to improve the rapid detection of liquor fermented grains, this paper uses near-infrared spectroscopy to quantitatively analyze the moisture, starch, acidity and alcohol content of liquor fermented grains. Characteristic spectral bands were selected using CARS, iPLS and the uninformative variable elimination (UVE) method, and the models were further optimized with multiple scattering correction (MSC), derivative and standard normal variate transformation (SNV) pretreatment. Quantitative models of the fermented grains were established by PLS, and R2, RMSEP and the optimal number of factors were used to evaluate the models and select the best modeling method. The results showed that band selection is vital to optimizing the models and that CARS gave the most significant improvement. R2 for moisture, starch, acidity and alcohol was 0.885, 0.915, 0.951 and 0.885, respectively, and RMSEP was 0.630, 0.519, 0.228 and 0.234, respectively. After optimization, the models predict well and can satisfy the requirements of rapid detection of liquor fermented grains, which has practical reference value. PMID:27228746

  4. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. Methods: An image dataset including 1600 regions of interest (ROIs) in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions) was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as the features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) A sequential floating forward selection (SFFS) method, (3) A genetic algorithm (GA), and (4) A sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Results: Among these four methods, SFFS had the highest efficacy: it required only 3%–5% of the computational time of the GA approach and yielded the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC
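    The simplest of the compared wrappers, sequential forward selection, can be sketched as a greedy loop. Synthetic data and a logistic-regression scorer stand in for the paper's 271-feature dataset and ANN classifier; SFFS additionally allows conditional backward removals, which this sketch omits.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in for the mass/FP dataset (a simple logistic model replaces the
# paper's ANN purely to keep the sketch fast and self-contained).
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)

def subset_score(cols):
    """Cross-validated AUC of the classifier restricted to the chosen columns."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, cols], y, cv=5, scoring="roc_auc").mean()

def sequential_forward_selection(n_select=8):
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        # Greedily add the single feature giving the best score improvement.
        best_f = max(remaining, key=lambda f: subset_score(selected + [f]))
        selected.append(best_f)
        remaining.remove(best_f)
    return selected, subset_score(selected)

features, auc = sequential_forward_selection()
print("selected:", features, "AUC:", round(auc, 3))
```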

  5. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2014-08-01

    3D scene analysis by automatically assigning 3D points a semantic label has become an issue of major interest in recent years. Whereas the tasks of feature extraction and classification have been the focus of research, the idea of using only relevant and more distinctive features extracted from optimal 3D neighborhoods has only rarely been addressed in 3D lidar data processing. In this paper, we focus on the interleaved issue of extracting relevant, but not redundant, features and increasing their distinctiveness by considering the respective optimal 3D neighborhood of each individual 3D point. We present a new, fully automatic and versatile framework consisting of four successive steps: (i) optimal neighborhood size selection, (ii) feature extraction, (iii) feature selection, and (iv) classification. In a detailed evaluation which involves 5 different neighborhood definitions, 21 features, 6 approaches for feature subset selection and 2 different classifiers, we demonstrate that optimal neighborhoods for individual 3D points significantly improve the results of scene interpretation and that the selection of adequate feature subsets may even further increase the quality of the derived results.

  6. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    ERIC Educational Resources Information Center

    Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.

    2016-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…

  7. Subjective Career Success and Emotional Well-Being: Longitudinal Predictive Power of Selection, Optimization, and Compensation.

    ERIC Educational Resources Information Center

    Wiese, Bettina S.; Freund, Alexandra M.; Baltes, Paul B.

    2002-01-01

    A 3-year study of 82 young professionals found that work-related well-being was predicted by selection (commitment to personal goals), optimization (application of goal-related skills), and compensation (maintaining goals in the face of loss). The degree of compensation predicted emotional well-being and job satisfaction 3 years later. (Contains…

  8. Optimization of a series of potent and selective ketone histone deacetylase inhibitors.

    PubMed

    Pescatore, Giovanna; Kinzel, Olaf; Attenni, Barbara; Cecchetti, Ottavia; Fiore, Fabrizio; Fonsi, Massimiliano; Rowley, Michael; Schultz-Fademrecht, Carsten; Serafini, Sergio; Steinkühler, Christian; Jones, Philip

    2008-10-15

    Histone deacetylase (HDAC) inhibitors offer a promising strategy for cancer therapy and the first generation HDAC inhibitors are currently in the clinic. Herein we describe the optimization of a series of ketone small molecule HDAC inhibitors leading to potent and selective class I HDAC inhibitors with good dog PK. PMID:18809328

  9. Selection, Optimization, and Compensation: An Action-Related Approach to Work and Partnership.

    ERIC Educational Resources Information Center

    Wiese, Bettina S.; Baltes, Paul B.; Freund, Alexandra M.

    2000-01-01

    Data from German professionals (n=206) were used to test selective optimization with compensation (SOC)--goal setting in career and partnership domains and use of means to achieve goals. A positive relationship was found between SOC behaviors and successful life management; it was more predictive for the partnership domain. (Contains 82…

  10. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2015-03-01

    Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo-based uncertainty and reliability analysis. The reliability of the scanning paths is established using cumulative probability distribution functions for process output criteria such as sample density, thermal homogeneity, etc. A customized genetic algorithm is used along with the simulation model to generate optimized cellular scanning strategies and processing parameters, with the objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared.

  11. Optimization of a Dibenzodiazepine Hit to a Potent and Selective Allosteric PAK1 Inhibitor

    PubMed Central

    2015-01-01

    The discovery of inhibitors targeting novel allosteric kinase sites is very challenging. Such compounds, however, once identified could offer exquisite levels of selectivity across the kinome. Herein we report our structure-based optimization strategy of a dibenzodiazepine hit 1, discovered in a fragment-based screen, yielding highly potent and selective inhibitors of PAK1 such as 2 and 3. Compound 2 was cocrystallized with PAK1 to confirm binding to an allosteric site and to reveal novel key interactions. Compound 3 modulated PAK1 at the cellular level and due to its selectivity enabled valuable research to interrogate biological functions of the PAK1 kinase. PMID:26191365

  12. Exploring the optimal performances of irreversible single resonance energy selective electron refrigerators

    NASA Astrophysics Data System (ADS)

    Zhou, Junle; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2016-05-01

    Applying finite-time thermodynamics (FTT) and electronic transport theory, the optimal performance of an irreversible single resonance energy selective electron (ESE) refrigerator is analyzed. The effects of heat leakage between the two electron reservoirs on optimal performance are discussed. The influences of system operating parameters on cooling load, coefficient of performance (COP), figure of merit and ecological function are demonstrated using numerical examples. Comparative performance analyses among different objective functions show that the performance characteristics at maximum ecological function and maximum figure of merit are of great practical significance. Combining the two optimization objectives of maximum ecological function and maximum figure of merit, more specific optimal ranges of cooling load and COP are obtained. The results can provide some guidance for the design of practical electronic machine systems.

  13. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
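    The Expected Improvement criterion at the heart of EGO can be computed directly from a surrogate's predictive mean and standard deviation. The sketch below uses the standard minimization form with made-up values; for a worst-case (maximization) search the sign convention is simply flipped.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected Improvement (minimization form), given the surrogate's predictive
    mean mu and standard deviation sigma at candidate points and the best
    (lowest) observed objective value f_best."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Points with zero predictive uncertainty offer no expected improvement.
    return np.where(sigma > 0, ei, 0.0)

# Illustrative values only: surrogate predictions at three candidate orbits.
print(expected_improvement(mu=[1.2, 0.9, 1.5], sigma=[0.3, 0.05, 0.6], f_best=1.0))
```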

  14. The effect of genomic information on optimal contribution selection in livestock breeding programs

    PubMed Central

    2013-01-01

    Background: Long-term benefits in animal breeding programs require that increases in genetic merit be balanced with the need to maintain diversity (lost due to inbreeding). This can be achieved by using optimal contribution selection. The availability of high-density DNA marker information enables the incorporation of genomic data into optimal contribution selection but this raises the question about how this information affects the balance between genetic merit and diversity. Methods: The effect of using genomic information in optimal contribution selection was examined based on simulated and real data on dairy bulls. We compared the genetic merit of selected animals at various levels of co-ancestry restrictions when using estimated breeding values based on parent average, genomic or progeny test information. Furthermore, we estimated the proportion of variation in estimated breeding values that is due to within-family differences. Results: Optimal selection on genomic estimated breeding values increased genetic gain. Genetic merit was further increased using genomic rather than pedigree-based measures of co-ancestry under an inbreeding restriction policy. Using genomic instead of pedigree relationships to restrict inbreeding had a significant effect only when the population consisted of many large full-sib families; with a half-sib family structure, no difference was observed. In real data from dairy bulls, optimal contribution selection based on genomic estimated breeding values allowed for additional improvements in genetic merit at low to moderate inbreeding levels. Genomic estimated breeding values were more accurate and showed more within-family variation than parent average breeding values; for genomic estimated breeding values, 30 to 40% of the variation was due to within-family differences. Finally, there was no difference between constraining inbreeding via pedigree or genomic relationships in the real data. Conclusions: The use of genomic estimated breeding

  15. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    SciTech Connect

    Kim, Minsun; Stewart, Robert D.; Phillips, Mark H.

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T_d), and the size and location of tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D_mean ≤ 45 Gy), lungs (D_mean ≤ 20 Gy), cord (D_max ≤ 45 Gy), esophagus (D_max ≤ 63 Gy), and unspecified tissues (D_05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D_95 of tumor BED, as well as the equivalent uniform dose (EUD) for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T_d (3–100 days), tumor lag-time (T_k = 0–10 days), and the size of tumors on optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D_95 were up to 19%, 21%, 20%, and 19% larger than those from conventional prescription, depending on T_d and T_k used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating
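    The linear-quadratic BED that the schedule optimization maximizes has a compact closed form. The sketch below uses the standard time-corrected expression; the value of alpha, the fractions-per-week assumption and the example schedules are chosen purely for illustration and are not taken from the paper.

```python
import math

def bed(n_fractions, dose_per_fraction, alpha_beta, alpha=0.3,
        t_doubling=None, t_kickoff=0.0, days_per_fraction=1.4):
    """Biologically effective dose (Gy) under the linear-quadratic model:
    BED = n*d*(1 + d/(alpha/beta)).  The optional proliferation term (requiring
    alpha in 1/Gy, an effective doubling time T_d in days and a lag time T_k)
    follows the standard time-corrected LQ form; values here are illustrative."""
    d, n = dose_per_fraction, n_fractions
    core = n * d * (1.0 + d / alpha_beta)
    if t_doubling is None:
        return core
    treatment_days = n * days_per_fraction          # rough overall-time estimate
    lost = max(0.0, treatment_days - t_kickoff)
    return core - math.log(2.0) * lost / (alpha * t_doubling)

# Compare a conventional 30 x 2 Gy schedule with a hypofractionated 5 x 10 Gy
# schedule for a tumor alpha/beta of 10 Gy (illustrative numbers only).
print(round(bed(30, 2.0, 10.0), 1), round(bed(5, 10.0, 10.0), 1))
```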

  16. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512
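    Without reproducing VISSA's exact update rules, the weighted binary matrix sampling idea can be sketched as follows. Ordinary least squares replaces PLS, the frequency-based weight update is a simplification, and the calibration data are synthetic; all of this is an illustrative assumption rather than the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy calibration problem: y depends on 5 of 50 candidate variables.
X = rng.normal(size=(100, 50))
true_coef = np.zeros(50)
true_coef[:5] = [3, -2, 1.5, 2.5, -1]
y = X @ true_coef + 0.1 * rng.normal(size=100)

def holdout_rmse(cols):
    """Hold-out RMSE of ordinary least squares on the chosen columns
    (a stand-in for the PLS cross-validation used in the paper)."""
    idx = rng.permutation(len(y))
    tr, te = idx[:70], idx[70:]
    coef, *_ = np.linalg.lstsq(X[np.ix_(tr, cols)], y[tr], rcond=None)
    resid = y[te] - X[np.ix_(te, cols)] @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

def wbms_selection(n_submodels=300, top_frac=0.1, n_iter=15):
    weights = np.full(X.shape[1], 0.5)             # start with equal inclusion odds
    for _ in range(n_iter):
        # Sample sub-models: each variable enters with probability = its weight.
        masks = rng.random((n_submodels, X.shape[1])) < weights
        masks[masks.sum(axis=1) == 0, 0] = True    # avoid empty sub-models
        scores = np.array([holdout_rmse(np.flatnonzero(m)) for m in masks])
        top = masks[np.argsort(scores)[: int(top_frac * n_submodels)]]
        weights = top.mean(axis=0)                 # frequency among best sub-models
    return np.flatnonzero(weights > 0.5)

print("retained variables:", wbms_selection())
```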

  17. Space debris selection and optimal guidance for removal in the SSO with low-thrust propulsion

    NASA Astrophysics Data System (ADS)

    Olympio, J. T.; Frouvelle, N.

    2014-06-01

    The current paper deals with the mission design of a generic active space debris removal spacecraft. Considered space debris are all on sun-synchronous orbits. A perturbed Lambert's problem, modelling the transfer between two space debris is devised to take into account J2 perturbation, and to quickly evaluate mission scenarios. A robust approach, using techniques of global optimisation, is followed to find the optimal space debris sequence and mission strategy. Low-thrust optimisation is then performed to turn bi-impulse transfers into optimal low-thrust transfers, and refine the selected scenarios.

  18. Collimator Width Optimization in X-Ray Luminescent Computed Tomography (XLCT) with Selective Excitation Scheme

    PubMed Central

    Mishra, S.; Kappiyoor, R.

    2015-01-01

    X-ray luminescent computed tomography (XLCT) is a promising new functional imaging modality based on computed tomography (CT). This imaging technique uses X-ray excitable nanophosphors to illuminate objects of interest in the visible spectrum. Though there are several validations of the underlying technology, none of them have addressed the issues of performance optimality for a given design of the imaging system. This study addresses the issue of obtaining best image quality through optimizing collimator width to balance the signal to noise ratio (SNR) and resolution. The results can be generalized as to any XLCT system employing a selective excitation scheme. PMID:25642356

  19. Optimum selection of mechanism type for heavy manipulators based on particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Chen, Genliang; Wang, Hao; Lin, Zhongqin

    2013-07-01

    The mechanism type plays a decisive role in the mechanical performance of robotic manipulators. Feasible mechanism types can be obtained by applying appropriate type synthesis theory, but there is still a lack of effective and efficient methods for the optimum selection among different types of mechanism candidates. This paper presents a new strategy for optimum mechanism type selection based on a modified particle swarm optimization method. The concept of sub-swarm is introduced to represent the different mechanisms generated by the type synthesis, and a competitive mechanism is employed between the sub-swarms to reassign their population size according to the relative performances of the mechanism candidates to implement the optimization. Combined with a modular modeling approach for fast calculation of the performance index of the potential candidates, the proposed method is applied to determine the optimum mechanism type among the potential candidates for the desired manipulator. The effectiveness and efficiency of the proposed method is demonstrated through a case study on the optimum selection of the mechanism type of a heavy manipulator, where six feasible candidates are considered with force capability as the specific performance index. The optimization result shows that the fitness of the optimum mechanism type for the considered heavy manipulator can be up to 0.5785. This research provides guidance for the optimum selection of mechanism types for robotic manipulators.

  20. Natural selection fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes.

    PubMed

    Clune, Jeff; Misevic, Dusan; Ofria, Charles; Lenski, Richard E; Elena, Santiago F; Sanjuán, Rafael

    2008-01-01

    The rate of mutation is central to evolution. Mutations are required for adaptation, yet most mutations with phenotypic effects are deleterious. As a consequence, the mutation rate that maximizes adaptation will be some intermediate value. Here, we used digital organisms to investigate the ability of natural selection to adjust and optimize mutation rates. We assessed the optimal mutation rate by empirically determining what mutation rate produced the highest rate of adaptation. Then, we allowed mutation rates to evolve, and we evaluated the proximity to the optimum. Although we chose conditions favorable for mutation rate optimization, the evolved rates were invariably far below the optimum across a wide range of experimental parameter settings. We hypothesized that the reason that mutation rates evolved to be suboptimal was the ruggedness of fitness landscapes. To test this hypothesis, we created a simplified landscape without any fitness valleys and found that, in such conditions, populations evolved near-optimal mutation rates. In contrast, when fitness valleys were added to this simple landscape, the ability of evolving populations to find the optimal mutation rate was lost. We conclude that rugged fitness landscapes can prevent the evolution of mutation rates that are optimal for long-term adaptation. This finding has important implications for applied evolutionary research in both biological and computational realms. PMID:18818724

  1. Natural Selection Fails to Optimize Mutation Rates for Long-Term Adaptation on Rugged Fitness Landscapes

    PubMed Central

    Clune, Jeff; Misevic, Dusan; Ofria, Charles; Lenski, Richard E.; Elena, Santiago F.; Sanjuán, Rafael

    2008-01-01

    The rate of mutation is central to evolution. Mutations are required for adaptation, yet most mutations with phenotypic effects are deleterious. As a consequence, the mutation rate that maximizes adaptation will be some intermediate value. Here, we used digital organisms to investigate the ability of natural selection to adjust and optimize mutation rates. We assessed the optimal mutation rate by empirically determining what mutation rate produced the highest rate of adaptation. Then, we allowed mutation rates to evolve, and we evaluated the proximity to the optimum. Although we chose conditions favorable for mutation rate optimization, the evolved rates were invariably far below the optimum across a wide range of experimental parameter settings. We hypothesized that the reason that mutation rates evolved to be suboptimal was the ruggedness of fitness landscapes. To test this hypothesis, we created a simplified landscape without any fitness valleys and found that, in such conditions, populations evolved near-optimal mutation rates. In contrast, when fitness valleys were added to this simple landscape, the ability of evolving populations to find the optimal mutation rate was lost. We conclude that rugged fitness landscapes can prevent the evolution of mutation rates that are optimal for long-term adaptation. This finding has important implications for applied evolutionary research in both biological and computational realms. PMID:18818724

  2. Optoelectronic optimization of mode selective converter based on liquid crystal on silicon

    NASA Astrophysics Data System (ADS)

    Wang, Yongjiao; Liang, Lei; Yu, Dawei; Fu, Songnian

    2016-03-01

    We carry out a comprehensive optoelectronic optimization of a mode selective converter used for mode division multiplexing, based on a liquid crystal on silicon (LCOS) device operated in binary mode. The conversion error of the digital-to-analog converter (DAC) driving the LCOS is investigated quantitatively for the application of mode selective conversion. Results indicate that the DAC must have a resolution of 8 bits in order to achieve a high mode extinction ratio (MER) of 28 dB. On the other hand, both the fast-axis position error of the half-wave plate (HWP) and the rotation angle error of the Faraday rotator (FR) have a negative influence on the performance of mode selective conversion. However, commercial products provide enough angle error tolerance for the LCOS-based mode selective converter, taking both insertion loss (IL) and MER into account.

  3. In Vitro Selection of Optimal DNA Substrates for Ligation by a Water-Soluble Carbodiimide

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1994-01-01

    We have used in vitro selection to investigate the sequence requirements for efficient template-directed ligation of oligonucleotides at 0 deg C using a water-soluble carbodiimide as condensing agent. We find that only 2 bp at each side of the ligation junction are needed. We also studied chemical ligation of substrate ensembles that we have previously selected as optimal by RNA ligase or by DNA ligase. As anticipated, we find that substrates selected with DNA ligase ligate efficiently with a chemical ligating agent, and vice versa. Substrates selected using RNA ligase are not ligated by the chemical condensing agent and vice versa. The implications of these results for prebiotic chemistry are discussed.

  4. A fully integrated direct-conversion digital satellite tuner in 0.18 μm CMOS

    NASA Astrophysics Data System (ADS)

    Si, Chen; Zengwang, Yang; Mingliang, Gu

    2011-04-01

    A fully integrated direct-conversion digital satellite tuner for DVB-S/S2 and ABS-S applications is presented. A broadband noise-canceling balun-LNA and passive quadrature mixers provide a high-linearity, low-noise RF front-end, while the synthesizer integrates the loop filter to reduce solution cost and system debug time. Fabricated in 0.18 μm CMOS, the chip achieves a noise figure of less than 7.6 dB over the 900-2150 MHz L-band, while the measured sensitivity for the 4.42 MS/s QPSK-3/4 mode is -91 dBm at the PCB connector. The fully integrated integer-N synthesizer operating from 2150 to 4350 MHz achieves less than 1° of integrated phase error. The chip consumes about 145 mA from a 3.3 V supply with internal integrated LDOs.

  5. Optimal precursor ion selection for LC-MALDI MS/MS

    PubMed Central

    2013-01-01

    Background: Liquid chromatography mass spectrometry (LC-MS) maps in shotgun proteomics are often too complex to select every detected peptide signal for fragmentation by tandem mass spectrometry (MS/MS). Standard methods for precursor ion selection, commonly based on data dependent acquisition, select highly abundant peptide signals in each spectrum. However, these approaches produce redundant information and are biased towards high-abundance proteins. Results: We present two algorithms for inclusion list creation that formulate precursor ion selection as an optimization problem. Given an LC-MS map, the first approach maximizes the number of selected precursors given constraints such as a limited number of acquisitions per RT fraction. Second, we introduce a protein sequence-based inclusion list that can be used to monitor proteins of interest. Given only the protein sequences, we create an inclusion list that optimally covers the whole protein set. Additionally, we propose an iterative precursor ion selection that aims at reducing the redundancy obtained with data dependent LC-MS/MS. We overcome the risk of erroneous assignments by including methods for retention time and proteotypicity predictions. We show that our method identifies a set of proteins requiring fewer precursors than standard approaches. Thus, it is well suited for precursor ion selection in experiments with limited sample amount or analysis time. Conclusions: We present three approaches to precursor ion selection with LC-MALDI MS/MS. Using a well-defined protein standard and a complex human cell lysate, we demonstrate that our methods outperform standard approaches. Our algorithms are implemented as part of OpenMS and are available under http://www.openms.de. PMID:23418672
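    The protein-coverage inclusion list can be viewed as a covering problem. The paper solves the optimization exactly, but a greedy heuristic over a hypothetical peptide-to-protein map illustrates the problem structure; all identifiers below are made up.

```python
def greedy_inclusion_list(peptide_to_proteins):
    """Greedy approximation of the protein-coverage inclusion-list problem:
    repeatedly pick the candidate precursor that covers the most not-yet
    covered proteins. (The paper solves the exact optimization; this greedy
    set-cover heuristic only illustrates the structure of the problem.)"""
    uncovered = set().union(*peptide_to_proteins.values())
    selected = []
    while uncovered:
        best = max(peptide_to_proteins,
                   key=lambda p: len(peptide_to_proteins[p] & uncovered))
        gain = peptide_to_proteins[best] & uncovered
        if not gain:
            break                      # remaining proteins cannot be covered
        selected.append(best)
        uncovered -= gain
    return selected

# Hypothetical mapping from candidate precursors to the proteins they identify.
candidates = {"pep1": {"P1", "P2"}, "pep2": {"P2", "P3"},
              "pep3": {"P3"}, "pep4": {"P1", "P3", "P4"}}
print(greedy_inclusion_list(candidates))
```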

  6. Parallel medicinal chemistry approaches to selective HDAC1/HDAC2 inhibitor (SHI-1:2) optimization.

    PubMed

    Kattar, Solomon D; Surdi, Laura M; Zabierek, Anna; Methot, Joey L; Middleton, Richard E; Hughes, Bethany; Szewczak, Alexander A; Dahlberg, William K; Kral, Astrid M; Ozerova, Nicole; Fleming, Judith C; Wang, Hongmei; Secrist, Paul; Harsch, Andreas; Hamill, Julie E; Cruz, Jonathan C; Kenific, Candia M; Chenard, Melissa; Miller, Thomas A; Berk, Scott C; Tempest, Paul

    2009-02-15

    The successful application of both solid and solution phase library synthesis, combined with tight integration into the medicinal chemistry effort, resulted in the efficient optimization of a novel structural series of selective HDAC1/HDAC2 inhibitors by the MRL-Boston Parallel Medicinal Chemistry group. An initial lead from a small parallel library was found to be potent and selective in biochemical assays. Advanced compounds were the culmination of iterative library design and possess excellent biochemical and cellular potency, as well as acceptable PK and efficacy in animal models. PMID:19138845

  7. Discovery of GSK2656157: An Optimized PERK Inhibitor Selected for Preclinical Development.

    PubMed

    Axten, Jeffrey M; Romeril, Stuart P; Shu, Arthur; Ralph, Jeffrey; Medina, Jesús R; Feng, Yanhong; Li, William Hoi Hong; Grant, Seth W; Heerding, Dirk A; Minthorn, Elisabeth; Mencken, Thomas; Gaul, Nathan; Goetz, Aaron; Stanley, Thomas; Hassell, Annie M; Gampe, Robert T; Atkins, Charity; Kumar, Rakesh

    2013-10-10

    We recently reported the discovery of GSK2606414 (1), a selective first in class inhibitor of protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK), which inhibited PERK activation in cells and demonstrated tumor growth inhibition in a human tumor xenograft in mice. In continuation of our drug discovery program, we applied a strategy to decrease inhibitor lipophilicity as a means to improve physical properties and pharmacokinetics. This report describes our medicinal chemistry optimization culminating in the discovery of the PERK inhibitor GSK2656157 (6), which was selected for advancement to preclinical development. PMID:24900593

  8. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    NASA Astrophysics Data System (ADS)

    Wallace, G. M.; Fitzgerald, E.; Hillairet, J.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.

    2014-02-01

    Active stub tuning with a fast ferrite tuner (FFT) allows for the system to respond dynamically to changes in the plasma impedance such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the "forbidden region." The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma provided by a linear coupling model with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.

  9. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    SciTech Connect

    Wallace, G. M.; Fitzgerald, E.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.; Hillairet, J.

    2014-02-12

    Active stub tuning with a fast ferrite tuner (FFT) allows for the system to respond dynamically to changes in the plasma impedance such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the “forbidden region.” The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma provided by a linear coupling model with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.

  10. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  11. On the non-stationarity of financial time series: impact on optimal portfolio selection

    NASA Astrophysics Data System (ADS)

    Livan, Giacomo; Inoue, Jun-ichi; Scalas, Enrico

    2012-07-01

    We investigate the possible drawbacks of employing the standard Pearson estimator to measure correlation coefficients between financial stocks in the presence of non-stationary behavior, and we provide empirical evidence against the well-established common knowledge that using longer price time series provides better, more accurate correlation estimates. We then investigate the possible consequences of instabilities in empirical correlation coefficient measurements on optimal portfolio selection. We rely on previously published works that provide a framework for taking into account possible risk underestimations due to the non-optimality of the portfolio weights being used, which allows us to distinguish such non-optimality effects from risk underestimations genuinely due to non-stationarities. We interpret these results in terms of instabilities in some spectral properties of portfolio correlation matrices.
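    A small numerical experiment makes the stationarity trade-off concrete: estimating a regime-switching correlation with short versus long Pearson windows. The synthetic returns and window lengths are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic return series whose true correlation changes halfway through,
# mimicking a non-stationary market regime shift.
n = 1000
eps = rng.normal(size=(2, n))
rho = np.where(np.arange(n) < n // 2, 0.2, 0.8)
r1 = eps[0]
r2 = rho * eps[0] + np.sqrt(1.0 - rho ** 2) * eps[1]

def rolling_pearson(x, y, window):
    """Pearson correlation estimated on a sliding window of fixed length."""
    out = np.full(len(x), np.nan)
    for t in range(window, len(x) + 1):
        out[t - 1] = np.corrcoef(x[t - window:t], y[t - window:t])[0, 1]
    return out

# A long window averages over the regime change; a short window tracks it but
# with a noisier estimate, which is the trade-off discussed for portfolio selection.
for w in (100, 400):
    est = rolling_pearson(r1, r2, w)
    print(w, round(np.nanmean(np.abs(est - rho)), 3))
```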

  12. A new approach to optimal selection of services in health care organizations.

    PubMed

    Adolphson, D L; Baird, M L; Lawrence, K D

    1991-01-01

    A new reimbursement policy adopted by Medicare in 1983 caused financial difficulties for many hospitals and health care organizations. Several organizations responded to these difficulties by developing systems to carefully measure their costs of providing services. The purpose of such systems was to provide relevant information about the profitability of hospital services. This paper presents a new method of making hospital service selection decisions: it is based on an optimization model that avoids arbitrary cost allocations as a basis for computing the costs of offering a given service. The new method provides more reliable information about which services are profitable or unprofitable, and it provides an accurate measure of the degree to which a service is profitable or unprofitable. The new method also provides useful information about the sensitivity of the optimal decision to changes in costs and revenues. Specialized algorithms for the optimization model lead to very efficient implementation of the method, even for the largest health care organizations. PMID:10111676

  13. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  14. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    NASA Astrophysics Data System (ADS)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  15. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors.

    PubMed

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the application of classifying a set of ginsengs using a metal-oxide sensor-based homemade electronic nose with linear discriminant analysis. Samples (315) were measured for nine kinds of ginsengs using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum numbers of sensors for discriminating each sample set to obtain an optimal classification performance were defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212
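    A sketch of the combination study on synthetic e-nose data: cross-validated LDA accuracy is computed for sensor subsets of increasing size until a target accuracy is reached. The target value and the simulated data are illustrative assumptions, not the paper's measurements.

```python
from itertools import combinations

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Toy e-nose data: 9 ginseng classes x 35 samples, 12 sensor responses each
# (a synthetic stand-in for the 315 measured samples).
n_classes, per_class, n_sensors = 9, 35, 12
centers = rng.normal(scale=2.0, size=(n_classes, n_sensors))
X = np.vstack([c + rng.normal(size=(per_class, n_sensors)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

def subset_accuracy(sensors):
    """Cross-validated LDA accuracy using only the chosen sensors."""
    lda = LinearDiscriminantAnalysis()
    return cross_val_score(lda, X[:, list(sensors)], y, cv=5).mean()

def minimum_sensor_set(target=0.95):
    """Smallest sensor combination reaching the target accuracy, found by
    exhaustive search over combination sizes (as in the combination study)."""
    for k in range(1, n_sensors + 1):
        best = max(combinations(range(n_sensors), k), key=subset_accuracy)
        acc = subset_accuracy(best)
        if acc >= target:
            return best, acc
    return tuple(range(n_sensors)), subset_accuracy(tuple(range(n_sensors)))

sensors, acc = minimum_sensor_set()
print("sensors:", sensors, "accuracy:", round(acc, 3))
```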

  16. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors

    PubMed Central

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the application of classifying a set of ginsengs using a metal-oxide sensor-based homemade electronic nose with linear discriminant analysis. Samples (315) were measured for nine kinds of ginsengs using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum numbers of sensors for discriminating each sample set to obtain an optimal classification performance were defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212

  17. Imaging multicellular specimens with real-time optimized tiling light-sheet selective plane illumination microscopy

    PubMed Central

    Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang

    2016-01-01

    Despite the progress made in selective plane illumination microscopy, high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to respond to the challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying left-right symmetry breaking behaviour of C. elegans embryos. PMID:27004937

  18. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  19. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1992-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  20. Optimal Needle Grasp Selection for Automatic Execution of Suturing Tasks in Robotic Minimally Invasive Surgery

    PubMed Central

    Liu, Taoming; Çavuşoğlu, M. Cenk

    2015-01-01

    This paper presents algorithms for optimal selection of needle grasp, for autonomous robotic execution of the minimally invasive surgical suturing task. In order to minimize the tissue trauma during the suturing motion, the best practices of needle path planning that are used by surgeons are applied for autonomous robotic surgical suturing tasks. Once an optimal needle trajectory in a well-defined suturing scenario is chosen, another critical issue for suturing is the choice of needle grasp for the robotic system. Inappropriate needle grasp increases operating time requiring multiple re-grasps to complete the desired task. The proposed methods use manipulability, dexterity and torque metrics for needle grasp selection. A simulation demonstrates the proposed methods and recommends a variety of grasps. Then a realistic demonstration compares the performances of the manipulator using different grasps. PMID:26413382

  1. Enhanced selectivity and search speed for method development using one-segment-per-component optimization strategies.

    PubMed

    Tyteca, Eva; Vanderlinden, Kim; Favier, Maxime; Clicq, David; Cabooter, Deirdre; Desmet, Gert

    2014-09-01

    Linear gradient programs are very frequently used in reversed phase liquid chromatography to enhance the selectivity compared to isocratic separations. Multi-linear gradient programs, on the other hand, are only scarcely used, despite their intrinsically larger separation power. Because the gradient-conformity of the latest generation of instruments has greatly improved, a renewed interest in more complex multi-segment gradient liquid chromatography can be expected in the future, raising the need for better performing gradient design algorithms. We explored the possibilities of a new type of multi-segment gradient optimization algorithm, the so-called "one-segment-per-group-of-components" optimization strategy. In this gradient design strategy, the slope is adjusted after the elution of each individual component of the sample, letting the retention properties of the different analytes auto-guide the course of the gradient profile. When this method was applied experimentally to four randomly selected test samples, the separation time could on average be reduced by about 40% compared to the best single linear gradient. Moreover, the newly proposed approach performed as well as or better than the multi-segment optimization mode of a commercial software package. An extensive in silico study showed that the experimentally observed advantage also generalizes over a statistically significant number of different 10- and 20-component samples. In addition, the newly proposed gradient optimization approach enables much faster searches than the traditional multi-step gradient design methods. PMID:25039066

  2. Ant-cuckoo colony optimization for feature selection in digital mammogram.

    PubMed

    Jona, J B; Nagaveni, N

    2014-01-15

    Digital mammography is the only effective screening method to detect breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram. Not all of these features are essential for detection; therefore, identifying the relevant features is the aim of this work. Feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that the ants walk along paths where the pheromone density is high, which makes the whole process slow; hence CS is employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used along with the ACO to classify normal mammograms from abnormal ones. Experiments are conducted on the mini-MIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms. The results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques. PMID:24783812
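
    The sketch below illustrates only the basic ACO feature-selection loop described in this record; it is not the authors' Ant-Cuckoo implementation. The cuckoo-search local refinement and the SVM/RBF evaluation on the mini-MIAS mammograms are replaced by a simple 1-NN hold-out score on synthetic data, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def holdout_1nn_accuracy(X_tr, y_tr, X_te, y_te, mask):
    """Hold-out accuracy of a 1-NN classifier restricted to the selected features."""
    if not mask.any():
        return 0.0
    a, b = X_tr[:, mask], X_te[:, mask]
    d = np.linalg.norm(b[:, None, :] - a[None, :, :], axis=-1)
    return float(np.mean(y_tr[np.argmin(d, axis=1)] == y_te))

def aco_select(X_tr, y_tr, X_te, y_te, n_ants=20, iters=30, rho=0.1, n_keep=5):
    n = X_tr.shape[1]
    tau = np.ones(n)                              # pheromone level per feature
    best_mask, best_acc = None, -1.0
    for _ in range(iters):
        for _ in range(n_ants):
            probs = tau / tau.sum()
            chosen = rng.choice(n, size=n_keep, replace=False, p=probs)
            mask = np.zeros(n, dtype=bool)
            mask[chosen] = True
            acc = holdout_1nn_accuracy(X_tr, y_tr, X_te, y_te, mask)
            if acc > best_acc:
                best_mask, best_acc = mask, acc
            tau[chosen] += acc                    # deposit pheromone in proportion to accuracy
        tau *= 1.0 - rho                          # evaporation
    return best_mask, best_acc

X = rng.standard_normal((300, 20))
y = (X[:, 2] - X[:, 7] > 0).astype(int)           # only features 2 and 7 are informative
mask, acc = aco_select(X[:200], y[:200], X[200:], y[200:])
print("selected features:", np.flatnonzero(mask), "hold-out accuracy:", round(acc, 3))
```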

  3. Selection of optimal threshold to construct recurrence plot for structural operational vibration measurements

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Ren, Wei-Xin; Hu, Yi-Ding; Li, Dan

    2015-08-01

    Structural health monitoring (SHM) involves sampling operational vibration measurements over time so that structural features can be extracted accordingly. The recurrence plot (RP) and the corresponding recurrence quantification analysis (RQA) have become useful tools in various fields due to their efficiency. Threshold selection is one of the key issues in making sure that the constructed recurrence plot contains enough recurrence points, and different signals naturally call for different threshold values. This paper presents an approach to determine the optimal threshold for the operational vibration measurements of civil engineering structures. The surrogate technique and the Taguchi loss function are proposed to generate reliable data and to reach the point of optimal discrimination power, where the threshold is optimum. The impact of selecting recurrence thresholds on different signals is discussed. It is demonstrated that the proposed method to identify the optimal threshold is applicable to operational vibration measurements, and it provides a way to find the optimal threshold for the best RP construction of structural vibration measurements under operational conditions.
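
    As a rough illustration of the threshold question, the sketch below builds a recurrence plot for a delay-embedded signal and picks the distance threshold that yields a target recurrence rate. This is a common generic heuristic, not the surrogate/Taguchi loss-function procedure proposed in the paper, and the test signal, embedding parameters and target rate are assumptions.

```python
import numpy as np

def recurrence_plot(x, threshold, dim=3, tau=2):
    """Binary recurrence matrix of a delay-embedded 1-D signal."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dist <= threshold).astype(int), dist

# Assumed test signal: a noisy sinusoid standing in for a vibration record.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 800)
signal = np.sin(t) + 0.1 * rng.standard_normal(t.size)

_, dist = recurrence_plot(signal, threshold=np.inf)
eps = np.quantile(dist, 0.05)            # threshold giving roughly a 5% recurrence rate
rp, _ = recurrence_plot(signal, eps)
print(f"selected threshold = {eps:.3f}, recurrence rate = {rp.mean():.3f}")
```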

  4. Pipe degradation investigations for optimization of flow-accelerated corrosion inspection location selection

    SciTech Connect

    Chandra, S.; Habicht, P.; Chexal, B.; Mahini, R.; McBrine, W.; Esselman, T.; Horowitz, J.

    1995-12-01

    A large amount of piping in a typical nuclear power plant is susceptible to Flow-Accelerated Corrosion (FAC) wall thinning to varying degrees. A typical FAC monitoring program includes the wall thickness measurement of a select number of components in order to judge the structural integrity of entire systems. In order to appropriately allocate resources and maintain an adequate FAC program, it is necessary to optimize the selection of components for inspection by focusing on those components which provide the best indication of system susceptibility to FAC. A better understanding of system FAC predictability and the types of FAC damage encountered can provide some of the insight needed to better focus and optimize the inspection plan for an upcoming refueling outage. Laboratory examination of FAC damaged components removed from service at Northeast Utilities' (NU) nuclear power plants provides a better understanding of the damage mechanisms involved and contributing causes. Selected results of this ongoing study are presented with specific conclusions which will help NU to better focus inspections and thus optimize the ongoing FAC inspection program.

  5. A Topography Analysis Incorporated Optimization Method for the Selection and Placement of Best Management Practices

    PubMed Central

    Shen, Zhenyao; Chen, Lei; Xu, Liang

    2013-01-01

    Best Management Practices (BMPs) are one of the most effective methods to control nonpoint source (NPS) pollution at a watershed scale. In this paper, the use of a topography analysis incorporated optimization method (TAIOM) was proposed, which integrates topography analysis with cost-effective optimization. The surface status, slope and the type of land use were evaluated as inputs for the optimization engine. A genetic algorithm program was coded to obtain the final optimization. The TAIOM was validated in conjunction with the Soil and Water Assessment Tool (SWAT) in the Yulin watershed in Southwestern China. The results showed that the TAIOM was more cost-effective than traditional optimization methods. The distribution of selected BMPs throughout landscapes comprising relatively flat plains and gentle slopes, suggests the need for a more operationally effective scheme, such as the TAIOM, to determine the practicability of BMPs before widespread adoption. The TAIOM developed in this study can easily be extended to other watersheds to help decision makers control NPS pollution. PMID:23349917

  6. Near optimal energy selective x-ray imaging system performance with simple detectors

    SciTech Connect

    Alvarez, Robert E.

    2010-02-15

    Purpose: This article describes a method to achieve near optimal performance with low energy resolution detectors. Tapiovaara and Wagner [Phys. Med. Biol. 30, 519-529 (1985)] showed that an energy selective x-ray system using a broad spectrum source can produce images with a larger signal to noise ratio (SNR) than conventional systems using energy integrating or photon counting detectors. They showed that there is an upper limit to the SNR and that it can be achieved by measuring full spectrum information and then using an optimal energy dependent weighting. Methods: A performance measure is derived by applying statistical detection theory to an abstract vector space of the line integrals of the basis set coefficients of the two function approximation to the x-ray attenuation coefficient. The approach produces optimal results that utilize all the available energy dependent data. The method can be used with any energy selective detector and is applied not only to detectors using pulse height analysis (PHA) but also to a detector that simultaneously measures the total photon number and integrated energy, as discussed by Roessl et al. [Med. Phys. 34, 959-966 (2007)]. A generalization of this detector that improves the performance is introduced. A method is described to compute images with the optimal SNR using projections in a "whitened" vector space transformed so the noise is uncorrelated and has unit variance in both coordinates. Material canceled images with optimal SNR can also be computed by projections in this space. Results: The performance measure is validated by showing that it provides the Tapiovaara-Wagner optimal results for a detector with full energy information and also a conventional detector. The performance with different types of detectors is compared to the ideal SNR as a function of x-ray tube voltage and subject thickness. A detector that combines two bin PHA with a simultaneous measurement of integrated photon energy provides near ideal
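
    The "whitening" step mentioned in this abstract can be illustrated with a small numerical sketch: the two basis line-integral estimates are transformed so their noise covariance becomes the identity, after which the squared SNR of the optimal projection is simply the squared length of the whitened signal vector. The covariance and signal values below are made up for illustration, not taken from the paper.

```python
import numpy as np

cov = np.array([[4.0, 1.2],          # assumed noise covariance of the two
                [1.2, 0.8]])         # basis line-integral estimates
signal = np.array([0.9, 0.3])        # assumed mean object-background difference

# Whitening matrix W = C^(-1/2) via eigendecomposition of the covariance.
vals, vecs = np.linalg.eigh(cov)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T

s_w = W @ signal                     # signal expressed in whitened coordinates
snr = np.sqrt(s_w @ s_w)             # SNR of the optimal projection (matched filter)
print("optimal-projection SNR:", round(float(snr), 3))
```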

  7. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of the evolution operator model. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of the optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Finding the optimal projection is a significant part of model selection because, on the one hand, the transformation of the data to some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e., the representation of the evolution operator, so we should find an optimal structure of the model together with the vector of phase variables. In this paper we propose to use the minimum description length principle (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al. 2005, Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDE) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P 046207, 2009; Kravtsov S, Kondrashov D, Ghil M, 2005: Multilevel regression

  8. New approach for automatic recognition of melanoma in profilometry: optimized feature selection using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.

    1998-06-01

    A new approach to computer supported recognition of melanoma and naevocytic naevi based on high resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 x 4 mm2 at a resolution of 125 sample points per mm with a laser profilometer at a vertical resolution of 0.1 micrometers. Using image analysis algorithms, Haralick's texture parameters, Fourier features and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are optimized for the feature selection process, and results of different approaches are compared. As the quality measure for feature subsets, the error rate of the nearest neighbor classifier estimated with the leave-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed forward neural networks with error back-propagation are evaluated. Classification performance of the neural classifier is optimized using different topologies, learning parameters and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best overall result, an error rate of 2.3%, was obtained with the nearest neighbor classifier.
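
    A compact sketch of the fitness evaluation used in this record, the leave-one-out error of a nearest-neighbour classifier on a candidate feature subset, driven by a bare-bones genetic algorithm is shown below. The population size, mutation rate and synthetic data are assumptions, not the profilometry features of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def loo_1nn_error(X, y, mask):
    """Leave-one-out error of a 1-NN classifier restricted to the selected features."""
    if not mask.any():
        return 1.0
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                   # a sample may not be its own neighbour
    return float(np.mean(y[np.argmin(d, axis=1)] != y))

def ga_select(X, y, pop=30, gens=40, p_mut=0.05):
    n_feat = X.shape[1]
    popn = rng.random((pop, n_feat)) < 0.5        # random initial feature subsets
    for _ in range(gens):
        fit = np.array([loo_1nn_error(X, y, ind) for ind in popn])
        parents = popn[np.argsort(fit)[: pop // 2]]   # lower error is better
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)             # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut       # bit-flip mutation
            children.append(child)
        popn = np.vstack([parents, children])
    fit = np.array([loo_1nn_error(X, y, ind) for ind in popn])
    return popn[np.argmin(fit)], float(fit.min())

X = rng.standard_normal((120, 12))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)         # two informative features
mask, err = ga_select(X, y)
print("selected features:", np.flatnonzero(mask), "LOO error:", err)
```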

  9. Optimal Selection of Predictor Variables in Statistical Downscaling Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Goly, A.; Teegavarapu, R. S. V.

    2014-12-01

    Statistical downscaling models developed for precipitation rely heavily on the predictors chosen and on accurate relationships between the regional scale predictand and GCM-scale predictors for providing future precipitation projections at different spatial and temporal scales. This study provides two new screening methods for the selection of predictor variables for use in downscaling methods based on predictand-predictor relationships. Methods to characterize predictand-predictor relationships via rigid and flexible functional relationships, using a mixed integer nonlinear programming (MINLP) model with binary variables and artificial neural network (ANN) models respectively, are developed and evaluated in this study. In addition to these two methods, a stepwise regression (SWR) and two models that do not use any pre-screening of variables are also evaluated. A two-step process is used to downscale precipitation data, with optimal selection of predictors followed by their use in a statistical downscaling model based on a support vector machine (SVM) approach. Experiments with the two proposed methods and with additional methods, one based on correlation between predictors and predictand and another based on principal component analysis, are evaluated in this study. Results suggest that optimal selection of variables using MINLP (albeit with a linear relationship) and the ANN method provided improved performance and error measures compared to the two other models that did not use these methods for screening the variables. Of the three screening methods tested in this study, the SWR method selected the fewest variables and also ranked lowest based on several performance measures.

  10. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach the investment goal, one has to select a combination of securities among different portfolios containing a large number of securities. The past records of each security alone do not guarantee its future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are a set of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-Mean Semi Absolute Deviation (λ-MSAD) model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE is used for illustration.
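
    The reduction of a semi-absolute-deviation portfolio model to a linear program can be sketched as below. The fuzzy-random λ weighting of the paper is not reproduced here, and the scenario returns and required return are made-up illustration data rather than BSE records.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, n = 60, 5                                   # return scenarios x securities (assumed)
R = 0.01 + 0.05 * rng.standard_normal((T, n))  # synthetic scenario returns
mu = R.mean(axis=0)
target = float(np.quantile(mu, 0.8))           # an attainable required expected return

# Variables: x_1..x_n (portfolio weights), d_1..d_T (downside deviations).
c = np.concatenate([np.zeros(n), np.ones(T) / T])        # minimise mean downside deviation
# d_t >= (mu - r_t) . x   <=>   (mu - r_t).x - d_t <= 0
A_ub = np.hstack([mu[None, :] - R, -np.eye(T)])
b_ub = np.zeros(T)
# expected return constraint: -mu.x <= -target
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])[None, :]])
b_ub = np.append(b_ub, -target)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]  # fully invested: sum x = 1
b_eq = [1.0]
bounds = [(0, None)] * (n + T)                             # no short selling, d_t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("weights:", np.round(res.x[:n], 3))
print("mean semi-absolute deviation:", round(float(res.fun), 5))
```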

  11. Temporal artifact minimization in sonoelastography through optimal selection of imaging parameters.

    PubMed

    Torres, Gabriela; Chau, Gustavo R; Parker, Kevin J; Castaneda, Benjamin; Lavarello, Roberto J

    2016-07-01

    Sonoelastography is an ultrasonic technique that uses Kasai's autocorrelation algorithms to generate qualitative images of tissue elasticity using external mechanical vibrations. In the absence of synchronization between the mechanical vibration device and the ultrasound system, the random initial phase and finite ensemble length of the data packets result in temporal artifacts in the sonoelastography frames and, consequently, in degraded image quality. In this work, the analytic derivation of an optimal selection of acquisition parameters (i.e., pulse repetition frequency, vibration frequency, and ensemble length) is developed in order to minimize these artifacts, thereby eliminating the need for complex device synchronization. The proposed rule was verified through experiments with heterogeneous phantoms, where the use of optimally selected parameters increased the average contrast-to-noise ratio (CNR) by more than 200% and reduced the CNR standard deviation by 400% when compared to the use of arbitrarily selected imaging parameters. Therefore, the results suggest that the rule for specific selection of acquisition parameters becomes an important tool for producing high quality sonoelastography images. PMID:27475192

  12. Optimal part and module selection for synthetic gene circuit design automation.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2014-08-15

    An integral challenge in synthetic circuit design is the selection of optimal parts to populate a given circuit topology, so that the resulting circuit behavior best approximates the desired one. In some cases, it is also possible to reuse multipart constructs or modules that have already been built and experimentally characterized. Efficient part and module selection algorithms are essential to systematically search the solution space, and their significance will only increase in the following years due to the projected explosion in part libraries and circuit complexity. Here, we address this problem by introducing a structured abstraction methodology and a dynamic programming-based algorithm that guarantees optimal part selection. In addition, we provide three extensions that are based on symmetry check, information look-ahead and branch-and-bound techniques, to reduce the running time and space requirements. We have evaluated the proposed methodology with a benchmark of 11 circuits, a database of 73 parts and 304 experimentally constructed modules with encouraging results. This work represents a fundamental departure from traditional heuristic-based methods for part and module selection and is a step toward maximizing efficiency in synthetic circuit design and construction. PMID:24933033

  13. Selection of optimal artificial boundary condition (ABC) frequencies for structural damage identification

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Lu, Yong

    2016-07-01

    In this paper, the sensitivities of artificial boundary condition (ABC) frequencies to damage are investigated, and the optimal sensors are selected to provide reliable structural damage identification. The sensitivity expressions for one-pin and two-pin ABC frequencies, which are the natural frequencies of a structure with one and two additional constraints added to its original boundary condition, respectively, are proposed. Based on these expressions, the contributions of the underlying mode shapes to the ABC frequencies can be calculated and used to select the more sensitive ABC frequencies. Selection criteria are then defined for different conditions, and their performance in structural damage identification is examined with numerical studies. Conclusions are drawn from these findings.

  14. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity, which are conflicting requirements since larger porosity causes lower mechanical properties. This paper seeks the maximum-stiffness architecture under the constraint of a specified volume fraction using a topology optimization approach; that is, maximum porosity can be achieved with predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7% were fabricated by the Selective Laser Melting (SLM) process and tested in compression. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were about 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process would be suitable for the development of titanium implant materials in consideration of both porosity and mechanical stiffness. PMID:23988713

  15. Achieving diverse and monoallelic olfactory receptor selection through dual-objective optimization design.

    PubMed

    Tian, Xiao-Jun; Zhang, Hang; Sannerud, Jens; Xing, Jianhua

    2016-05-24

    Multiple-objective optimization is common in biological systems. In the mammalian olfactory system, each sensory neuron stochastically expresses only one out of up to thousands of olfactory receptor (OR) gene alleles; at the organism level, the types of expressed ORs need to be maximized. Existing models focus only on monoallele activation, and cannot explain recent observations in mutants, especially the reduced global diversity of expressed ORs in G9a/GLP knockouts. In this work we integrated existing information on OR expression, and constructed a comprehensive model that has all its components based on physical interactions. Analyzing the model reveals an evolutionarily optimized three-layer regulation mechanism, which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop that mechanistically differs from previous theoretical proposals, and a previously unidentified enhancer competition step. This model not only recapitulates monoallelic OR expression, but also elucidates how the olfactory system maximizes and maintains the diversity of OR expression, and has multiple predictions validated by existing experimental results. Through making an analogy to a physical system with thermally activated barrier crossing and comparative reverse engineering analyses, the study reveals that the olfactory receptor selection system is optimally designed, and particularly underscores cooperativity and synergy as a general design principle for multiobjective optimization in biology. PMID:27162367

  16. Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS

    NASA Astrophysics Data System (ADS)

    Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.

    2013-02-01

    Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with dynamic nuclear polarization hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by facilitating anatomically shaped region-of-interest (ROI) single-metabolite signals available for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of quickly and iteratively providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.

  17. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

    Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In cancer type classification research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good selection method for genes relevant for sample classification is needed to improve the predictive accuracy, and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose to combine tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and a support vector machine with one-versus-rest serve as evaluators of the TS and BPSO. The proposed method is applied to 11 classification problems taken from the literature. Experimental results show that our method simplifies features effectively and either obtains higher classification accuracy or uses fewer features compared to other feature selection methods. PMID:20047491

  18. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
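
    A minimal scikit-learn sketch of the two stages, SVM-RFE feature ranking followed by a search over (C, γ) for the final RBF SVM, is given below. The coarse grid search stands in for the paper's Taguchi design, and the wine dataset is a placeholder for the Dermatology and Zoo data.

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
Xs = StandardScaler().fit_transform(X)

# SVM-RFE: recursively drop the lowest-weight features of a linear SVM.
rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=8).fit(Xs, y)
X_sel = Xs[:, rfe.support_]

# Coarse (C, gamma) search for the final RBF multiclass SVM
# (a simple stand-in for the Taguchi orthogonal-array design of the paper).
best = ((None, None), -1.0)
for C in (0.1, 1, 10, 100):
    for gamma in (0.001, 0.01, 0.1, 1):
        acc = cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma), X_sel, y, cv=5).mean()
        if acc > best[1]:
            best = ((C, gamma), acc)

print("kept features:", rfe.support_.nonzero()[0])
print("best (C, gamma):", best[0], "CV accuracy:", round(best[1], 3))
```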

  19. New indicator for optimal preprocessing and wavelength selection of near-infrared spectra.

    PubMed

    Skibsted, E T S; Boelens, H F M; Westerhuis, J A; Witte, D T; Smilde, A K

    2004-03-01

    Preprocessing of near-infrared spectra to remove unwanted (i.e., non-related) spectral variation, together with the selection of informative wavelengths, is considered a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing techniques and selecting different wavelengths is to compare prediction statistics computed with an independent set of data not used to make the actual calibration model. When the errors of the reference values are large, when no such values are available at all, or when only a limited number of samples is available, other methods exist to evaluate the preprocessing method and wavelength selection. In this work we present a new indicator (SE) that only requires blank sample spectra, i.e., spectra of samples that are mixtures of the interfering constituents (everything except the analyte), a pure analyte spectrum, or alternatively, a sample spectrum where the analyte is present. The indicator is based on computing the net analyte signal of the analyte and the total error, i.e., instrumental noise and bias. By comparing the indicator values when different preprocessing techniques and wavelength selections are applied to the spectra, the optimal preprocessing technique and the optimal wavelength selection can be determined without knowledge of reference values, i.e., it minimizes the non-related spectral variation. The SE indicator is compared to two other indicators that also use net analyte signal computations. To demonstrate the feasibility of the SE indicator, two near-infrared spectral data sets from the pharmaceutical industry were used, i.e., diffuse reflectance spectra of powder samples and transmission spectra of tablets. Especially in pharmaceutical spectroscopic applications, it is expected beforehand that the non-related spectral variation is rather large and it is important to remove it. The indicator gave excellent results with respect to wavelength selection and optimal
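
    The net-analyte-signal computation at the heart of such an indicator can be sketched as below: the pure analyte spectrum is projected onto the orthogonal complement of the space spanned by the blank (interferent) spectra. The synthetic Gaussian spectra are assumptions, and the noise and bias terms of the full SE indicator are omitted.

```python
import numpy as np

wl = np.linspace(1100, 2500, 350)            # assumed NIR wavelength grid (nm)

def gauss(center, width):
    """Synthetic absorption band used as a stand-in spectrum."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Blank spectra: mixtures of interfering constituents only (assumed shapes).
blanks = np.vstack([gauss(1450, 60), gauss(1940, 80), gauss(2250, 90)])
analyte = gauss(1680, 50)                    # assumed pure analyte spectrum

# Net analyte signal: the part of the analyte spectrum orthogonal to the blanks.
P_orth = np.eye(wl.size) - blanks.T @ np.linalg.pinv(blanks.T)
nas = P_orth @ analyte
print("||NAS|| / ||pure analyte spectrum|| =",
      round(float(np.linalg.norm(nas) / np.linalg.norm(analyte)), 3))
```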

  20. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbourhood searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  1. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbourhood searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  2. Impact of cultivar selection and process optimization on ethanol yield from different varieties of sugarcane

    PubMed Central

    2014-01-01

    Background: The development of ‘energycane’ varieties of sugarcane is underway, targeting the use of both sugar juice and bagasse for ethanol production. The current study evaluated a selection of such ‘energycane’ cultivars for the combined ethanol yields from juice and bagasse, by optimization of dilute acid pretreatment of bagasse for sugar yields. Method: A central composite design under response surface methodology was used to investigate the effects of dilute acid pretreatment parameters followed by enzymatic hydrolysis on the combined sugar yield of bagasse samples. The pressed slurry generated from optimum pretreatment conditions (maximum combined sugar yield) was used as the substrate during batch and fed-batch simultaneous saccharification and fermentation (SSF) processes at different solid loadings and enzyme dosages, aiming to reach an ethanol concentration of at least 40 g/L. Results: Significant variations were observed in the sugar yields (xylose, glucose and combined sugar yield) from pretreatment-hydrolysis of bagasse from different cultivars of sugarcane. A difference of up to 33% in combined sugar yield between the best performing varieties and industrial bagasse was observed at optimal pretreatment-hydrolysis conditions. Significant improvement in overall ethanol yield after SSF of the pretreated bagasse was also observed for the best performing varieties (84.5 to 85.6%) compared to industrial bagasse (74.5%). The ethanol concentration showed an inverse correlation with lignin content and the ratio of xylose to arabinose, but a positive correlation with glucose yield from pretreatment-hydrolysis. The overall assessment of the cultivars showed greater improvement in the final ethanol concentration (26.9 to 33.9%) and combined ethanol yields per hectare (83 to 94%) for the best performing varieties with respect to industrial sugarcane. Conclusions: These results suggest that the selection of sugarcane variety to optimize ethanol

  3. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model

  4. Optimal selection of space transportation fleet to meet multi-mission space program needs

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Montoya, Alex J.

    1989-01-01

    A space program that spans several decades will comprise a collection of missions such as a low Earth orbital space station, a polar platform, a geosynchronous space station, a lunar base, a Mars astronaut mission, and a Mars base. The optimal selection of a fleet of several recoverable and expendable launch vehicles, upper stages, and interplanetary spacecraft necessary to logistically establish and support these space missions can be examined by means of a linear integer programming optimization model. Such a selection must be made because the economies of scale which come from producing large quantities of a few standard vehicle types, rather than many, will be needed to provide learning curve effects to reduce the overall cost of space transportation if these future missions are to be affordable. Optimization model inputs come from data and from vehicle designs. Each launch vehicle currently in existence has a launch history, giving rise to statistical estimates of launch reliability. For future, not-yet-developed launch vehicles, theoretical reliabilities corresponding to the maturity of the launch vehicles' technology and the degree of design redundancy must be estimated. Also, each such launch vehicle has a certain historical or estimated development cost, tooling cost, and a variable cost. The cost of a launch used in this paper includes the variable cost plus an amortized portion of the fixed and development costs. The integer linear programming model will have several constraint equations based on assumptions of mission mass requirements, volume requirements, and the number of astronauts needed. The model will minimize launch vehicle logistic support cost and will select the most desirable launch vehicle fleet.
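
    A toy version of such an integer linear program is sketched below using SciPy's milp solver: integer launch counts per candidate vehicle minimize total cost subject to aggregate mass and volume requirements. The vehicle data are invented for illustration, and the reliability, crew and learning-curve aspects of the full model are omitted.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Hypothetical data: 3 candidate vehicles, amortised cost per launch ($M),
# payload mass (t) and pressurised volume (m^3) delivered per launch.
cost = np.array([90.0, 140.0, 400.0])
mass = np.array([20.0, 45.0, 120.0])
vol = np.array([60.0, 150.0, 500.0])
need_mass, need_vol = 600.0, 1800.0        # assumed total program requirements

constraints = LinearConstraint(np.vstack([mass, vol]), [need_mass, need_vol], np.inf)
res = milp(c=cost,                         # minimise total launch cost
           constraints=constraints,
           integrality=np.ones(3),         # launch counts must be integers
           bounds=Bounds(0, np.inf))
print("launches per vehicle:", res.x, "total cost ($M):", res.fun)
```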

  5. Optimal Spectral Domain Selection for Maximizing Archaeological Signatures: Italy Case Studies

    PubMed Central

    Cavalli, Rosa Maria; Pascucci, Simone; Pignatti, Stefano

    2009-01-01

    Different landscape elements, including archaeological remains, can be automatically classified when their spectral characteristics are different, but major difficulties occur when extracting and classifying archaeological spectral features, as archaeological remains do not have unique shape or spectral characteristics. The spectral anomaly characteristics due to buried remains depend strongly on vegetation cover and/or soil types, which can make feature extraction more complicated. For crop areas, such as the test sites selected for this study, soil and moisture changes within near-surface archaeological deposits can influence surface vegetation patterns creating spectral anomalies of various kinds. In this context, this paper analyzes the usefulness of hyperspectral imagery, in the 0.4 to 12.8 μm spectral region, to identify the optimal spectral range for archaeological prospection as a function of the dominant land cover. MIVIS airborne hyperspectral imagery acquired in five different archaeological areas located in Italy has been used. Within these archaeological areas, 97 test sites with homogenous land cover and characterized by a statistically significant number of pixels related to the buried remains have been selected. The archaeological detection potential for all MIVIS bands has been assessed by applying a Separability Index on each spectral anomaly-background system of the test sites. A scatterplot analysis of the SI values vs. the dominant land cover fractional abundances, as retrieved by spectral mixture analysis, was performed to derive the optimal spectral ranges maximizing the archaeological detection. This work demonstrates that whenever we know the dominant land cover fractional abundances in archaeological sites, we can a priori select the optimal spectral range to improve the efficiency of archaeological observations performed by remote sensing data. PMID:22573985
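
    A hedged sketch of per-band separability screening is shown below; the paper's specific Separability Index is not reproduced. A Fisher-style ratio between anomaly and background pixels stands in for it, and the synthetic spectra and band count are assumptions.

```python
import numpy as np

def band_separability(anomaly, background):
    """Fisher-style ratio per band for two pixel sets of shape (n_pix, n_bands)."""
    m_a, m_b = anomaly.mean(axis=0), background.mean(axis=0)
    v_a, v_b = anomaly.var(axis=0), background.var(axis=0)
    return (m_a - m_b) ** 2 / (v_a + v_b + 1e-12)

rng = np.random.default_rng(7)
n_bands = 102                                     # MIVIS-like band count (assumed)
background = rng.normal(0.30, 0.05, (500, n_bands))
anomaly = rng.normal(0.30, 0.05, (60, n_bands))
anomaly[:, 40:55] += 0.08                         # crop-mark-like signature in a few bands

si = band_separability(anomaly, background)
best = np.argsort(si)[::-1][:5]
print("most separable bands:", np.sort(best))
```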

  6. Selection of a site for the DUMAND detector with optimal water transparency

    NASA Astrophysics Data System (ADS)

    Karabashev, G. S.; Kuleshov, A. F.

    1989-06-01

    With reference to selecting a site for the DUMAND detector with optimal water transparency, measurements were made of the spectral distribution of light attenuation coefficients in samples from different bodies of water, including Lake Baikal, the Atlantic Ocean, and the Mediterranean Sea. Results on the detection efficiency of Cerenkov radiation by the DUMAND detector indicate that not only the abyssal waters of the open ocean but also depressions of land-locked seas have optical properties suitable for the operation of the DUMAND detector.

  7. Screening and selection of synthetic peptides for a novel and optimized endotoxin detection method.

    PubMed

    Mujika, M; Zuzuarregui, A; Sánchez-Gómez, S; Martínez de Tejada, G; Arana, S; Pérez-Lorenzo, E

    2014-09-30

    The current validated endotoxin detection methods, in spite of being highly sensitive, present several drawbacks in terms of reproducibility, handling and cost. Therefore novel approaches are being carried out in the scientific community to overcome these difficulties. Remarkable efforts are focused on the development of endotoxin-specific biosensors. The key feature of these solutions relies on the proper definition of the capture protocol, especially of the bio-receptor or ligand. The aim of the presented work is the screening and selection of a synthetic peptide specifically designed for LPS detection, as well as the optimization of a procedure for its immobilization onto gold substrates for further application to biosensors. PMID:25034430

  8. Hyperspectral band selection based on parallel particle swarm optimization and impurity function band prioritization schemes

    NASA Astrophysics Data System (ADS)

    Chang, Yang-Lang; Liu, Jin-Nan; Chen, Yen-Lin; Chang, Wen-Yen; Hsieh, Tung-Ju; Huang, Bormin

    2014-01-01

    In recent years, satellite imaging technologies have resulted in an increased number of bands acquired by hyperspectral sensors, greatly advancing the field of remote sensing. Accordingly, owing to the increasing number of bands, band selection in hyperspectral imagery for dimension reduction is important. This paper presents a framework for band selection in hyperspectral imagery that uses two techniques, referred to as particle swarm optimization (PSO) band selection and the impurity function band prioritization (IFBP) method. With the PSO band selection algorithm, highly correlated bands of hyperspectral imagery can first be grouped into modules to coarsely reduce high-dimensional datasets. Then, these highly correlated band modules are analyzed with the IFBP method to finely select the most important feature bands from the hyperspectral imagery dataset. However, PSO band selection is a time-consuming procedure when the number of hyperspectral bands is very large. Hence, this paper proposes a parallel computing version of PSO, namely parallel PSO (PPSO), using a modern graphics processing unit (GPU) architecture with NVIDIA's compute unified device architecture technology to improve the computational speed of PSO processes. The natural parallelism of the proposed PPSO lies in the fact that each particle can be regarded as an independent agent. Parallel computation benefits the algorithm by providing each agent with a parallel processor. The intrinsic parallel characteristics embedded in PPSO are, therefore, suitable for parallel computation. The effectiveness of the proposed PPSO is evaluated through the use of airborne visible/infrared imaging spectrometer hyperspectral images. The performance of PPSO is validated using the supervised K-nearest neighbor classifier. The experimental results demonstrate that the proposed PPSO/IFBP band selection method can not only improve computational speed, but also offer a satisfactory classification performance.

  9. Optimization of the excitation light sheet in selective plane illumination microscopy.

    PubMed

    Gao, Liang

    2015-03-01

    Selective plane illumination microscopy (SPIM) allows rapid 3D live fluorescence imaging on biological specimens with high 3D spatial resolution, good optical sectioning capability and minimal photobleaching and phototoxic effect. SPIM gains its advantage by confining the excitation light near the detection focal plane, and its performance is determined by the ability to create a thin, large and uniform excitation light sheet. Several methods have been developed to create such an excitation light sheet for SPIM. However, each method has its own strengths and weaknesses, and tradeoffs must be made among different aspects in SPIM imaging. In this work, we present a strategy to select the excitation light sheet among the latest SPIM techniques, and to optimize its geometry based on spatial resolution, field of view, optical sectioning capability, and the sample to be imaged. Besides the light sheets discussed in this work, the proposed strategy is also applicable to estimate the SPIM performance using other excitation light sheets. PMID:25798312

  10. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
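
    The idea of collecting performance data and generating selection logic can be illustrated very loosely (this is not the patented apparatus) by benchmarking two implementations of a function across input sizes and dispatching to the fastest one for each size regime.

```python
import timeit
import numpy as np

def sum_loop(x):                     # naive Python-loop implementation
    total = 0.0
    for v in x:
        total += v
    return total

def sum_numpy(x):                    # vectorised implementation
    return float(np.sum(x))

IMPLS = {"loop": sum_loop, "numpy": sum_numpy}
SIZES = [10, 1_000, 100_000]

# Collect performance data: fastest implementation per input size.
table = {}
for n in SIZES:
    data = list(np.random.default_rng(0).standard_normal(n))
    table[n] = min(IMPLS, key=lambda k: timeit.timeit(lambda: IMPLS[k](data), number=20))

def dispatch_sum(x):
    """Call the implementation that benchmarked fastest for inputs of this size."""
    n = min(SIZES, key=lambda s: abs(s - len(x)))
    return IMPLS[table[n]](x)

print("selection table:", table)
print(dispatch_sum(list(range(5000))))
```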

  11. Strategies to optimize shock wave lithotripsy outcome: Patient selection and treatment parameters

    PubMed Central

    Semins, Michelle Jo; Matlaga, Brian R

    2015-01-01

    Shock wave lithotripsy (SWL) was introduced in 1980, modernizing the treatment of upper urinary tract stones, and quickly became the most commonly utilized technique to treat kidney stones. Over the past 5-10 years, however, use of SWL has been declining because it is not as reliably effective as more modern technology. SWL success rates vary considerably and there is abundant literature predicting outcome based on patient- and stone-specific parameters. Herein we discuss the ways to optimize SWL outcomes by reviewing proper patient selection utilizing stone characteristics and patient features. Stone size, number, location, density, composition, and patient body habitus and renal anatomy are all discussed. We also review the technical parameters during SWL that can be controlled to improve results further, including type of anesthesia, coupling, shock wave rate, focal zones, pressures, and active monitoring. Following these basic principles and selection criteria will help maximize success rate. PMID:25949936

  12. Induction motor fault diagnosis based on the k-NN and optimal feature selection

    NASA Astrophysics Data System (ADS)

    Nguyen, Ngoc-Tu; Lee, Hong-Hee

    2010-09-01

    The k-nearest neighbour (k-NN) rule is applied to diagnose the conditions of induction motors. The features are extracted from the time-domain vibration signals, while the optimal features are selected by a genetic algorithm based on a distance criterion. A weight value is assigned to each feature to help select the best quality features. To improve the classification performance of the k-NN rule, each of the k neighbours is evaluated by a weight factor based on its distance to the test pattern. The proposed k-NN is compared to the conventional k-NN and support vector machine classification to verify the performance of induction motor fault diagnosis.
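
    A small sketch of the distance-weighted k-NN vote with per-feature weights (such as those a genetic algorithm might assign) is given below; the features, labels and weights are synthetic stand-ins for the vibration data.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, feat_w=None):
    """k-NN vote where each neighbour is weighted by 1/distance and each
    feature by a (e.g. GA-derived) weight; feat_w defaults to uniform."""
    w = np.ones(X_train.shape[1]) if feat_w is None else feat_w
    d = np.sqrt(((w * (X_train - x)) ** 2).sum(axis=1))
    idx = np.argsort(d)[:k]
    votes = {}
    for i in idx:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + 1e-12)
    return max(votes, key=votes.get)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = (X[:, 0] > 0).astype(int)          # stand-in for "healthy" vs "faulty" labels
print("predicted class:", weighted_knn_predict(X, y, rng.standard_normal(6), k=7))
```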

  13. Optimization of 1,2,5-Thiadiazole Carbamates as Potent and Selective ABHD6 Inhibitors #

    PubMed Central

    Patel, Jayendra Z.; Nevalainen, Tapio J.; Savinainen, Juha R.; Adams, Yahaya; Laitinen, Tuomo; Runyon, Robert S.; Vaara, Miia; Ahenkorah, Stephen; Kaczor, Agnieszka A.; Navia-Paldanius, Dina; Gynther, Mikko; Aaltonen, Niina; Joharapurkar, Amit A.; Jain, Mukul R.; Haka, Abigail S.; Maxfield, Frederick R.; Laitinen, Jarmo T.; Parkkari, Teija

    2015-01-01

    At present, inhibitors of α/β-hydrolase domain 6 (ABHD6) are viewed as a promising approach to treat inflammation and metabolic disorders. This article describes the optimization of 1,2,5-thiadiazole carbamates as ABHD6 inhibitors. Altogether, 34 compounds were synthesized and their inhibitory activity was tested using lysates of HEK293 cells transiently expressing human ABHD6 (hABHD6). Among the compound series, 4-morpholino-1,2,5-thiadiazol-3-yl cyclooctyl(methyl)carbamate (JZP-430, 55) potently and irreversibly inhibited hABHD6 (IC50 44 nM) and showed good selectivity (∼230 fold) over fatty acid amide hydrolase (FAAH) and lysosomal acid lipase (LAL), the main off-targets of related compounds. Additionally, activity-based protein profiling (ABPP) indicated that compound 55 (JZP-430) displayed good selectivity among the serine hydrolases of mouse brain membrane proteome. PMID:25504894

  14. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for detection of land and water features due to the information contained in reflectance signatures such as the bi-directional reflectance distribution function or BRDF. The reflectance signature directly shows the relative absorption and backscattering features of targets. These features can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. In real-time detection applications, processing of hyperspectral data can be an important tool, and optimal band selection is thus needed in order to select the essential bands using the absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of the contrast (using only one band out of all possible combinations available within a hyperspectral image) is generalized in order to consider all the possible combinations of wavelength dependent contrasts using hyperspectral images. The inflection (defined here as an approximation of the second derivative) is also used in order to enhance the variations in the reflectance spectra as well as in the contrast spectra, to assist in optimal band selection. The results of the selection in terms of target detection (false alarms and missed detections) are also compared with a previous feature detection method, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel. The sensor is mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore

  15. An experimental and theoretical investigation of a fuel system tuner for the suppression of combustion driven oscillations

    NASA Astrophysics Data System (ADS)

    Scarborough, David E.

    Manufacturers of commercial, power-generating, gas turbine engines continue to develop combustors that produce lower emissions of nitrogen oxides (NO x) in order to meet the environmental standards of governments around the world. Lean, premixed combustion technology is one technique used to reduce NOx emissions in many current power and energy generating systems. However, lean, premixed combustors are susceptible to thermo-acoustic oscillations, which are pressure and heat-release fluctuations that occur because of a coupling between the combustion process and the natural acoustic modes of the system. These pressure oscillations lead to premature failure of system components, resulting in very costly maintenance and downtime. Therefore, a great deal of work has gone into developing methods to prevent or eliminate these combustion instabilities. This dissertation presents the results of a theoretical and experimental investigation of a novel Fuel System Tuner (FST) used to damp detrimental combustion oscillations in a gas turbine combustor by changing the fuel supply system impedance, which controls the amplitude and phase of the fuel flowrate. When the FST is properly tuned, the heat release oscillations resulting from the fuel-air ratio oscillations damp, rather than drive, the combustor acoustic pressure oscillations. A feasibility study was conducted to prove the validity of the basic idea and to develop some basic guidelines for designing the FST. Acoustic models for the subcomponents of the FST were developed, and these models were experimentally verified using a two-microphone impedance tube. Models useful for designing, analyzing, and predicting the performance of the FST were developed and used to demonstrate the effectiveness of the FST. Experimental tests showed that the FST reduced the acoustic pressure amplitude of an unstable, model, gas-turbine combustor over a wide range of operating conditions and combustor configurations. Finally, combustor

  16. An Ant Colony Optimization Based Feature Selection for Web Page Classification

    PubMed Central

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods. PMID:25136678

  17. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has led to the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods. PMID:25136678
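
    The following Python sketch illustrates the general shape of ant colony optimization applied to feature-subset selection: pheromone values bias which features each ant includes, subsets are scored by a simple 1-nearest-neighbour hold-out accuracy, and pheromone is evaporated and then reinforced on the best subset found. The scoring function, parameters, and toy data are assumptions made for illustration; the study itself used C4.5, naive Bayes, and kNN on the WebKB and Conference datasets.

        import numpy as np

        rng = np.random.default_rng(0)

        def one_nn_accuracy(X_tr, y_tr, X_te, y_te, mask):
            """Hold-out accuracy of a 1-NN classifier restricted to the selected features."""
            if mask.sum() == 0:
                return 0.0
            A, B = X_tr[:, mask], X_te[:, mask]
            d = ((B[:, None, :] - A[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
            return float((y_tr[d.argmin(axis=1)] == y_te).mean())

        def aco_feature_selection(X_tr, y_tr, X_te, y_te, n_ants=20, n_iter=30,
                                  rho=0.1, q0=0.3):
            n_feat = X_tr.shape[1]
            tau = np.ones(n_feat)                      # pheromone per feature
            best_mask, best_score = None, -1.0
            for _ in range(n_iter):
                for _ in range(n_ants):
                    # Each ant includes feature f with probability driven by tau[f].
                    p = tau / tau.sum()
                    mask = rng.random(n_feat) < np.clip(p * n_feat * q0, 0, 1)
                    score = one_nn_accuracy(X_tr, y_tr, X_te, y_te, mask)
                    if score > best_score:
                        best_mask, best_score = mask.copy(), score
                # Evaporate, then reinforce pheromone on the best subset so far.
                tau = (1 - rho) * tau
                tau[best_mask] += best_score
            return best_mask, best_score

        # Toy data: only 2 of 10 features are informative.
        X = rng.normal(size=(200, 10))
        y = (X[:, 0] + X[:, 3] > 0).astype(int)
        mask, acc = aco_feature_selection(X[:150], y[:150], X[150:], y[150:])
        print("selected features:", np.where(mask)[0], "accuracy:", round(acc, 3))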

  18. Small sample training and test selection method for optimized anomaly detection algorithms in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2012-01-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, the ratio of target pixels, and the number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small sample training and test selection method is contrasted with randomly selected training sets as well as training sets chosen from the CADEX and DUPLEX algorithms for the well-known Reed-Xiaoli anomaly detector.

  19. Optimal sequence selection in proteins of known structure by simulated evolution.

    PubMed Central

    Hellinga, H W; Richards, F M

    1994-01-01

    Rational design of protein structure requires the identification of optimal sequences to carry out a particular function within a given backbone structure. A general solution to this problem requires that a potential function describing the energy of the system as a function of its atomic coordinates be minimized simultaneously over all available sequences and their three-dimensional atomic configurations. Here we present a method that explicitly minimizes a semiempirical potential function simultaneously in these two spaces, using a simulated annealing approach. The method takes the fixed three-dimensional coordinates of a protein backbone and stochastically generates possible sequences through the introduction of random mutations. The corresponding three-dimensional coordinates are constructed for each sequence by "redecorating" the backbone coordinates of the original structure with the corresponding side chains. These are then allowed to vary in their structure by random rotations around free torsional angles to generate a stochastic walk in configurational space. We have named this method protein simulated evolution, because, in loose analogy with natural selection, it randomly selects for allowed solutions in the sequence of a protein subject to the "selective pressure" of a potential function. Energies predicted by this method for sequences of a small group of residues in the hydrophobic core of the phage lambda cI repressor correlate well with experimentally determined biological activities. This "genetic selection by computer" approach has potential applications in protein engineering, rational protein design, and structure-based drug discovery. PMID:8016069
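
    A minimal sketch of the simulated-evolution idea follows: a Metropolis/simulated-annealing walk over a joint space of sequences (random mutations) and side-chain torsions (random rotations), accepting moves according to an energy function. The toy hydrophobic-burial energy, the move sizes, and the annealing schedule are stand-ins for the semiempirical potential and side-chain treatment described in the abstract.

        import math, random

        random.seed(1)

        AMINO = "ACDEFGHIKLMNPQRSTVWY"
        HYDROPHOBIC = set("AFILMVWY")

        def toy_energy(seq, chi):
            """Toy potential: reward hydrophobic residues at 'core' positions and
            penalize clash-like contributions encoded by the torsion variables chi."""
            core = range(0, len(seq), 2)                    # pretend even positions are buried
            burial = -sum(1.0 for i in core if seq[i] in HYDROPHOBIC)
            clash = sum((math.cos(c) + 1.0) ** 2 for c in chi) * 0.05
            return burial + clash

        def simulated_evolution(n_res=12, steps=20000, t0=2.0, t1=0.05):
            seq = [random.choice(AMINO) for _ in range(n_res)]
            chi = [random.uniform(-math.pi, math.pi) for _ in range(n_res)]
            e = toy_energy(seq, chi)
            for k in range(steps):
                t = t0 * (t1 / t0) ** (k / steps)          # geometric annealing schedule
                new_seq, new_chi = list(seq), list(chi)
                if random.random() < 0.5:                  # random mutation ...
                    new_seq[random.randrange(n_res)] = random.choice(AMINO)
                else:                                      # ... or random torsion rotation
                    new_chi[random.randrange(n_res)] += random.gauss(0.0, 0.5)
                e_new = toy_energy(new_seq, new_chi)
                if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                    seq, chi, e = new_seq, new_chi, e_new  # Metropolis acceptance
            return "".join(seq), e

        best_seq, best_e = simulated_evolution()
        print(best_seq, round(best_e, 2))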

  20. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780

  1. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection.

    PubMed

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J; Baufreton, Jérôme

    2016-07-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes' equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called 'prototypic' and 'arkypallidal' neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780
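
    As a purely illustrative aside, the normalization step that these models attribute to the STN-GPe loop amounts to dividing the exponentiated evidence for each action by the summed evidence over all actions, as in the sketch below. This is an illustration of the Bayesian normalization computation only, not a model of prototypic or arkypallidal neurons; the evidence values are made up.

        import numpy as np

        def action_posteriors(log_evidence, log_prior=None):
            """Posterior probability of each action given accumulated log evidence.

            The denominator (the normalization term in Bayes' equation) is what the
            STN-GPe network is hypothesized to compute in the models discussed above.
            """
            log_evidence = np.asarray(log_evidence, dtype=float)
            if log_prior is None:
                log_prior = np.zeros_like(log_evidence)      # uniform prior
            z = log_evidence + log_prior
            z -= z.max()                                      # numerical stability
            unnormalized = np.exp(z)
            return unnormalized / unnormalized.sum()

        # Three candidate actions with different accumulated (cortical) evidence.
        print(action_posteriors([2.0, 0.5, -1.0]).round(3))   # ~[0.79, 0.18, 0.04]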

  2. Discovery of a potent class I selective ketone histone deacetylase inhibitor with antitumor activity in vivo and optimized pharmacokinetic properties.

    PubMed

    Kinzel, Olaf; Llauger-Bufi, Laura; Pescatore, Giovanna; Rowley, Michael; Schultz-Fademrecht, Carsten; Monteagudo, Edith; Fonsi, Massimiliano; Gonzalez Paz, Odalys; Fiore, Fabrizio; Steinkühler, Christian; Jones, Philip

    2009-06-11

    The optimization of a potent, class I selective ketone HDAC inhibitor is shown. It possesses optimized pharmacokinetic properties in preclinical species, has a clean off-target profile, and is negative in a microbial mutagenicity (Ames) test. In a mouse xenograft model it shows efficacy comparable to that of vorinostat at a 10-fold reduced dose. PMID:19441846

  3. Optimization of the Dutch Matrix Test by Random Selection of Sentences From a Preselected Subset

    PubMed Central

    Dreschler, Wouter A.

    2015-01-01

    Matrix tests are available for speech recognition testing in many languages. For an accurate measurement, a steep psychometric function of the speech materials is required. For existing tests, it would be beneficial if it were possible to further optimize the available materials by increasing the function’s steepness. The objective is to show if the steepness of the psychometric function of an existing matrix test can be increased by selecting a homogeneous subset of recordings with the steepest sentence-based psychometric functions. We took data from a previous multicenter evaluation of the Dutch matrix test (45 normal-hearing listeners). Based on half of the data set, first the sentences (140 out of 311) with a similar speech reception threshold and with the steepest psychometric function (≥9.7%/dB) were selected. Subsequently, the steepness of the psychometric function for this selection was calculated from the remaining (unused) second half of the data set. The calculation showed that the slope increased from 10.2%/dB to 13.7%/dB. The resulting subset did not allow the construction of enough balanced test lists. Therefore, the measurement procedure was changed to randomly select the sentences during testing. Random selection may interfere with a representative occurrence of phonemes. However, in our material, the median phonemic occurrence remained close to that of the original test. This finding indicates that phonemic occurrence is not a critical factor. The work highlights the possibility that existing speech tests might be improved by selecting sentences with a steep psychometric function. PMID:25964195
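
    A hedged sketch of the selection procedure follows: fit a logistic psychometric function to each sentence's simulated intelligibility scores, keep sentences with a similar speech reception threshold and a fitted slope of at least 9.7%/dB, and compare mean slopes before and after selection. The simulated data, trial counts, and SRT tolerance are assumptions; only the 9.7%/dB cut-off is taken from the abstract.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)

        def psychometric(snr, srt, slope):
            """Logistic psychometric function: intelligibility vs. SNR (slope in 1/dB)."""
            return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt)))

        # Simulate per-sentence intelligibility data on a grid of SNRs.
        snrs = np.linspace(-14, -2, 7)
        n_sent = 100
        true_srt = rng.normal(-8.0, 1.0, n_sent)
        true_slope = rng.uniform(0.07, 0.15, n_sent)          # 7%/dB ... 15%/dB

        def fit_sentence(srt, slope, n_trials=30):
            p = psychometric(snrs, srt, slope)
            scores = rng.binomial(n_trials, p) / n_trials      # noisy proportions correct
            popt, _ = curve_fit(psychometric, snrs, scores, p0=[-8.0, 0.1], maxfev=5000)
            return popt                                        # (estimated SRT, slope)

        fits = np.array([fit_sentence(s, k) for s, k in zip(true_srt, true_slope)])

        # Keep sentences with a similar SRT and a steep fitted slope (>= 9.7 %/dB).
        keep = (np.abs(fits[:, 0] - np.median(fits[:, 0])) < 1.5) & (fits[:, 1] >= 0.097)
        print(f"selected {keep.sum()} of {n_sent} sentences")
        print("mean slope before: %.1f %%/dB, after: %.1f %%/dB"
              % (100 * fits[:, 1].mean(), 100 * fits[keep, 1].mean()))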

  4. Optimizing the StackSlide setup and data selection for continuous-gravitational-wave searches in realistic detector data

    NASA Astrophysics Data System (ADS)

    Shaltev, M.

    2016-02-01

    The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.

  5. Pareto archived dynamically dimensioned search with hypervolume-based selection for multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Masoud; Tolson, Bryan

    2013-12-01

    Pareto archived dynamically dimensioned search (PA-DDS) is a parsimonious multi-objective optimization algorithm with only one parameter to diminish the user's effort for fine-tuning algorithm parameters. This study demonstrates that hypervolume contribution (HVC) is a very effective selection metric for PA-DDS and Monte Carlo sampling-based HVC is very effective for higher dimensional problems (five objectives in this study). PA-DDS with HVC performs comparably to algorithms commonly applied to water resources problems (ɛ-NSGAII and AMALGAM under recommended parameter values). Comparisons on the CEC09 competition show that with sufficient computational budget, PA-DDS with HVC performs comparably to 13 benchmark algorithms and shows improved relative performance as the number of objectives increases. Lastly, it is empirically demonstrated that the total optimization runtime of PA-DDS with HVC is dominated (90% or higher) by solution evaluation runtime whenever evaluation exceeds 10 seconds/solution. Therefore, optimization algorithm runtime associated with the unbounded archive of PA-DDS is negligible in solving computationally intensive problems.
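
    The hypervolume-contribution selection metric can be estimated by Monte Carlo sampling as in the sketch below: sample points in the objective-space box bounded by the reference point and count those dominated by exactly one archive member. The reference point, sample count, and toy front are illustrative assumptions, not the PA-DDS implementation.

        import numpy as np

        rng = np.random.default_rng(7)

        def mc_hypervolume_contribution(front, ref, n_samples=20000):
            """Monte Carlo estimate of each solution's exclusive hypervolume (HVC).

            front: (n_solutions, n_objectives) array of non-dominated points (minimization).
            ref:   reference point bounding the dominated region from above.
            """
            front = np.asarray(front, float)
            lower = front.min(axis=0)
            samples = rng.uniform(lower, ref, size=(n_samples, front.shape[1]))
            # For each sample, mark which front members dominate it.
            dom = np.all(front[None, :, :] <= samples[:, None, :], axis=2)
            counts = dom.sum(axis=1)
            exclusive = dom & (counts[:, None] == 1)   # dominated by exactly one solution
            box_volume = np.prod(np.asarray(ref) - lower)
            return exclusive.mean(axis=0) * box_volume

        # Toy 2-objective front: the middle point covers the largest exclusive region.
        front = np.array([[0.1, 0.9], [0.4, 0.4], [0.9, 0.1]])
        hvc = mc_hypervolume_contribution(front, ref=[1.0, 1.0])
        print(hvc.round(3), "-> select solution", int(hvc.argmax()))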

  6. Optimization strategies for fast detection of positive selection on phylogenetic trees

    PubMed Central

    Valle, Mario; Schabauer, Hannes; Pacher, Christoph; Stockinger, Heinz; Stamatakis, Alexandros; Robinson-Rechavi, Marc; Salamin, Nicolas

    2014-01-01

    Motivation: The detection of positive selection is widely used to study gene and genome evolution, but its application remains limited by the high computational cost of existing implementations. We present a series of computational optimizations for more efficient estimation of the likelihood function on large-scale phylogenetic problems. We illustrate our approach using the branch-site model of codon evolution. Results: We introduce novel optimization techniques that substantially outperform both CodeML from the PAML package and our previously optimized sequential version SlimCodeML. These techniques can also be applied to other likelihood-based phylogeny software. Our implementation scales well for large numbers of codons and/or species. It can therefore analyse substantially larger datasets than CodeML. We evaluated FastCodeML on different platforms and measured average sequential speedups of FastCodeML (single-threaded) versus CodeML of up to 5.8, average speedups of FastCodeML (multi-threaded) versus CodeML on a single node (shared memory) of up to 36.9 for 12 CPU cores, and average speedups of the distributed FastCodeML versus CodeML of up to 170.9 on eight nodes (96 CPU cores in total). Availability and implementation: ftp://ftp.vital-it.ch/tools/FastCodeML/. Contact: selectome@unil.ch or nicolas.salamin@unil.ch PMID:24389654

  7. Selection of optimal oligonucleotide probes for microarrays using multiple criteria, global alignment and parameter estimation

    PubMed Central

    Li, Xingyuan; He, Zhili; Zhou, Jizhong

    2005-01-01

    The oligonucleotide specificity for microarray hybridization can be predicted by its sequence identity to non-targets, continuous stretch to non-targets, and/or binding free energy to non-targets. Most currently available programs only use one or two of these criteria, which may choose ‘false’ specific oligonucleotides or miss ‘true’ optimal probes in a considerable proportion. We have developed a software tool, called CommOligo using new algorithms and all three criteria for selection of optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3′-untranslated region (3′-UTR) and melting temperature (Tm), are used to check each possible oligonucleotide. A sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for identity, stretch and free energy filters are automatically determined from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request. PMID:16246912

  8. Design-Optimization and Material Selection for a Proximal Radius Fracture-Fixation Implant

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Xie, X.; Arakere, G.; Grujicic, A.; Wagner, D. W.; Vallejo, A.

    2010-11-01

    The problem of optimal size, shape, and placement of a proximal radius-fracture fixation-plate is addressed computationally using a combined finite-element/design-optimization procedure. To expand the set of physiological loading conditions experienced by the implant during normal everyday activities of the patient, beyond those typically covered by the pre-clinical implant-evaluation testing procedures, the case of a wheel-chair push exertion is considered. Toward that end, a musculoskeletal multi-body inverse-dynamics analysis is carried out of a human propelling a wheelchair. The results obtained are used as input to a finite-element structural analysis for evaluation of the maximum stress and fatigue life of the parametrically defined implant design. While optimizing the design of the radius-fracture fixation-plate, realistic functional requirements pertaining to the attainment of the required level of the device safety factor and longevity/lifecycle were considered. It is argued that the type of analyses employed in the present work should be: (a) used to complement the standard experimental pre-clinical implant-evaluation tests (the tests which normally include a limited number of daily-living physiological loading conditions and which rely on single pass/fail outcomes/decisions with respect to a set of lower-bound implant-performance criteria) and (b) integrated early in the implant design and material/manufacturing-route selection process.

  9. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    PubMed

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains depending on various peak features from several models. However, there is no study that provides the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameters estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate from the results in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and reliable classification rate as compared to standard PSO as it produces a low-variance model. PMID:25243236

  10. Feature Selection and Classifier Parameters Estimation for EEG Signals Peak Detection Using Particle Swarm Optimization

    PubMed Central

    Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains depending on various peak features from several models. However, there is no study that provides the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameters estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate from the results in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and reliable classification rate as compared to standard PSO as it produces a low-variance model. PMID:25243236
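
    The sketch below shows a generic binary-PSO feature-selection loop of the kind described above: each particle encodes a feature mask, velocities are updated from personal and global bests, and positions are re-sampled through a sigmoid of the velocity. The nearest-mean fitness function and the toy data are assumptions standing in for the EEG peak-detection pipeline and its classifier.

        import numpy as np

        rng = np.random.default_rng(42)

        def fitness(mask, X_tr, y_tr, X_va, y_va):
            """Validation accuracy of a nearest-mean classifier on the selected features."""
            mask = np.asarray(mask, dtype=bool)
            if mask.sum() == 0:
                return 0.0
            A, B = X_tr[:, mask], X_va[:, mask]
            mu0, mu1 = A[y_tr == 0].mean(0), A[y_tr == 1].mean(0)
            pred = np.linalg.norm(B - mu1, axis=1) < np.linalg.norm(B - mu0, axis=1)
            return float((pred.astype(int) == y_va).mean())

        def binary_pso(X_tr, y_tr, X_va, y_va, n_particles=20, n_iter=40,
                       w=0.7, c1=1.5, c2=1.5):
            d = X_tr.shape[1]
            pos = (rng.random((n_particles, d)) < 0.5).astype(float)   # particle = feature mask
            vel = rng.normal(0.0, 0.1, (n_particles, d))
            pbest = pos.copy()
            pbest_f = np.array([fitness(p, X_tr, y_tr, X_va, y_va) for p in pos])
            gbest = pbest[pbest_f.argmax()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, d))
                vel = np.clip(w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos), -6, 6)
                pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
                f = np.array([fitness(p, X_tr, y_tr, X_va, y_va) for p in pos])
                improved = f > pbest_f
                pbest[improved], pbest_f[improved] = pos[improved], f[improved]
                gbest = pbest[pbest_f.argmax()].copy()
            return pbest[pbest_f.argmax()].astype(bool), pbest_f.max()

        # Toy data: the label depends on features 1 and 5 only.
        X = rng.normal(size=(300, 12))
        y = (X[:, 1] - X[:, 5] > 0).astype(int)
        mask, score = binary_pso(X[:200], y[:200], X[200:], y[200:])
        print("selected features:", np.where(mask)[0], "validation accuracy:", round(score, 3))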

  11. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described, which are an orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to determine the different subregions' weights, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  12. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described, which are an orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of the feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to determine the different subregions' weights, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  13. Optimal energy window selection of a CZT-based small-animal SPECT for quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Park, Su-Jin; Yu, A. Ram; Choi, Yun Young; Kim, Kyeong Min; Kim, Hee-Joung

    2015-05-01

    Cadmium zinc telluride (CZT)-based small-animal single-photon emission computed tomography (SPECT) has desirable characteristics such as superior energy resolution, but data acquisition for SPECT imaging has been widely performed with a conventional energy window. The aim of this study was to determine the optimal energy window settings for technetium-99m (99mTc) and thallium-201 (201Tl), the most commonly used isotopes in SPECT imaging, using CZT-based small-animal SPECT for quantitative accuracy. We experimentally investigated quantitative measurements with respect to primary count rate, contrast-to-noise ratio (CNR), and scatter fraction (SF) within various energy window settings using Triumph X-SPECT. Two types of energy window settings were considered: an on-peak window and an off-peak window. In the on-peak window setting, energy centers were set on the photopeaks. In the off-peak window setting, the ratios of the energy differences between the photopeak and the lower and higher thresholds varied from 4:6 to 3:7. In addition, the energy-window width for 99mTc varied from 5% to 20%, and that for 201Tl varied from 10% to 30%. The results of this study enabled us to determine the optimal energy windows for each isotope in terms of primary count rate, CNR, and SF. We selected the optimal energy window that increases the primary count rate and CNR while decreasing SF. For 99mTc SPECT imaging, the energy window of 138-145 keV with a 5% width and off-peak ratio of 3:7 was determined to be the optimal energy window. For 201Tl SPECT imaging, the energy window of 64-85 keV with a 30% width and off-peak ratio of 3:7 was selected as the optimal energy window. Our results demonstrated that the proper energy window should be carefully chosen based on quantitative measurements in order to take advantage of desirable characteristics of CZT-based small-animal SPECT. These results provided valuable reference information for the establishment of new protocol for CZT
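
    A toy version of the window-selection procedure is sketched below: simulate a photopeak plus a smooth scatter background, then grid-search window widths and off-peak splits, scoring each candidate window by primary counts, a contrast-to-noise proxy, and scatter fraction. The spectrum model, the candidate grid, and the combined score are assumptions, not the Triumph X-SPECT measurements.

        import numpy as np

        energies = np.arange(100.0, 200.0, 0.5)              # keV grid

        # Toy spectrum for a 140.5 keV photopeak (e.g. 99mTc): Gaussian primary
        # photons plus a smooth, downscattered background.
        peak = 140.5
        primary = 1e5 * np.exp(-0.5 * ((energies - peak) / 4.0) ** 2)
        scatter = 2e4 * np.exp(-(energies - 100.0) / 60.0)

        def window_metrics(lo, hi):
            sel = (energies >= lo) & (energies <= hi)
            p, s = primary[sel].sum(), scatter[sel].sum()
            total = p + s
            sf = s / total                                   # scatter fraction
            cnr = p / np.sqrt(total)                         # simple contrast-to-noise proxy
            return p, cnr, sf

        best = None
        for width_pct in (5, 10, 15, 20):                    # window width, % of photopeak
            for lo_share in (0.5, 0.4, 0.3):                 # 5:5, 4:6, 3:7 splits
                width = peak * width_pct / 100.0
                lo, hi = peak - lo_share * width, peak + (1 - lo_share) * width
                p, cnr, sf = window_metrics(lo, hi)
                score = cnr * (1.0 - sf)                     # favour high CNR and low SF
                if best is None or score > best[0]:
                    best = (score, lo, hi, width_pct, lo_share)

        _, lo, hi, width_pct, lo_share = best
        print(f"optimal window: {lo:.1f}-{hi:.1f} keV "
              f"({width_pct}% width, {round(lo_share*10)}:{round((1-lo_share)*10)} split)")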

  14. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    PubMed

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-09-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
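
    The model-selection step rests on extracting the non-dominated (Pareto-optimal) set of candidate models once each has been scored against every motility metric; a minimal sketch follows, with made-up discrepancy scores for four hypothetical walk models.

        import numpy as np

        def pareto_front(errors):
            """Indices of non-dominated rows (minimization of every column).

            errors[i, k] = discrepancy of model/parameterization i on metric k
            (e.g. translational speed, turn speed, meandering index).
            """
            errors = np.asarray(errors, float)
            keep = np.ones(len(errors), dtype=bool)
            for i in range(len(errors)):
                if not keep[i]:
                    continue
                dominated = (np.all(errors <= errors[i], axis=1)
                             & np.any(errors < errors[i], axis=1))
                if dominated.any():
                    keep[i] = False
            return np.where(keep)[0]

        # Hypothetical calibration results for four candidate walk models, scored on
        # three metrics (lower = closer to the in vivo data).
        candidates = {
            "Brownian":    [0.60, 0.55, 0.70],
            "Levy walk":   [0.35, 0.50, 0.20],
            "CRW":         [0.20, 0.25, 0.30],
            "CRW hetero.": [0.15, 0.20, 0.25],
        }
        names = list(candidates)
        front = pareto_front(list(candidates.values()))
        print("Pareto-optimal models:", [names[i] for i in front])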

  15. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    PubMed Central

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624
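
    A simplified sketch of the restraint-selection idea appears below: for every pair of secondary-structure elements, choose the label pair with the greatest sequence separation, so that all element pairs remain connected by at least one restraint. The helix boundaries are hypothetical, and the scoring ignores the structural and accessibility filters used in the actual algorithm.

        from itertools import combinations

        # Hypothetical helix boundaries (residue ranges) for a small helical domain.
        helices = {"A": range(1, 12), "B": range(15, 26), "C": range(30, 41), "D": range(45, 56)}

        def select_restraints(elements):
            """One spin-label pair per element pair, maximizing sequence separation."""
            restraints = []
            for (na, ra), (nb, rb) in combinations(elements.items(), 2):
                # The pair with the largest |i - j| connects the two helices while
                # spanning as much of the primary sequence as possible.
                i, j = max(((i, j) for i in ra for j in rb), key=lambda p: abs(p[0] - p[1]))
                restraints.append((na, i, nb, j, abs(i - j)))
            return restraints

        for a, i, b, j, sep in select_restraints(helices):
            print(f"label pair: helix {a} residue {i:2d} <-> helix {b} residue {j:2d} "
                  f"(separation {sep})")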

  16. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field of view imaging, and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms; two which are gradient-based, concurrent methods using first- and second-order derivatives, respectively; and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach as computational speed, experimental robustness, and image quality is key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  17. Resonance Raman enhancement optimization in the visible range by selecting different excitation wavelengths

    NASA Astrophysics Data System (ADS)

    Wang, Zhong; Li, Yuee

    2015-09-01

    Resonance enhancement of Raman spectroscopy (RS) has been used to significantly improve the sensitivity and selectivity of detection for specific components in complicated environments. Resonance RS gives more insight into the biochemical structure and reactivity. In this field, selecting a proper excitation wavelength to achieve optimal resonance enhancement is vital for the study of an individual chemical/biological ingredient with a particular absorption characteristic. Raman spectra of three azo derivatives with absorption spectra in the visible range are studied under the same experimental conditions at 488, 532, and 633 nm excitations. Universal laws in the visible range have been deduced by analyzing resonance Raman (RR) spectra of samples. The long wavelength edge of the absorption spectrum is a better choice for intense enhancement and the integrity of a Raman signal. The obtained results are valuable for applying RR for the selective detection of biochemical constituents whose electronic transitions take place at energies corresponding to the visible spectra, which is much friendlier to biological samples compared to ultraviolet.

  18. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field of view imaging, and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms; two which are gradient-based, concurrent methods using first- and second-order derivatives, respectively; and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach as computational speed, experimental robustness, and image quality is key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community. PMID:25863895

  19. Optimization methods for selecting founder individuals for captive breeding or reintroduction of endangered species.

    PubMed

    Miller, Webb; Wright, Stephen J; Zhang, Yu; Schuster, Stephan C; Hayes, Vanessa M

    2010-01-01

    Methods from genetics and genomics can be employed to help save endangered species. One potential use is to provide a rational strategy for selecting a population of founders for a captive breeding program. The hope is to capture most of the available genetic diversity that remains in the wild population, to provide a safe haven where representatives of the species can be bred, and eventually to release the progeny back into the wild. However, the founders are often selected based on a random-sampling strategy whose validity is based on unrealistic assumptions. Here we outline an approach that starts by using cutting-edge genome sequencing and genotyping technologies to objectively assess the available genetic diversity. We show how combinatorial optimization methods can be applied to these data to guide the selection of the founder population. In particular, we develop a mixed-integer linear programming technique that identifies a set of animals whose genetic profile is as close as possible to specified abundances of alleles (i.e., genetic variants), subject to constraints on the number of founders and their genders and ages. PMID:19908356
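
    For small candidate pools, the founder-selection idea can be illustrated with an exhaustive search for the subset whose allele frequencies best match those of the wild population, as sketched below. The genotypes are simulated, and the gender/age constraints and the mixed-integer linear programming formulation of the paper are omitted for brevity.

        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(11)

        # Hypothetical genotypes: 20 candidate animals x 30 biallelic loci,
        # coded as the count (0, 1, 2) of the minor allele.
        genotypes = rng.integers(0, 3, size=(20, 30))
        wild_allele_freq = genotypes.mean(axis=0) / 2.0      # target allele abundances
        n_founders = 6

        def profile_distance(subset):
            """L1 distance between the founders' allele frequencies and the target."""
            founder_freq = genotypes[list(subset)].mean(axis=0) / 2.0
            return float(np.abs(founder_freq - wild_allele_freq).sum())

        best = min(combinations(range(len(genotypes)), n_founders), key=profile_distance)
        random_pick = rng.choice(len(genotypes), n_founders, replace=False)

        print("optimized founders:", best, " distance:", round(profile_distance(best), 3))
        print("random founders:   ", tuple(random_pick),
              " distance:", round(profile_distance(random_pick), 3))

    Exhaustive enumeration is only feasible for small pools; for realistic numbers of candidates and loci, an integer-programming or other combinatorial-optimization solver of the kind described in the abstract is needed.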

  20. Energetic optimization of ion conduction rate by the K+ selectivity filter

    NASA Astrophysics Data System (ADS)

    Morais-Cabral, João H.; Zhou, Yufeng; MacKinnon, Roderick

    2001-11-01

    The K+ selectivity filter catalyses the dehydration, transfer and rehydration of a K+ ion in about ten nanoseconds. This physical process is central to the production of electrical signals in biology. Here we show how nearly diffusion-limited rates are achieved, by analysing ion conduction and the corresponding crystallographic ion distribution in the selectivity filter of the KcsA K+ channel. Measurements with K+ and its slightly larger analogue, Rb+, lead us to conclude that the selectivity filter usually contains two K+ ions separated by one water molecule. The two ions move in a concerted fashion between two configurations, K+-water-K+-water (1,3 configuration) and water-K+-water-K+ (2,4 configuration), until a third ion enters, displacing the ion on the opposite side of the queue. For K+, the energy difference between the 1,3 and 2,4 configurations is close to zero, the condition of maximum conduction rate. The energetic balance between these configurations is a clear example of evolutionary optimization of protein function.

  1. Model selection based on FDR-thresholding optimizing the area under the ROC-curve.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2009-01-01

    We evaluate variable selection by multiple tests controlling the false discovery rate (FDR) to build a linear score for prediction of clinical outcome in high-dimensional data. Quality of prediction is assessed by the receiver operating characteristic curve (ROC) for prediction in independent patients. Thus we try to combine both goals: prediction and controlled structure estimation. We show that the FDR-threshold which provides the ROC-curve with the largest area under the curve (AUC) varies largely over the different parameter constellations not known in advance. Hence, we investigated a new cross validation procedure based on the maximum rank correlation estimator to determine the optimal selection threshold. This procedure (i) allows choosing an appropriate selection criterion, (ii) provides an estimate of the FDR close to the true FDR and (iii) is simple and computationally feasible for rather moderate to small sample sizes. Low estimates of the cross validated AUC (the estimates generally being positively biased) and large estimates of the cross validated FDR may indicate a lack of sufficiently prognostic variables and/or too small sample sizes. The method is applied to an oncology dataset. PMID:19572830
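
    A hedged sketch of the pipeline follows: select variables by Benjamini-Hochberg FDR thresholding of per-variable t-test p-values, build a simple sign-weighted linear score from the selected variables, and evaluate the ROC AUC on independent data for several FDR levels. The simulated data and the score construction are illustrative assumptions, not the authors' estimator or cross-validation scheme.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        def bh_select(pvals, q):
            """Benjamini-Hochberg: indices of variables selected at FDR level q."""
            m = len(pvals)
            order = np.argsort(pvals)
            passed = np.where(pvals[order] <= q * (np.arange(1, m + 1) / m))[0]
            return order[: passed.max() + 1] if passed.size else np.array([], dtype=int)

        def auc(score, y):
            """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
            r = stats.rankdata(score)
            n1 = y.sum()
            return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(y) - n1))

        # High-dimensional toy data: 30 of 2000 variables carry signal.
        n, p, signal = 200, 2000, 30
        beta = np.zeros(p)
        beta[:signal] = 0.6
        X = rng.normal(size=(n, p))
        y = (X @ beta + rng.normal(size=n) > 0).astype(int)
        X_new = rng.normal(size=(n, p))
        y_new = (X_new @ beta + rng.normal(size=n) > 0).astype(int)

        pvals = np.array([stats.ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue for j in range(p)])
        for q in (0.01, 0.05, 0.2, 0.5):
            sel = bh_select(pvals, q)
            if sel.size == 0:
                print(f"FDR {q:4.2f}: nothing selected")
                continue
            # Linear score: sign of the group-mean difference as the variable weight.
            w = np.sign(X[y == 1][:, sel].mean(0) - X[y == 0][:, sel].mean(0))
            print(f"FDR {q:4.2f}: {sel.size:4d} variables, independent AUC = "
                  f"{auc(X_new[:, sel] @ w, y_new):.3f}")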

  2. Optimal feature selection for automated classification of FDG-PET in patients with suspected dementia

    NASA Astrophysics Data System (ADS)

    Serag, Ahmed; Wenzel, Fabian; Thiele, Frank; Buchert, Ralph; Young, Stewart

    2009-02-01

    FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as Alzheimer's disease (AD), Lewy body dementia (LBD), and Frontotemporal dementia (FTD), have been shown to induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with suspected dementia is not straightforward, since patients are imaged at different stages of progression of neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal subset of features for automated classification, in order to improve classification accuracy of FDG-PET images in patients with suspected dementia. A novel feature selection method is proposed, and performance is compared to existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection methods. This is demonstrated to provide high classification accuracy for classification of FDG-PET brain images of normal controls and dementia patients, comparable with alternative approaches, and provides a compact set of features selected.

  3. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which was inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for multidimensional numerical function optimization. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.

  4. X-ray backscatter imaging for radiography by selective detection and snapshot: Evolution, development, and optimization

    NASA Astrophysics Data System (ADS)

    Shedlock, Daniel

    Compton backscatter imaging (CBI) is a single-sided imaging technique that uses the penetrating power of radiation and unique interaction properties of radiation with matter to image subsurface features. CBI has a variety of applications that include non-destructive interrogation, medical imaging, security and military applications. Radiography by selective detection (RSD), lateral migration radiography (LMR) and shadow aperture backscatter radiography (SABR) are different CBI techniques that are being optimized and developed. Radiography by selective detection (RSD) is a pencil beam Compton backscatter imaging technique that falls between highly collimated and uncollimated techniques. Radiography by selective detection uses a combination of single- and multiple-scatter photons from a projected area below a collimation plane to generate an image. As a result, the image has a combination of first- and multiple-scatter components. RSD techniques offer greater subsurface resolution than uncollimated techniques, at speeds at least an order of magnitude faster than highly collimated techniques. RSD scanning systems have evolved from a prototype into near market-ready scanning devices for use in a variety of single-sided imaging applications. The design has changed to incorporate state-of-the-art detectors and electronics optimized for backscatter imaging with an emphasis on versatility, efficiency and speed. The RSD system has become more stable, about 4 times faster, and 60% lighter while maintaining or improving image quality and contrast over the past 3 years. A new snapshot backscatter radiography (SBR) CBI technique, shadow aperture backscatter radiography (SABR), has been developed from concept and proof-of-principle to a functional laboratory prototype. SABR radiography uses digital detection media and shaded aperture configurations to generate near-surface Compton backscatter images without scanning, similar to how transmission radiographs are taken. Finally, a

  5. [Study on optimal selection of structure of vaneless centrifugal blood pump with constraints on blood perfusion and on blood damage indexes].

    PubMed

    Hu, Zhaoyan; Pan, Youlian; Chen, Zhenglong; Zhang, Tianyi; Lu, Lijun

    2012-12-01

    This paper aims to study the optimal selection of the structure of a vaneless centrifugal blood pump. The optimization objective is determined according to the requirements of clinical use. Possible schemes are worked out based on the structural features of the vaneless centrifugal blood pump. The optimal structure is selected from the possible schemes under constraints on blood perfusion and blood damage indexes. Using this optimal selection method, the optimum structural scheme can be found effectively among the possible schemes. The results of numerical simulation of the optimal blood pump showed that the method of constraining blood perfusion and blood damage meets the requirements for selecting the optimal blood pump. PMID:23469557

  6. Field trials for corrosion inhibitor selection and optimization, using a new generation of electrical resistance probes

    SciTech Connect

    Ridd, B.; Blakset, T.J.; Queen, D.

    1998-12-31

    Even with today's availability of corrosion resistant alloys, carbon steels protected by corrosion inhibitors still dominate the material selection for pipework in oil and gas production. Even though laboratory screening tests of corrosion inhibitor performance provide valuable data, the real performance of the chemical can only be studied through field trials, which provide the ultimate test to evaluate the effectiveness of an inhibitor under actual operating conditions. A new generation of electrical resistance probe has been developed, allowing highly sensitive and immediate response to changes in corrosion rates in the internal environment of production pipework. Because of the high sensitivity, the probe responds to small changes in the corrosion rate, and it provides the corrosion engineer with a highly effective method of optimizing the use of inhibitor chemicals, resulting in confidence in corrosion control and minimizing detrimental environmental effects.

  7. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables which are similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  8. Testability requirement uncertainty analysis in the sensor selection and optimization model for PHM

    NASA Astrophysics Data System (ADS)

    Yang, S. M.; Qiu, J.; Liu, G. J.; Yang, P.; Zhang, Y.

    2012-05-01

    Prognostics and health management (PHM) has been an important part of guaranteeing the reliability and safety of complex systems. Design for testability (DFT), developed concurrently with system design, is considered a fundamental way to improve PHM performance, and sensor selection and optimization (SSO) is one of the important parts of DFT. To address the problem that testability requirement analysis in the existing SSO models does not take test uncertainty in actual scenarios into account, fault detection uncertainty is first analyzed qualitatively from the perspective of fault attributes, sensor attributes, and fault-sensor matching attributes. A quantitative uncertainty analysis is then given, which assigns a rational confidence level to the fault size. A case is presented to demonstrate the proposed methodology for an electromechanical servo-controlled system, and application results show the proposed approach is reasonable and feasible.

  9. Bone Mineral Density and Fracture Risk Assessment to Optimize Prosthesis Selection in Total Hip Replacement

    PubMed Central

    Pétursson, Þröstur; Edmunds, Kyle Joseph; Gíslason, Magnús Kjartan; Magnússon, Benedikt; Magnúsdóttir, Gígja; Halldórsson, Grétar; Jónsson, Halldór; Gargiulo, Paolo

    2015-01-01

    The variability in patient outcome and propensity for surgical complications in total hip replacement (THR) necessitates the development of a comprehensive, quantitative methodology for prescribing the optimal type of prosthetic stem: cemented or cementless. The objective of the research presented herein was to describe a novel approach to this problem as a first step towards creating a patient-specific, presurgical application for determining the optimal prosthesis procedure. Finite element analysis (FEA) and bone mineral density (BMD) calculations were performed with ten voluntary primary THR patients to estimate the status of their operative femurs before surgery. A compilation model of the press-fitting procedure was generated to define a fracture risk index (FRI) from incurred forces on the periprosthetic femoral head. Comparing these values to patient age, sex, and gender elicited a high degree of variability between patients grouped by implant procedure, reinforcing the notion that age and gender alone are poor indicators for prescribing prosthesis type. Additionally, correlating FRI and BMD measurements indicated that at least two of the ten patients may have received nonideal implants. This investigation highlights the utility of our model as a foundation for presurgical software applications to assist orthopedic surgeons with selecting THR prostheses. PMID:26417376

  10. Bone Mineral Density and Fracture Risk Assessment to Optimize Prosthesis Selection in Total Hip Replacement.

    PubMed

    Pétursson, Þröstur; Edmunds, Kyle Joseph; Gíslason, Magnús Kjartan; Magnússon, Benedikt; Magnúsdóttir, Gígja; Halldórsson, Grétar; Jónsson, Halldór; Gargiulo, Paolo

    2015-01-01

    The variability in patient outcome and propensity for surgical complications in total hip replacement (THR) necessitates the development of a comprehensive, quantitative methodology for prescribing the optimal type of prosthetic stem: cemented or cementless. The objective of the research presented herein was to describe a novel approach to this problem as a first step towards creating a patient-specific, presurgical application for determining the optimal prosthesis procedure. Finite element analysis (FEA) and bone mineral density (BMD) calculations were performed with ten voluntary primary THR patients to estimate the status of their operative femurs before surgery. A compilation model of the press-fitting procedure was generated to define a fracture risk index (FRI) from incurred forces on the periprosthetic femoral head. Comparing these values to patient age, sex, and gender elicited a high degree of variability between patients grouped by implant procedure, reinforcing the notion that age and gender alone are poor indicators for prescribing prosthesis type. Additionally, correlating FRI and BMD measurements indicated that at least two of the ten patients may have received nonideal implants. This investigation highlights the utility of our model as a foundation for presurgical software applications to assist orthopedic surgeons with selecting THR prostheses. PMID:26417376

  11. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
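
    A minimal sketch of the underlying LQR computation is given below for a two-degree-of-freedom spring-mass-damper model: solve the algebraic Riccati equation and form the state-feedback gain, with the state weighting chosen as the total (strain plus kinetic) energy mentioned above. The model matrices and actuator placement are illustrative assumptions; the paper's closed-form, passivity-constrained solution is not reproduced here.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Two-DOF spring-mass-damper: M q'' + C q' + K q = B0 u, state x = [q, q'].
        M = np.diag([1.0, 0.5])
        K = np.array([[4.0, -2.0], [-2.0, 2.0]])
        C = 0.02 * K                                     # light proportional damping
        B0 = np.array([[1.0], [0.0]])                    # one actuator on mass 1

        Minv = np.linalg.inv(M)
        A = np.block([[np.zeros((2, 2)), np.eye(2)],
                      [-Minv @ K,        -Minv @ C]])
        B = np.vstack([np.zeros((2, 1)), Minv @ B0])

        # Quadratic weights: penalize total (strain + kinetic) energy and control effort.
        Q = np.block([[K, np.zeros((2, 2))], [np.zeros((2, 2)), M]])
        R = np.array([[1.0]])

        P = solve_continuous_are(A, B, Q, R)             # algebraic Riccati equation
        G = np.linalg.solve(R, B.T @ P)                  # optimal feedback gain u = -G x
        print("LQR gain:", G.round(3))
        print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ G), 3))

    Choosing Q as the block-diagonal stiffness/mass pair is what ties the weighting to a physical energy, which is the kind of constrained weighting-matrix selection the abstract refers to.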

  12. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we determine the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level is kept as close as possible to the reference point at minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
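
    The following sketch illustrates the backward-induction idea on a toy instance: at every period and inventory level, choose the supplier and order volume minimizing expected purchase cost plus a penalty for deviating from the reference inventory level. The demand scenarios, costs (collapsed to their expected values for brevity), and the reference level are made-up assumptions, not the data or exact formulation of the paper.

        # Hypothetical problem data.
        T = 4                                        # planning periods
        suppliers = {"S1": 3.0, "S2": 2.5}           # expected unit purchasing cost
        demand_scenarios = [(4, 0.3), (6, 0.5), (8, 0.2)]   # (demand, probability)
        orders = range(0, 13)                        # admissible order volumes
        levels = range(0, 21)                        # admissible inventory levels
        reference = 10                               # target inventory level
        track_weight = 1.0                           # penalty per unit deviation

        def expected_cost_to_go(V_next, level, order, unit_cost):
            """One-step expected cost: purchase + tracking penalty + future cost."""
            cost = unit_cost * order
            for demand, prob in demand_scenarios:
                nxt = min(max(level + order - demand, 0), max(levels))
                cost += prob * (track_weight * abs(nxt - reference) + V_next[nxt])
            return cost

        V = {s: 0.0 for s in levels}                 # terminal value function
        policy = {}
        for t in reversed(range(T)):                 # backward induction
            V_new = {}
            for level in levels:
                best = min(((expected_cost_to_go(V, level, q, c), name, q)
                            for q in orders for name, c in suppliers.items()))
                V_new[level] = best[0]
                policy[(t, level)] = (best[1], best[2])
            V = V_new

        supplier, qty = policy[(0, 5)]
        print(f"period 0, inventory 5 -> order {qty} units from {supplier}; "
              f"expected cost {V[5]:.1f}")

    With truly random purchasing costs, the one-step cost would also be averaged over cost scenarios, but the recursion keeps the same shape.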

  13. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend towards automation in the modern manufacturing industry, human intervention in routine, repetitive, and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of the appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician/cutting tool expert based on his knowledge base or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in the databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence based techniques such as artificial neural networks, fuzzy logic, and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks MATLAB Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of the appropriate cutting tool and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life, and tool cost.

  14. Discovery and optimization of sulfonyl acrylonitriles as selective, covalent inhibitors of protein phosphatase methylesterase-1.

    PubMed

    Bachovchin, Daniel A; Zuhl, Andrea M; Speers, Anna E; Wolfe, Monique R; Weerapana, Eranthie; Brown, Steven J; Rosen, Hugh; Cravatt, Benjamin F

    2011-07-28

    The serine hydrolase protein phosphatase methylesterase-1 (PME-1) regulates the methylesterification state of protein phosphatase 2A (PP2A) and has been implicated in cancer and Alzheimer's disease. We recently reported a fluorescence polarization-activity-based protein profiling (fluopol-ABPP) high-throughput screen for PME-1 that uncovered a remarkably potent and selective class of aza-β-lactam (ABL) PME-1 inhibitors. Here, we describe a distinct set of sulfonyl acrylonitrile inhibitors that also emerged from this screen. The optimized compound, 28 (AMZ30), selectively inactivates PME-1 and reduces the demethylated form of PP2A in living cells. Considering that 28 is structurally unrelated to ABL inhibitors of PME-1, these agents, together, provide a valuable set of pharmacological probes to study the role of methylation in regulating PP2A function. We furthermore observed that several serine hydrolases were sensitive to analogues of 28, suggesting that more extensive structural exploration of the sulfonyl acrylonitrile chemotype may result in useful inhibitors for other members of this large enzyme class. PMID:21639134

  15. An Optimization Model for the Selection of Bus-Only Lanes in a City

    PubMed Central

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers’ route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model. PMID:26214001

  16. Discovery and Optimization of Sulfonyl Acrylonitriles as Selective, Covalent Inhibitors of Protein Phosphatase Methylesterase-1

    PubMed Central

    Bachovchin, Daniel A.; Zuhl, Andrea M.; Speers, Anna E.; Wolfe, Monique R.; Weerapana, Eranthie; Brown, Steven J.; Rosen, Hugh; Cravatt, Benjamin F.

    2011-01-01

    The serine hydrolase protein phosphatase methylesterase-1 (PME-1) regulates the methylesterification state of protein phosphatase 2A (PP2A) and has been implicated in cancer and Alzheimer's disease. We recently reported a fluorescence polarization-activity-based protein profiling (fluopol-ABPP) high-throughput screen for PME-1 that uncovered a remarkably potent and selective class of aza-β-lactam (ABL) PME-1 inhibitors. Here, we describe a distinct set of sulfonyl acrylonitrile inhibitors that also emerged from this screen. The optimized compound, 28 (AMZ30), selectively inactivates PME-1 and reduces the demethylated form of PP2A in living cells. Considering that 28 is structurally unrelated to ABL inhibitors of PME-1, these agents, together, provide a valuable set of pharmacological probes to study the role of methylation in regulating PP2A function. We furthermore observed that several serine hydrolases were sensitive to analogs of 28, suggesting that more extensive structural exploration of the sulfonyl acrylonitrile chemotype may result in useful inhibitors for other members of this large enzyme class. PMID:21639134

  17. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    PubMed

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model. PMID:26214001

  18. Gene selection for cancer identification: a decision tree model empowered by particle swarm optimization algorithm

    PubMed Central

    2014-01-01

    Background In the application of microarray data, how to select a small number of informative genes from thousands of genes that may contribute to the occurrence of cancers is an important issue. Many researchers use various computational intelligence methods to analyze gene expression data. Results To achieve efficient gene selection from thousands of candidate genes that can contribute to identifying cancers, this study aims at developing a novel method utilizing particle swarm optimization combined with a decision tree as the classifier. This study also compares the performance of our proposed method with other well-known benchmark classification methods (support vector machine, self-organizing map, back propagation neural network, C4.5 decision tree, Naive Bayes, CART decision tree, and artificial immune recognition system) and conducts experiments on 11 gene expression cancer datasets. Conclusion Based on statistical analysis, our proposed method outperforms other popular classifiers for all test datasets, and is comparable to SVM for certain specific datasets. Further, the housekeeping genes with various expression patterns and tissue-specific genes are identified. These genes provide a high discrimination power on cancer classification. PMID:24555567
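
    A minimal sketch of the general idea, binary particle swarm optimization with a decision-tree classifier scoring each candidate gene subset, is shown below on synthetic data; the swarm parameters, parsimony penalty, and dataset are illustrative assumptions rather than the study's configuration.

    ```python
    # Binary PSO gene selection scored by a decision-tree classifier (synthetic
    # data; swarm settings and the parsimony penalty are illustrative assumptions).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=80, n_features=50, n_informative=8,
                               random_state=0)

    n_particles, n_iter, w, c1, c2 = 20, 30, 0.7, 1.5, 1.5
    dim = X.shape[1]
    pos = rng.integers(0, 2, size=(n_particles, dim))   # binary gene masks
    vel = rng.normal(0.0, 1.0, size=(n_particles, dim))

    def fitness(mask):
        """Cross-validated tree accuracy minus a small penalty per selected gene."""
        if mask.sum() == 0:
            return 0.0
        clf = DecisionTreeClassifier(random_state=0)
        acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
        return acc - 0.01 * mask.sum() / dim

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                # sigmoid transfer
        pos = (rng.random((n_particles, dim)) < prob).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    print("selected genes:", np.flatnonzero(gbest))
    ```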

  19. Noncovalent Mutant Selective Epidermal Growth Factor Receptor Inhibitors: A Lead Optimization Case Study.

    PubMed

    Heald, Robert; Bowman, Krista K; Bryan, Marian C; Burdick, Daniel; Chan, Bryan; Chan, Emily; Chen, Yuan; Clausen, Saundra; Dominguez-Fernandez, Belen; Eigenbrot, Charles; Elliott, Richard; Hanan, Emily J; Jackson, Philip; Knight, Jamie; La, Hank; Lainchbury, Michael; Malek, Shiva; Mann, Sam; Merchant, Mark; Mortara, Kyle; Purkey, Hans; Schaefer, Gabriele; Schmidt, Stephen; Seward, Eileen; Sideris, Steve; Shao, Lily; Wang, Shumei; Yeap, Kuen; Yen, Ivana; Yu, Christine; Heffron, Timothy P

    2015-11-25

    Because of their increased activity against activating mutants, first-generation epidermal growth factor receptor (EGFR) kinase inhibitors have had remarkable success in treating non-small-cell lung cancer (NSCLC) patients, but acquired resistance, through a secondary mutation of the gatekeeper residue, means that clinical responses only last for 8-14 months. Addressing this unmet medical need requires agents that can target both of the most common double mutants: T790M/L858R (TMLR) and T790M/del(746-750) (TMdel). Herein we describe how a noncovalent double mutant selective lead compound was optimized using a strategy focused on the structure-guided increase in potency without added lipophilicity or reduction of three-dimensional character. Following successive rounds of design and synthesis it was discovered that cis-fluoro substitution on 4-hydroxy- and 4-methoxypiperidinyl groups provided synergistic, substantial, and specific potency gain through direct interaction with the enzyme and/or effects on the proximal ligand oxygen atom. Further development of the fluorohydroxypiperidine series resulted in the identification of a pair of diastereomers that showed 50-fold enzyme and cell based selectivity for T790M mutants over wild-type EGFR (wtEGFR) in vitro and pathway knock-down in an in vivo xenograft model. PMID:26455919

  20. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1998-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  1. Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

    1999-01-01

    Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built

  2. Process optimization for lattice-selective wet etching of crystalline silicon structures

    NASA Astrophysics Data System (ADS)

    Dixson, Ronald G.; Guthrie, William F.; Allen, Richard A.; Orji, Ndubuisi G.; Cresswell, Michael W.; Murabito, Christine E.

    2016-01-01

    Lattice-selective etching of silicon is used in a number of applications, but it is particularly valuable in those for which the lattice-defined sidewall angle can be beneficial to the functional goals. A relatively small but important niche application is the fabrication of tip characterization standards for critical dimension atomic force microscopes (CD-AFMs). CD-AFMs are commonly used as reference tools for linewidth metrology in semiconductor manufacturing. Accurate linewidth metrology using CD-AFM, however, is critically dependent upon calibration of the tip width. Two national metrology institutes and at least two commercial vendors have explored the development of tip calibration standards using lattice-selective etching of crystalline silicon. The National Institute of Standards and Technology standard of this type is called the single crystal critical dimension reference material. These specimens, which are fabricated using a lattice-plane-selective etch on (110) silicon, exhibit near vertical sidewalls and high uniformity and can be used to calibrate CD-AFM tip width to a standard uncertainty of less than 1 nm. During the different generations of this project, we evaluated variations of the starting material and process conditions. Some of our starting materials required a large etch bias to achieve the desired linewidths. During the optimization experiment described in this paper, we found that for potassium hydroxide etching of the silicon features, it was possible to independently tune the target linewidth and minimize the linewidth nonuniformity. Consequently, this process is particularly well suited for small-batch fabrication of CD-AFM linewidth standards.

  3. Selectivity optimization of substituted 1,2,3-triazoles as α7 nicotinic acetylcholine receptor agonists.

    PubMed

    Arunrungvichian, Kuntarat; Fokin, Valery V; Vajragupta, Opa; Taylor, Palmer

    2015-08-19

    Three series of substituted anti-1,2,3-triazoles (IND, PPRD, and QND), synthesized by cycloaddition from azide and alkyne building blocks, were designed to enhance selectivity and potency profiles of a lead α7 nicotinic acetylcholine receptor (α7-nAChR) agonist, TTIn-1. Designed compounds were synthesized and screened for affinity by a radioligand binding assay. Their functional characterization as agonists and antagonists was performed by fluorescence resonance energy transfer assay using cell lines expressing transfected cDNAs, α7-nAChRs, α4β2-nAChRs, and 5HT3A receptors, and a fluorescence cell reporter. In the IND series, a tropane ring of TTIn-1, substituted at N1, was replaced by mono- and bicyclic amines to vary length and conformational flexibility of a carbon linker between nitrogen atom and N1 of the triazole. Compounds with a two-carbon atom linker optimized binding with Kd's at the submicromolar level. Further modification at the hydrophobic indole of TTIn-1 was made in PPRD and QND series by fixing the amine center with the highest affinity building blocks in the IND series. Compounds from IND and PPRD series are selective as agonists for the α7-nAChRs over α4β2-nAChRs and 5HT3A receptors. Lead compounds in the three series have EC50's between 28 and 260 nM. Based on the EC50, affinity, and selectivity determined from the binding and cellular responses, two of the leads have been advanced to behavioral studies described in the companion article (DOI: 10.1021/acschemneuro.5b00059). PMID:25932897

  4. Multi-objective selection and optimization of shaped materials and laminated composites

    NASA Astrophysics Data System (ADS)

    Singh, Jasveer

    Most of the current optimization techniques for the design of light-weight structures are unable to generate structural alternatives at the concept stage of design. This research tackles the challenge of developing methods for the early stage of design involving structures made up of conventional materials and composite laminates. For conventional materials, the recently introduced shape transformer approach is used. This work extends the method to deal with the case of torsional stiffness design, and generalizes it to single and multi-criteria selection of lightweight shafts subjected to a combination of bending, shear, and torsional load. The prominent feature of the work is the useful integration of shape and material to model and visualize multi-objective selection problems. The scheme is centered on concept selection in structural design, and hinges on measures that govern the shape properties of a cross-section regardless of its size. These measures, referred to as shape transformers, can classify shapes in a way similar to material classification. The procedure is demonstrated by considering torsional stiffness as a constraint. Performance charts are developed for both single and multi-criteria cases to let the reader visualize at a glance the whole range of cross-sectional shapes for each material. Each design chart is explained with a brief example. The above-mentioned approach is also extended to incorporate orthotropic composite laminates. Design charts are obtained for the selection of five generic design variables: shape, size, material, layup, and number of plies. These charts also aid in comparing the performances of two commonly used laminates in bending and torsion - angle plies and cross plies. For a generic composite laminate, due to the number of variables involved, these kinds of design charts are very difficult to construct. However, other tactics like using an analytical model for function evaluation can be used at the conceptual stage of design. This is

  5. Optimal landmarks selection and fiducial marker placement for minimal target registration error in image-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Shamir, Reuben R.; Joskowicz, Leo; Shoshan, Yigal

    2009-02-01

    We describe a new framework and method for the optimal selection of anatomical landmarks and optimal placement of fiducial markers in image-guided neurosurgery. The method allows the surgeon to optimally plan the marker locations on routine diagnostic images before preoperative imaging and to intraoperatively select the fiducial markers and the anatomical landmarks that minimize the Target Registration Error (TRE). The optimal fiducial marker configuration selection is performed by the surgeon on the diagnostic image following the target selection based on a visual Estimated TRE (E-TRE) map. The E-TRE map is automatically updated when the surgeon interactively adds and deletes candidate markers and targets. The method takes the guesswork out of the registration process, provides a reliable localization uncertainty error for navigation, and can reduce the localization error without additional imaging and hardware. Our clinical experiments on five patients who underwent brain surgery with a navigation system show that optimizing one marker location and the anatomical landmarks configuration reduces the average TRE from 4.7 mm to 3.2 mm, with a maximum improvement of 4 mm. The reduction of the target registration error has the potential to support safer and more accurate minimally invasive neurosurgical procedures.
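
    For readers unfamiliar with how an expected-TRE value can be attached to a candidate marker configuration, the sketch below applies the classical Fitzpatrick-style approximation, in which the expected TRE grows with the target's distance from the principal axes of the fiducial set and shrinks with the number and spread of fiducials. The marker layouts, target position, and FLE value are illustrative assumptions, and the paper's E-TRE map is not necessarily computed this way.

    ```python
    # Expected-TRE estimate for a candidate fiducial configuration using the
    # classical Fitzpatrick approximation; coordinates and FLE are made-up values.
    import numpy as np

    def expected_tre(fiducials, target, fle_rms):
        """Approximate RMS target registration error at `target` (input units)."""
        f = np.asarray(fiducials, dtype=float)
        t = np.asarray(target, dtype=float)
        c = f.mean(axis=0)
        fc, tc = f - c, t - c
        _, _, vt = np.linalg.svd(fc, full_matrices=False)   # principal axes (rows)
        ratio = 0.0
        for v in vt:
            d2 = tc @ tc - (tc @ v) ** 2                    # target dist^2 from axis
            f2 = np.mean(np.sum(fc**2, axis=1) - (fc @ v) ** 2)  # RMS fiducial dist^2
            ratio += d2 / f2
        return np.sqrt(fle_rms**2 / len(f) * (1.0 + ratio / 3.0))

    # Compare two hypothetical four-marker layouts around an assumed target (mm).
    target = [10.0, 20.0, 40.0]
    layout_a = [[60, 0, 0], [-60, 0, 0], [0, 60, 0], [0, -60, 10]]
    layout_b = [[20, 0, 0], [-20, 5, 0], [0, 20, 0], [0, -20, 5]]
    for name, layout in (("A", layout_a), ("B", layout_b)):
        print(name, round(expected_tre(layout, target, fle_rms=1.0), 2), "mm")
    ```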

  6. Neural Network Cascade Optimizes MicroRNA Biomarker Selection for Nasopharyngeal Cancer Prognosis

    PubMed Central

    Zhu, Wenliang; Kan, Xuan

    2014-01-01

    MicroRNAs (miRNAs) have been shown to be promising biomarkers in predicting cancer prognosis. However, inappropriate or poorly optimized processing and modeling of miRNA expression data can negatively affect prediction performance. Here, we propose a holistic solution for miRNA biomarker selection and prediction model building. This work introduces the use of a neural network cascade, a cascaded constitution of small artificial neural network units, for evaluating miRNA expression and patient outcome. A miRNA microarray dataset of nasopharyngeal carcinoma was retrieved from Gene Expression Omnibus to illustrate the methodology. Results indicated a nonlinear relationship between miRNA expression and patient death risk, implying that direct comparison of expression values is inappropriate. However, this method transforms miRNA expression values into a miRNA score, which linearly measures death risk. Spearman correlation was calculated between miRNA scores and survival status for each miRNA. Finally, a nine-miRNA signature was optimized to predict death risk after nasopharyngeal carcinoma by establishing a neural network cascade consisting of 13 artificial neural network units. The area under the ROC curve was 0.951 for the internal validation set, and the prediction accuracy was 83% for the external validation set. In particular, the established neural network cascade was found to have strong immunity against noise interference that disturbs miRNA expression values. This study provides an efficient and easy-to-use method that aims to maximize the clinical application of miRNAs in prognostic risk assessment of patients with cancer. PMID:25310846

  7. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    NASA Astrophysics Data System (ADS)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, so further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments are implemented to improve the understanding of the limitations and advantages of SLS on an educational level. The experiments are based on two different workstations used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material under varied process parameters by analyzing a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces under various process parameters. Finally, quality attributes are tested, including warpage, dimensional accuracy and tensile strength. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile test machine allows testing of the tensile strength and interpretation of stress-strain curves. The developed laboratory experiments are suitable for teaching students the influence of processing parameters. In this context, they will be able to optimize the input parameters depending on the component to be manufactured and to increase the overall quality of the final workpiece.

  8. Optimization of Culture Parameters for Maximum Polyhydroxybutyrate Production by Selected Bacterial Strains Isolated from Rhizospheric Soils.

    PubMed

    Lathwal, Priyanka; Nehra, Kiran; Singh, Manpreet; Jamdagni, Pragati; Rana, Jogender S

    2015-01-01

    The enormous applications of conventional non-biodegradable plastics have led towards their increased usage and accumulation in the environment. This has become one of the major causes of global environmental concern in the present century. Polyhydroxybutyrate (PHB), a biodegradable plastic is known to have properties similar to conventional plastics, thus exhibiting a potential for replacing conventional non-degradable plastics. In the present study, a total of 303 different bacterial isolates were obtained from soil samples collected from the rhizospheric area of three crops, viz., wheat, mustard and sugarcane. All the isolates were screened for PHB (Poly-3-hydroxy butyric acid) production using Sudan Black staining method, and 194 isolates were found to be PHB positive. Based upon the amount of PHB produced, the isolates were divided into three categories: high, medium and low producers. Representative isolates from each category were selected for biochemical characterization; and for optimization of various culture parameters (carbon source, nitrogen source, C/N ratio, different pH, temperature and incubation time periods) for maximizing PHB accumulation. The highest PHB yield was obtained when the culture medium was supplemented with glucose as the carbon source, ammonium sulphate at a concentration of 1.0 g/l as the nitrogen source, and by maintaining the C/N ratio of the medium as 20:1. The physical growth parameters which supported maximum PHB accumulation included a pH of 7.0, and an incubation temperature of 30 degrees C for a period of 48 h. A few isolates exhibited high PHB accumulation under optimized conditions, thus showing a potential for their industrial exploitation. PMID:26638531

  9. Optimization of Sample Site Selection Imaging for OSIRIS-REx Using Asteroid Surface Analog Images

    NASA Astrophysics Data System (ADS)

    Tanquary, Hannah E.; Sahr, Eric; Habib, Namrah; Hawley, Christopher; Weber, Nathan; Boynton, William V.; Kinney-Spano, Ellyne; Lauretta, Dante

    2014-11-01

    OSIRIS-REx will return a sample of regolith from the surface of asteroid 101955 Bennu. The mission will obtain high resolution images of the asteroid in order to create detailed maps which will satisfy multiple mission objectives. To select a site, we must (i) identify hazards to the spacecraft and (ii) characterize a number of candidate sites to determine the optimal location for sampling. To further characterize the site, a long-term science campaign will be undertaken to constrain the geologic properties. To satisfy these objectives, the distribution and size of blocks at the sample site and backup sample site must be determined. This will be accomplished through the creation of rock size frequency distribution maps. The primary goal of this study is to optimize the creation of these map products by assessing techniques for counting blocks on small bodies, and assessing the methods of analysis of the resulting data. We have produced a series of simulated surfaces of Bennu which have been imaged, and the images processed to simulate Polycam images during the Reconnaissance phase. These surface analog images allow us to explore a wide range of imaging conditions, both ideal and non-ideal. The images have been “degraded”, and are displayed as thumbnails representing the limits of Polycam resolution from an altitude of 225 meters. Specifically, this study addresses the mission requirement that the rock size frequency distribution of regolith grains < 2cm in longest dimension must be determined for the sample sites during Reconnaissance. To address this requirement, we focus on the range of available lighting angles. Varying illumination and phase angles in the simulated images, we can compare the size-frequency distributions calculated from the degraded images with the known size frequency distributions of the Bennu simulant material, and thus determine the optimum lighting conditions for satisfying the 2 cm requirement.

  10. A low-phase-noise digitally controlled crystal oscillator for DVB TV tuners

    NASA Astrophysics Data System (ADS)

    Wei, Zhao; Lei, Lu; Zhangwen, Tang

    2010-07-01

    This paper presents a 25-MHz fully-integrated digitally controlled crystal oscillator (DCXO) with automatic amplitude control (AAC). The DCXO is based on a Colpitts topology for a one-pin solution. The AAC circuit is introduced to optimize the phase noise performance. The automatic frequency control is realized by a 10-bit thermometer-code segmental tapered MOS capacitor array, ensuring a ~ 35 ppm tuning range and ~ 0.04 ppm frequency step. The measured phase noise results are -139 dBc/Hz at 1 kHz and -151 dBc/Hz at 10 kHz frequency offset, respectively. The chip consumes 1 mA at a 1.8 V supply and occupies 0.4 mm² in a 0.18-μm CMOS process.

  11. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements for models consisting of various ecosystem processes. In this study, we tried to apply a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, optimization of the parameters was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon for each of the eleven years. In an alternative run, errors at the last year (at the

  12. Performance optimization of total momentum filtering double-resonance energy selective electron heat pump

    NASA Astrophysics Data System (ADS)

    Ding, Ze-Min; Chen, Lin-Gen; Ge, Yan-Lin; Sun, Feng-Rui

    2016-04-01

    A theoretical model for energy selective electron (ESE) heat pumps operating with two-dimensional electron reservoirs is established in this study. In this model, a double-resonance energy filter operating with a total momentum filtering mechanism is considered for the transmission of electrons. The optimal thermodynamic performance of the ESE heat pump devices is also investigated. Numerical calculations show that the heating load of the device with two resonances is larger, whereas the coefficient of performance (COP) is lower than the ESE heat pump when considering a single-resonance filter. The performance characteristics of the ESE heat pumps in the total momentum filtering condition are generally superior to those with a conventional filtering mechanism. In particular, the performance characteristics of the ESE heat pumps considering a conventional filtering mechanism are vastly different from those of a device with total momentum filtering, which is induced by extra electron momentum in addition to the horizontal direction. Parameters such as resonance width and energy spacing are found to be associated with the performance of the electron system.

  13. a Geographic Analysis of Optimal Signage Location Selection in Scenic Area

    NASA Astrophysics Data System (ADS)

    Ruan, Ling; Long, Ying; Zhang, Ling; Wu, Xiao Ling

    2016-06-01

    As an important part of scenic area infrastructure services, the signage guiding system plays an indispensable role in guiding the way and improving the quality of the tourism experience. This paper proposes an optimization method for signage location selection and direction content design in a scenic area based on geographic analysis. The objective of the research is to provide the best arrangement of a limited number of guiding boards in a tourism area so that the way to any scenic spot can be shown from any entrance. There are four steps to achieve this objective. First, the spatial distribution of the junctions of the scenic roads, the passageways and the scenic spots is analyzed. Then, the number of times each scenic road intersection lies on the shortest path between an entrance and a scenic spot is calculated. Next, combining this count with the grades of the scenic roads and scenic spots, the importance of each road intersection is estimated quantitatively. Finally, according to the importance of all road intersections, the most suitable layout locations for signage guiding boards can be provided. In addition, the method is applied to the Ming Tomb scenic area in China and the result is compared with the existing signage guiding space layout.
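
    The intersection-counting step can be illustrated with a small graph: count how often each junction lies on a shortest entrance-to-spot path and rank junctions by that count. The toy network below is an assumption for illustration; the paper additionally weights the counts by road and scenic-spot grades.

    ```python
    # Rank junctions by how many entrance-to-spot shortest paths pass through them
    # (toy network; node names and edge lengths are illustrative).
    import itertools
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([
        ("E1", "J1", 1.0), ("E2", "J2", 1.0), ("J1", "J2", 0.8),
        ("J1", "J3", 1.2), ("J2", "J3", 0.9), ("J3", "S1", 0.5),
        ("J2", "S2", 1.1), ("J3", "S3", 0.7),
    ])

    entrances, spots, junctions = ["E1", "E2"], ["S1", "S2", "S3"], ["J1", "J2", "J3"]

    counts = {j: 0 for j in junctions}
    for e, s in itertools.product(entrances, spots):
        for node in nx.shortest_path(G, e, s, weight="weight"):
            if node in counts:
                counts[node] += 1

    # Higher counts suggest stronger candidates for guiding boards; the paper
    # further weights these counts by road grade and scenic-spot grade.
    print(sorted(counts.items(), key=lambda kv: -kv[1]))
    ```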

  14. Improving well-being at work: A randomized controlled intervention based on selection, optimization, and compensation.

    PubMed

    Müller, Andreas; Heiden, Barbara; Herbig, Britta; Poppe, Franziska; Angerer, Peter

    2016-04-01

    This study aimed to develop, implement, and evaluate an occupational health intervention that is based on the theoretical model of selection, optimization, and compensation (SOC). We conducted a stratified randomized controlled intervention with 70 nurses of a community hospital in Germany (94% women; mean age 43.7 years). Altogether, the training consisted of 6 sessions (16.5 hours) over a period of 9 months. The training took place in groups of 6-8 employees. Participants were familiarized with the SOC model and developed and implemented a personal project based on SOC to cope effectively with 1 important job demand or to activate a job resource. Consistent with our hypotheses, we observed a meaningful trend that the proposed SOC training enhanced mental well-being, particularly in employees with a strong commitment to the intervention. While highly committed training participants reported higher levels of job control at follow-up, the effects were not statistically significant. Additional analyses of moderation effects showed that the training is particularly effective for enhancing mental well-being when job control is low. Contrary to our assumptions, perceived work ability was not improved by the training. Our study provides first indications that SOC training might be a promising approach to occupational health and stress prevention. Moreover, it identifies critical success factors of occupational interventions based on SOC. However, additional studies are needed to corroborate the effectiveness of SOC training in occupational contexts. PMID:26322438

  15. Selection of plants for optimization of vegetative filter strips treating runoff from turfgrass.

    PubMed

    Smith, Katy E; Putnam, Raymond A; Phaneuf, Clifford; Lanza, Guy R; Dhankher, Om P; Clark, John M

    2008-01-01

    Runoff from turf environments, such as golf courses, is of increasing concern due to the associated chemical contamination of lakes, reservoirs, rivers, and ground water. Pesticide runoff due to fungicides, herbicides, and insecticides used to maintain golf courses in acceptable playing condition is a particular concern. One possible approach to mitigate such contamination is through the implementation of effective vegetative filter strips (VFS) on golf courses and other recreational turf environments. The objective of the current study was to screen ten aesthetically acceptable plant species for their ability to remove four commonly-used and degradable pesticides: chlorpyrifos (CP), chlorothalonil (CT), pendimethalin (PE), and propiconazole (PR) from soil in a greenhouse setting, thus providing invaluable information as to the species composition that would be most efficacious for use in VFS surrounding turf environments. Our results revealed that blue flag iris (Iris versicolor) (76% CP, 94% CT, 48% PE, and 33% PR were lost from soil after 3 mo of plant growth), eastern gama grass (Tripsacum dactyloides) (47% CP, 95% CT, 17% PE, and 22% PR were lost from soil after 3 mo of plant growth), and big blue stem (Andropogon gerardii) (52% CP, 91% CT, 19% PE, and 30% PR were lost from soil after 3 mo of plant growth) were excellent candidates for the optimization of VFS as buffer zones abutting turf environments. Blue flag iris was most effective at removing selected pesticides from soil and had the highest aesthetic value of the plants tested. PMID:18689747

  16. Comparing the Selection and Placement of Best Management Practices in Improving Water Quality Using a Multiobjective Optimization and Targeting Method

    PubMed Central

    Chiang, Li-Chi; Chaubey, Indrajeet; Maringanti, Chetan; Huang, Tao

    2014-01-01

    Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, which minimize pollutant losses and the BMP placement areas. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (Soil and Water Assessment Tool—SWAT). For the targeting method, an optimum BMP option was implemented in critical areas in the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, which consist of grazing management, vegetated filter strips (VFS), and poultry litter applications were considered. The results showed that the optimization is less effective when vegetated filter strips (VFS) are not considered, and it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method. PMID:24619160

  17. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    SciTech Connect

    Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.

    2007-04-07

    Most of the real world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered to be solved when the Pareto optimal set is found, i.e., the set of non-dominated solutions. Multi-Objective Evolutionary Algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from this Pareto set. Thus, the definition of a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), and taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the region of the Pareto frontier that best matches these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. This methodology is able to efficiently select the best Pareto-frontier region for the specified relative importance of the criteria.
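
    As a rough illustration of picking a single solution from a non-dominated set according to a Decision Maker's relative weights, the sketch below scores a small assumed Pareto front with a weighted-Tchebycheff distance to the ideal point; this is a generic stand-in, not the weighted stress function proposed in the paper.

    ```python
    # Pick one solution from a non-dominated set using the Decision Maker's weights.
    # A weighted-Tchebycheff score is used here as a generic stand-in for the
    # paper's weighted stress function; the front and weights are assumed values.
    import numpy as np

    front = np.array([[1.0, 9.0], [2.0, 6.5], [3.0, 5.0],
                      [4.5, 3.8], [6.0, 2.5], [8.0, 1.5]])   # minimize f1 and f2
    weights = np.array([0.7, 0.3])                            # DM: f1 matters more

    ideal, nadir = front.min(axis=0), front.max(axis=0)
    norm = (front - ideal) / (nadir - ideal)                  # scale criteria to [0, 1]

    scores = np.max(weights * norm, axis=1)                   # weighted Tchebycheff
    print("preferred solution:", front[np.argmin(scores)])
    ```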

  18. The D-Optimality Item Selection Criterion in the Early Stage of CAT: A Study with the Graded Response Model

    ERIC Educational Resources Information Center

    Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. S.

    2008-01-01

    During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher's information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the…

  19. Insights into the Experiences of Older Workers and Change: Through the Lens of Selection, Optimization, and Compensation

    ERIC Educational Resources Information Center

    Unson, Christine; Richardson, Margaret

    2013-01-01

    Purpose: The study examined the barriers faced, the goals selected, and the optimization and compensation strategies of older workers in relation to career change. Method: Thirty open-ended interviews, 12 in the United States and 18 in New Zealand, were conducted, recorded, transcribed verbatim, and analyzed for themes. Results: Barriers to…

  20. Effect of Selection of Design Parameters on the Optimization of a Horizontal Axis Wind Turbine via Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Alpman, Emre

    2014-06-01

    The effect of selecting the twist angle and chord length distributions on the wind turbine blade design was investigated by performing aerodynamic optimization of a two-bladed stall regulated horizontal axis wind turbine. Twist angle and chord length distributions were defined using Bezier curves with 3, 5, 7 and 9 control points uniformly distributed along the span. Optimizations performed using a micro-genetic algorithm with populations composed of 5, 10, 15, 20 individuals showed that the number of control points clearly affected the outcome of the process; however, the effects were different for different population sizes. The results also showed the superiority of the micro-genetic algorithm over a standard genetic algorithm for the selected population sizes. Optimizations were also performed using a macroevolutionary algorithm and the resulting best blade design was compared with that yielded by the micro-genetic algorithm.
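
    The Bezier parameterization itself is easy to sketch: each candidate blade is described by its control-point values, and the genetic algorithm searches over those values. The example below evaluates a twist distribution from five assumed control points; the chord distribution would be handled the same way.

    ```python
    # Evaluate a twist distribution described by a Bezier curve over a handful of
    # control points; the control-point values are illustrative, and chord length
    # would be parameterized the same way.
    import numpy as np
    from math import comb

    def bezier(control_points, n_samples=50):
        """Sample a Bezier curve defined by (span_fraction, value) control points."""
        P = np.asarray(control_points, dtype=float)
        n = len(P) - 1
        t = np.linspace(0.0, 1.0, n_samples)
        basis = np.stack([comb(n, i) * t**i * (1.0 - t)**(n - i)
                          for i in range(n + 1)], axis=1)     # Bernstein basis
        return basis @ P

    twist_cp = [(0.0, 20.0), (0.25, 12.0), (0.5, 7.0), (0.75, 3.0), (1.0, 0.5)]
    curve = bezier(twist_cp)
    print(curve[[0, 24, -1]])   # sampled (span, twist in deg) near root, mid, tip
    ```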

  1. Automatised selection of load paths to construct reduced-order models in computational damage micromechanics: from dissipation-driven random selection to Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Goury, Olivier; Amsallem, David; Bordas, Stéphane Pierre Alain; Liu, Wing Kam; Kerfriden, Pierre

    2016-04-01

    In this paper, we present new reliable model order reduction strategies for computational micromechanics. The difficulties stem mainly from the high dimensionality of the parameter space represented by any load path applied onto the representative volume element. We take special care of the challenge of selecting an exhaustive snapshot set. This is treated by first using a random sampling of energy dissipating load paths and then in a more advanced way using Bayesian optimization associated with an interlocked division of the parameter space. Results show that we can ensure the selection of an exhaustive snapshot set from which a reliable reduced-order model can be built.
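
    As a rough sketch of the Bayesian-optimization step, the example below uses scikit-optimize's gp_minimize to search a two-parameter load-path description for the path on which an assumed error indicator is largest; the parameterization and the placeholder error landscape are illustrative, not the paper's dissipation-based criterion or interlocked parameter-space division.

    ```python
    # Use Bayesian optimization (scikit-optimize) to pick the next load path to add
    # to the snapshot set: search a two-parameter load-path description for the path
    # with the largest error indicator. The parameterization and the placeholder
    # error landscape below are assumptions, not the paper's criterion.
    import numpy as np
    from skopt import gp_minimize

    def neg_rom_error(params):
        """Negative of a placeholder reduced-order-model error for a load path."""
        a, b = params
        return -float(np.exp(-((a - 0.4) ** 2 + (b + 0.2) ** 2) / 0.1))

    res = gp_minimize(neg_rom_error,
                      dimensions=[(-1.0, 1.0), (-1.0, 1.0)],
                      n_calls=25, random_state=0)
    print("next load path to snapshot:", res.x, "estimated error:", -res.fun)
    ```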

  2. Automatised selection of load paths to construct reduced-order models in computational damage micromechanics: from dissipation-driven random selection to Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Goury, Olivier; Amsallem, David; Bordas, Stéphane Pierre Alain; Liu, Wing Kam; Kerfriden, Pierre

    2016-08-01

    In this paper, we present new reliable model order reduction strategies for computational micromechanics. The difficulties stem mainly from the high dimensionality of the parameter space represented by any load path applied onto the representative volume element. We take special care of the challenge of selecting an exhaustive snapshot set. This is treated by first using a random sampling of energy dissipating load paths and then in a more advanced way using Bayesian optimization associated with an interlocked division of the parameter space. Results show that we can ensure the selection of an exhaustive snapshot set from which a reliable reduced-order model can be built.

  3. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery.

    PubMed

    Ghose, Arup K; Herbertz, Torsten; Hudkins, Robert L; Dorsey, Bruce D; Mallamo, John P

    2012-01-18

    The central nervous system (CNS) is the major area that is affected by aging. Alzheimer's disease (AD), Parkinson's disease (PD), brain cancer, and stroke are the CNS diseases that will cost trillions of dollars for their treatment. Achievement of appropriate blood-brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many of the non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges and to aid in the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and the chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and the property modification strategy during the lead optimization process. A list of substructural units that may be useful for CNS drug design was also provided here. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å² (25-60 Å²), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740-970 Å³, (vi) solvent accessible surface area of 460-580 Å², and (vii) positive QikProp parameter CNS. The ranges within parentheses may be used during lead optimization. One violation to this proposed profile may be acceptable. The
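
    The guideline list above can be read as a simple rule-based filter. The sketch below counts violations of the proposed profile for a dictionary of precomputed property values and allows one violation, as suggested; the property names and example values are assumptions, and in practice the descriptors would come from a cheminformatics or QikProp-style calculation.

    ```python
    # Rule-based check against the proposed CNS property profile, allowing one
    # violation as suggested above. Property names and the example values are
    # assumptions; descriptors would normally come from a cheminformatics package.
    def cns_profile_violations(p):
        """Return the list of guidelines violated by a property dictionary."""
        rules = [
            ("TPSA < 76 A^2",                    p["tpsa"] < 76),
            (">= 1 nitrogen",                    p["n_nitrogen"] >= 1),
            ("< 7 linear chains outside rings",  p["n_linear_chains"] < 7),
            ("< 3 polar hydrogens",              p["n_polar_h"] < 3),
            ("volume 740-970 A^3",               740 <= p["volume"] <= 970),
            ("SASA 460-580 A^2",                 460 <= p["sasa"] <= 580),
            ("positive QikProp CNS parameter",   p["qikprop_cns"] > 0),
        ]
        return [name for name, ok in rules if not ok]

    candidate = {"tpsa": 58.0, "n_nitrogen": 2, "n_linear_chains": 3,
                 "n_polar_h": 1, "volume": 905.0, "sasa": 530.0, "qikprop_cns": 1}
    violations = cns_profile_violations(candidate)
    print("CNS-like" if len(violations) <= 1 else "flag for review", violations)
    ```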

  4. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery

    PubMed Central

    2011-01-01

    The central nervous system (CNS) is the major area that is affected by aging. Alzheimer’s disease (AD), Parkinson’s disease (PD), brain cancer, and stroke are the CNS diseases that will cost trillions of dollars for their treatment. Achievement of appropriate blood–brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many of the non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges and to aid in the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and the chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and the property modification strategy during the lead optimization process. A list of substructural units that may be useful for CNS drug design was also provided here. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å² (25–60 Å²), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740–970 Å³, (vi) solvent accessible surface area of 460–580 Å², and (vii) positive QikProp parameter CNS. The ranges within parentheses may be used during lead optimization. One violation to this proposed profile may be acceptable. The

  5. On selection of the optimal data time interval for real-time hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Liu, J.; Han, D.

    2013-09-01

    With the advancement in modern telemetry and communication technologies, hydrological data can be collected with an increasingly higher sampling rate. An important issue deserving attention from the hydrological community is which suitable time interval of the model input data should be chosen in hydrological forecasting. Such a problem has long been recognised in the control engineering community but is a largely ignored topic in operational applications of hydrological forecasting. In this study, the intrinsic properties of rainfall-runoff data with different time intervals are first investigated from the perspectives of the sampling theorem and the information loss using the discrete wavelet transform tool. It is found that rainfall signals with very high sampling rates may not always improve the accuracy of rainfall-runoff modelling due to the catchment low-pass-filtering effect. To further investigate the impact of a data time interval in real-time forecasting, a real-time forecasting system is constructed by incorporating the probability distributed model (PDM) with a real-time updating scheme, the autoregressive moving-average (ARMA) model. Case studies are then carried out on four UK catchments with different concentration times for real-time flow forecasting using data with different time intervals of 15, 30, 45, 60, 90 and 120 min. A positive relation is found between the forecast lead time and the optimal choice of the data time interval, which is also highly dependent on the catchment concentration time. Finally, based on the conclusions from the case studies, a hypothetical pattern is proposed in three-dimensional coordinates to describe the general impact of the data time interval and to provide implications of the selection of the optimal time interval in real-time hydrological forecasting. Although nowadays most operational hydrological systems still have low data sampling rates (daily or hourly), the future is that higher sampling rates will become
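
    The wavelet-based look at information content can be sketched as follows: decompose a rainfall series and report how much energy sits in each detail level, since little energy at the finest scales suggests that a coarser data time interval sacrifices little information. The synthetic series, smoothing window, and wavelet choice below are assumptions, not the study's data or settings.

    ```python
    # Share of rainfall-signal energy per wavelet detail level; little energy at the
    # finest scales suggests a coarser sampling interval loses little information.
    # The synthetic 15-min series, smoothing window, and 'db4' wavelet are assumptions.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    rain = np.clip(rng.gamma(0.3, 2.0, size=4096) - 0.4, 0.0, None)
    rain = np.convolve(rain, np.ones(8) / 8, mode="same")   # crude catchment smoothing

    coeffs = pywt.wavedec(rain, "db4", level=5)             # [cA5, cD5, ..., cD1]
    total = sum(float(np.sum(c**2)) for c in coeffs)
    for i, c in enumerate(coeffs[1:], start=1):
        scale_min = 2 ** (len(coeffs) - i) * 15             # approx. scale in minutes
        print(f"detail ~{scale_min:4d} min: {np.sum(c**2) / total:.1%} of energy")
    ```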

  6. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    PubMed

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-01-01

    In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alteration detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence, and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to the other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm. PMID:27483285

  7. Optimal selection of on-site generation with combined heat andpower applications

    SciTech Connect

    Siddiqui, Afzal S.; Marnay, Chris; Bailey, Owen; HamachiLaCommare, Kristina

    2004-11-30

    While demand for electricity continues to grow, expansion of the traditional electricity supply system, or macrogrid, is constrained and is unlikely to keep pace with the growing thirst western economies have for electricity. Furthermore, no compelling case has been made that perpetual improvement in the overall power quality and reliability (PQR) delivered is technically possible or economically desirable. An alternative path to providing high PQR for sensitive loads would generate close to them in microgrids, such as the Consortium for Electricity Reliability Technology Solutions (CERTS) Microgrid. Distributed generation would alleviate the pressure for endless improvement in macrogrid PQR and might allow the establishment of a sounder economically based level of universal grid service. Energy conversion from available fuels to electricity close to loads can also provide combined heat and power (CHP) opportunities that can significantly improve the economics of small-scale on-site power generation, especially in hot climates when the waste heat serves absorption cycle cooling equipment that displaces expensive on-peak electricity. An optimization model, the Distributed Energy Resources Customer Adoption Model (DER-CAM), developed at Berkeley Lab, identifies the energy bill minimizing combination of on-site generation and heat recovery equipment for sites, given their electricity and heat requirements, the tariffs they face, and a menu of available equipment. DER-CAM is used to conduct a systemic energy analysis of a southern California naval base building and demonstrates a typical current economic on-site power opportunity. Results achieve cost reductions of about 15 percent with DER, depending on the tariff. Furthermore, almost all of the energy is provided on-site, indicating that modest cost savings can be achieved when the microgrid is free to select distributed generation and heat recovery equipment in order to minimize its overall costs.

  8. G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies.

    PubMed

    Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V; McPeek, Mary Sara

    2016-09-01

    In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study's sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20-40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on high-density lipoprotein and low-density lipoprotein from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely approximate the power of sequencing the full sample by selecting only a small subset of the individuals for sequencing. PMID:27256766
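
    A generic subset-search skeleton in the spirit of G-STRATEGY is sketched below: simulated annealing swaps individuals in and out of a fixed-size sequencing set to maximize an objective. The random per-individual scores and the additive objective are placeholders for the study's pedigree-based expected-power computation.

    ```python
    # Simulated-annealing subset search: swap individuals in and out of a fixed-size
    # sequencing set to maximize an objective. The random scores and additive
    # objective are placeholders for a pedigree-based expected-power computation.
    import math
    import random

    random.seed(0)
    n_individuals, subset_size = 200, 30
    scores = [random.random() for _ in range(n_individuals)]   # placeholder utilities

    def objective(subset):
        return sum(scores[i] for i in subset)   # stand-in for expected power

    current = set(random.sample(range(n_individuals), subset_size))
    best, best_val = set(current), objective(current)
    T = 1.0
    for _ in range(5000):
        out_i = random.choice(tuple(current))
        in_i = random.choice([i for i in range(n_individuals) if i not in current])
        candidate = (current - {out_i}) | {in_i}
        delta = objective(candidate) - objective(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
            if objective(current) > best_val:
                best, best_val = set(current), objective(current)
        T *= 0.999                               # geometric cooling schedule

    print(round(best_val, 3), sorted(best)[:10])
    ```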

  9. Synthesis and Purification of Iodoaziridines Involving Quantitative Selection of the Optimal Stationary Phase for Chromatography

    PubMed Central

    Boultwood, Tom; Affron, Dominic P.; Bull, James A.

    2014-01-01

    The highly diastereoselective preparation of cis-N-Ts-iodoaziridines through reaction of diiodomethyllithium with N-Ts aldimines is described. Diiodomethyllithium is prepared by the deprotonation of diiodomethane with LiHMDS, in a THF/diethyl ether mixture, at -78 °C in the dark. These conditions are essential for the stability of the LiCHI2 reagent generated. The subsequent dropwise addition of N-Ts aldimines to the preformed diiodomethyllithium solution affords an amino-diiodide intermediate, which is not isolated. Rapid warming of the reaction mixture to 0 °C promotes cyclization to afford iodoaziridines with exclusive cis-diastereoselectivity. The addition and cyclization stages of the reaction are mediated in one reaction flask by careful temperature control. Due to the sensitivity of the iodoaziridines to purification, assessment of suitable methods of purification is required. A protocol to assess the stability of sensitive compounds to stationary phases for column chromatography is described. This method is suitable to apply to new iodoaziridines, or other potentially sensitive novel compounds. Consequently, this method may find application in a range of synthetic projects. The procedure involves firstly the assessment of the reaction yield, prior to purification, by 1H NMR spectroscopy with comparison to an internal standard. Portions of impure product mixture are then exposed to slurries of various stationary phases appropriate for chromatography, in a solvent system suitable as the eluent in flash chromatography. After stirring for 30 min to mimic chromatography, followed by filtering, the samples are analyzed by 1H NMR spectroscopy. Calculated yields for each stationary phase are then compared to that initially obtained from the crude reaction mixture. The results obtained provide a quantitative assessment of the stability of the compound to the different stationary phases; hence the optimal can be selected. The choice of basic alumina, modified to

  10. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction for tropical rain forest using remote sensing data is of paramount importance for continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which commonly appears as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the limited time, cost, and human resources available for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts caused by instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity addresses the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak-shift problems. In cross-validation, the prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha, than other methods: multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
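
    A minimal sketch of fused-lasso band selection is given below, assuming synthetic spectra and a convex-optimization solver (cvxpy); the penalty weights and data are illustrative only, not those of the study.

```python
import numpy as np
import cvxpy as cp

# Minimal fused-lasso band selection on synthetic data: the L1 term on the
# coefficients encourages few selected bands, and the L1 term on first
# differences encourages contiguous (grouped) bands.
rng = np.random.default_rng(0)
n_samples, n_bands = 40, 150
X = rng.normal(size=(n_samples, n_bands))            # placeholder spectra
true_beta = np.zeros(n_bands)
true_beta[60:70] = 0.8                               # one contiguous block of informative bands
y = X @ true_beta + 0.1 * rng.normal(size=n_samples)

beta = cp.Variable(n_bands)
D = np.eye(n_bands - 1, n_bands, k=1) - np.eye(n_bands - 1, n_bands)  # first-difference operator
lam1, lam2 = 0.5, 0.5                                # assumed penalty weights
objective = cp.Minimize(cp.sum_squares(y - X @ beta)
                        + lam1 * cp.norm1(beta)
                        + lam2 * cp.norm1(D @ beta))
cp.Problem(objective).solve()

selected_bands = np.flatnonzero(np.abs(beta.value) > 1e-3)
print("selected bands:", selected_bands)
```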

  11. Optimizing Sensitization Processes in Dinuclear Luminescent Lanthanide Oligomers. Selection of Rigid Aromatic Spacers

    PubMed Central

    Lemonnier, Jean-François; Guénée, Laure; Beuchat, César; Wesolowski, Tomasz A.; Mukherjee, Prasun; Waldeck, David H.; Gogick, Kristy A.; Petoud, Stéphane; Piguet, Claude

    2011-01-01

    This work illustrates a simple approach for optimizing the lanthanide luminescence in molecular dinuclear lanthanide complexes and identifies a particular multidentate europium complex as the best candidate for further incorporation into polymeric materials. The central phenyl ring in the bis-tridentate model ligands L3–L5, which are substituted with neutral (X = H, L3), electron-withdrawing (X = F, L4), or electron-donating (X = OCH3, L5) groups, separates the 2,6-bis(benzimidazol-2-yl)pyridine binding units of linear oligomeric multi-tridentate ligand strands that are designed for the complexation of luminescent trivalent lanthanides, Ln(III). Reactions of L3–L5 with [Ln(hfac)3(diglyme)] (hfac− is the hexafluoroacetylacetonate anion) produce saturated single-stranded dumbbell-shaped complexes [Ln2(Lk)(hfac)6] (k = 3–5), in which the lanthanide ions of the two nine-coordinate neutral [N3Ln(hfac)3] units are separated by 12–14 Å. The thermodynamic affinities of [Ln(hfac)3] for the tridentate binding sites in L3–L5 are average (6.6 ≤ log β2,1(Y,Lk) ≤ 8.4), but still result in 15–30% dissociation at millimolar concentrations in acetonitrile. In addition to the empirical solubility trend found in organic solvents (L4 > L3 ≫ L5), which suggests that the 1,4-difluorophenyl spacer in L4 is preferable, we have developed a novel tool for deciphering the photophysical sensitization processes operating in [Eu2(Lk)(hfac)6]. A simple interpretation of the complete set of rate constants characterizing the energy migration mechanisms provides straightforward objective criteria for the selection of [Eu2(L4)(hfac)6] as the most promising building block. PMID:21882836

  12. In vitro selection of optimal DNA substrates for T4 RNA ligase

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1993-01-01

    We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 RNA ligase. We find that the ensemble of selected sequences ligated about 10 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, the majority of the selected sequences approximated a well-defined consensus sequence.

  13. Adaptive selection of minimally correlated data for optimization of source-detector configuration in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Sabir, Sohail; Kim, Keehyun; Heo, Duchang; Cho, Seungryong

    2016-03-01

    The optimization of the experimental design prior to deployment, both for cost-effective solutions and for computationally efficient image reconstruction, is taken up in this study. We implemented an iterative method, also known as the effective independence (EFI) method, for optimization of the source/detector pair configuration. The idea behind the adaptive selection of minimally correlated measurements is to evaluate the information content contributed by each measurement to the estimation of the unknown parameters. The EFI method ranks measurements according to their contribution to the linear independence of the unknown-parameter basis. Typically, a regularization parameter is added to improve the solvability of the ill-conditioned system, which may affect the selected source/detector configuration. We show that the source/detector pairs selected by the EFI method were least prone to vary with suboptimal regularization values. Moreover, through a series of simulation studies we confirmed that the sparse source/detector pair measurements selected by the EFI method offered results similar, both qualitatively and quantitatively, to those of the dense measurement configuration for the unknown parameters. Additionally, the EFI method allows us to incorporate prior knowledge, extracted in multimodality imaging cases, to design source/detector configurations sensitive to a specific region of interest. The source/detector ranking method was further analyzed to derive an automatic cut-off number for the iterative scheme.
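
    A simplified sketch of the effective independence ranking is shown below, assuming a random Jacobian in place of the diffuse optical tomography sensitivity matrix; the regularization level and target measurement count are illustrative assumptions.

```python
import numpy as np

# Effective independence (EFI) sketch: iteratively discard the measurement (row of
# the Jacobian) contributing least to the linear independence of the parameter basis.
# J is a random placeholder for the DOT sensitivity matrix.
rng = np.random.default_rng(0)
n_measurements, n_parameters = 120, 30
J = rng.normal(size=(n_measurements, n_parameters))
alpha = 1e-3                                         # assumed regularization level
target = 60                                          # desired sparse measurement count

keep = np.arange(n_measurements)
while len(keep) > target:
    Jk = J[keep]
    F = Jk.T @ Jk + alpha * np.eye(n_parameters)     # regularized information matrix
    ED = np.einsum("ij,jk,ik->i", Jk, np.linalg.inv(F), Jk)  # leverage of each measurement
    keep = np.delete(keep, np.argmin(ED))            # drop the least informative one

print("retained source/detector pairs:", keep)
```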

  14. Choosing the Optimal Number of Factors in Exploratory Factor Analysis: A Model Selection Perspective

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Zhang, Guangjian; Kim, Cheongtag; Mels, Gerhard

    2013-01-01

    A central problem in the application of exploratory factor analysis is deciding how many factors to retain ("m"). Although this is inherently a model selection problem, a model selection perspective is rarely adopted for this task. We suggest that Cudeck and Henly's (1991) framework can be applied to guide the selection process. Researchers must…

  15. Optimal contribution selection applied to the Norwegian and the North-Swedish cold-blooded trotter - a feasibility study.

    PubMed

    Olsen, H F; Meuwissen, T; Klemetsdal, G

    2013-06-01

    The aim of this study was to examine how to apply optimal contribution selection (OCS) in the Norwegian and the North-Swedish cold-blooded trotter and to give practical recommendations for the future. OCS was implemented using the software Gencont with overlapping generations and selected a few, young sires, as these turn over the generations faster and are thus less related to the mare candidates. In addition, a number of Swedish sires were selected because they were less related to the selection candidates. We concluded that implementing OCS is feasible to select sires (there is no selection on mares), and we recommend that the list of available sire candidates be continuously updated to account for, among other things, deaths and geldings. In addition, considering only sire candidates with a phenotype above the average of their year class would allow selection candidates from many year classes to be included and would circumvent the current limit on the number of selection candidates in Gencont (approximately 3000). The results showed that the mare candidates can simply be those mated the previous year. OCS will, dynamically, recruit young stallions and manage the culling or renewal of annual breeding permits for stallions that have been previously approved. For the annual mating proportion per sire, a constraint in accordance with the maximum number of matings a sire can perform naturally is recommended. PMID:23679942

  16. Hybrid particle swarm optimization and tabu search approach for selecting genes for tumor classification using gene expression data.

    PubMed

    Shen, Qi; Shi, Wei-Min; Kong, Wei

    2008-02-01

    Gene expression data are characterized by thousands or even tens of thousands of measured genes on only a few tissue samples. This can lead to overfitting and the curse of dimensionality, or even to a complete failure of the microarray data analysis. Gene selection is an important component of gene expression-based tumor classification systems. In this paper, we develop a hybrid particle swarm optimization (PSO) and tabu search (HPSOTS) approach to gene selection for tumor classification. The incorporation of tabu search (TS) as a local improvement procedure enables the HPSOTS algorithm to escape local optima and show satisfactory performance. The proposed approach is applied to three different microarray data sets. Moreover, we compare the performance of HPSOTS on these datasets to that of stepwise selection and the pure TS and PSO algorithms. It has been demonstrated that HPSOTS is a useful tool for gene selection and mining high-dimensional data. PMID:18093877

  17. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-01

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix with 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics. PMID:26346190
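
    The sketch below illustrates GA-driven descriptor selection with a cross-validated PLS RMSEP as the fitness function, using synthetic data in place of the peptide descriptor matrix; the population size, mutation rate, and other settings are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# GA descriptor selection with cross-validated PLS RMSEP as the fitness function;
# the data and GA settings are placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(83, 423))                                 # descriptor matrix (synthetic)
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=83)  # synthetic retention times

def rmsep(mask):
    """Cross-validated RMSEP of a PLS model built on the selected descriptors."""
    if mask.sum() < 2:
        return np.inf
    pls = PLSRegression(n_components=min(5, int(mask.sum())))
    pred = cross_val_predict(pls, X[:, mask], y, cv=5).ravel()
    return np.sqrt(np.mean((pred - y) ** 2))

def ga_select(n_pop=20, n_gen=25, p_mut=0.01):
    pop = rng.random((n_pop, X.shape[1])) < 0.05               # sparse initial masks
    for _ in range(n_gen):
        fit = np.array([rmsep(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: n_pop // 2]]           # lower RMSEP is better
        children = []
        while len(children) < n_pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])
            child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
            child ^= rng.random(X.shape[1]) < p_mut            # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    fit = np.array([rmsep(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

best_mask, best_rmsep = ga_select()
print(f"{best_mask.sum()} descriptors selected, RMSEP = {best_rmsep:.3f}")
```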

  18. Discovery of 7-aminofuro[2,3-c]pyridine inhibitors of TAK1: optimization of kinase selectivity and pharmacokinetics.

    PubMed

    Hornberger, Keith R; Chen, Xin; Crew, Andrew P; Kleinberg, Andrew; Ma, Lifu; Mulvihill, Mark J; Wang, Jing; Wilde, Victoria L; Albertella, Mark; Bittner, Mark; Cooke, Andrew; Kadhim, Salam; Kahler, Jennifer; Maresca, Paul; May, Earl; Meyn, Peter; Romashko, Darlene; Tokar, Brianna; Turton, Roy

    2013-08-15

    The kinase selectivity and pharmacokinetic optimization of a series of 7-aminofuro[2,3-c]pyridine inhibitors of TAK1 is described. The intersection of insights from molecular modeling, computational prediction of metabolic sites, and in vitro metabolite identification studies resulted in a simple and unique solution to both of these problems. These efforts culminated in the discovery of compound 13a, a potent, relatively selective inhibitor of TAK1 with good pharmacokinetic properties in mice, which was active in an in vivo model of ovarian cancer. PMID:23856049

  19. Structural optimization and structure-functional selectivity relationship studies of G protein-biased EP2 receptor agonists.

    PubMed

    Ogawa, Seiji; Watanabe, Toshihide; Moriyuki, Kazumi; Goto, Yoshikazu; Yamane, Shinsaku; Watanabe, Akio; Tsuboi, Kazuma; Kinoshita, Atsushi; Okada, Takuya; Takeda, Hiroyuki; Tani, Kousuke; Maruyama, Toru

    2016-05-15

    The modification of the novel G protein-biased EP2 agonist 1 has been investigated to improve its G protein activity and develop a better understanding of its structure-functional selectivity relationship (SFSR). The optimization of the substituents on the phenyl ring of 1, followed by the inversion of the hydroxyl group on the cyclopentane moiety led to compound 9, which showed a 100-fold increase in its G protein activity compared with 1 without any increase in β-arrestin recruitment. Furthermore, SFSR studies revealed that the combination of meta and para substituents on the phenyl moiety was crucial to the functional selectivity. PMID:27055938

  20. High-Efficiency Nonfullerene Polymer Solar Cell Enabling by Integration of Film-Morphology Optimization, Donor Selection, and Interfacial Engineering.

    PubMed

    Zhang, Xin; Li, Weiping; Yao, Jiannian; Zhan, Chuanlang

    2016-06-22

    Carrier mobility is a vital factor determining the electrical performance of organic solar cells. In this paper we report that a high-efficiency nonfullerene organic solar cell (NF-OSC) with a power conversion efficiency of 6.94 ± 0.27% was obtained by optimizing hole and electron transport through judicious selection of the polymer donor and engineering of the film morphology and cathode interlayer: (1) a combination of solvent annealing and solvent vapor annealing optimizes the film morphology and hence both hole and electron mobilities, leading to a trade-off between fill factor and short-circuit current density (Jsc); (2) judicious selection of the polymer donor affords higher hole and electron mobilities, giving a higher Jsc; and (3) engineering the cathode interlayer affords a higher electron mobility, which leads to a significant increase in electrical current generation and ultimately in the power conversion efficiency (PCE). PMID:27246160

  1. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 1. Screening of optimal extraction conditions using a D-optimal experimental design.

    PubMed

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A D-optimal design was constructed to optimize allergen extraction efficiency simultaneously from roasted, non-roasted, defatted, and non-defatted almond, hazelnut, peanut, and pistachio flours using three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various conditions of ionic strength, buffer-to-protein ratio, extraction temperature, and extraction duration. Statistical analysis showed that roasting and non-defatting significantly lowered protein recovery for all nuts. Increasing the temperature and the buffer-to-protein ratio during extraction significantly increased protein recovery, whereas increasing the extraction time had no significant impact. The impact of the three buffers on protein recovery varied significantly among the nuts. Depending on the extraction conditions, protein recovery varied from 19% to 95% for peanut, 31% to 73% for almond, 17% to 64% for pistachio, and 27% to 88% for hazelnut. The buffer type and ionic strength were shown to modulate the protein and immunoglobulin E binding profiles of the extracts, and high protein recovery did not always correlate with high immunoreactivity. PMID:26471618

  2. Design and optimization of a multi-element piezoelectric transducer for mode-selective generation of guided waves

    NASA Astrophysics Data System (ADS)

    Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice

    2016-07-01

    A novel multi-element piezoelectric transducer (MEPT) is designed, optimized, machined and experimentally tested to improve structural health monitoring systems through mode-selective generation of guided waves (GW) in an isotropic structure. GW generation using typical piezoceramics makes the signal processing, and consequently the damage detection, very complicated, because at any driving frequency at least the two fundamental symmetric (S0) and antisymmetric (A0) modes are generated. To address this, a mode-selective transducer design based on the MEPT is proposed. A numerical method is first developed to extract the interfacial stress between a single piezoceramic element and a host structure and is then used as the input of an analytical model to predict the GW propagation through the thickness of an isotropic plate. Two novel objective functions are proposed to optimize the interfacial shear stress, both suppressing the unwanted mode(s) and maximizing the desired mode. Simplicity and low manufacturing cost are two main targets driving the design of the MEPT. A prototype MEPT is then manufactured using laser micro-machining. An experimental procedure is presented to validate the performance of the MEPT as a new solution for mode-selective GW generation. Experimental tests illustrate the high capability of the MEPT for mode-selective GW generation, as the unwanted mode is suppressed by a factor of up to 170 compared with the results obtained with a single piezoceramic.

  3. Regression metamodels of an optimal genomic testing strategy in dairy cattle when selection intensity is low

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic testing of dairy cattle increases reliability and can be used to select animals with superior genetic merit. Genomic testing is not free and not all candidates for selection should necessarily be tested. One common algorithm used to compare alternative decisions is time-consuming and not eas...

  4. Optimizing phenotypic and genotypic selection for Fusarium head blight resistance in wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fusarium head blight (FHB), or head scab, is an economically important disease of wheat (Triticum aestivum L.). In developing FHB-resistant soft winter wheat cultivars, breeders have relied on phenotypic selection, marker-assisted selection (MAS), or a combination of the two. The objectives of this ...

  5. Dual response surface-optimized process for feruloylated diacylglycerols by selective lipase-catalyzed transesterification in solvent free system.

    PubMed

    Zheng, Yan; Wu, Xiao-Mei; Branford-White, Christopher; Quan, Jing; Zhu, Li-Min

    2009-06-01

    Feruloylated diacylglycerol (FDAG) was synthesized by a selective lipase-catalyzed transesterification between ethyl ferulate and triolein. To optimize the reaction conversion and the purity of FDAG, a dual response surface methodology was applied to determine the effects of five factors at five levels, and their reciprocal interactions, on product synthesis. A total of 32 individual experiments were performed to study reaction temperature, reaction time, substrate molar ratio, enzyme loading, and water activity. The highest reaction conversion and selectivity towards FDAG were 73.9% and 92.3%, respectively, at 55 degrees C, a reaction time of 5.3 days, an enzyme loading of 30.4 mg/ml, a water activity of 0.08, and a substrate molar ratio of 3.7. Moreover, predicted values agreed well with the experimental values when experiments corresponding to selected points on the contour plots were carried out. PMID:19254838

  6. Selecting the optimal antithrombotic regimen for patients with acute coronary syndromes undergoing percutaneous coronary intervention

    PubMed Central

    Parikh, Shailja V; Keeley, Ellen C

    2009-01-01

    The wide variety of anticoagulant and antiplatelet agents available for clinical use has made choosing the optimal antithrombotic regimen for patients with acute coronary syndromes undergoing percutaneous coronary intervention a complex task. While there is no single best regimen, from a risk-benefit ratio standpoint, particular regimens may be considered optimal for different patients. We review the mechanisms of action for the commonly prescribed antithrombotic medications, summarize pertinent data from randomized trials on their use in acute coronary syndromes, and provide an algorithm (incorporating data from these trials as well as risk assessment instruments) that will help guide the decision-making process. PMID:19707287

  7. Intertwining Threshold Settings, Biological Data and Database Knowledge to Optimize the Selection of Differentially Expressed Genes from Microarray

    PubMed Central

    Chuchana, Paul; Holzmuller, Philippe; Vezilier, Frederic; Berthier, David; Chantal, Isabelle; Severac, Dany; Lemesre, Jean Loup; Cuny, Gerard; Nirdé, Philippe; Bucheton, Bruno

    2010-01-01

    Background Many tools used to analyze microarrays in different conditions have been described. However, the integration of deregulated genes within coherent metabolic pathways is lacking. Currently no objective selection criterion based on biological functions exists to determine a threshold demonstrating that a gene is indeed differentially expressed. Methodology/Principal Findings To improve transcriptomic analysis of microarrays, we propose a new statistical approach that takes into account biological parameters. We present an iterative method to optimise the selection of differentially expressed genes in two experimental conditions. The stringency level of gene selection was associated simultaneously with the p-value of expression variation and the occurrence rate parameter associated with the percentage of donors whose transcriptomic profile is similar. Our method intertwines stringency level settings, biological data and a knowledge database to highlight molecular interactions using networks and pathways. Analysis performed during iterations helped us to select the optimal threshold required for the most pertinent selection of differentially expressed genes. Conclusions/Significance We have applied this approach to the well documented mechanism of human macrophage response to lipopolysaccharide stimulation. We thus verified that our method was able to determine with the highest degree of accuracy the best threshold for selecting genes that are truly differentially expressed. PMID:20976008

  8. A Conceptual Framework for Procurement Decision Making Model to Optimize Supplier Selection: The Case of Malaysian Construction Industry

    NASA Astrophysics Data System (ADS)

    Chuan, Ngam Min; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Yong, Lee Choon; Ghazali, Azrul; Ezanee Rusli, Mohd; Itam, Zarina Binti; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper examines the current state of the procurement system in Malaysia, specifically in the construction industry, with respect to supplier selection. It proposes a comprehensive study of the supplier selection metrics for infrastructure building, weights the importance of each metric assigned, and finds the relationships between the metrics among initiators, decision makers, buyers and users. With a metrics hierarchy of criteria importance, a supplier selection process can be defined, repeated and audited with fewer complications or difficulties. This will help the field of procurement to improve, as this research develops and redefines the policies and procedures that have been set for supplier selection. Developing this systematic process will enable optimization of supplier selection and thus increase the value for every stakeholder, as the process of selection is greatly simplified. A newly redefined policy and procedure not only increases a company's effectiveness and profit, but also enables the company to reach greater heights in the advancement of procurement in Malaysia.

  9. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    PubMed

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives, and the resulting number of avoidable breast biopsies, are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass CAD systems but also for calcification CAD systems currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and of masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelets and the gray level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the classifier hyper-parameters and the feature selection parameters. Furthermore, the obtained results indicate the promising performance of the proposed textural features and, more specifically, those based on the co-occurrence matrix of the wavelet image representation. PMID:25795630
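
    A minimal sketch of PSO-based model selection for an RBF SVM is given below; it searches only the two classifier hyper-parameters on synthetic data, whereas the paper's procedure also selects textural features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic stand-in for the textural feature sets.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10, random_state=0)

def fitness(pos):
    """Cross-validated accuracy of an RBF SVM with log10-scaled C and gamma."""
    C, gamma = 10.0 ** pos[0], 10.0 ** pos[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

def pso(n_particles=20, n_iter=30, lo=-2.0, hi=3.0):
    rng = np.random.default_rng(1)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)]
    return 10.0 ** gbest, pbest_fit.max()

(best_C, best_gamma), acc = pso()
print(f"C = {best_C:.3g}, gamma = {best_gamma:.3g}, CV accuracy = {acc:.3f}")
```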

  10. Optimization of fermentation parameters to study the behavior of selected lactic cultures on soy solid state fermentation.

    PubMed

    Rodríguez de Olmos, A; Bru, E; Garro, M S

    2015-03-01

    The use of solid state fermentation (SSF) has gained interest owing to the demand for natural and healthy products. Lactic acid bacteria and bifidobacteria play a leading role in the production of novel functional foods, and their behavior is practically unknown in these systems. Soy is an excellent substrate for the production of functional foods because of its low cost and nutritional value. The aim of this work was to optimize different parameters involved in solid state fermentation (SSF) using selected lactic cultures to improve the soybean substrate, as a possible strategy for the elaboration of new soy foods with enhanced functional and nutritional properties. Soy flour and selected lactic cultures were used under different conditions to optimize the soy SSF. The measured responses were bacterial growth, free amino acids and β-glucosidase activity, which were analyzed by applying response surface methodology. Based on the proposed statistical model, different fermentation conditions were evaluated by varying the moisture content (50-80%) of the soy substrate and the incubation temperature (31-43°C). The effect of the inoculum amount was also investigated. These studies demonstrated the ability of the selected strains (Lactobacillus paracasei subsp. paracasei and Bifidobacterium longum) to grow, with strain-dependent behavior, on the SSF system. β-Glucosidase activity was evident in both strains, and L. paracasei subsp. paracasei was able to increase the free amino acids at the end of fermentation under the assayed conditions. The statistical model allowed the optimization of fermentation parameters for soy SSF by the selected lactic strains. In addition, the possibility of working with lower initial bacterial amounts while obtaining results of significant technological impact was demonstrated. PMID:25498472

  11. Selecting Segmental Errors in Non-Native Dutch for Optimal Pronunciation Training

    ERIC Educational Resources Information Center

    Neri, Ambra; Cucchiarini, Catia; Strik, Helmer

    2006-01-01

    The current emphasis in second language teaching lies in the achievement of communicative effectiveness. In line with this approach, pronunciation training is nowadays geared towards helping learners avoid serious pronunciation errors, rather than eradicating the finest traces of foreign accent. However, to devise optimal pronunciation training…

  12. Joint source/FEC rate selection for quality-optimal MPEG-2 video delivery.

    PubMed

    Frossard, P; Verscheure, O

    2001-01-01

    This paper deals with the optimal allocation of MPEG-2 encoding and media-independent forward error correction (FEC) rates under a given total bandwidth. Optimality is defined in terms of minimum perceptual distortion given a set of video and network parameters. We first derive the set of equations leading to the residual loss process parameters, that is, the packet loss ratio (PLR) and the average burst length after FEC decoding. We then show that the perceptual source distortion decreases exponentially with increasing MPEG-2 source rate. We also demonstrate that the perceptual distortion due to data loss is directly proportional to the number of lost macroblocks, and therefore decreases with the amount of channel protection. Finally, we derive the global set of equations that lead to the optimal dynamic rate allocation. The optimal distribution is shown to outperform classical FEC schemes, thanks to its adaptivity to scene complexity, available bandwidth, and network performance. Furthermore, our approach holds for any standard video compression algorithm (i.e., MPEG-x, H.26x). PMID:18255521
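
    The toy sketch below illustrates the joint source/FEC trade-off with an assumed exponential source-distortion model and a simple erasure-code residual-loss approximation; the constants and models are placeholders, not the paper's derived equations.

```python
import numpy as np
from math import comb

# Toy joint source/FEC allocation: split a fixed bandwidth between source bits
# and (n, k) erasure-code parity, minimizing an assumed total distortion.
R_total = 6.0e6            # total channel budget [bit/s]
plr_channel = 0.02         # raw packet loss ratio before FEC decoding
a, b = 80.0, 0.9e-6        # assumed exponential source-distortion parameters
c = 5000.0                 # assumed weight of the loss-induced distortion term
n = 100                    # FEC block length in packets

def residual_plr(n, k, p):
    """Approximate residual packet loss ratio of an (n, k) erasure code."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) * j / n
               for j in range(n - k + 1, n + 1))

best = min(
    ((a * np.exp(-b * R_total * k / n) + c * residual_plr(n, k, plr_channel), k)
     for k in range(10, n)),
    key=lambda t: t[0])
print(f"best k = {best[1]} of n = {n}, source rate = {R_total * best[1] / n / 1e6:.2f} Mb/s")
```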

  13. Item Selection for the Development of Short Forms of Scales Using an Ant Colony Optimization Algorithm

    ERIC Educational Resources Information Center

    Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.

    2008-01-01

    This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…

  14. Integration of genomic information into sport horse breeding programs for optimization of accuracy of selection.

    PubMed

    Haberland, A M; König von Borstel, U; Simianer, H; König, S

    2012-09-01

    Reliable selection criteria are required for young riding horses to increase genetic gain by increasing the accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI)) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low-heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. When the conventional information source 'own performance' is included in the aforementioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBV and estimated breeding values (EBVs) from different countries) were combined into an overall index while altering the accuracies of EBVs and the correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs. PMID:23031511

  15. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
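
    As a simplified stand-in for the iterative maximum entropy selection, the sketch below greedily picks the candidate site most dissimilar (in standardized environmental space) from the sites already chosen; the candidate pool and factors are synthetic, and the rule is not the Maxent procedure itself.

```python
import numpy as np

# Greedy stand-in for iterative selection of environmentally dissimilar sites:
# at each step pick the candidate farthest (in standardized environmental space)
# from everything already selected. The candidate pool and factors are synthetic.
rng = np.random.default_rng(0)
env = rng.normal(size=(2000, 4))            # temperature, precipitation, elevation, vegetation code
env = (env - env.mean(0)) / env.std(0)      # standardize the environmental factors

selected = [int(np.argmin(np.linalg.norm(env, axis=1)))]   # start near the environmental centroid
while len(selected) < 8:
    d = np.linalg.norm(env[:, None, :] - env[selected][None, :, :], axis=2).min(axis=1)
    selected.append(int(np.argmax(d)))      # most dissimilar to the sites chosen so far

print("selected candidate sites:", selected)
```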

  16. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies.

    PubMed

    Rada, E C; Ragazzi, M; Fedrizzi, P

    2013-04-01

    Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where a landfill based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically analyzed referring to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy. The obtained efficiency is very high: 80% of waste is source separated for recycling purposes. In the second reference case, the local administration is going to be faced with the optimization of waste collection through Web-GIS oriented technologies for the first time. The starting scenario is far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed referring to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web oriented tools for MSW management, but this opportunity is not yet well exploited in the sector. PMID:23402896

  17. Stochastic optimization algorithm selection in hydrological model calibration based on fitness landscape characterization

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Brissette, François P.; Poulin, Annie; Côté, Pascal; Martel, Jean-Luc

    2014-05-01

    The process of hydrological model parameter calibration is routinely performed with the help of stochastic optimization algorithms. Many such algorithms have been created and they sometimes provide varying levels of performance (as measured by an efficiency metric such as Nash-Sutcliffe). This is because each algorithm is better suited to one type of optimization problem than another. This research project's aim was twofold. First, we sought to find features of the calibration-problem fitness landscapes that map the encountered problem types to the best possible optimization algorithm. Second, the optimal number of model evaluations needed to minimize resource usage while maximizing overall model quality was investigated. A total of five stochastic optimization algorithms (SCE-UA, CMAES, DDS, PSO and ASA) were used to calibrate four lumped hydrological models (GR4J, HSAMI, HMETS and MOHYSE) on 421 basins from the US MOPEX database. Each of these combinations was performed using three objective functions (Log(RMSE), NSE, and a metric combining NSE, RMSE and BIAS) to add sufficient diversity to the fitness landscapes. Each run was performed 30 times for statistical analysis. For every parameter set tested during the calibration process, the validation value was computed on a separate period. It was then possible to compare the calibration skill against the validation skill for the different algorithms. Fitness landscapes were characterized by various metrics, such as the dispersion metric, the mean distance between random points and their respective local minima (found through simple hill-climbing algorithms) and the mean distance between the local minima and the best local optimum found. These metrics were then compared to the calibration scores of the various optimization algorithms. Preliminary results tend to show that fitness landscapes presenting a globally convergent structure are more prevalent than other types of landscapes in this
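
    One of the landscape metrics mentioned, the dispersion metric, can be sketched as below on a toy objective; the normalization and sample sizes are assumptions rather than the study's exact protocol.

```python
import numpy as np

# Dispersion metric sketch on a toy objective: mean pairwise distance among the
# best-performing sample points, normalized by that of the whole sample.
def objective(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x), axis=-1)   # multimodal test surface

def mean_pairwise(p):
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    return d[np.triu_indices(len(p), k=1)].mean()

def dispersion(n_samples=1000, frac=0.05, dim=4, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n_samples, dim))
    best = pts[np.argsort(objective(pts))[: int(frac * n_samples)]]
    # Values near 1 suggest good regions scattered across the space (multimodal);
    # low values suggest the good points cluster in a single basin.
    return mean_pairwise(best) / mean_pairwise(pts)

print(f"dispersion = {dispersion():.3f}")
```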

  18. Pseudo Optimization of E-Nose Data Using Region Selection with Feature Feedback Based on Regularized Linear Discriminant Analysis

    PubMed Central

    Jeong, Gu-Min; Nghia, Nguyen Trong; Choi, Sang-Il

    2015-01-01

    In this paper, we present a pseudo optimization method for electronic nose (e-nose) data using region selection with feature feedback based on regularized linear discriminant analysis (R-LDA) to enhance the performance and cost functions of an e-nose system. To implement cost- and performance-effective e-nose systems, the number of channels, sampling time and sensing time of the e-nose must be considered. We propose a method to select both important channels and an important time-horizon by analyzing e-nose sensor data. By extending previous feature feedback results, we obtain a two-dimensional discriminant information map consisting of channels and time units by reverse mapping the feature space to the data space based on R-LDA. The discriminant information map enables optimal channels and time units to be heuristically selected to improve the performance and cost functions. The efficacy of the proposed method is demonstrated experimentally for different volatile organic compounds. In particular, our method is both cost and performance effective for the real implementation of e-nose systems. PMID:25559000

  19. Fine tune W-CMP process with alignment mark selection for optimal metal layer overlay and yield benefits

    NASA Astrophysics Data System (ADS)

    Cui, Yuanting; So, Albert; Louks, Sean

    2004-05-01

    Alignment performance and overlay control of the metal layer after the W-CMP process depend strongly on how the process affects the alignment mark. In a manufacturing environment, many changes may be introduced into the W-CMP process for defect reduction, cost reduction and yield improvement, to further guarantee success in this highly competitive industry. This study characterizes the CMP effects, especially erosion, dishing, and polishing selectivity, on the alignment mark profile, which result in different alignment performance. We illustrate how an optimal alignment performance can be achieved with the existing mark under different CMP slurry processes by further fine-tuning the W-CMP process, for example through over-polishing and final polish. The CMP effect on different alignment mark types is also evaluated; future alignment mark selection and design can thus be proposed based on the planned CMP process and film deposition. This work describes a practical working method of optimizing alignment for the process, fine-tuning the process for the alignment mark, and feeding back solutions for mark selection while taking cost, throughput, defects, and yield into consideration.

  20. Jointly optimal bandwidth selection for the planar kernel-smoothed density-ratio.

    PubMed

    Davies, Tilman M

    2013-06-01

    The kernel-smoothed density-ratio or 'relative risk' function for planar point data is a useful tool for examining disease rates over a certain geographical region. Instrumental to the quality of the resulting risk surface estimate is the choice of bandwidth for computation of the required numerator and denominator densities. The challenge associated with finding some 'optimal' smoothing parameter for standalone implementation of the kernel estimator given observed data is compounded when we deal with the density-ratio per se. To date, only one method specifically designed for calculation of density-ratio optimal bandwidths has received any notable attention in the applied literature. However, this method exhibits significant variability in the estimated smoothing parameters. In this work, the first practical comparison of this selector with a little-known alternative technique is provided. The possibility of exploiting an asymptotic MISE formulation in an effort to control excess variability is also examined, and numerical results seem promising. PMID:23725887
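
    A minimal sketch of the planar kernel-smoothed density-ratio is shown below, assuming synthetic case/control point patterns and a fixed common bandwidth factor; the selectors compared in the paper would replace that fixed choice.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Kernel-smoothed density-ratio ("relative risk") sketch on synthetic planar data,
# with one common bandwidth factor for the case and control densities.
rng = np.random.default_rng(0)
cases = rng.normal(loc=[0.3, 0.3], scale=0.15, size=(200, 2)).T       # shape (2, n) for gaussian_kde
controls = rng.normal(loc=[0.5, 0.5], scale=0.25, size=(800, 2)).T

bw_factor = 0.25                                    # assumed common smoothing factor
f_cases = gaussian_kde(cases, bw_method=bw_factor)
f_controls = gaussian_kde(controls, bw_method=bw_factor)

xx, yy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
grid = np.vstack([xx.ravel(), yy.ravel()])
log_risk = np.log(f_cases(grid) + 1e-12) - np.log(f_controls(grid) + 1e-12)
print("log relative-risk range:", log_risk.min(), log_risk.max())
```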

  1. Selecting Observation Platforms for Optimized Anomaly Detectability under Unreliable Partial Observations

    SciTech Connect

    Wen-Chiao Lin; Humberto E. Garcia; Tae-Sic Yoo

    2011-06-01

    Diagnosers for keeping track of the occurrences of special events in the framework of unreliable, partially observed discrete-event dynamical systems were developed in previous work. This paper considers observation platforms consisting of sensors that provide partial and unreliable observations and of diagnosers that analyze them. Diagnosers in observation platforms typically perform better as the sensors providing the observations become more costly or increase in number. This paper proposes a methodology for finding an observation platform that achieves an optimal balance between cost and performance, while satisfying given observability requirements and constraints. Since this problem is generally computationally hard in the framework considered, an observation platform optimization algorithm is utilized that uses two greedy heuristics, one myopic and another based on projected performance. These heuristics are executed sequentially in order to find the best observation platforms. The developed algorithm is then applied to an observation platform optimization problem for a multi-unit-operation system. Results show that improved observation platforms can be found that may significantly reduce the observation platform cost but still yield acceptable performance for correctly inferring the occurrences of special events.
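
    The myopic greedy heuristic can be sketched as below under an assumed diminishing-returns performance model and hypothetical sensor costs; the actual algorithm evaluates diagnoser performance for discrete-event systems rather than the simple scores used here.

```python
# Myopic greedy sketch: choose a sensor set balancing cost against a diagnostic
# performance score under a budget. Costs, scores, and the diminishing-returns
# performance model are hypothetical.
sensors = {            # sensor -> (cost, standalone detection score)
    "s1": (4.0, 0.50), "s2": (2.0, 0.30), "s3": (3.0, 0.42),
    "s4": (1.0, 0.15), "s5": (5.0, 0.60),
}
budget = 8.0

def platform_score(chosen):
    miss = 1.0
    for s in chosen:
        miss *= 1.0 - sensors[s][1]       # assume independent misses
    return 1.0 - miss

chosen, spent = set(), 0.0
while True:
    affordable = [s for s in sensors if s not in chosen and spent + sensors[s][0] <= budget]
    if not affordable:
        break
    # Myopic rule: best marginal score gain per unit cost.
    best = max(affordable,
               key=lambda s: (platform_score(chosen | {s}) - platform_score(chosen)) / sensors[s][0])
    chosen.add(best)
    spent += sensors[best][0]

print("chosen platform:", sorted(chosen), "cost:", spent, "score:", round(platform_score(chosen), 3))
```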

  2. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  3. Mapping carbon flux uncertainty and selecting optimal locations for future flux towers in the Great Plains

    USGS Publications Warehouse

    Gu, Y.; Howard, D.M.; Wylie, B.K.; Zhang, L.

    2012-01-01

    Flux tower networks (e. g., AmeriFlux, Agriflux) provide continuous observations of ecosystem exchanges of carbon (e. g., net ecosystem exchange), water vapor (e. g., evapotranspiration), and energy between terrestrial ecosystems and the atmosphere. The long-term time series of flux tower data are essential for studying and understanding terrestrial carbon cycles, ecosystem services, and climate changes. Currently, there are 13 flux towers located within the Great Plains (GP). The towers are sparsely distributed and do not adequately represent the varieties of vegetation cover types, climate conditions, and geophysical and biophysical conditions in the GP. This study assessed how well the available flux towers represent the environmental conditions or "ecological envelopes" across the GP and identified optimal locations for future flux towers in the GP. Regression-based remote sensing and weather-driven net ecosystem production (NEP) models derived from different extrapolation ranges (10 and 50%) were used to identify areas where ecological conditions were poorly represented by the flux tower sites and years previously used for mapping grassland fluxes. The optimal lands suitable for future flux towers within the GP were mapped. Results from this study provide information to optimize the usefulness of future flux towers in the GP and serve as a proxy for the uncertainty of the NEP map.

  4. Optimal selection of artificial boundary conditions for model update and damage detection

    NASA Astrophysics Data System (ADS)

    Gordis, Joshua H.; Papagiannakis, Konstantinos

    2011-07-01

    Sensitivity-based model error localization and damage detection is hindered by the relative differences in modal sensitivity magnitude among updating parameters. The method of artificial boundary conditions is shown to directly address this limitation, resulting in the increase of the number of updating parameters at which errors can be accurately localized. Using a single set of FRF data collected from a modal test, the artificial boundary conditions (ABC) method identifies experimentally the natural frequencies of a structure under test for a variety of different boundary conditions, without having to physically apply the boundary conditions, hence the term "artificial". The parameter-specific optimal ABC sets applied to the finite element model will produce increased sensitivities in the updating parameter, yielding accurate error localization and damage detection solutions. A method is developed for identifying the parameter-specific optimal ABC sets for updating or damage detection, and is based on the QR decomposition with column pivoting. Updating solution residuals, such as magnitude error and false error location, are shown to be minimized when the updating parameter set is limited to those corresponding to the QR pivot columns. The existence of an optimal ABC set for a given updating parameter is shown to be dependent on the number of modes used, and hence the method developed provides a systematic determination of the minimum number of modes required for localization in a given updating parameter. These various concepts are demonstrated on a simple model with simulated test data.
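
    A minimal sketch of parameter ranking by QR decomposition with column pivoting is given below, using a random sensitivity matrix in place of the finite element model's modal sensitivities.

```python
import numpy as np
from scipy.linalg import qr

# Ranking updating parameters by QR decomposition with column pivoting.
# S stands in for the (modes x parameters) modal sensitivity matrix.
rng = np.random.default_rng(0)
n_modes, n_params = 8, 20
S = rng.normal(size=(n_modes, n_params))

Q, R, piv = qr(S, pivoting=True)        # piv orders columns by decreasing independence
selected_params = piv[:n_modes]         # keep at most as many parameters as modes used
print("updating parameters ranked by QR pivoting:", selected_params)
```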

  5. Radar Tracking Waveform Design in Continuous Space and Optimization Selection Using Differential Evolution

    NASA Astrophysics Data System (ADS)

    Paul, Bryan

    Waveform design that allows for a wide variety of frequency-modulation (FM) has proven benefits. However, dictionary based optimization is limited and gradient search methods are often intractable. A new method is proposed using differential evolution to design waveforms with instantaneous frequencies (IFs) with cubic FM functions whose coefficients are constrained to the surface of the three dimensional unit sphere. Cubic IF functions subsume well-known IF functions such as linear, quadratic monomial, and cubic monomial IF functions. In addition, all nonlinear IF functions sufficiently approximated by a third order Taylor series over the unit time sequence can be represented in this space. Analog methods for generating polynomial IF waveforms are well established allowing for practical implementation in real world systems. By sufficiently constraining the search space to these waveforms of interest, alternative optimization methods such as differential evolution can be used to optimize tracking performance in a variety of radar environments. While simplified tracking models and finite waveform dictionaries have information theoretic results, continuous waveform design in high SNR, narrowband, cluttered environments is explored.
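
    The sketch below shows differential evolution searching cubic instantaneous-frequency coefficients constrained to the unit sphere via a two-angle parameterization; the objective is a placeholder, since the study optimizes a tracking-performance criterion in cluttered environments.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Differential evolution over cubic instantaneous-frequency (IF) coefficients
# constrained to the unit sphere via a two-angle parameterization. The objective
# is a placeholder; a tracking-performance criterion would replace it.
t = np.linspace(0.0, 1.0, 256)

def if_coeffs(angles):
    theta, phi = angles
    return np.array([np.sin(theta) * np.cos(phi),   # linear coefficient
                     np.sin(theta) * np.sin(phi),   # quadratic coefficient
                     np.cos(theta)])                # cubic coefficient

def objective(angles):
    a1, a2, a3 = if_coeffs(angles)
    inst_freq = a1 * t + a2 * t**2 + a3 * t**3
    return -(inst_freq.max() - inst_freq.min())     # placeholder: maximize swept bandwidth

result = differential_evolution(objective, bounds=[(0.0, np.pi), (0.0, 2.0 * np.pi)], seed=0)
print("coefficients on the unit sphere:", np.round(if_coeffs(result.x), 3))
```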

  6. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  7. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies

    SciTech Connect

    Rada, E.C.; Ragazzi, M.; Fedrizzi, P.

    2013-04-15

    Highlights: ► As an appropriate solution for MSW management in developed and transient countries. ► As an option to increase the efficiency of MSW selective collection. ► As an opportunity to integrate MSW management needs and services inventories. ► As a tool to develop Urban Mining actions. - Abstract: Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where a landfill based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically analyzed referring to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy. The obtained efficiency is very high: 80% of waste is source separated for recycling purposes. In the second reference case, the local administration is going to be faced with the optimization of waste collection through Web-GIS oriented technologies for the first time. The starting scenario is far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed referring to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web oriented tools for MSW management, but this opportunity is not yet well exploited in the sector.

  8. Spacecraft flight control with the new phase space control law and optimal linear jet select

    NASA Technical Reports Server (NTRS)

    Bergmann, E. V.; Croopnick, S. R.; Turkovich, J. J.; Work, C. C.

    1977-01-01

    An autopilot designed for rotation and translation control of a rigid spacecraft is described. The autopilot uses reaction control jets as control effectors and incorporates a six-dimensional phase space control law as well as a linear programming algorithm for jet selection. The interaction of the control law and jet selection was investigated and a recommended configuration proposed. By means of a simulation procedure the new autopilot was compared with an existing system and was found to be superior in terms of core memory, central processing unit time, firings, and propellant consumption. But it is thought that the cycle time required to perform the jet selection computations might render the new autopilot unsuitable for existing flight computer applications, without modifications. The new autopilot is capable of maintaining attitude control in the presence of a large number of jet failures.
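
    A minimal sketch of linear-programming jet selection is given below, with a random, hypothetical jet geometry matrix; it minimizes total on-time (a propellant proxy) subject to matching a commanded force/torque.

```python
import numpy as np
from scipy.optimize import linprog

# Linear-programming jet selection sketch: choose nonnegative jet on-times that
# realize a commanded force/torque while minimizing total on-time (propellant proxy).
# The jet geometry matrix is random and hypothetical, not an actual thruster layout.
rng = np.random.default_rng(0)
n_jets = 16
B = rng.normal(size=(6, n_jets))             # column: force/torque per unit on-time of each jet
x_true = rng.uniform(0.0, 0.05, size=n_jets)
command = B @ x_true                         # a commanded wrench known to be achievable

res = linprog(c=np.ones(n_jets),             # minimize the sum of jet on-times
              A_eq=B, b_eq=command,
              bounds=[(0.0, 0.1)] * n_jets,  # per-jet on-time limit per control cycle
              method="highs")
print("jet on-times [s]:", np.round(res.x, 4))
```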

  9. Overcoming mutagenicity and ion channel activity: optimization of selective spleen tyrosine kinase inhibitors.

    PubMed

    Ellis, J Michael; Altman, Michael D; Bass, Alan; Butcher, John W; Byford, Alan J; Donofrio, Anthony; Galloway, Sheila; Haidle, Andrew M; Jewell, James; Kelly, Nancy; Leccese, Erica K; Lee, Sandra; Maddess, Matthew; Miller, J Richard; Moy, Lily Y; Osimboni, Ekundayo; Otte, Ryan D; Reddy, M Vijay; Spencer, Kerrie; Sun, Binyuan; Vincent, Stella H; Ward, Gwendolyn J; Woo, Grace H C; Yang, Chiming; Houshyar, Hani; Northrup, Alan B

    2015-02-26

    Development of a series of highly kinome-selective spleen tyrosine kinase (Syk) inhibitors with favorable druglike properties is described. Early leads were discovered through X-ray crystallographic analysis, and a systematic survey of cores within a selected chemical space focused on ligand binding efficiency. Attenuation of hERG ion channel activity inherent within the initial chemotype was guided through modulation of physicochemical properties including log D, PSA, and pKa. PSA proved most effective for prospective compound design. Further profiling of an advanced compound revealed bacterial mutagenicity in the Ames test using the TA97a Salmonella strain, and subsequent study demonstrated that this mutagenicity was pervasive throughout the series. Identification of intercalation as a likely mechanism for the mutagenicity enabled modification of the core scaffold. Implementation of a DNA binding assay as a prescreen and models in DNA allowed resolution of the mutagenicity risk, affording molecules with favorable potency, selectivity, pharmacokinetic, and off-target profiles. PMID:25625541

  10. A methodology for selecting an optimal experimental design for the computer analysis of a complex system

    SciTech Connect

    RUTHERFORD,BRIAN M.

    2000-02-03

    Investigation and evaluation of a complex system is often accomplished through the use of performance measures based on system response models. The response models are constructed using computer-generated responses supported where possible by physical test results. The general problem considered is one where resources and system complexity together restrict the number of simulations that can be performed. The levels of input variables used in defining environmental scenarios, initial and boundary conditions and for setting system parameters must be selected in an efficient way. This report describes an algorithmic approach for performing this selection.

  11. Methodology of research for qualitative composition of municipal solid waste to select an optimal method of recycling

    NASA Astrophysics Data System (ADS)

    Kravtsova, M. V.; Volkov, D. A.

    2015-09-01

    The article presents a research methodology for determining the qualitative composition of municipal solid waste in order to select an optimal method of recycling. The resource potential of waste directly depends on its composition and determines the effectiveness of various techniques, including separation and separate collection of refuse. The decision to re-equip a waste-separating enterprise, which decreases the supply of waste to the burial site and saves nonrenewable energy sources, is well grounded because it reduces the anthropogenic load on the environment.

  12. An approach to selecting the optimal sensing coil configuration structure for switched reluctance motor rotor position measurement

    NASA Astrophysics Data System (ADS)

    Cai, Jun; Deng, Zhiquan

    2015-02-01

    An accurate rotor position signal is essential for controlling the switched reluctance motor (SRM). The use of galvanically isolated sensing coils can provide an independent circuit for position estimation without affecting the SRM actuation. However, the cross-coupling between the main winding and the sensing coil, and the mutual coupling between adjacent phase sensing coils, may seriously degrade the position estimation performance. In this paper, three sensing coil configurations in a 12/8 structure SRM are analyzed and compared in order to select an optimal configuration that effectively minimizes the adverse effects of these coupling factors. Finite element analysis and experimental results are provided for verification.

  13. An approach to selecting the optimal sensing coil configuration structure for switched reluctance motor rotor position measurement.

    PubMed

    Cai, Jun; Deng, Zhiquan

    2015-02-01

    An accurate rotor position signal is essential for controlling the switched reluctance motor (SRM). The use of galvanically isolated sensing coils can provide an independent circuit for position estimation without affecting the SRM actuation. However, the cross-coupling between the main winding and the sensing coil, and the mutual coupling between adjacent phase sensing coils, may seriously degrade the position estimation performance. In this paper, three sensing coil configurations in a 12/8 structure SRM are analyzed and compared in order to select an optimal configuration that effectively minimizes the adverse effects of these coupling factors. Finite element analysis and experimental results are provided for verification. PMID:25725876

  14. Selecting optimal hyperspectral bands to discriminate nitrogen status in durum wheat: a comparison of statistical approaches.

    PubMed

    Stellacci, A M; Castrignanò, A; Troccoli, A; Basso, B; Buttafuoco, G

    2016-03-01

    Hyperspectral data can provide prediction of physical and chemical vegetation properties, but data handling, analysis, and interpretation still limit their use. In this study, different methods for selecting variables were compared for the analysis of on-the-ground hyperspectral signatures of wheat grown under a wide range of nitrogen supplies. Spectral signatures were recorded at the end of the stem elongation, booting, and heading stages in 100 georeferenced locations, using a 512-channel portable spectroradiometer operating in the 325-1075-nm range. The following procedures were compared: (i) a heuristic combined approach including the lambda-lambda R(2) (LL R(2)) model, principal component analysis (PCA), and stepwise discriminant analysis (SDA); (ii) variable importance for projection (VIP) statistics derived from partial least squares (PLS) regression (PLS-VIP); and (iii) multiple linear regression (MLR) analysis through maximum R-square improvement (MAXR) and stepwise algorithms. The discriminating capability of the selected wavelengths was evaluated by canonical discriminant analysis. Leaf nitrogen concentration was quantified on samples collected at the same locations and dates and used as the response variable in the regression-based methods. The different methods resulted in differences in the number and position of the selected wavebands. Bands extracted through the regression-based methods were mostly related to the response variable, as shown by the importance of the visible region for the PLS and stepwise approaches. Band selection techniques can be extremely useful not only for improving the power of predictive models but also for data interpretation and sensor design. PMID:26922749
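
    Of the procedures compared above, the PLS-VIP criterion is the most algorithmically compact. The sketch below illustrates the standard VIP calculation on synthetic data; it is not the authors' code, and the data, component count, and the VIP > 1 rule of thumb are assumptions made only for illustration.

        # Generic sketch of PLS-VIP band selection (synthetic data; not the authors' code).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_samples, n_bands = 100, 50
        X = rng.normal(size=(n_samples, n_bands))          # stand-in for reflectance spectra
        y = X[:, 10] - 0.5 * X[:, 25] + rng.normal(scale=0.1, size=n_samples)  # stand-in for leaf N

        pls = PLSRegression(n_components=3).fit(X, y)

        def vip_scores(pls_model):
            """Standard VIP formula: VIP_j = sqrt(p * sum_a(SS_a * (w_ja/||w_a||)^2) / sum_a SS_a)."""
            T = pls_model.x_scores_        # (n_samples, A)
            W = pls_model.x_weights_       # (p, A)
            Q = pls_model.y_loadings_      # (1, A) for a single response
            p, A = W.shape
            ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)   # variance explained per component
            w_norm = (W / np.linalg.norm(W, axis=0)) ** 2
            return np.sqrt(p * (w_norm @ ss) / ss.sum())

        vip = vip_scores(pls)
        selected = np.where(vip > 1.0)[0]   # common rule of thumb: keep variables with VIP > 1
        print("bands with VIP > 1:", selected)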

  15. Potential and optimization of genomic selection for fusarium head blight resistance in six-row barley

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fusarium head blight (FHB) is a devastating disease of barley, causing reductions in yield and quality. Marker-based selection for resistance to FHB and lowered deoxynivalenol (DON) grain concentration would save considerable costs and time associated with phenotyping. A comprehensive marker-based s...

  16. Encapsulation of a Decision-Making Model to Optimize Supplier Selection via Structural Equation Modeling (SEM)

    NASA Astrophysics Data System (ADS)

    Sahul Hameed, Ruzanna; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Ezanee Rusli, Mohd; Yong, Lee Choon; Ghazali, Azrul; Itam, Zarina; Hakimie, Hazlinda; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper proposes a conceptual framework for comparing the criteria/factors that influence supplier selection. A mixed-methods approach comprising qualitative and quantitative surveys will be used. The study intends to identify and define the metrics that key stakeholders at the Public Works Department (PWD) believe should be used for supplier selection. The outcomes point to possible initiatives for bringing procurement in PWD to a strategic level. The results will provide a deeper understanding of the drivers of supplier selection in the construction industry. The obtained output will benefit many parties involved in supplier selection decision-making. The findings provide useful information and a greater understanding of the perceptions that PWD executives hold regarding supplier selection and the extent to which these perceptions are consistent with findings from prior studies. The findings from this paper can be utilized as input for policy makers to outline changes in the current procurement code of practice in order to enhance the degree of transparency and integrity in decision-making.

  17. Design, Synthesis, and Optimization of Novel Epoxide Incorporating Peptidomimetics as Selective Calpain Inhibitors

    PubMed Central

    Schiefer, Isaac T.; Tapadar, Subhasish; Litosh, Vladislav; Siklos, Marton; Scism, Rob; Wijewickrama, Gihani T.; Chandrasena, Esala P.; Sinha, Vaishali; Tavassoli, Ehsan; Brunsteiner, Michael; Fa′, Mauro; Arancio, Ottavio; Petukhov, Pavel; Thatcher, Gregory R. J.

    2014-01-01

    Hyperactivation of the calcium-dependent cysteine protease, calpain-1 (Cal1), is implicated as a primary or secondary pathological event in a wide range of illnesses, and in neurodegenerative states, including Alzheimer’s disease (AD). E-64 is an epoxide-containing natural product identified as a potent non-selective, calpain inhibitor, with demonstrated efficacy in animal models of AD. Using E-64 as a lead, three successive generations of calpain inhibitors were developed using computationally assisted design to increase selectivity for Cal1. First generation analogs were potent inhibitors, effecting covalent modification of recombinant Cal1 catalytic domain (Cal1cat), demonstrated using LC-MS/MS. Refinement yielded 2nd generation inhibitors with improved selectivity. Further library expansion and ligand refinement gave three Cal1 inhibitors, one of which was designed as an activity-based protein profiling probe. These were determined to be irreversible and selective inhibitors by kinetic studies comparing full length Cal1 with the general cysteine protease, papain. PMID:23834438

  18. Surface stability and the selection rules of substrate orientation for optimal growth of epitaxial II-VI semiconductors

    SciTech Connect

    Yin, Wan-Jian; Yang, Ji-Hui; Zaunbrecher, Katherine; Gessert, Tim; Barnes, Teresa; Wei, Su-Huai; Yan, Yanfa

    2015-10-05

    The surface structures of ionic zinc-blende CdTe (001), (110), (111), and (211) surfaces are systematically studied by first-principles density functional calculations. Based on the surface structures and surface energies, we identify the detrimental twinning appearing in molecular beam epitaxy (MBE) growth of II-VI compounds as (111) lamellar twin boundaries. To avoid the appearance of twinning in MBE growth, we propose the following selection rules for choosing optimal substrate orientations: (1) the surface should be nonpolar, so that there are no large surface reconstructions that could act as nucleation centers and promote the formation of twins; (2) the surface structure should have low symmetry, so that there are no multiple equivalent directions for growth. These straightforward rules, consistent with experimental observations, provide guidelines for selecting proper substrates for high-quality MBE growth of II-VI compounds.

  19. Surface stability and the selection rules of substrate orientation for optimal growth of epitaxial II-VI semiconductors

    NASA Astrophysics Data System (ADS)

    Yin, Wan-Jian; Yang, Ji-Hui; Zaunbrecher, Katherine; Gessert, Tim; Barnes, Teresa; Yan, Yanfa; Wei, Su-Huai

    2015-10-01

    The surface structures of ionic zinc-blende CdTe (001), (110), (111), and (211) surfaces are systematically studied by first-principles density functional calculations. Based on the surface structures and surface energies, we identify the detrimental twinning appearing in molecular beam epitaxy (MBE) growth of II-VI compounds as (111) lamellar twin boundaries. To avoid the appearance of twinning in MBE growth, we propose the following selection rules for choosing optimal substrate orientations: (1) the surface should be nonpolar, so that there are no large surface reconstructions that could act as nucleation centers and promote the formation of twins; (2) the surface structure should have low symmetry, so that there are no multiple equivalent directions for growth. These straightforward rules, consistent with experimental observations, provide guidelines for selecting proper substrates for high-quality MBE growth of II-VI compounds.

  20. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial set of genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features that improves the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the smallest yet most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. PMID:27154739
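
    The Fisher-criterion filter stage mentioned above can be written in a few lines. The sketch below ranks the genes of a synthetic two-class dataset by the usual Fisher score; it is only an illustration of the filter step, with made-up data, and the CLA/ACO wrapper stage of the paper is not reproduced.

        # Minimal sketch of a Fisher-criterion filter for two-class gene ranking
        # (synthetic data; the CLA/ACO wrapper stage of the paper is not shown).
        import numpy as np

        rng = np.random.default_rng(1)
        n_genes, n_per_class = 2000, 30
        healthy = rng.normal(0.0, 1.0, size=(n_per_class, n_genes))
        tumor = rng.normal(0.0, 1.0, size=(n_per_class, n_genes))
        tumor[:, :20] += 2.0                      # make the first 20 genes informative

        def fisher_scores(a, b, eps=1e-12):
            """F_j = (mean_a - mean_b)^2 / (var_a + var_b) for each gene j."""
            num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
            den = a.var(axis=0) + b.var(axis=0) + eps
            return num / den

        scores = fisher_scores(healthy, tumor)
        top = np.argsort(scores)[::-1][:50]       # keep the 50 highest-scoring genes for the wrapper stage
        print("top-ranked genes:", np.sort(top[:20]))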

  1. Topology optimization design of a lightweight ultra-broadband wide-angle resistance frequency selective surface absorber

    NASA Astrophysics Data System (ADS)

    Sui, Sai; Ma, Hua; Wang, Jiafu; Pang, Yongqiang; Qu, Shaobo

    2015-06-01

    In this paper, the topology design of a lightweight ultra-broadband polarization-independent frequency selective surface absorber is proposed. Absorption over the wide frequency range of 6.68-26.08 GHz, with reflection below -10 dB, is achieved by optimizing the topology and dimensions of the resistive frequency selective surface with a genetic algorithm. This ultra-broadband absorption is maintained for incident angles up to 55 degrees and is independent of the incident wave polarization. The experimental results agree well with the numerical simulations. The density of the ultra-broadband absorber is only 0.35 g cm-3, so it may find potential applications in microwave engineering, such as electromagnetic interference suppression and stealth technology.

  2. Optimal feature point selection and automatic initialization in active shape model search.

    PubMed

    Lekadir, Karim; Yang, Guang-Zhong

    2008-01-01

    This paper presents a novel approach for robust and fully automatic segmentation with active shape model search. The proposed method incorporates global geometric constraints during feature point search by using interlandmark conditional probabilities. The A* graph search algorithm is adapted to identify in the image the optimal set of valid feature points. The technique is extended to enable reliable and fast automatic initialization of the ASM search. Validation with 2-D and 3-D MR segmentation of the left ventricular epicardial border demonstrates significant improvement in robustness and overall accuracy, while eliminating the need for manual initialization. PMID:18979776

  3. Process Optimization through Adaptation of Shielding Gas Selection and Feeding during Laser Beam Welding

    NASA Astrophysics Data System (ADS)

    Patschger, Andreas; Sahib, Christoffer; Bergmann, Jean Pierre; Bastick, André

    In this paper, the influence of the shielding gas composition as well as the feeding method on austenitic welded joints was examined for thin sheets frequently used in the household appliance industry. The composition of the shielding gas mixture with active and/or inert gases was varied, and its effect on the weld seam was clarified. By comparing different shielding gas feeding concepts, the process was optimized with regard to seam formation. Moreover, the influence of oxygen on the seam shape was examined in the deep welding regime using a specific example.

  4. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with lowest direct operating cost for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi typical mission. Generally, DOC increased with increasing speed and decreasing field length but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  5. On Optimization of Surface Roughness of Selective Laser Melted Stainless Steel Parts: A Statistical Study

    NASA Astrophysics Data System (ADS)

    Alrbaey, K.; Wimpenny, D.; Tosi, R.; Manning, W.; Moroz, A.

    2014-06-01

    In this work, the effects of re-melting parameters for post-processing the surface texture of additively manufactured parts are investigated using a statistical approach. This paper focuses on improving the final surface texture of stainless steel (316L) parts built using a Renishaw SLM 125 machine, which employs a fiber laser to fuse fine powder on a layer-by-layer basis to generate three-dimensional parts. The samples were produced at varying angles of inclination in order to generate a range of surface roughness between 8 and 20 µm. Laser re-melting (LR) was then performed as a post-processing step, and its parameters were optimized to improve surface roughness. The re-melting process was carried out using a custom-made hybrid laser re-cladding machine equipped with a 200 W fiber laser. Processing parameters were optimized through statistical analysis within a Design of Experiments framework, from which a model was constructed. The results indicate that the best obtainable final surface roughness is about 1.4 µm ± 10%. This figure was obtained with a laser power of about 180 W, giving an energy density between 2200 and 2700 J/cm2 for the re-melting process. Overall, the results indicate that LR as a post-build process can improve the surface finish of SLM components by up to 80% compared with the as-manufactured surface.

  6. Leveraging information storage to select forecast-optimal parameters for delay-coordinate reconstructions

    NASA Astrophysics Data System (ADS)

    Garland, Joshua; James, Ryan G.; Bradley, Elizabeth

    2016-02-01

    Delay-coordinate reconstruction is a proven modeling strategy for building effective forecasts of nonlinear time series. The first step in this process is the estimation of good values for two parameters, the time delay and the embedding dimension. Many heuristics and strategies have been proposed in the literature for estimating these values. Few, if any, of these methods were developed with forecasting in mind, however, and their results are not optimal for that purpose. Even so, these heuristics—intended for other applications—are routinely used when building delay coordinate reconstruction-based forecast models. In this paper, we propose an alternate strategy for choosing optimal parameter values for forecast methods that are based on delay-coordinate reconstructions. The basic calculation involves maximizing the shared information between each delay vector and the future state of the system. We illustrate the effectiveness of this method on several synthetic and experimental systems, showing that this metric can be calculated quickly and reliably from a relatively short time series, and that it provides a direct indication of how well a near-neighbor based forecasting method will work on a given delay reconstruction of that time series. This allows a practitioner to choose reconstruction parameters that avoid any pathologies, regardless of the underlying mechanism, and maximize the predictive information contained in the reconstruction.
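
    As a rough illustration of the kind of information calculation described above, the sketch below estimates the time-delayed mutual information between x(t) and x(t + tau) for a chaotic logistic-map series using a simple histogram estimator. This scalar variant is an assumption-laden simplification: the method described in the abstract scores whole delay vectors against the future state, and the estimator and the test series here are chosen only for brevity.

        # Simplified illustration (not the authors' estimator): histogram-based mutual
        # information between x(t) and x(t + tau), scanned over candidate delays.
        # The full method scores whole delay vectors; this scalar variant only sketches the idea.
        import numpy as np

        def mutual_information(x, y, bins=32):
            """MI in nats from a 2-D histogram estimate of the joint distribution."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        # Illustrative time series: the chaotic logistic map.
        x = np.empty(5000)
        x[0] = 0.4
        for i in range(1, len(x)):
            x[i] = 3.99 * x[i - 1] * (1.0 - x[i - 1])

        for tau in range(1, 11):
            mi = mutual_information(x[:-tau], x[tau:])
            print(f"tau = {tau:2d}   I(x_t; x_t+tau) = {mi:.3f} nats")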

  7. Leveraging information storage to select forecast-optimal parameters for delay-coordinate reconstructions.

    PubMed

    Garland, Joshua; James, Ryan G; Bradley, Elizabeth

    2016-02-01

    Delay-coordinate reconstruction is a proven modeling strategy for building effective forecasts of nonlinear time series. The first step in this process is the estimation of good values for two parameters, the time delay and the embedding dimension. Many heuristics and strategies have been proposed in the literature for estimating these values. Few, if any, of these methods were developed with forecasting in mind, however, and their results are not optimal for that purpose. Even so, these heuristics-intended for other applications-are routinely used when building delay coordinate reconstruction-based forecast models. In this paper, we propose an alternate strategy for choosing optimal parameter values for forecast methods that are based on delay-coordinate reconstructions. The basic calculation involves maximizing the shared information between each delay vector and the future state of the system. We illustrate the effectiveness of this method on several synthetic and experimental systems, showing that this metric can be calculated quickly and reliably from a relatively short time series, and that it provides a direct indication of how well a near-neighbor based forecasting method will work on a given delay reconstruction of that time series. This allows a practitioner to choose reconstruction parameters that avoid any pathologies, regardless of the underlying mechanism, and maximize the predictive information contained in the reconstruction. PMID:26986345

  8. Novel selective and potent inhibitors of malaria parasite dihydroorotate dehydrogenase: discovery and optimization of dihydrothiophenone derivatives.

    PubMed

    Xu, Minghao; Zhu, Junsheng; Diao, Yanyan; Zhou, Hongchang; Ren, Xiaoli; Sun, Deheng; Huang, Jin; Han, Dongmei; Zhao, Zhenjiang; Zhu, Lili; Xu, Yufang; Li, Honglin

    2013-10-24

    Taking the emergence of drug resistance and lack of effective antimalarial vaccines into consideration, it is of significant importance to develop novel antimalarial agents for the treatment of malaria. Herein, we elucidated the discovery and structure-activity relationships of a series of dihydrothiophenone derivatives as novel specific inhibitors of Plasmodium falciparum dihydroorotate dehydrogenase (PfDHODH). The most promising compound, 50, selectively inhibited PfDHODH (IC50 = 6 nM, with >14,000-fold species-selectivity over hDHODH) and parasite growth in vitro (IC50 = 15 and 18 nM against 3D7 and Dd2 cells, respectively). Moreover, an oral bioavailability of 40% for compound 50 was determined from in vivo pharmacokinetic studies. These results further indicate that PfDHODH is an effective target for antimalarial chemotherapy, and the novel scaffolds reported in this work might lead to the discovery of new antimalarial agents. PMID:24073986

  9. Improved pretreatment process using an electron beam for optimization of glucose yield with high selectivity.

    PubMed

    Lee, Byoung-Min; Lee, Jin-Young; Kang, Phil-Hyun; Hong, Sung-Kwon; Jeun, Joon-Pyo

    2014-10-01

    In this study, electron beam irradiation (EBI) assisted by a dilute acid pretreatment process was investigated to improve the glucose yield and show high selectivity in the enzymatic hydrolysis of rice straw. In the first step, EBI of rice straw was performed at various doses ranging from 50 to 500 kGy. The electron beam-irradiated rice straw was then autoclaved with 3 % dilute acid at 120 °C for 1 h. The pretreated rice straw was finally subjected to enzymatic hydrolysis at 50 °C for 24, 48, and 72 h by 70 filter paper units (FPU)/mL cellulase and 40 cellobiose units (CbU)/mL glucosidase. Glucose was obtained with a very high selectivity of 92.7 % and a total sugar yield of 80 % from pretreated rice straw after 72 h of enzymatic hydrolysis. PMID:25123364

  10. Neural selection of the optimal optical signature for a rapid characterization of a submicrometer period grating

    NASA Astrophysics Data System (ADS)

    Robert, Stéphane; Mure-Ravaud, Alain; Thiria, Sylvie; Yacoub, Méziane; Badran, Fouad

    2004-08-01

    The characterization of gratings with small period-to-wavelength ratios can be achieved by solving the inverse problem of diffraction. The use of a neural network has shown several advantages: it is a non-destructive, non-local, and non-invasive method. However, although the calculation of results is instantaneous, the neural characterizations already published require the measurement of many diffracted intensities and can therefore need a long measurement time. We present, in this paper, a neural selection process called heuristic variable selection. This method reduces the number of diffraction efficiencies needed for a correct reconstruction of the profile shape at a given expected accuracy. At the same time, the non-redundancy of the data composing the optical signature is ensured. We report a 1-μm-period grating etched in silicon that could be characterized with only six measurements when a trapezoidal profile shape is assumed.

  11. Issues in the Optimal Selection of a Cranial Nerve Monitoring System

    PubMed Central

    Selesnick, Samuel H.; Goldsmith, Daniel F.

    1993-01-01

    Intraoperative nerve monitoring (IONM) is a safe technique that is of clear clinical value in the preservation of cranial nerves in skull base surgery and is rapidly becoming the standard of care. Available nerve monitoring systems vary widely in capabilities and costs. A well-informed surgeon may best decide on monitoring needs based on surgical case selection, experience, operating room space, availability of monitoring personnel, and cost. Key system characteristics that should be reviewed in the decision-making process include the monitoring technique (electromyography, pressure transducer, direct nerve monitoring, brainstem auditory evoked potential) and the stimulus technique (stimulating parameters, probe selection). In the past, IONM has been primarily employed in posterior fossa and temporal bone surgery, but the value of IONM is being recognized in more skull base and head and neck surgeries. Suggested IONM strategies for specific surgeries are presented. PMID:17170916

  12. Optimal site selection for a high-resolution ice core record in East Antarctica

    NASA Astrophysics Data System (ADS)

    Vance, Tessa R.; Roberts, Jason L.; Moy, Andrew D.; Curran, Mark A. J.; Tozer, Carly R.; Gallant, Ailie J. E.; Abram, Nerilie J.; van Ommen, Tas D.; Young, Duncan A.; Grima, Cyril; Blankenship, Don D.; Siegert, Martin J.

    2016-03-01

    Ice cores provide some of the best-dated and most comprehensive proxy records, as they yield a vast and growing array of proxy indicators. Selecting a site for ice core drilling is nonetheless challenging, as the assessment of potential new sites needs to consider a variety of factors. Here, we demonstrate a systematic approach to site selection for a new East Antarctic high-resolution ice core record. Specifically, seven criteria are considered: (1) 2000-year-old ice at 300 m depth; (2) above 1000 m elevation; (3) a minimum accumulation rate of 250 mm years-1 IE (ice equivalent); (4) minimal surface reworking to preserve the deposited climate signal; (5) a site with minimal displacement or elevation change in ice at 300 m depth; (6) a strong teleconnection to midlatitude climate; and (7) an appropriately complementary relationship to the existing Law Dome record (a high-resolution record in East Antarctica). Once assessment of these physical characteristics identified promising regions, logistical considerations (for site access and ice core retrieval) were briefly considered. We use Antarctic surface mass balance syntheses, along with ground-truthing of satellite data by airborne radar surveys to produce all-of-Antarctica maps of surface roughness, age at specified depth, elevation and displacement change, and surface air temperature correlations to pinpoint promising locations. We also use the European Centre for Medium-Range Weather Forecast ERA 20th Century reanalysis (ERA-20C) to ensure that a site complementary to the Law Dome record is selected. We find three promising sites in the Indian Ocean sector of East Antarctica in the coastal zone from Enderby Land to the Ingrid Christensen Coast (50-100° E). Although we focus on East Antarctica for a new ice core site, the methodology is more generally applicable, and we include key parameters for all of Antarctica which may be useful for ice core site selection elsewhere and/or for other purposes.

  13. Optimal site selection for a high resolution ice core record in East Antarctica

    NASA Astrophysics Data System (ADS)

    Vance, T.; Roberts, J.; Moy, A.; Curran, M.; Tozer, C.; Gallant, A.; Abram, N.; van Ommen, T.; Young, D.; Grima, C.; Blankenship, D.; Siegert, M.

    2015-11-01

    Ice cores provide some of the best dated and most comprehensive proxy records, as they yield a vast and growing array of proxy indicators. Selecting a site for ice core drilling is nonetheless challenging, as the assessment of potential new sites needs to consider a variety of factors. Here, we demonstrate a systematic approach to site selection for a new East Antarctic high resolution ice core record. Specifically, seven criteria are considered: (1) 2000 year old ice at 300 m depth, (2) above 1000 m elevation, (3) a minimum accumulation rate of 250 mm yr-1 IE, (4) minimal surface re-working to preserve the deposited climate signal, (5) a site with minimal displacement or elevation change of ice at 300 m depth, (6) a strong teleconnection to mid-latitude climate and (7) an appropriately complementary relationship to the existing Law Dome record (a high resolution record in East Antarctica). Once assessment of these physical characteristics identified promising regions, logistical considerations (for site access and ice core retrieval) were briefly considered. We use Antarctic surface mass balance syntheses, along with ground-truthing of satellite data by airborne radar surveys to produce all-of-Antarctica maps of surface roughness, age at specified depth, elevation and displacement change and surface air temperature correlations to pinpoint promising locations. We also use the European Centre for Medium-Range Weather Forecast ERA 20th Century reanalysis (ERA-20C) to ensure a site complementary to the Law Dome record is selected. We find three promising sites in the Indian Ocean sector of East Antarctica in the coastal zone from Enderby Land to the Ingrid Christensen Coast (50-100° E). Although we focus on East Antarctica for a new ice core site, the methodology is more generally applicable and we include key parameters for all of Antarctica which may be useful for ice core site selection elsewhere and/or for other purposes.

  14. Toward optimized high-relaxivity MRI agents: thermodynamic selectivity of hydroxypyridonate/catecholate ligands.

    PubMed

    Pierre, Valérie C; Melchior, Marco; Doble, Dan M J; Raymond, Kenneth N

    2004-12-27

    The thermodynamic selectivity for Gd(3+) relative to Ca(2+), Zn(2+), and Fe(3+) of two ligands of potential interest as magnetic resonance imaging (MRI) contrast agents has been determined by NMR spectroscopy and potentiometric and spectrophotometric titration. The two hexadentate ligands TREN-6-Me-3,2-HOPO (H(3)L2) and TREN-bisHOPO-TAM-EA (H(4)L3) incorporate 2,3-dihydroxypyridonate and 2,3-dihydroxyterephthalamide moieties. They were chosen to span a range of basicity while maintaining a structural motif similar to that of the parent ligand, TREN-1-Me-3,2-HOPO (H(3)L1), in order to investigate the effect of the ligand basicity on its selectivity. The 1:1 stability constants (beta(110)) at 25 degrees C and 0.1 M KCl are as follows. L2: Gd(3+), 20.3; Ca(2+), 7.4; Zn(2+), 11.9; Fe(3+), 27.9. L3: Gd(3+), 24.3; Ca(2+), 5.2; Zn(2+), 14.6; Fe(3+), 35.1. At physiological pH, the selectivity of the ligand for Gd(3+) over Ca(2+) increases with the basicity of the ligand and decreases for Gd(3+) over Fe(3+). These trends are consistent with the relative acidities of the various metal ions; more basic ligands favor harder metals with a higher charge-to-radius ratio. The stabilities of the Zn(2+) complexes do not correlate with basicity and are thought to be more influenced by geometric factors. The selectivities of these ligands are superior to those of the octadentate poly(aminocarboxylate) ligands that are currently used as MRI contrast agents in diagnostic medicine. PMID:15606201

  15. Optimization of tetrahydronaphthalene inhibitors of Raf with selectivity over hERG.

    PubMed

    Huang, Shih-Chung; Adhikari, Sharmila; Afroze, Roushan; Brewer, Katherine; Calderwood, Emily F; Chouitar, Jouhara; England, Dylan B; Fisher, Craig; Galvin, Katherine M; Gaulin, Jeffery; Greenspan, Paul D; Harrison, Sean J; Kim, Mi-Sook; Langston, Steven P; Ma, Li-Ting; Menon, Saurabh; Mizutani, Hirotake; Rezaei, Mansoureh; Smith, Michael D; Zhang, Dong Mei; Gould, Alexandra E

    2016-02-15

    Investigations of a biaryl ether scaffold identified tetrahydronaphthalene Raf inhibitors with good in vivo activity; however these compounds had affinity toward the hERG potassium channel. Herein we describe our work to eliminate this hERG activity via alteration of the substituents on the benzoic amide functionality. The resulting compounds have improved selectivity against the hERG channel, good pharmacokinetic properties and potently inhibit the Raf pathway in vivo. PMID:26804230

  16. Metal-organic framework with optimally selective xenon adsorption and separation.

    PubMed

    Banerjee, Debasis; Simon, Cory M; Plonka, Anna M; Motkuri, Radha K; Liu, Jian; Chen, Xianyin; Smit, Berend; Parise, John B; Haranczyk, Maciej; Thallapally, Praveen K

    2016-01-01

    Nuclear energy is among the most viable alternatives to our current fossil fuel-based energy economy. The mass deployment of nuclear energy as a low-emissions source requires the reprocessing of used nuclear fuel to recover fissile materials and mitigate radioactive waste. A major concern with reprocessing used nuclear fuel is the release of volatile radionuclides such as xenon and krypton that evolve into reprocessing facility off-gas in parts per million concentrations. The existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal-organic frameworks have demonstrated the ability to selectively adsorb xenon and krypton at ambient conditions. Here we carry out a high-throughput computational screening of large databases of metal-organic frameworks and identify SBMOF-1 as the most selective for xenon. We affirm this prediction and report that SBMOF-1 exhibits by far the highest reported xenon adsorption capacity and a remarkable Xe/Kr selectivity under conditions pertinent to nuclear fuel reprocessing. PMID:27291101

  17. Metal–organic framework with optimally selective xenon adsorption and separation

    PubMed Central

    Banerjee, Debasis; Simon, Cory M.; Plonka, Anna M.; Motkuri, Radha K.; Liu, Jian; Chen, Xianyin; Smit, Berend; Parise, John B.; Haranczyk, Maciej; Thallapally, Praveen K.

    2016-01-01

    Nuclear energy is among the most viable alternatives to our current fossil fuel-based energy economy. The mass deployment of nuclear energy as a low-emissions source requires the reprocessing of used nuclear fuel to recover fissile materials and mitigate radioactive waste. A major concern with reprocessing used nuclear fuel is the release of volatile radionuclides such as xenon and krypton that evolve into reprocessing facility off-gas in parts per million concentrations. The existing technology to remove these radioactive noble gases is a costly cryogenic distillation; alternatively, porous materials such as metal–organic frameworks have demonstrated the ability to selectively adsorb xenon and krypton at ambient conditions. Here we carry out a high-throughput computational screening of large databases of metal–organic frameworks and identify SBMOF-1 as the most selective for xenon. We affirm this prediction and report that SBMOF-1 exhibits by far the highest reported xenon adsorption capacity and a remarkable Xe/Kr selectivity under conditions pertinent to nuclear fuel reprocessing. PMID:27291101

  18. Comparative evaluation of five Beauveria isolates for housefly (Musca domestica L.) control and growth optimization of selected strain.

    PubMed

    Mishra, Sapna; Malik, Anushree

    2012-11-01

    The pathogenic potential of five native Beauveria isolates was assessed against housefly adults and larvae in laboratory bioassays. The isolate Beauveria bassiana HQ917687 showed the highest virulence, with 72.3 and 100 % mortality of Musca domestica larvae and adults, respectively. The other Beauveria isolates caused 36-52 % housefly larval mortality, while the adult mortalities varied between 72 and 82 %. B. bassiana HQ917687 also showed the fastest killing activity, with an LT(50) of 4 days (for larvae) and 3 days (for adults). This most virulent isolate was selected for growth optimization in terms of biomass and spore production using response surface methodology. The optimum values of temperature, yeast extract, and pH for maximum biomass and spore production were predicted as 27 °C, 5.00 g/l, and 6.75, respectively. Temperature was found to be the most critical factor influencing biomass and spore yield of the fungus and even nullified the effects of the other factors at sufficiently high values. The results obtained in this study demonstrate the significance of appropriate strain selection and process parameter optimization for the mass production of biocontrol agents. PMID:22864861

  19. Optimization of cotton seed biodiesel quality (critical properties) through modification of its FAME composition by highly selective homogeneous hydrogenation.

    PubMed

    Papadopoulos, Christos E; Lazaridou, Anastasia; Koutsoumba, Asimina; Kokkinos, Nikolaos; Christoforidis, Achilleas; Nikolaou, Nikolaos

    2010-03-01

    The catalytic (homogeneous) hydrogenation of biodiesel's polyunsaturated fatty acid methyl esters (FAME), synthesized by transesterification of vegetable (cotton seed) oil, selectively to monounsaturated FAME could upgrade the final quality of the biodiesel. The final fuel can be optimized to have a higher cetane number and improved oxidative stability. The low-temperature performance after hydrogenation (CFPP) might be worse, but this could be further improved through selective winterization and/or blending. The homogeneous hydrogenation of FAMEs of cotton seed biodiesel was catalyzed by the catalyst precursor RhCl(3)·3H(2)O and STPP-TiOA. Four groups of hydrogenation experiments were carried out regarding the effects of pressure, temperature, reaction time, and the molecular ratio CC/Rh. Partial hydrogenation of cotton seed FAMEs took place under mild conditions of pressure and temperature, and high catalytic activities were observed at very short reaction times and for high molecular ratios CC/Rh. Biodiesel quality optimization studies, based on existing empirical models of biodiesel properties, were carried out in order to identify optimum FAME compositions and the hydrogenation conditions that could supply them. PMID:19896370

  20. Saving Lives and Money: A Multi-Objective Optimization Approach to the Selection of Structural Retrofits

    NASA Astrophysics Data System (ADS)

    Franco, G.; Deodatis, G.; Smyth, A.

    2005-12-01

    The existence of large numbers of poorly constructed buildings in earthquake-prone areas has made large-scale retrofitting campaigns a desirable strategy for reducing the risk of loss of life and infrastructure. Since the retrofitting operation must therefore be carried out for numerous buildings, it has become necessary to find the most attractive retrofitting solution for a given type of building, one that significantly reduces the risk of loss of life while remaining as economical as possible. Each retrofit solution for an existing building carries a set of potential costs and benefits over a given period of time. Some of these costs and benefits can be taken into account in terms of monetary values, whereas others cannot. The value of lives spared or lives lost that may potentially occur over the lifespan of a building as a consequence of choosing a particular retrofit cannot be easily measured in terms of money. In the absence of better methods to incorporate the value of life into economic analyses, a somewhat arbitrary monetary quantity based on personal insurance or personal productivity is often chosen as a proxy. These quantities, however, not only fail to capture the otherwise incalculable cost of life, but are also strongly dependent on the economy under study. In this work, the monetary costs of construction and structural damage are clearly differentiated from the loss of life, which is simply measured as the number of potential casualties in a given earthquake scenario. Finding the best retrofit then becomes a multi-objective optimization problem, whose purpose is to find the cheapest solution that saves the most lives. The conflicting nature of the objectives means that there is not a single optimal solution but a set of trade-off solutions that capture the different levels of compromise between costs and lives saved. A large number of possible structural retrofits must be considered in order to find this set of best solutions.
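
    The trade-off set described above is the Pareto (non-dominated) front of the candidate retrofits. The sketch below extracts that front for a handful of invented (cost, expected casualties) pairs; the candidate values and the brute-force dominance test are illustrative assumptions, not data or code from the study.

        # Minimal sketch of finding the non-dominated (Pareto) set of retrofit options,
        # each scored by monetary cost and expected casualties (both to be minimized).
        # The candidate values below are invented for illustration.
        import numpy as np

        # Columns: [cost in M$, expected casualties] for each candidate retrofit.
        candidates = np.array([
            [0.0, 120.0],   # do nothing
            [1.5,  60.0],
            [2.0,  65.0],   # dominated by the option above
            [4.0,  25.0],
            [7.5,  10.0],
            [9.0,  11.0],   # dominated
        ])

        def pareto_mask(points):
            """True for points not dominated by any other point (minimization in all objectives)."""
            n = len(points)
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                for j in range(n):
                    if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                        keep[i] = False
                        break
            return keep

        front = candidates[pareto_mask(candidates)]
        print("Pareto-optimal retrofits (cost M$, casualties):")
        print(front[np.argsort(front[:, 0])])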

  1. On optimization of a composite bone plate using the selective stress shielding approach.

    PubMed

    Samiezadeh, Saeid; Tavakkoli Avval, Pouria; Fawaz, Zouheir; Bougherara, Habiba

    2015-02-01

    Bone fracture plates are used to stabilize fractures while allowing for adequate compressive force on the fracture ends. Yet the high stiffness of conventional bone plates significantly reduces compression at the fracture site, and can lead to subsequent bone loss upon healing. Fibre-reinforced composite bone plates have been introduced to address this drawback. However, no studies have optimized their configurations to fulfill the requirements of proper healing. In the present study, classical laminate theory and the finite element method were employed for optimization of a composite bone plate. A hybrid composite made of carbon fibre/epoxy with a flax/epoxy core, which was introduced previously, was optimized by varying the laminate stacking sequence and the contribution of each material, in order to minimize the axial stiffness and maximize the torsional stiffness for a given range of bending stiffness. The initial 14×4(14) possible configurations were reduced to 13 after applying various design criteria. A comprehensive finite element model, validated against a previous experimental study, was used to evaluate the mechanical performance of each composite configuration in terms of its fracture stability, load sharing, and strength in transverse and oblique Vancouver B1 fracture configurations at immediately post-operative, post-operative, and healed bone stages. It was found that a carbon fibre/epoxy plate with an axial stiffness of 4.6 MN, and bending and torsional stiffness of 13 and 14 N·m(2), respectively, showed an overall superiority compared with other laminate configurations. It increased the compressive force at the fracture site up to 14% when compared to a conventional metallic plate, and maintained fracture stability by ensuring the fracture fragments' relative motions were comparable to those found during metallic plate fixation. The healed stage results revealed that implantation of the titanium plate caused a 40.3% reduction in bone stiffness

  2. Using information Theory in Optimal Test Point Selection for Health Management in NASA's Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Mehr, Ali Farhang; Tumer, Irem

    2005-01-01

    In this paper, we present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLVs that has been partially developed at the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be formulated as a constrained optimization problem in which the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can easily be extended to other domains of system health monitoring.
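
    A hedged sketch of the underlying idea follows: score each candidate test point by the mutual information between its simulated reading and a hidden health variable, estimated from Monte Carlo samples. The toy sensor models, the two-state health variable, and the histogram estimator are all assumptions for illustration and do not represent the NASA RLV model or the paper's exact formulation.

        # Generic sketch (toy model, not the NASA RLV model): score each candidate test
        # point by the mutual information between its simulated reading and a hidden
        # health variable, estimated from Monte Carlo samples with a histogram estimator.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 20000
        health = rng.integers(0, 2, size=n)             # hidden health state: 0 = nominal, 1 = degraded

        # Hypothetical sensor models: each reading depends on health with different noise levels.
        readings = {
            "temp_A":  health * 2.0 + rng.normal(0, 1.0, n),   # informative
            "vib_B":   health * 0.5 + rng.normal(0, 1.0, n),   # weakly informative
            "press_C": rng.normal(0, 1.0, n),                  # uninformative
        }

        def mi_discrete_continuous(h, x, bins=30):
            """I(H; X) in bits, with X discretized into histogram bins."""
            edges = np.histogram_bin_edges(x, bins=bins)
            xb = np.digitize(x, edges[1:-1])
            joint = np.zeros((2, bins))
            np.add.at(joint, (h, xb), 1)
            p = joint / joint.sum()
            ph = p.sum(axis=1, keepdims=True)
            px = p.sum(axis=0, keepdims=True)
            nz = p > 0
            return float(np.sum(p[nz] * np.log2(p[nz] / (ph @ px)[nz])))

        for name, x in readings.items():
            print(f"{name}: I(health; reading) = {mi_discrete_continuous(health, x):.3f} bits")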

  3. Optimal Technology Selection and Operation of Microgrids inCommercial Buildings

    SciTech Connect

    Marnay, Chris; Venkataramanan, Giri; Stadler, Michael; Siddiqui,Afzal; Firestone, Ryan; Chandran, Bala

    2007-01-15

    The deployment of small (<1-2 MW) clusters of generators, heat and electrical storage, efficiency investments, and combined heat and power (CHP) applications (particularly involving heat-activated cooling) in commercial buildings promises significant benefits but poses many technical and financial challenges, both in system choice and its operation; if successful, such systems may be precursors to widespread microgrid deployment. The presented optimization approach to choosing such systems and their operating schedules uses Berkeley Lab's Distributed Energy Resources Customer Adoption Model [DER-CAM], extended to incorporate electrical storage options. DER-CAM chooses annual energy bill minimizing systems in a fully technology-neutral manner. An illustrative example for a San Francisco hotel is reported. The chosen system includes two engines and an absorption chiller, providing an estimated 11 percent cost savings and 10 percent carbon emission reductions, under idealized circumstances.

  4. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    The relationships between field length and cruise speed and aircraft direct operating cost were determined. A gradient optimizing computer program was developed to minimize direct operating cost (DOC) as a function of airplane geometry. In this way, the best airplane operating under one set of constraints can be compared with the best operating under another. A constant 30-passenger fuselage and rubberized engines based on the General Electric CT-7 were used as a baseline. All aircraft had to have a 600 nautical mile maximum range and were designed to FAR part 25 structural integrity and climb gradient regulations. Direct operating cost was minimized for a typical design mission of 150 nautical miles. For purposes of C sub L sub max calculation, all aircraft had double-slotted flaps but with no Fowler action.

  5. Optimal selection of proton exchange membrane fuel cell condition monitoring thresholds

    NASA Astrophysics Data System (ADS)

    Boškoski, Pavle; Debenjak, Andrej

    2014-12-01

    When commissioning or restarting a system after a maintenance action, there is a need to properly tune the decision thresholds of the diagnostic system. Thresholds set too low or too high result in excessive false alarms or missed alarms, respectively. This paper suggests an efficient data-driven approach to the optimal setting of decision thresholds for a PEM fuel cell system based solely on data acquired from the system in the reference state of health (i.e., under fault-free operation). The only design parameter is the desired false alarm rate. Technically, the problem reduces to analytically determining the probability distribution of the fuel cell's complex impedance and its particular components. Employing pseudo-random binary sequence perturbation signals, the distribution of the impedance is estimated through the complex wavelet coefficients of the fuel cell voltage and current. The approach is validated on a PEM fuel cell system subjected to various faults.
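
    The general threshold-setting idea can be illustrated with a minimal sketch: collect a health statistic under fault-free (reference) operation and place the decision threshold at the (1 - alpha) quantile for the desired false alarm rate alpha. The chi-square stand-in statistic below is an assumption for illustration only; the paper instead derives the distribution of the complex impedance from wavelet coefficients.

        # Minimal sketch of the general idea (synthetic data; not the paper's wavelet-based
        # impedance distribution): set the decision threshold as the (1 - alpha) quantile of
        # a residual statistic collected under fault-free (reference) operation.
        import numpy as np

        rng = np.random.default_rng(3)
        reference_stat = rng.chisquare(df=4, size=5000)      # stand-in for a fault-free health statistic

        alpha = 0.01                                          # desired false alarm rate (design parameter)
        threshold = np.quantile(reference_stat, 1.0 - alpha)

        # Check the empirical false alarm rate on fresh fault-free data.
        fresh = rng.chisquare(df=4, size=5000)
        print(f"threshold = {threshold:.2f}")
        print(f"empirical false alarm rate = {(fresh > threshold).mean():.3f} (target {alpha})")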

  6. Optimized conditions for selective gold flotation by ToF-SIMS and ToF-LIMS

    NASA Astrophysics Data System (ADS)

    Chryssoulis, S. L.; Dimov, S. S.

    2004-06-01

    This work describes a comprehensive characterization of the factors controlling the floatability of free gold in flotation tests using reagents (collectors) at plant concentration levels. A relationship between the collector loadings on gold particles and their surface composition has been established. The findings of this study show that silver activates gold flotation and that there is a strong correlation between the surface concentration of silver and the loading of certain collectors. The organic surface analysis was done by ToF-SIMS, while the inorganic surface analysis was carried out by time-of-flight laser ionization mass spectrometry (ToF-LIMS). The developed testing protocol, based on complementary ToF-LIMS and ToF-SIMS surface analysis, allows for optimization of the flotation scheme and hence improved gold recovery.

  7. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    SciTech Connect

    Dumidu Wijayasekara; Milos Manic; Piyush Sabharwall; Vivek Utgikar

    2011-07-01

    Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically, published literature has focused on optimizing the ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present the EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over-training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on the mean square error (MSE) and the standard deviation of the MSE. The method uses k-fold cross validation. Therefore, in order to select the optimal architecture for the problem, the dataset is divided into three parts, which are used to train, validate, and test each network architecture. Each architecture is then evaluated according to its generalization capability and its capability to conform to the original data. The method proved to be a comprehensive tool for identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal one. Using the method, the testing error achieved was on the order of 10^-5 to 10^-3. It was also shown that the absolute error achieved by EBaLM-OTR was an order of magnitude better than the lowest error achieved by EBaLM-THP.
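
    A minimal sketch of the cross-validated architecture comparison follows, assuming scikit-learn's MLPRegressor on synthetic data: each candidate hidden-layer configuration is scored by the mean and standard deviation of its k-fold validation MSE. This is not the EBaLM-OTR implementation (which also holds out a separate test set and uses Levenberg-Marquardt training); the data, architectures, and training settings are illustrative assumptions.

        # Minimal sketch of ranking candidate ANN architectures by k-fold cross-validated
        # MSE mean and standard deviation (synthetic data; not the EBaLM-OTR code or the
        # PCHE dataset used in the paper).
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import KFold
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, size=(400, 3))                      # stand-in operating conditions
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=400)

        architectures = [(4,), (16,), (16, 16), (64, 64)]          # hidden-layer candidates
        kf = KFold(n_splits=5, shuffle=True, random_state=0)

        for arch in architectures:
            errs = []
            for train_idx, val_idx in kf.split(X):
                net = MLPRegressor(hidden_layer_sizes=arch, max_iter=3000, random_state=0)
                net.fit(X[train_idx], y[train_idx])
                errs.append(mean_squared_error(y[val_idx], net.predict(X[val_idx])))
            errs = np.array(errs)
            print(f"hidden layers {arch}: MSE = {errs.mean():.4f} ± {errs.std():.4f}")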

  8. Sensor selection and chemo-sensory optimization: toward an adaptable chemo-sensory system.

    PubMed

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to "adapt" in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve. PMID

  9. Sensor Selection and Chemo-Sensory Optimization: Toward an Adaptable Chemo-Sensory System

    PubMed Central

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to “adapt” in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve. PMID

  10. Optimizing energy yields in black locust through genetic selection: final report

    SciTech Connect

    Bongarten, B.C.; Merkle, S.A.

    1996-10-01

    The purpose of this work was to assess the magnitude of improvement in biomass yield of black locust possible through breeding, and to determine methods for efficiently capturing the yield improvement achievable from selective breeding. To meet this overall objective, six tasks were undertaken to determine: (1) the amount and geographic pattern of natural genetic variation, (2) the mating system of the species, (3) quantitative genetic parameters of relevant traits, (4) the relationship between nitrogen fixation and growth in black locust, (5) the viability of mass vegetative propagation, and (6) the feasibility of improvement through genetic transformation.

  11. Computer-aided method for automated selection of optimal imaging plane for measurement of total cerebral blood flow by MRI

    NASA Astrophysics Data System (ADS)

    Teng, Pang-yu; Bagci, Ahmet Murat; Alperin, Noam

    2009-02-01

    A computer-aided method for finding an optimal imaging plane for simultaneous measurement of the arterial blood inflow through the 4 vessels leading blood to the brain by phase contrast magnetic resonance imaging is presented. The method's performance is compared with manual selection by two observers. The skeletons of the 4 vessels are first extracted, and centerlines are generated from them. Then, a global direction of the relatively less curved internal carotid arteries is calculated to determine the main flow direction. This is then used as a reference direction to identify segments of the vertebral arteries that strongly deviate from the main flow direction. These segments are then used to identify anatomical landmarks for improved consistency of the imaging plane selection. An optimal imaging plane is then identified by finding a plane with the smallest error value, which is defined as the sum of the angles between the plane's normal and the vessel centerline's direction at the location of the intersections. Error values obtained using the automated and the manual methods were then compared using 9 magnetic resonance angiography (MRA) data sets. The automated method considerably outperformed the manual selection. The mean error value with the automated method was significantly lower than with the manual method, 0.09+/-0.07 vs. 0.53+/-0.45, respectively (p<.0001, Student's t-test). Reproducibility of repeated measurements was analyzed using Bland and Altman's test; the mean 95% limits of agreement for the automated and manual methods were 0.01~0.02 and 0.43~0.55, respectively.
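
    As a concrete illustration of the error criterion described above, the following Python sketch (our own illustration, not the authors' code) scores a candidate imaging plane by summing the angles between the plane's normal and the vessel centerline directions at the intersection points; the four direction vectors are hypothetical.

        import numpy as np

        def plane_error(normal, centerline_dirs):
            # Sum of angles (radians) between the plane normal and each unit centerline
            # direction; a smaller sum means the plane cuts the vessels more perpendicularly.
            n = normal / np.linalg.norm(normal)
            cosines = np.clip(np.abs(centerline_dirs @ n), 0.0, 1.0)
            return float(np.sum(np.arccos(cosines)))

        # Hypothetical unit directions of the two internal carotid and two vertebral
        # arteries at the points where a candidate plane intersects them.
        dirs = np.array([[0.05, 0.10, 0.99],
                         [-0.04, 0.08, 0.99],
                         [0.20, -0.10, 0.97],
                         [-0.15, 0.05, 0.98]])
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

        print(plane_error(np.array([0.0, 0.0, 1.0]), dirs))  # error of an axial plane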

  12. Design and optimization for variable rate selective excitation using an analytic RF scaling function

    NASA Astrophysics Data System (ADS)

    Gai, Neville D.; Zur, Yuval

    2007-11-01

    At higher B0 fields, specific absorption rate (SAR) deposition increases. Due to maximum SAR limitation, slice coverage decreases and/or scan time increases. Conventional selective RF pulses are played out in conjunction with a time independent field gradient. Variable rate selective excitation (VERSE) is a technique that modifies the original RF and gradient waveforms such that slice profile is unchanged. The drawback is that the slice profile for off-resonance spins is distorted. A new VERSE algorithm based on modeling the scaled waveforms as a Fermi function is introduced. It ensures that system related constraints of maximum gradient amplitude and slew rate are not exceeded. The algorithm can be used to preserve the original RF pulse duration while minimizing SAR and peak b1 or to minimize the RF pulse duration. The design is general and can be applied to any symmetrical or asymmetrical RF waveform. The algorithm is demonstrated by using it to (a) minimize the SAR of a linear phase RF pulse, (b) minimize SAR of a hyperbolic secant RF pulse, and (c) minimize the duration of a linear phase RF pulse. Images with a T1-FLAIR (T1 FLuid Attenuated Inversion Recovery) sequence using a conventional and VERSE adiabatic inversion RF pulse are presented. Comparison of images and scan parameters for different anatomies and coils shows increased scan coverage and decreased SAR with the VERSE inversion RF pulse, while image quality is preserved.
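
    The core VERSE relation (RF and gradient scaled by the same factor while the local dwell time is stretched by the inverse factor) can be sketched in a few lines of Python. This is a simplified illustration, not the published Fermi-based algorithm: the Fermi parameters, the toy sinc pulse, and the dilation depth are assumptions.

        import numpy as np

        def fermi(t, width, a):
            # Fermi window: close to 1 near the pulse center, close to 0 at the edges.
            return 1.0 / (1.0 + np.exp((np.abs(t) - width) / a))

        n = 256
        t = np.linspace(-1.0, 1.0, n)
        dt = t[1] - t[0]
        rf = np.sinc(4 * t)            # toy linear-phase selective RF pulse
        grad = np.ones(n)              # constant slice-select gradient

        # Scale RF and gradient down together in the high-amplitude center and stretch
        # the local dwell time by the inverse factor, which preserves the on-resonance
        # slice profile while lowering peak B1 (and hence SAR).
        scale = 1.0 - 0.6 * fermi(t, width=0.35, a=0.05)
        rf_verse, grad_verse = rf * scale, grad * scale
        dwell_verse = dt / scale

        print("peak B1: %.2f -> %.2f" % (abs(rf).max(), abs(rf_verse).max()))
        print("duration: %.2f -> %.2f" % (n * dt, dwell_verse.sum()))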

  13. Optimized hyperspectral band selection using hybrid genetic algorithm and gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2015-12-01

    The severe information redundancy in hyperspectral images (HIs) does not improve data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information in the HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using the hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to the binary space. Then, the accuracy of the support vector machine (SVM) classifier and the number of selected spectral bands are utilized to measure the discriminative capability of the band subset. Finally, the band subset that covers the most useful and valuable information with the smallest number of spectral bands is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results revealed the superiority of the proposed method and indicated that the method can indeed considerably reduce data storage costs and efficiently identify the band subset with stable and high classification precision.
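
    A minimal Python sketch of the kind of fitness function implied above (our own stand-in, not the GA-GSA code): a binary band mask is scored by cross-validated SVM accuracy penalized by the number of selected bands. The penalty weight alpha and the toy data are assumptions.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def band_subset_fitness(mask, X, y, alpha=0.05):
            # Reward classification accuracy, penalize large band subsets.
            if mask.sum() == 0:
                return -np.inf
            acc = cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()
            return acc - alpha * mask.sum() / mask.size

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 50))        # toy "hyperspectral" pixels x bands
        y = rng.integers(0, 2, size=60)      # toy class labels
        mask = rng.integers(0, 2, size=50)   # one candidate point in the binary search space
        print(band_subset_fitness(mask, X, y))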

  14. Miniature wavelength-selectable Raman laser: new insights for optimizing performance.

    PubMed

    Li, Xiaoli; Pask, Helen M; Lee, Andrew J; Huo, Yujing; Piper, James A; Spence, David J

    2011-12-01

    We report a miniature, wavelength-selectable crystalline Raman laser operating either in the yellow (588 nm) or lime (559 nm) selected simply by changing the temperature of an intracavity LBO crystal. Continuous-wave (CW) output powers are 320 mW and 660 mW respectively, corresponding to record diode-visible optical conversion efficiencies of 8.4% and 17% for such miniature devices. The complex laser behavior arising from interplay between nonlinear processes is studied experimentally and theoretically. We show that the interplay can lead to complete suppression of the first-Stokes field and that the phase matching conditions for maximum visible powers differ markedly for different length LBO crystals. By using threshold measurements, we calculate the round-trip resonator losses and show that crystal bulk losses dominate over other losses. As a consequence, Raman lasers utilizing shorter LBO crystals for intracavity frequency mixing can produce higher visible output power. These are new considerations for the optimum design of CW intracavity Raman lasers with visible output. PMID:22273955

  15. Design and optimization of highly-selective fungal CYP51 inhibitors.

    PubMed

    Hoekstra, William J; Garvey, Edward P; Moore, William R; Rafferty, Stephen W; Yates, Christopher M; Schotzinger, Robert J

    2014-08-01

    While the orally-active azoles such as voriconazole and itraconazole are effective antifungal agents, they potently inhibit a broad range of off-target human cytochrome P450 enzymes (CYPs) leading to various safety issues (e.g., drug-drug interactions, liver toxicity). Herein, we describe rationally-designed, broad-spectrum antifungal agents that are more selective for the target fungal enzyme, CYP51, than related human CYP enzymes such as CYP3A4. Using proprietary methodology, the triazole metal-binding group found in current clinical agents was replaced with novel, less avid metal-binding groups in concert with potency-enhancing molecular scaffold modifications. This process produced a unique series of fungal CYP51-selective inhibitors that included the oral antifungal 7d (VT-1161), now in Phase 2 clinical trials. This series exhibits excellent potency against key yeast and dermatophyte strains. The chemical methodology described is potentially applicable to the design of new and more effective metalloenzyme inhibitor treatments for a broad array of diseases. PMID:24948565

  16. NESP: Nonlinear enhancement and selection of plane for optimal segmentation and recognition of scene word images

    NASA Astrophysics Data System (ADS)

    Kumar, Deepak; Anil Prasad, M. N.; Ramakrishnan, A. G.

    2013-01-01

    In this paper, we report a breakthrough result on the difficult task of segmentation and recognition of coloured text from the word image dataset of ICDAR robust reading competition challenge 2: reading text in scene images. We split the word image into individual colour, gray and lightness planes and enhance the contrast of each of these planes independently by a power-law transform. The discrimination factor of each plane is computed as the maximum between-class variance used in Otsu thresholding. The plane that has the maximum discrimination factor is selected for segmentation. The trial version of Omnipage OCR is then used on the binarized words for recognition. Our recognition results on ICDAR 2011 and ICDAR 2003 word datasets are compared with those reported in the literature. As baseline, the images binarized by simple global and local thresholding techniques were also recognized. The word recognition rate obtained by our non-linear enhancement and selection of plane method is 72.8% and 66.2% for ICDAR 2011 and 2003 word datasets, respectively. We have created ground-truth for each image at the pixel level to benchmark these datasets using a toolkit developed by us. The recognition rate of benchmarked images is 86.7% and 83.9% for ICDAR 2011 and 2003 datasets, respectively.
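
    The plane-selection step reduces to a gamma (power-law) enhancement followed by an Otsu-style between-class-variance score; a minimal Python sketch (assuming grayscale planes scaled to [0, 1] and an illustrative gamma) is:

        import numpy as np

        def otsu_between_class_variance(img, bins=256):
            # Maximum between-class variance over all candidate thresholds (Otsu's criterion).
            hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            centers = (edges[:-1] + edges[1:]) / 2
            omega = np.cumsum(p)             # class-0 probability at each threshold
            mu = np.cumsum(p * centers)      # class-0 cumulative mean
            mu_t = mu[-1]
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
            return float(np.nanmax(sigma_b2))

        def select_plane(planes, gamma=1.5):
            # Enhance each plane with a power-law transform, then keep the plane whose
            # discrimination factor (maximum between-class variance) is largest.
            enhanced = [np.clip(p, 0.0, 1.0) ** gamma for p in planes]
            scores = [otsu_between_class_variance(p) for p in enhanced]
            return int(np.argmax(scores)), scores

        rng = np.random.default_rng(1)
        planes = [rng.random((32, 32)) for _ in range(3)]  # stand-ins for colour/gray/lightness planes
        print(select_plane(planes))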

  17. Selection of Steady-State Process Simulation Software to Optimize Treatment of Radioactive and Hazardous Waste

    SciTech Connect

    Nichols, T. T.; Barnes, C. M.; Lauerhass, L.; Taylor, D. D.

    2001-06-01

    The process used for selecting a steady-state process simulator under conditions of high uncertainty and limited time is described. Multiple waste forms, treatment ambiguity, and the uniqueness of both the waste chemistries and alternative treatment technologies result in a large set of potential technical requirements that no commercial simulator can totally satisfy. The aim of the selection process was two-fold. First, determine the steady-state simulation software that best, albeit not completely, satisfies the requirements envelope. And second, determine if the best is good enough to justify the cost. Twelve simulators were investigated with varying degrees of scrutiny. The candidate list was narrowed to three final contenders: ASPEN Plus 10.2, PRO/II 5.11, and CHEMCAD 5.1.0. It was concluded from "road tests" that ASPEN Plus appears to satisfy the project's technical requirements the best and is worth acquiring. The final software decisions provide flexibility: they involve annual rather than multi-year licensing, and they include periodic re-assessment.

  18. Optimal stapler cartridge selection according to the thickness of the pancreas in distal pancreatectomy.

    PubMed

    Kim, Hongbeom; Jang, Jin-Young; Son, Donghee; Lee, Seungyeoun; Han, Youngmin; Shin, Yong Chan; Kim, Jae Ri; Kwon, Wooil; Kim, Sun-Whe

    2016-08-01

    Stapling is a popular method for stump closure in distal pancreatectomy (DP). However, research on which cartridges are suitable for different pancreatic thickness is lacking. To identify the optimal stapler cartridge choice in DP according to pancreatic thickness. From November 2011 to April 2015, data were prospectively collected from 217 consecutive patients who underwent DP with 3-layer endoscopic staple closure in Seoul National University Hospital, Korea. Postoperative pancreatic fistula (POPF) was graded according to International Study Group on Pancreatic Fistula definitions. Staplers were grouped based on closed length (CL) (Group I: CL ≤ 1.5 mm, II: 1.5 mm < CL < 2 mm, III: CL ≥ 2 mm). Compression ratio (CR) was defined as pancreas thickness/CL. Distribution of pancreatic thickness was used to find the cut-off point of thickness which predicts POPF according to stapler groups. POPF developed in 130 (59.9%) patients (Grade A; n = 86 [66.1%], B; n = 44 [33.8%]). The numbers in each stapler group were 46, 101, and 70, respectively. Mean thickness was higher in POPF cases (15.2 mm vs 13.5 mm, P = 0.002). High body mass index (P = 0.003), thick pancreas (P = 0.011), and high CR (P = 0.024) were independent risk factors for POPF in multivariate analysis. Pancreatic thickness was grouped into <12 mm, 12 to 17 mm, and >17 mm. With pancreatic thickness <12 mm, the POPF rate was lowest with Group II (I: 50%, II: 27.6%, III: 69.2%, P = 0.035). The optimal stapler cartridges with pancreatic thickness <12 mm were those in Group II (Gold, CL: 1.8 mm). There was no suitable cartridge for thicker pancreases. Further studies are necessary to reduce POPF in thick pancreases. PMID:27583852
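
    The grouping and compression-ratio definitions above reduce to simple arithmetic; the following Python sketch is purely illustrative (the example values are not clinical guidance).

        def stapler_group(closed_length_mm):
            # Group I: CL <= 1.5 mm, Group II: 1.5 mm < CL < 2 mm, Group III: CL >= 2 mm.
            if closed_length_mm <= 1.5:
                return "I"
            if closed_length_mm < 2.0:
                return "II"
            return "III"

        def compression_ratio(pancreas_thickness_mm, closed_length_mm):
            # CR = pancreas thickness / closed length.
            return pancreas_thickness_mm / closed_length_mm

        # Example: a 1.8 mm ("Gold") cartridge on an 11 mm thick pancreas.
        print(stapler_group(1.8), round(compression_ratio(11.0, 1.8), 2))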

  19. Optimal stapler cartridge selection according to the thickness of the pancreas in distal pancreatectomy

    PubMed Central

    Kim, Hongbeom; Jang, Jin-Young; Son, Donghee; Lee, Seungyeoun; Han, Youngmin; Shin, Yong Chan; Kim, Jae Ri; Kwon, Wooil; Kim, Sun-Whe

    2016-01-01

    Abstract Stapling is a popular method for stump closure in distal pancreatectomy (DP). However, research on which cartridges are suitable for different pancreatic thickness is lacking. To identify the optimal stapler cartridge choice in DP according to pancreatic thickness. From November 2011 to April 2015, data were prospectively collected from 217 consecutive patients who underwent DP with 3-layer endoscopic staple closure in Seoul National University Hospital, Korea. Postoperative pancreatic fistula (POPF) was graded according to International Study Group on Pancreatic Fistula definitions. Staplers were grouped based on closed length (CL) (Group I: CL ≤ 1.5 mm, II: 1.5 mm < CL < 2 mm, III: CL ≥ 2 mm). Compression ratio (CR) was defined as pancreas thickness/CL. Distribution of pancreatic thickness was used to find the cut-off point of thickness which predicts POPF according to stapler groups. POPF developed in 130 (59.9%) patients (Grade A; n = 86 [66.1%], B; n = 44 [33.8%]). The numbers in each stapler group were 46, 101, and 70, respectively. Mean thickness was higher in POPF cases (15.2 mm vs 13.5 mm, P = 0.002). High body mass index (P = 0.003), thick pancreas (P = 0.011), and high CR (P = 0.024) were independent risk factors for POPF in multivariate analysis. Pancreatic thickness was grouped into <12 mm, 12 to 17 mm, and >17 mm. With pancreatic thickness <12 mm, the POPF rate was lowest with Group II (I: 50%, II: 27.6%, III: 69.2%, P = 0.035). The optimal stapler cartridges with pancreatic thickness <12 mm were those in Group II (Gold, CL: 1.8 mm). There was no suitable cartridge for thicker pancreases. Further studies are necessary to reduce POPF in thick pancreases. PMID:27583852

  20. Optimal crop selection and water allocation under limited water supply in irrigation

    NASA Astrophysics Data System (ADS)

    Stange, Peter; Grießbach, Ulrike; Schütze, Niels

    2015-04-01

    Due to climate change, extreme weather conditions such as droughts may have an increasing impact on irrigated agriculture. To cope with limited water resources in irrigation systems, a new decision support framework is developed which focuses on an integrated management of both irrigation water supply and demand at the same time. For modeling the regional water demand, local (and site-specific) water demand functions are used which are derived from the optimized agronomic response at farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPF). These functions take into account different soil types, crops and stochastically generated climate scenarios. The SCWPFs are used to compute the water demand considering different conditions, e.g., variable and fixed costs. This generic approach enables the consideration of both multiple crops at farm scale as well as of the aggregated response to water pricing at a regional scale for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance IRrigation) project a prototype of a decision support system is developed which helps to evaluate combined water supply and demand management policies.

  1. Optimized selective lactate excitation with a refocused multiple-quantum filter

    NASA Astrophysics Data System (ADS)

    Holbach, Mirjam; Lambert, Jörg; Johst, Sören; Ladd, Mark E.; Suter, Dieter

    2015-06-01

    Selective detection of lactate signals in in vivo MR spectroscopy with spectral editing techniques is necessary in situations where strong lipid or signals from other molecules overlap the desired lactate resonance in the spectrum. Several pulse sequences have been proposed for this task. The double-quantum filter SSel-MQC provides very good lipid and water signal suppression in a single scan. As a major drawback, it suffers from significant signal loss due to incomplete refocussing in situations where long evolution periods are required. Here we present a refocused version of the SSel-MQC technique that uses only one additional refocussing pulse and regains the full refocused lactate signal at the end of the sequence.

  2. Metrological study for the optimal selection of the photoelastic model in transmission photoelasticity.

    PubMed

    Fernández, Manuel Solaguren-Beascoa

    2011-10-10

    In transmission photoelasticity, stresses and strains are not directly measured on the real piece, but on a photoelastic model. To improve accuracy, the photoelastic material type, size of the model, its thickness, and the applied load must be chosen properly. In this paper, the influence of selectable parameters in a photoelastic transmission analysis has been studied through the evaluation of measurement uncertainties. The experimental data and further study of a generic functional relationship, representative of a stress-separation technique, show that, for a given photoelastic material, the model of minimum uncertainty of measurement is the one whose ratio load/dimension is the maximum allowed by the data-acquisition technique used. The thickness affects only the amount of material used. Therefore, any size of the model can achieve maximum accuracy, provided that it is subjected to the greatest possible load within its elastic range. PMID:22015367

  3. Optimizing selection of large animals for antibody production by screening immune response to standard vaccines.

    PubMed

    Thompson, Mary K; Fridy, Peter C; Keegan, Sarah; Chait, Brian T; Fenyö, David; Rout, Michael P

    2016-03-01

    Antibodies made in large animals are integral to many biomedical research endeavors. Domesticated herd animals like goats, sheep, donkeys, horses and camelids all offer distinct advantages in antibody production. However, their cost of use is often prohibitive, especially where poor antigen response is commonplace; choosing a non-responsive animal can set a research program back or even prevent experiments from moving forward entirely. Over the course of production of antibodies from llamas, we found that some animals consistently produced a higher humoral antibody response than others, even to highly divergent antigens, as well as to their standard vaccines. Based on our initial data, we propose that these "high level responders" could be pre-selected by checking antibody titers against common vaccines given to domestic farm animals. Thus, time and money can be saved by reducing the chances of getting poor responding animals and minimizing the use of superfluous animals. PMID:26775851

  4. Precursor Selection for Property Optimization in Biomorphic SiC Ceramics

    NASA Technical Reports Server (NTRS)

    Varela-Feria, F. M.; Lopez-Robledo, M. J.; Martinez-Fernandez, J.; deArellano-Lopez, A. R.; Singh, M.; Gray, Hugh R. (Technical Monitor)

    2002-01-01

    Biomorphic SiC ceramics have been fabricated using different wood precursors. The evolution of volume, density and microstructure of the woods, carbon preforms, and final SiC products is systematically studied in order to establish experimental guidelines that allow materials selection. The wood density is a critical characteristic, which results in a particular final SiC density, and the level of anisotropy in mechanical properties in directions parallel (axial) and perpendicular (radial) to the growth of the wood. The purpose of this work is to explore experimental laws that can help choose a type of wood as precursor for a final SiC product, with a given microstructure, density and level of anisotropy. Preliminary studies of physical properties suggest that not only mechanical properties are strongly anisotropic, but also electrical conductivity and gas permeability, which have great technological importance.

  5. Optimizing the selectivity of DIFO-based reagents for intracellular bioorthogonal applications.

    PubMed

    Kim, Eun J; Kang, Dong W; Leucke, Hans F; Bond, Michelle R; Ghosh, Salil; Love, Dona C; Ahn, Jong-Seog; Kang, Dae-Ook; Hanover, John A

    2013-08-01

    One of the most commonly employed bioorthogonal reactions with azides is copper-catalyzed azide-alkyne [3+2] cycloaddition (CuAAC, a 'click' reaction). More recently, the strain-promoted azide-alkyne [3+2] cycloaddition (SPAAC, a copper-free 'click' reaction) was developed, in which an alkyne is sufficiently strained to promote rapid cycloaddition with an azide to form a stable triazole conjugate. In this report, we show that an internal alkyne in a strained ring system with two electron-withdrawing fluorine atoms adjacent to the carbon-carbon triple bond reacts to yield covalent adducts not only with azide moieties but also reacts with free sulfhydryl groups abundant in the cytosol. We have identified conditions that allow the enhanced reactivity to be tolerated when using such conformationally strained reagents to enhance reaction rates and selectivity for bioorthogonal applications such as O-GlcNAc detection. PMID:23770695

  6. Discovery and Optimization of Potent, Selective, and in Vivo Efficacious 2-Aryl Benzimidazole BCATm Inhibitors.

    PubMed

    Deng, Hongfeng; Zhou, Jingye; Sundersingh, Flora; Messer, Jeffrey A; Somers, Donald O; Ajakane, Myriam; Arico-Muendel, Christopher C; Beljean, Arthur; Belyanskaya, Svetlana L; Bingham, Ryan; Blazensky, Emily; Boullay, Anne-Benedicte; Boursier, Eric; Chai, Jing; Carter, Paul; Chung, Chun-Wa; Daugan, Alain; Ding, Yun; Herry, Kenny; Hobbs, Clare; Humphries, Eric; Kollmann, Christopher; Nguyen, Van Loc; Nicodeme, Edwige; Smith, Sarah E; Dodic, Nerina; Ancellin, Nicolas

    2016-04-14

    To identify BCATm inhibitors suitable for in vivo study, Encoded Library Technology (ELT) was used to affinity screen a 117 million member benzimidazole based DNA encoded library, which identified an inhibitor series with both biochemical and cellular activities. Subsequent SAR studies led to the discovery of a highly potent and selective compound, 1-(3-(5-bromothiophene-2-carboxamido)cyclohexyl)-N-methyl-2-(pyridin-2-yl)-1H-benzo[d]imidazole-5-carboxamide (8b) with much improved PK properties. X-ray structure revealed that 8b binds to the active site of BCATm in a unique mode via multiple H-bond and van der Waals interactions. After oral administration, 8b raised mouse blood levels of all three branched chain amino acids as a consequence of BCATm inhibition. PMID:27096045

  7. Selection of energy optimized pump concepts for multi core and multi mode erbium doped fiber amplifiers.

    PubMed

    Krummrich, Peter M; Akhtari, Simon

    2014-12-01

    The selection of an appropriate pump concept has a major impact on amplifier cost and power consumption. The energy efficiency of different pump concepts is compared for multi core and multi mode active fibers. In preamplifier stages, pump power density requirements derived from full C-band low noise WDM operation result in superior energy efficiency of direct pumping of individual cores in a multi core fiber with single mode pump lasers compared to cladding pumping with uncooled multi mode lasers. Even better energy efficiency is achieved by direct pumping of the core in multi mode active fibers. Complexity of pump signal combiners for direct pumping of multi core fibers can be reduced by deploying integrated components. PMID:25606957

  8. Optimizing Training Population Data and Validation of Genomic Selection for Economic Traits in Soft Winter Wheat

    PubMed Central

    Hoffstetter, Amber; Cabrera, Antonio; Huang, Mao; Sneller, Clay

    2016-01-01

    Genomic selection (GS) is a breeding tool that estimates breeding values (GEBVs) of individuals based solely on marker data by using a model built using phenotypic and marker data from a training population (TP). The effectiveness of GS increases as the correlation of GEBVs and phenotypes (accuracy) increases. Using phenotypic and genotypic data from a TP of 470 soft winter wheat lines, we assessed the accuracy of GS for grain yield, Fusarium Head Blight (FHB) resistance, softness equivalence (SE), and flour yield (FY). Four TP data sampling schemes were tested: (1) use all TP data, (2) use subsets of TP lines with low genotype-by-environment interaction, (3) use subsets of markers significantly associated with quantitative trait loci (QTL), and (4) a combination of 2 and 3. We also correlated the phenotypes of relatives of the TP to their GEBVs calculated from TP data. The GS accuracy within the TP using all TP data ranged from 0.35 (FHB) to 0.62 (FY). On average, the accuracy of GS from using subsets of data increased by 54% relative to using all TP data. Using subsets of markers selected for significant association with the target trait had the greatest impact on GS accuracy. Between-environment prediction accuracy was also increased by using data subsets. The accuracy of GS when predicting the phenotypes of TP relatives ranged from 0.00 to 0.85. These results suggest that GS could be useful for these traits and GS accuracy can be greatly improved by using subsets of TP data. PMID:27440921
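
    A minimal Python sketch of the GS workflow described above, using ridge regression as an assumed stand-in for the marker-effect model and simulated marker/phenotype data: GEBVs are predicted for held-out lines and accuracy is the correlation between GEBVs and phenotypes.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(2)
        n_lines, n_markers = 470, 300
        markers = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 allele counts
        effects = rng.normal(scale=0.1, size=n_markers)
        phenotype = markers @ effects + rng.normal(scale=1.0, size=n_lines)

        train, valid = np.arange(0, 380), np.arange(380, n_lines)
        model = Ridge(alpha=10.0).fit(markers[train], phenotype[train])
        gebv = model.predict(markers[valid])
        accuracy = np.corrcoef(gebv, phenotype[valid])[0, 1]
        print(f"GS accuracy (r between GEBV and phenotype): {accuracy:.2f}")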

  9. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 2. Optimization of buffer and ionic strength using a full factorial experimental design.

    PubMed

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A full factorial design was used to assess the single and interactive effects of three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various ionic strengths (I) on allergen extractability from and immunoglobulin E (IgE) immunoreactivity of peanut, almond, hazelnut, and pistachio. The results indicated that the type and ionic strength of the buffer had different effects on protein recovery from the nuts under study. Substantial differences in protein profiles, abundance, and IgE-binding intensity with different combinations of pH and ionic strength were found. A significant interaction between pH and ionic strength was observed for pistachio and almond. The optimal buffer system conditions, which maximized the IgE-binding efficiency of allergens and provided satisfactory to superior protein recovery yield and profiles, were carbonate buffer at an ionic strength of I=0.075 for peanut, carbonate buffer at I=0.15 for almond, phosphate buffer at I=0.5 for hazelnut, and borate at I=0.15 for pistachio. The buffer type and its ionic strength could be manipulated to achieve the selective solubility of desired allergens. PMID:26471623
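
    The full factorial layout implied above (buffer type crossed with ionic strength) can be enumerated directly in Python; the levels below are illustrative, taken from the ionic strengths mentioned in the abstract, and the run order and replication scheme would be set by the experimenter.

        from itertools import product

        buffers = ["phosphate", "borate", "carbonate"]
        ionic_strengths = [0.075, 0.15, 0.5]

        design = list(product(buffers, ionic_strengths))  # 3 x 3 = 9 factor combinations per nut
        for buffer_type, ionic_strength in design:
            print(buffer_type, ionic_strength)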

  10. Optimal Site Characterization and Selection Criteria for Oyster Restoration using Multicolinear Factorial Water Quality Approach

    NASA Astrophysics Data System (ADS)

    Yoon, J.

    2015-12-01

    Elevated levels of nutrient loadings have enriched the Chesapeake Bay estuaries and coastal waters via point and nonpoint sources and the atmosphere. Restoring oyster beds is considered a Best Management Practice (BMP) to improve the water quality as well as provide physical aquatic habitat and a healthier estuarine system. Efforts include declaring sanctuaries for brood-stocks, supplementing hard substrate on the bottom and aiding natural populations with the addition of hatchery-reared and disease-resistant stocks. An economic assessment suggests that restoring the ecological functions will improve water quality, stabilize shorelines, and establish a habitat for breeding grounds that outweighs the value of harvestable oyster production. Parametric factorial models were developed to investigate multicollinearities among in situ water quality and oyster restoration activities to evaluate posterior success rates upon multiple substrates, and physical, chemical, hydrological and biological site characteristics to systematically identify significant factors. Findings were then further utilized to identify the optimal sites for successful oyster restoration augmentable with Total Maximum Daily Loads (TMDLs) and BMPs. Factorial models evaluate the relationship among the dependent variable, oyster biomass, and treatments of temperature, salinity, total suspended solids, E. coli/Enterococci counts, depth, dissolved oxygen, chlorophyll a, nitrogen and phosphorus, and blocks consisting of alternative substrates (oyster shells versus riprap, granite, cement, cinder blocks, limestone marl or combinations). Factorial model results were then compared to identify which combination of variables produces the highest posterior biomass of oysters. The developed factorial model can facilitate maximizing the likelihood of successful oyster reef restoration in an effort to establish a healthier ecosystem and to improve overall estuarine water quality in the Chesapeake Bay estuaries.

  11. Optimizing nest survival and female survival: Consequences of nest site selection for Canada Geese

    USGS Publications Warehouse

    Miller, David A.; Grand, J.B.; Fondell, T.F.; Anthony, R.M.

    2007-01-01

    We examined the relationship between attributes of nest sites used by Canada Geese (Branta canadensis) in the Copper River Delta, Alaska, and patterns in nest and female survival. We aimed to determine whether nest site attributes related to nest and female survival differed and whether nest site attributes related to nest survival changed within and among years. Nest site attributes that we examined included vegetation at and surrounding the nest, as well as associations with other nesting birds. Optimal nest site characteristics were different depending on whether nest survival or female survival was examined. Prior to 25 May, the odds of daily survival for nests in tall shrubs and on islands were 2.92 and 2.26 times greater, respectively, than for nests in short shrub sites. Bald Eagles (Haliaeetus leucocephalus) are the major predator during the early breeding season and their behavior was likely important in determining this pattern. After 25 May, when eagle predation is limited due to the availability of alternative prey, no differences in nest survival among the nest site types were found. In addition, nest survival was positively related to the density of other Canada Goose nests near the nest site. Although the number of detected mortalities for females was relatively low, a clear pattern was found, with mortality three times more likely at nest sites dominated by high shrub density within 50 m than at open sites dominated by low shrub density. The negative relationship of nest concealment and adult survival is consistent with that found in other studies of ground-nesting birds. Physical barriers that limited access to nest sites by predators and sites that allowed for early detection of predators were important characteristics of nest site quality for Canada Geese and nest site quality shifted within seasons, likely as a result of shifting predator-prey interactions.

  12. Improving the prediction of chemotherapeutic sensitivity of tumors in breast cancer via optimizing the selection of candidate genes.

    PubMed

    Jiang, Lina; Huang, Liqiu; Kuang, Qifan; Zhang, Juan; Li, Menglong; Wen, Zhining; He, Li

    2014-04-01

    Estrogen receptor status and the pathologic response to preoperative chemotherapy are two important indicators of chemotherapeutic sensitivity of tumors in breast cancer, which are used to guide the selection of specific regimens for patients. Microarray-based gene expression profiling, which is successfully applied to the discovery of tumor biomarkers and the prediction of drug response, was suggested to predict the cancer outcomes using the gene signatures differentially expressed between two clinical states. However, many false positive genes unrelated to the phenotypic differences will be involved in the lists of differentially expressed genes (DEGs) when only using the statistical methods for gene selection, e.g. Student's t test, and subsequently affect the performance of the predictive models. For the purpose of improving the prediction of clinical outcomes, we optimized the selection of DEGs by using a combined strategy, for which the DEGs were firstly identified by the statistical methods, and then filtered by a similarity profiling approach used for candidate gene prioritization. In our study, we firstly verified the molecular functions of the DEGs identified by the combined strategy with the gene expression data generated in the microarray experiments of Si-Wu-Tang, which is a popular formula in traditional Chinese medicine. The results showed that, for the Si-Wu-Tang experimental data set, the cancer-related signaling pathways were significantly enriched by gene set enrichment analysis when using the DEG lists generated by the combined strategy, confirming the potentially cancer-preventive effect of Si-Wu-Tang. To verify the performance of the predictive models in clinical application, we used the combined strategy to select the DEGs as features from the gene expression data of the clinical samples, which were collected from the breast cancer patients, and constructed models to predict the chemotherapeutic sensitivity of tumors in breast cancer. After
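
    A minimal Python sketch of the two-stage gene filtering described above, with a simple sign-agreement rule standing in for the similarity-profiling prioritization (the data, p-value threshold, and reference signature are all made up):

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(3)
        expr_a = rng.normal(size=(20, 1000))        # samples x genes, clinical state A
        expr_b = rng.normal(size=(18, 1000))        # samples x genes, clinical state B
        reference_profile = rng.normal(size=1000)   # hypothetical prioritization signature

        # Stage 1: statistical test (Student's t-test per gene).
        _, pvals = ttest_ind(expr_a, expr_b, axis=0)
        deg = np.where(pvals < 0.01)[0]

        # Stage 2: keep only DEGs whose fold-change direction agrees with the reference.
        logfc = expr_a.mean(axis=0) - expr_b.mean(axis=0)
        agreement = np.sign(logfc[deg]) * np.sign(reference_profile[deg])
        filtered = deg[agreement > 0]
        print(len(deg), "DEGs before filtering,", len(filtered), "after")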

  13. Turbine cooling configuration selection and design optimization for the high-reliability gas turbine. Final report

    SciTech Connect

    Smith, M J; Suo, M

    1981-04-01

    The potential of advanced turbine convectively air-cooled concepts for application to the Department of Energy/Electric Power Research Institute (EPRI) Advanced Liquid/Gas-Fueled Engine Program was investigated. Cooling of turbine airfoils is critical technology and significant advances in cooling technology will permit higher efficiency coal-based-fuel gas turbine energy systems. Two new airfoil construction techniques, bonded and wafer, were the principal designs considered. In the bonded construction, two airfoil sections having intricate internal cooling configurations are bonded together to form a complete blade or vane. In the wafer construction, a larger number (50 or more) of wafers having intricate cooling flow passages are bonded together to form a complete blade or vane. Of these two construction techniques, the bonded airfoil is considered to be lower in risk and closer to production readiness. Bonded airfoils are being used in aircraft engines. A variety of industrial materials were evaluated for the turbine airfoils. A columnar grain nickel alloy was selected on the basis of strength and corrosion resistance. Also, cost of electricity and reliability were considered in the final concept evaluation. The bonded airfoil design yielded a 3.5% reduction in cost-of-electricity relative to a baseline Reliable Engine design. A significant conclusion of this study was that the bonded airfoil convectively air-cooled design offers potential for growth to turbine inlet temperatures above 2600 °F with reasonable development risk.

  14. Loco-regional therapies for patients with hepatocellular carcinoma awaiting liver transplantation: Selecting an optimal therapy

    PubMed Central

    Byrne, Thomas J; Rakela, Jorge

    2016-01-01

    Hepatocellular carcinoma (HCC) is a common, increasingly prevalent malignancy. For all but the smallest lesions, surgical removal of cancer via resection or liver transplantation (LT) is considered the most feasible pathway to cure. Resection - even with favorable survival - is associated with a fairly high rate of recurrence, perhaps since most HCCs occur in the setting of cirrhosis. LT offers the advantage of removing not only the cancer but the diseased liver from which the cancer has arisen, and LT outperforms resection for survival in selected patients. Since time waiting for LT is time during which HCC can progress, loco-regional therapy (LRT) is widely employed by transplant centers. The purpose of LRT is either to bridge patients to LT by preventing progression and waitlist dropout, or to downstage patients who slightly exceed standard eligibility criteria initially but can fall within it after treatment. Transarterial chemoembolization and radiofrequency ablation have been the most widely utilized LRTs to date, with favorable efficacy and safety as a bridge to LT (and for the former, as a downstaging modality). The list of potentially effective LRTs has expanded in recent years, and includes transarterial chemoembolization with drug-eluting beads, radioembolization and novel forms of extracorporeal therapy. Herein we appraise the various LRT modalities for HCC, and their potential roles in specific clinical scenarios in patients awaiting LT. PMID:27358775

  15. Optimization of chemical structure of Schottky-type selection diode for crossbar resistive memory.

    PubMed

    Kim, Gun Hwan; Lee, Jong Ho; Jeon, Woojin; Song, Seul Ji; Seok, Jun Yeong; Yoon, Jung Ho; Yoon, Kyung Jean; Park, Tae Joo; Hwang, Cheol Seong

    2012-10-24

    The electrical performance of the Pt/TiO(2)/Ti/Pt stacked Schottky-type diode (SD) was systematically examined and found to depend on the chemical structures of each layer and their interfaces. The Ti layers containing a tolerable amount of oxygen showed metallic electrical conduction characteristics, which was confirmed by sheet resistance measurement with increasing temperature, transmission line measurement (TLM), and Auger electron spectroscopy (AES) analysis. However, the chemical structure of the SD stack and the resulting electrical properties were crucially affected by the dissolved oxygen concentration in the Ti layers. The lower oxidation potential of the Ti layer with initially higher oxygen concentration suppressed the oxygen deficiency of the overlying TiO(2) layer induced by consumption of oxygen from the TiO(2) layer. This structure results in a lower reverse current of the SDs without significant degradation of the forward-state current. Conductive atomic force microscopy (CAFM) analysis showed current conduction through the local conduction paths in the presented SDs, which guarantees a sufficient forward-current density as a selection device for highly integrated crossbar array resistive memory. PMID:22999222

  16. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    USGS Publications Warehouse

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
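
    The discrimination logic above can be expressed as a small rule-based function; the Python below is a hypothetical illustration only (the thresholds and class names are not those of the Global ShakeMap implementation).

        def select_gmpe_class(tectonic_regime, depth_km, near_interface=False):
            # Map a coarse tectonic regime plus depth/interface criteria to a GMPE class.
            if tectonic_regime == "stable_continental":
                return "stable_continental"
            if tectonic_regime == "active_shallow":
                return "active_crustal"
            if tectonic_regime == "subduction":
                if depth_km >= 60:
                    return "subduction_intraslab"
                return "subduction_interface" if near_interface else "subduction_upper_plate"
            return "generic"

        print(select_gmpe_class("subduction", depth_km=35, near_interface=True))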

  17. Loco-regional therapies for patients with hepatocellular carcinoma awaiting liver transplantation: Selecting an optimal therapy.

    PubMed

    Byrne, Thomas J; Rakela, Jorge

    2016-06-24

    Hepatocellular carcinoma (HCC) is a common, increasingly prevalent malignancy. For all but the smallest lesions, surgical removal of cancer via resection or liver transplantation (LT) is considered the most feasible pathway to cure. Resection - even with favorable survival - is associated with a fairly high rate of recurrence, perhaps since most HCCs occur in the setting of cirrhosis. LT offers the advantage of removing not only the cancer but the diseased liver from which the cancer has arisen, and LT outperforms resection for survival in selected patients. Since time waiting for LT is time during which HCC can progress, loco-regional therapy (LRT) is widely employed by transplant centers. The purpose of LRT is either to bridge patients to LT by preventing progression and waitlist dropout, or to downstage patients who slightly exceed standard eligibility criteria initially but can fall within it after treatment. Transarterial chemoembolization and radiofrequency ablation have been the most widely utilized LRTs to date, with favorable efficacy and safety as a bridge to LT (and for the former, as a downstaging modality). The list of potentially effective LRTs has expanded in recent years, and includes transarterial chemoembolization with drug-eluting beads, radioembolization and novel forms of extracorporeal therapy. Herein we appraise the various LRT modalities for HCC, and their potential roles in specific clinical scenarios in patients awaiting LT. PMID:27358775

  18. Selection of optimal variants of Gō-like models of proteins through studies of stretching.

    PubMed

    Sułkowska, Joanna I; Cieplak, Marek

    2008-10-01

    The Gō-like models of proteins are constructed based on the knowledge of the native conformation. However, there are many possible choices of a Hamiltonian for which the ground state coincides with the native state. Here, we propose to use experimental data on protein stretching to determine what choices are most adequate physically. This criterion is motivated by the fact that stretching processes usually start with the native structure, in the vicinity of which the Gō-like models should work the best. Our selection procedure is applied to 62 different versions of the Gō model and is based on 28 proteins. We consider different potentials, contact maps, local stiffness energies, and energy scales--uniform and nonuniform. In the latter case, the strength of the nonuniformity was governed either by specificity or by properties related to positioning of the side groups. Among them is the simplest variant: uniform couplings with no i, i + 2 contacts. This choice also leads to good folding properties in most cases. We elucidate relationship between the local stiffness described by a potential which involves local chirality and the one which involves dihedral and bond angles. The latter stiffness improves folding but there is little difference between them when it comes to stretching. PMID:18567634

  19. Selection of Optimal Variants of Gō-Like Models of Proteins through Studies of Stretching

    PubMed Central

    Sułkowska, Joanna I.; Cieplak, Marek

    2008-01-01

    The Gō-like models of proteins are constructed based on the knowledge of the native conformation. However, there are many possible choices of a Hamiltonian for which the ground state coincides with the native state. Here, we propose to use experimental data on protein stretching to determine what choices are most adequate physically. This criterion is motivated by the fact that stretching processes usually start with the native structure, in the vicinity of which the Gō-like models should work the best. Our selection procedure is applied to 62 different versions of the Gō model and is based on 28 proteins. We consider different potentials, contact maps, local stiffness energies, and energy scales—uniform and nonuniform. In the latter case, the strength of the nonuniformity was governed either by specificity or by properties related to positioning of the side groups. Among them is the simplest variant: uniform couplings with no i, i + 2 contacts. This choice also leads to good folding properties in most cases. We elucidate relationship between the local stiffness described by a potential which involves local chirality and the one which involves dihedral and bond angles. The latter stiffness improves folding but there is little difference between them when it comes to stretching. PMID:18567634

  20. Selection of optimal variants of Go-like models of proteins through studies of stretching

    NASA Astrophysics Data System (ADS)

    Sulkowska, Joanna; Cieplak, Marek

    2009-03-01

    The Go-like models of proteins are constructed based on the knowledge of the native conformation. However, there are many possible choices of a Hamiltonian for which the ground state coincides with the native state. Here, we propose to use experimental data on protein stretching to determine what choices are most adequate physically. This criterion is motivated by the fact that stretching processes usually start with the native structure, in the vicinity of which the Go-like models should work the best. Our selection procedure is applied to 62 different versions of the Go model and is based on 28 proteins. We consider different potentials, contact maps, local stiffness energies, and energy scales -- uniform and non-uniform. In the latter case, the strength of the nonuniformity was governed either by specificity or by properties related to positioning of the side groups. Among them there is the simplest variant: uniform couplings and no i,i+2 contacts. This choice also leads to good folding properties in most cases. We elucidate relationship between the local stiffness described by a potential which involves local chirality and the one which involves dihedral and bond angles. The latter stiffness improves folding but there is little difference between them when it comes to stretching.

  1. Information access in a dual-task context: testing a model of optimal strategy selection

    NASA Technical Reports Server (NTRS)

    Wickens, C. D.; Seidler, K. S.

    1997-01-01

    Pilots were required to access information from a hierarchical aviation database by navigating under single-task conditions (Experiment 1) and when this task was time-shared with an altitude-monitoring task of varying bandwidth and priority (Experiment 2). In dual-task conditions, pilots had 2 viewports available, 1 always used for the information task and the other to be allocated to either task. Dual-task strategy, inferred from the decision of which task to allocate to the 2nd viewport, revealed that allocation was generally biased in favor of the monitoring task and was only partly sensitive to the difficulty of the 2 tasks and their relative priorities. Some dominant sources of navigational difficulties failed to adaptively influence selection strategy. The implications of the results are to provide tools for jumping to the top of the database, to provide 2 viewports into the common database, and to provide training as to the optimum viewport management strategy in a multitask environment.

  2. Enhanced Magnetoresistance in Molecular Junctions by Geometrical Optimization of Spin-Selective Orbital Hybridization.

    PubMed

    Rakhmilevitch, David; Sarkar, Soumyajit; Bitton, Ora; Kronik, Leeor; Tal, Oren

    2016-03-01

    Molecular junctions based on ferromagnetic electrodes allow the study of electronic spin transport near the limit of spintronics miniaturization. However, these junctions reveal moderate magnetoresistance that is sensitive to the orbital structure at their ferromagnet-molecule interfaces. The key structural parameters that should be controlled in order to gain high magnetoresistance have not been established, despite their importance for efficient manipulation of spin transport at the nanoscale. Here, we show that single-molecule junctions based on nickel electrodes and benzene molecules can yield a significant anisotropic magnetoresistance of up to ∼200% near the conductance quantum G0. The measured magnetoresistance is mechanically tuned by changing the distance between the electrodes, revealing a nonmonotonic response to junction elongation. These findings are ascribed with the aid of first-principles calculations to variations in the metal-molecule orientation that can be adjusted to obtain highly spin-selective orbital hybridization. Our results demonstrate the important role of geometrical considerations in determining the spin transport properties of metal-molecule interfaces. PMID:26926769

  3. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique by adjusting and resetting the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm is utilized to achieve the parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures by fault detection. PMID:24012389

  4. Selection of optimal chelator improves the contrast of GRPR imaging using bombesin analogue RM26.

    PubMed

    Mitran, Bogdan; Varasteh, Zohreh; Selvaraju, Ram Kumar; Lindeberg, Gunnar; Sörensen, Jens; Larhed, Mats; Tolmachev, Vladimir; Rosenström, Ulrika; Orlova, Anna

    2016-05-01

    Bombesin (BN) analogs bind with high affinity to gastrin-releasing peptide receptors (GRPRs) that are up-regulated in prostate cancer and can be used for the visualization of prostate cancer. The aim of this study was to investigate the influence of radionuclide-chelator complexes on the biodistribution pattern of the 111In-labeled bombesin antagonist PEG2-D-Phe-Gln-Trp-Ala-Val-Gly-His-Sta-Leu-NH2 (PEG2-RM26) and to identify an optimal construct for SPECT imaging. A series of RM26 analogs N-terminally conjugated with NOTA, NODAGA, DOTA and DOTAGA via a PEG2 spacer were radiolabeled with 111In and evaluated both in vitro and in vivo. The conjugates were successfully labeled with 111In with 100% purity and retained binding specificity to GRPR and high stability. The cellular processing of all compounds was characterized by slow internalization. The IC50 values were in the low nanomolar range, with lower IC50 values for positively charged natIn-NOTA-PEG2-RM26 (2.6 ± 0.1 nM) and higher values for negatively charged natIn-DOTAGA-PEG2-RM26 (4.8 ± 0.5 nM). The kinetic binding studies showed KD values in the picomolar range that followed the same pattern as the IC50 data. The biodistribution of all compounds was studied in BALB/c nu/nu mice bearing PC-3 prostate cancer xenografts. Tumor targeting and biodistribution studies displayed rapid clearance of radioactivity from the blood and normal organs via kidney excretion. All conjugates showed similar uptake in tumors at 4 h p.i. The radioactivity accumulation in GRPR-expressing organs was significantly lower for DOTA- and DOTAGA-containing constructs compared to those containing NOTA and NODAGA. 111In-NOTA-PEG2-RM26 with a positively charged complex showed the highest initial uptake and the slowest clearance of radioactivity from the liver. At 4 h p.i., DOTA- and DOTAGA-coupled analogs showed significantly higher tumor-to-organ ratios compared to NOTA- and NODAGA-containing variants. The NODAGA conjugate demonstrated the

  5. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and

  6. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    PubMed

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
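
    A minimal scikit-learn sketch of variable-importance-based feature selection for a random forest, in the spirit of the VI/AVI methods compared above; the simulated data, number of trees, and top-5 cut-off are assumptions, and the real study predicts four hardness classes rather than two.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 15))   # e.g., backscatter-derived predictors
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        keep = np.argsort(rf.feature_importances_)[::-1][:5]   # retain the top-5 predictors

        full_acc = cross_val_score(rf, X, y, cv=5).mean()
        sub_acc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                                  X[:, keep], y, cv=5).mean()
        print(f"all predictors: {full_acc:.2f}, top-5 predictors: {sub_acc:.2f}")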

  7. In vitro selection of RNase P RNA reveals optimized catalytic activity in a highly conserved structural domain.

    PubMed

    Frank, D N; Ellington, A E; Pace, N R

    1996-12-01

    In vitro selection techniques are useful means of dissecting the functions of both natural and artificial ribozymes. Using a self-cleaving conjugate containing the Escherichia coli ribonuclease P RNA and its substrate, pre-tRNA (Frank DN, Harris ME, Pace NR, 1994, Biochemistry 33:10800-10808), we have devised a method to select for catalytically active variants of the RNase P ribozyme. A selection experiment was performed to probe the structural and sequence constraints that operate on a highly conserved region of RNase P: the J3/4-P4-J2/4 region, which lies within the core of RNase P and is thought to bind catalytically essential magnesium ions (Harris ME et al., 1994, EMBO J 13:3953-3963; Hardt WD et al., 1995, EMBO J 14:2935-2944; Harris ME, Pace NR, 1995, RNA 1:210-218). We sought to determine which, if any, of the nearly invariant nucleotides within J3/4-P4-J2/4 are required for ribozyme-mediated catalysis. Twenty-two residues in the J3/4-P4-J2/4 component of RNase P RNA were randomized and, surprisingly, after only 10 generations, each of the randomized positions returned to the wild-type sequence. This indicates that every position in J3/4-P4-J2/4 contributes to optimal catalytic activity. These results contrast sharply with selections involving other large ribozymes, which evolve improved catalytic function readily in vitro (Chapman KB, Szostak JW, 1994, Curr Opin Struct Biol 4:618-622; Joyce GF, 1994, Curr Opin Struct Biol 4:331-336; Kumar PKR, Ellington AE, 1995, FASEB J 9:1183-1195). The phylogenetic conservation of J3/4-P4-J2/4, coupled with the results reported here, suggests that the contribution of this structure to RNA-mediated catalysis was optimized very early in evolution, before the last common ancestor of all life. PMID:8972768

  8. Do Bayesian Model Weights Tell the Whole Story? New Analysis and Optimal Design Tools for Maximum-Confidence Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, A.; Nowak, W.; Wöhling, T.

    2013-12-01

    Bayesian model averaging (BMA) combines the predictive capabilities of alternative conceptual models into a robust best estimate and allows the quantification of conceptual uncertainty. The individual models are weighted with their posterior probability according to Bayes' theorem. Despite this rigorous procedure, we see four obstacles to robust model ranking: (1) The weights inherit uncertainty related to measurement noise in the calibration data set, which may compromise the reliability of model ranking. (2) Posterior weights rank the models only relative to each other, but do not contain information about the absolute model performance. (3) There is a lack of objective methods to assess whether the suggested models are practically distinguishable or very similar to each other, i.e., whether the individual models explore different regions of the model space. (4) No theory for optimal design (OD) of experiments exists that explicitly aims at maximum-confidence model discrimination. The goal of our study is to overcome these four shortcomings. We determine the robustness of weights against measurement noise (1) by repeatedly perturbing the observed data with random measurement errors and analyzing the variability in the obtained weights. Realizing that model weights have a probability distribution of their own, we introduce an additional term, which we call 'weighting uncertainty', into the overall prediction uncertainty analysis scheme. We further assess an 'absolute distance' in performance of the model set from the truth (2) as seen through the eyes of the data by interpreting statistics of Bayesian model evidence. This analysis is of great value for modellers to decide whether the modelling task can be satisfactorily carried out with the model(s) at hand, or if more effort should be invested in extending the set with better performing models. As a further prerequisite for robust model selection, we scrutinize the ability of BMA to distinguish between the models in
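
    The 'weighting uncertainty' analysis described above can be illustrated numerically: posterior model weights are recomputed after repeatedly re-perturbing the calibration data with random measurement errors, and the spread of the resulting weights is reported. The two polynomial models and the BIC-like evidence approximation below are illustrative assumptions, not the authors' models or evidence computation.

    ```python
    # Sketch: Bayesian model weights and their sensitivity to measurement noise.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 20)
    truth = 1.0 + 2.0 * x + 0.5 * x**2
    sigma = 0.1                                   # assumed measurement error
    data = truth + rng.normal(0.0, sigma, x.size)

    # Two competing conceptual models (illustrative): linear vs. quadratic fit.
    def log_evidence(y, degree):
        # Crude evidence proxy: Gaussian log-likelihood of the least-squares
        # fit penalised for the number of parameters (BIC-like assumption).
        coef = np.polyfit(x, y, degree)
        resid = y - np.polyval(coef, x)
        loglik = -0.5 * np.sum((resid / sigma) ** 2)
        return loglik - 0.5 * (degree + 1) * np.log(y.size)

    def weights(y):
        le = np.array([log_evidence(y, 1), log_evidence(y, 2)])
        w = np.exp(le - le.max())
        return w / w.sum()

    print("weights from observed data:", weights(data))

    # 'Weighting uncertainty': re-perturb the data and look at the spread.
    w_samples = np.array([weights(data + rng.normal(0.0, sigma, x.size))
                          for _ in range(500)])
    print("mean weights:", w_samples.mean(axis=0), "std:", w_samples.std(axis=0))
    ```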

  9. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring-mass systems are shown to illustrate key points.
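
    As a concrete, conventional counterpart to the closed-form approach described above, the sketch below solves a standard LQR problem for a single spring-mass-damper by way of the algebraic Riccati equation; the plant numbers and weighting matrices are arbitrary illustrations, not the paper's closed-form solution or its weighting-matrix selection rules.

    ```python
    # Sketch: standard LQR gain for a spring-mass-damper via the algebraic
    # Riccati equation (illustrative numbers; not the paper's closed form).
    import numpy as np
    from scipy.linalg import solve_continuous_are

    m, k, c = 1.0, 4.0, 0.2                  # mass, stiffness, damping
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    Q = np.diag([10.0, 1.0])                 # state weighting (a design choice)
    R = np.array([[0.1]])                    # control weighting

    P = solve_continuous_are(A, B, Q, R)     # solves A'P + PA - P B R^{-1} B'P + Q = 0
    K = np.linalg.solve(R, B.T @ P)          # optimal state feedback u = -K x
    print("LQR gain K:", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```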

  10. A New Combinatorial Optimization Approach for Integrated Feature Selection Using Different Datasets: A Prostate Cancer Transcriptomic Study

    PubMed Central

    Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics. Methods We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone. Results Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884

  11. Structural and mechanical evaluations of a topology optimized titanium interbody fusion cage fabricated by selective laser melting process.

    PubMed

    Lin, Chia-Ying; Wirtz, Tobias; LaMarca, Frank; Hollister, Scott J

    2007-11-01

    A topology optimized lumbar interbody fusion cage was made of Ti-6Al-4V alloy by the rapid prototyping process of selective laser melting (SLM) to reproduce designed microstructure features. Radiographic characterization using micro-computed tomography (micro-CT) scanning and mechanical testing were used to determine how well the structural characteristics of the fabricated cage reproduced the design characteristics. The mechanical modulus of the designed cage was also measured for comparison with tantalum, a widely used porous metal. The designed microstructures can be clearly seen in the micrographs of the micro-CT and scanning electron microscopy examinations, showing the SLM process can reproduce intricate microscopic features from the original designs. No imaging artifacts from micro-CT were found. The average compressive modulus of the tested cages was 2.97 ± 0.90 GPa, which is comparable with the reported porous tantalum modulus of 3 GPa and falls between that of cortical bone (15 GPa) and trabecular bone (0.1-0.5 GPa). The new porous Ti-6Al-4V optimal-structure cage fabricated by the SLM process gave consistent mechanical properties without artifactual distortion in the imaging modalities, and thus it is a promising alternative as a porous implant for spine fusion. PMID:17415762

  12. OptiTope—a web server for the selection of an optimal set of peptides for epitope-based vaccines

    PubMed Central

    Toussaint, Nora C.; Kohlbacher, Oliver

    2009-01-01

    Epitope-based vaccines (EVs) have recently been attracting significant interest. They trigger an immune response by confronting the immune system with immunogenic peptides derived from, e.g. viral- or cancer-related proteins. Binding of these peptides to proteins from the major histocompatibility complex (MHC) is crucial for immune system activation. However, since the MHC is highly polymorphic, different patients typically bind different repertoires of peptides. Furthermore, economical and regulatory issues impose strong limitations on the number of peptides that can be included in an EV. Hence, it is crucial to identify the optimal set of peptides for a vaccine, given constraints such as MHC allele probabilities in the target population, peptide mutation rates and maximum number of selected peptides. OptiTope aims at assisting immunologists in this critical task. With OptiTope, we provide an easy-to-use tool to determine a provably optimal set of epitopes with respect to overall immunogenicity in a specific individual (personalized medicine) or a target population (e.g. a certain ethnic group). OptiTope is available at http://www.epitoolkit.org/optitope. PMID:19420066

  13. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    PubMed

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  14. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. This model is transformed into a deterministic multiple-objective nonlinear programming model by introducing the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, which is the basic formulation for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference directions and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.

  15. Particle Swarm Optimization Based Feature Enhancement and Feature Selection for Improved Emotion Recognition in Speech and Glottal Signals

    PubMed Central

    Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali

    2015-01-01

    In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to gauge the proposed method. Extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted and the results show that the proposed method significantly improves the speech emotion recognition performance compared to previous works published in the literature. PMID:25799141
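
    The wrapper idea described above (a binary particle swarm proposes feature subsets and a classifier's cross-validated accuracy serves as the fitness) can be sketched as follows. The k-nearest-neighbour classifier and synthetic features are stand-ins, assumed for illustration in place of the ELM and the speech/glottal features used in the paper.

    ```python
    # Sketch of a binary-PSO wrapper for feature selection; k-NN and synthetic
    # data are assumed stand-ins for the ELM and speech/glottal features.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                               random_state=0)
    n_particles, n_feat, n_iter = 15, X.shape[1], 30

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)   # binary positions
    vel = rng.normal(0, 1, (n_particles, n_feat))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = np.clip(0.7 * vel + 1.5 * r1 * (pbest - pos)
                      + 1.5 * r2 * (gbest - pos), -6, 6)
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()

    print("selected features:", np.flatnonzero(gbest), "cv acc: %.3f" % pbest_fit.max())
    ```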

  16. Selecting training and test images for optimized anomaly detection algorithms in hyperspectral imagery through robust parameter design

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2011-06-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery (HSI). Robust parameter design (RPD) techniques have been applied to some of these algorithms in an attempt to choose robust settings capable of operating consistently across a large variety of image scenes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher score, ratio of target pixels and number of clusters. This paper describes a method for selecting hyperspectral image training and test subsets yielding consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. Several different mathematical models representing the value of a training and test set based on such measures as the D-optimal score and various distance norms are tested in a simulation experiment.

  17. Particle swarm optimization based feature enhancement and feature selection for improved emotion recognition in speech and glottal signals.

    PubMed

    Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali

    2015-01-01

    In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to gauge the proposed method. Extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted and the results show that the proposed method significantly improves the speech emotion recognition performance compared to previous works published in the literature. PMID:25799141

  18. Conservative Extensions of Linkage Disequilibrium Measures from Pairwise to Multi-loci and Algorithms for Optimal Tagging SNP Selection

    NASA Astrophysics Data System (ADS)

    Tarpine, Ryan; Lam, Fumei; Istrail, Sorin

    We present results on two classes of problems. The first result addresses the long-standing open problem of finding unifying principles for Linkage Disequilibrium (LD) measures in population genetics (Lewontin 1964 [10], Hedrick 1987 [8], Devlin and Risch 1995 [5]). Two desirable properties have been proposed in the extensive literature on this topic and the mutual consistency between these properties has remained at the heart of statistical and algorithmic difficulties with haplotype and genome-wide association study analysis. The first axiom is (1) The ability to extend LD measures to multiple loci as a conservative extension of pairwise LD. All widely used LD measures are pairwise measures. Despite significant attempts, it is not clear how to naturally extend these measures to multiple loci, leading to a "curse of the pairwise". The second axiom is (2) The Interpretability of Intermediate Values. In this paper, we resolve this mutual consistency problem by introducing a new LD measure, directed informativeness $\overrightarrow{I}$ (the directed graph theoretic counterpart of the informativeness measure introduced by Halldorsson et al. [6]), and show that it satisfies both of the above axioms. We also show the maximum informative subset of tagging SNPs based on $\overrightarrow{I}$ can be computed exactly in polynomial time for realistic genome-wide data. Furthermore, we present polynomial time algorithms for optimal genome-wide tagging SNPs selection for a number of commonly used LD measures, under the bounded neighborhood assumption for linked pairs of SNPs. One problem in the area is the search for a quality measure for tagging SNPs selection that unifies the LD-based methods such as LD-select (implemented in Tagger, de Bakker et al. 2005 [4], Carlson et al. 2004 [3]) and the information-theoretic ones such as informativeness. We show that the objective function of the LD-select algorithm is the Minimal Dominating Set (MDS) on $r^2$-SNP graphs and show that we can
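
    The connection drawn above between LD-select-style tagging and a minimal dominating set on an $r^2$-threshold SNP graph can be illustrated with a greedy heuristic; the synthetic genotypes and the greedy rule below are assumptions for illustration, not the authors' exact polynomial-time algorithms.

    ```python
    # Sketch: greedy tagging-SNP selection on an r^2-threshold SNP graph,
    # using synthetic genotypes (assumption; not the paper's exact algorithm).
    import numpy as np

    rng = np.random.default_rng(2)
    n_ind, n_snp, r2_min = 200, 30, 0.8

    # Synthetic 0/1/2 genotypes with local correlation to mimic LD blocks.
    base = rng.binomial(2, 0.4, size=(n_ind, n_snp // 5))
    geno = np.repeat(base, 5, axis=1) + rng.binomial(1, 0.1, size=(n_ind, n_snp))

    r2 = np.corrcoef(geno.T) ** 2                  # pairwise r^2 matrix
    covered = np.zeros(n_snp, dtype=bool)
    tags = []

    while not covered.all():
        # Greedy: pick the SNP that tags the most still-uncovered SNPs
        # (every SNP tags itself, so the loop terminates).
        gain = ((r2 >= r2_min) & ~covered).sum(axis=1)
        best = int(np.argmax(gain))
        tags.append(best)
        covered |= r2[best] >= r2_min

    print("tag SNPs:", tags, "covering all", n_snp, "loci")
    ```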

  19. Selection of HyspIRI optimal band positions for the earth compositional mapping using HyTES data

    NASA Astrophysics Data System (ADS)

    Ullah, Saleem; Khalid, Noora; Iqbal, Arshad

    2016-07-01

    In the near future, NASA/JPL will orbit a new space-borne sensor called HyspIRI (Hyperspectral and Infrared Imager), which will cover the spectral range from 0.4 to 14 μm. Two instruments will be mounted on the HyspIRI platform: a hyperspectral instrument that senses the earth surface between 0.4 and 2.5 μm at 10 nm intervals, and a multispectral TIR sensor that acquires images between 3 and 14 μm in 8 spectral bands (1 in MIR and 7 in TIR). The TIR spectral wavebands will be positioned based on their importance in various applications. This study aimed to find the optimal HyspIRI TIR waveband positions for earth compositional mapping. Genetic algorithms coupled with the Spectral Angle Mapper (GA-SAM) were used as a spectral band selector. High dimensional HyTES (Hyperspectral Thermal Emission Spectrometer) data comprising 256 spectral bands of the Cuprite and Death Valley regions were used to select meaningful subsets of bands for earth compositional mapping. The GA-SAM was trained for eight mineral classes and the algorithm was run iteratively 40 times. High calibration (> 98 %) and validation (> 96 %) accuracies were achieved with a limited number (seven) of spectral bands selected by GA-SAM. Knowing the important band positions will help scientists of the HyspIRI group place spectral bands in regions where the accuracy of earth compositional mapping can be enhanced.
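
    A compact illustration of coupling a genetic search with a spectral angle mapper (SAM) criterion for band selection follows; the random spectral library, labels and GA settings are assumptions, not the HyTES data or the authors' GA-SAM implementation.

    ```python
    # Sketch: GA-driven band selection scored by a spectral-angle-mapper (SAM)
    # classifier; random spectra stand in for HyTES data (assumption).
    import numpy as np

    rng = np.random.default_rng(3)
    n_bands, n_classes, n_pix = 64, 8, 400

    endmembers = rng.random((n_classes, n_bands))          # class reference spectra
    labels = rng.integers(0, n_classes, n_pix)
    pixels = endmembers[labels] + rng.normal(0, 0.05, (n_pix, n_bands))

    def sam_accuracy(mask):
        if mask.sum() < 2:
            return 0.0
        P, E = pixels[:, mask], endmembers[:, mask]
        cos = (P @ E.T) / (np.linalg.norm(P, axis=1, keepdims=True)
                           * np.linalg.norm(E, axis=1))
        pred = np.argmin(np.arccos(np.clip(cos, -1.0, 1.0)), axis=1)
        return float((pred == labels).mean())

    def fitness(mask):
        return sam_accuracy(mask) - 0.002 * mask.sum()     # prefer fewer bands

    pop = rng.random((30, n_bands)) < 0.15                 # initial band subsets
    for _ in range(60):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:10]]       # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(0, 10, 2)]
            cut = rng.integers(1, n_bands)
            child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
            flip = rng.random(n_bands) < 0.02              # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected bands:", np.flatnonzero(best),
          "SAM accuracy: %.3f" % sam_accuracy(best))
    ```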

  20. Size-based protocol optimization using automatic tube current modulation and automatic kV selection in computed tomography.

    PubMed

    MacDougall, Robert D; Kleinman, Patricia L; Callahan, Michael J

    2016-01-01

    Size-based diagnostic reference ranges (DRRs) for contrast-enhanced pediatric abdominal computed tomography (CT) have been published in order to establish practical upper and lower limits of CTDI, DLP, and SSDE. Based on these DRRs, guidelines for establishing size-based SSDE target levels from the SSDE of a standard adult by applying a linear correction factor have been published and provide a great reference for dose optimization initiatives. The necessary step of designing manufacturer-specific CT protocols to achieve established SSDE targets is the responsibility of the Qualified Medical Physicist. The task is straightforward if fixed-mA protocols are used; however, it is more difficult when automatic exposure control (AEC) and automatic kV selection are considered. In such cases, the physicist must deduce the operation of AEC algorithms from technical documentation or through testing, using a wide range of phantom sizes. Our study presents the results of such testing using anthropomorphic phantoms ranging in size from the newborn to the obese adult. The effect of each user-controlled parameter was modeled for a single-manufacturer AEC algorithm (Siemens CARE Dose4D) and automatic kV selection algorithm (Siemens CARE kV). Based on the results presented in this study, a process for designing mA-modulated, pediatric abdominal CT protocols that achieve user-defined SSDE and kV targets is described. PMID:26894344

  1. Toward chelerythrine optimization: Analogues designed by molecular simplification exhibit selective growth inhibition in non-small-cell lung cancer cells.

    PubMed

    Yang, Rosania; Tavares, Maurício T; Teixeira, Sarah F; Azevedo, Ricardo A; C Pietro, Diego; Fernandes, Thais B; Ferreira, Adilson K; Trossini, Gustavo H G; Barbuto, José A M; Parise-Filho, Roberto

    2016-10-01

    A series of novel chelerythrine analogues was designed and synthesized. Antitumor activity was evaluated against A549, NCI-H1299, NCI-H292, and NCI-H460 non-small-cell lung cancer (NSCLC) cell lines in vitro. The selectivity of the most active analogues and chelerythrine was also evaluated, and we compared their cytotoxicity in NSCLC cells and non-tumorigenic cell lines, including human umbilical vein endothelial cells (HUVECs) and LL24 human lung fibroblasts. In silico studies were performed to establish structure-activity relationships between chelerythrine and the analogues. The results showed that analogue compound 3f induced significant dose-dependent G0/G1 cell cycle arrest in A549 and NCI-H1299 cells. Theoretical studies indicated that the molecular arrangement and electron characteristics of compound 3f were closely related to the profile of chelerythrine, supporting its activity. The present study presents a new and simplified chelerythrinoid scaffold with enhanced selectivity against NSCLC tumor cells for further optimization. PMID:27561984

  2. The Cord Blood Apgar: a novel scoring system to optimize selection of banked cord blood grafts for transplantation

    PubMed Central

    Page, Kristin M.; Zhang, Lijun; Mendizabal, Adam; Wease, Stephen; Carter, Shelly; Shoulars, Kevin; Gentry, Tracy; Balber, Andrew E.; Kurtzberg, Joanne

    2012-01-01

    BACKGROUND Engraftment failure and delays, likely due to diminished cord blood unit (CBU) potency, remain major barriers to the overall success of unrelated umbilical cord blood transplantation (UCBT). To address this problem, we developed and retrospectively validated a novel scoring system, the Cord Blood Apgar (CBA), which is predictive of engraftment after UCBT. STUDY DESIGN AND METHODS In a single-center retrospective study, utilizing a database of 435 consecutive single-cord myeloablative UCBTs performed from January 1, 2000, to December 31, 2008, precryopreservation and postthaw graft variables (total nucleated cell, CD34+, colony-forming units, mononuclear cell content, and volume) were initially correlated with neutrophil engraftment. Subsequently, based on the magnitude of hazard ratios (HRs) in univariate analysis, a weighted scoring system to predict CBU potency was developed using a randomly selected training data set and internally validated on the remaining data set. RESULTS The CBA assigns transplanted CBUs three scores: a precryopreservation score (PCS), a postthaw score (PTS), and a composite score (CS), which incorporates the PCS and PTS values. CBA-PCS scores, which could be used for initial unit selection, were predictive of neutrophil engraftment (CBA-PCS ≥ 7.75 vs. <7.75, HR 3.5; p < 0.0001). Likewise, CBA-PTS and CS scores were strongly predictive of Day 42 neutrophil engraftment (CBA-PTS ≥ 9.5 vs. <9.5, HR 3.16, p < 0.0001; CBA-CS ≥ 17.75 vs. <17.75, HR 4.01, p < 0.0001). CONCLUSION The CBA is strongly predictive of engraftment after UCBT and shows promise for optimizing screening of CBU donors for transplantation. In the future, a segment could be assayed for the PTS score, providing the data needed to apply the CS for final CBU selection. PMID:21810098

  3. SU-E-I-60: The Correct Selection of Pitch and Rotation Time for Optimal CT Scanning : The Big Misconception

    SciTech Connect

    Ranallo, F; Szczykutowicz, T

    2014-06-01

    Purpose: To provide correct guidance in the proper selection of pitch and rotation time for optimal CT imaging with multi-slice scanners. Methods: There exists a widespread misconception concerning the role of pitch in patient dose with modern multi-slice scanners, particularly with the use of mA modulation techniques. We investigated the relationship of pitch and rotation time to image quality, dose, and scan duration with CT scanners from different manufacturers in a way that clarifies this misconception. The source of this misconception may lie in the role of pitch in single-slice CT scanners. Results: We found that the image noise and dose generally depend only on the selected effective mAs (mA × time / pitch) with manual mA technique settings and are generally independent of the selected pitch and/or rotation time with automatic mA modulation techniques. However, we did find that on certain scanners the use of a pitch just above 0.5 provided images of equal image noise at a lower dose compared to the use of a pitch just below 1.0. Conclusion: The misconception that the use of a lower pitch over-irradiates patients by wasting dose is clearly false. The use of a lower pitch provides images of equal or better image quality at the same patient dose, whether using manual mA or automatic mA modulation techniques. By decreasing the pitch and the rotation time by equal amounts, both helical and patient motion artifacts can be reduced without affecting the exam time. The use of a lower helical pitch also allows better scanning of larger patients by allowing a greater scan effective mAs, if the exam time can be extended. The one caution with the use of low pitch is not related to patient dose, but to the length of the scan time if the rotation time is not set short enough. Partial research funding from GE HealthCare.
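
    As a quick numerical illustration of the quantity at the centre of this abstract, effective mAs is tube current times rotation time divided by pitch, so lowering pitch and rotation time together leaves it unchanged; the values below are arbitrary examples.

    ```python
    # Effective mAs = mA x rotation time (s) / pitch; arbitrary example values.
    def effective_mas(ma, rotation_time_s, pitch):
        return ma * rotation_time_s / pitch

    print(effective_mas(200, 0.5, 1.0))   # 100.0 effective mAs
    print(effective_mas(200, 0.5, 0.5))   # 200.0 -> lower pitch raises effective mAs
    print(effective_mas(200, 0.25, 0.5))  # 100.0 -> halving rotation time restores it
    ```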

  4. Reducing residual stresses and deformations in selective laser melting through multi-level multi-scale optimization of cellular scanning strategy

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2016-04-01

    Residual stresses and deformations continue to remain one of the primary challenges towards expanding the scope of selective laser melting as an industrial scale manufacturing process. While process monitoring and feedback-based process control have shown significant potential, there is still a dearth of techniques to tackle the issue. Numerical modelling of the selective laser melting process has thus been an active area of research in the last few years. However, large computational resource requirements have slowed the usage of these models for optimizing the process. In this paper, a calibrated, fast, multiscale thermal model coupled with a 3D finite element mechanical model is used to simulate residual stress formation and deformations during selective laser melting. The resulting reduction in thermal model computation time allows evolutionary algorithm-based optimization of the process. A multilevel optimization strategy is adopted using a customized genetic algorithm developed for optimizing the cellular scanning strategy for selective laser melting, with the objective of reducing residual stresses and deformations. The resulting thermo-mechanically optimized cellular scanning strategies are compared with standard scanning strategies and have been used to manufacture standard samples.

  5. The surprising negative correlation of gene length and optimal codon use - disentangling translational selection from GC-biased gene conversion in yeast

    PubMed Central

    2011-01-01

    Background Surprisingly, in several multi-cellular eukaryotes optimal codon use correlates negatively with gene length. This contrasts with the expectation under selection for translational accuracy. While suggested explanations focus on variation in strength and efficiency of translational selection, it has rarely been noticed that the negative correlation is reported only in organisms whose optimal codons are biased towards codons that end with G or C (-GC). This raises the question whether forces that affect base composition - such as GC-biased gene conversion - contribute to the negative correlation between optimal codon use and gene length. Results Yeast is a good organism in which to study this, as equal numbers of optimal codons end in -GC and -AT, and one may hence compare frequencies of optimal GC- with optimal AT-ending codons to disentangle the forces. Results of this study demonstrate that, in yeast, frequencies of GC-ending (optimal and non-optimal) codons decrease with gene length and increase with recombination. A decrease of GC-ending codons along genes contributes to the negative correlation with gene length. Correlations with recombination and gene expression differentiate between GC-ending and optimal codons, and substitution patterns also support effects of GC-biased gene conversion. Conclusion While the general effect of GC-biased gene conversion is well known, the negative correlation of optimal codon use with gene length has not been considered in this context before. Initiation of gene conversion events in promoter regions and the presence of a gene conversion gradient most likely explain the observed decrease of GC-ending codons with gene length and gene position. PMID:21481245

  6. An experimental transplantation to select the optimal site for restoration of the eelgrass Zostera marina in the Taehwa River estuary

    NASA Astrophysics Data System (ADS)

    Park, Jung-Im; Kim, Jeong Bae; Lee, Kun-Seop; Son, Min Ho

    2013-12-01

    To select the optimal site for the restoration of seagrass habitats in the Taehwa River estuary, we transplanted the eelgrass Zostera marina to three potential candidate sites in March 2007 and monitored the transplanted seagrass and associated environmental factors for six months. In all three sites, the transplanted seagrasses exhibited no initial morphological loss due to transplanting stress. The transplanted seagrass communities at sites 2 and 3 showed more than a 180% increase in density over the entire survey period. In contrast, despite a density increase in the first month after transplantation, most of the transplanted seagrasses at site 1 died. This may be due to the large decrease in underwater irradiance reaching the seagrass leaves at site 1 for two months during June and July, which fell below the level of compensation irradiance. The growth rate and size of the seagrass shoots were also larger at sites 2 and 3 compared with site 1. This is probably due to higher nutrient concentrations in the sediment pore water at sites 2 and 3 compared with site 1, although water depth, salinity, and the nutrient concentrations in the water columns from the three sites were similar. Therefore, for the restoration of seagrass habitats in the Taehwa River estuary, sites 2 and 3 were preferable to site 1 as transplantation sites.

  7. Optimization of Cat's Whiskers Tea (Orthosiphon stamineus) Using Supercritical Carbon Dioxide and Selective Chemotherapeutic Potential against Prostate Cancer Cells

    PubMed Central

    Al-Suede, Fouad Saleih R.; Khadeer Ahamed, Mohamed B.; Abdul Majid, Aman S.; Baharetha, Hussin M.; Hassan, Loiy E. A.; Kadir, Mohd Omar A.; Nassar, Zeyad D.; Abdul Majid, Amin M. S.

    2014-01-01

    Cat's whiskers (Orthosiphon stamineus) leaf extracts were prepared using supercritical CO2 (SC-CO2) with a full factorial design to determine the optimum extraction parameters. Nine extracts were obtained by varying pressure, temperature, and time. The extracts were analysed using FTIR, UV-Vis, and GC-MS. Cytotoxicity of the extracts was evaluated on human (colorectal, breast, and prostate) cancer and normal fibroblast cells. Moderate pressure (31.1 MPa) and temperature (60°C) were recorded as optimum extraction conditions, with a high yield (1.74%) of the extract (B2) at 60 min extraction time. The optimized extract (B2) displayed selective cytotoxicity against prostate cancer (PC3) cells (IC50 28 µg/mL) and significant antioxidant activity (IC50 42.8 µg/mL). Elevated levels of caspases 3/7 and 9 in B2-treated PC3 cells suggest the induction of apoptosis through nuclear and mitochondrial pathways. Hoechst and rhodamine assays confirmed the nuclear condensation and disruption of mitochondrial membrane potential in the cells. B2 also demonstrated inhibitory effects on motility and colonies of PC3 cells at its subcytotoxic concentrations. It is noteworthy that B2 displayed negligible toxicity against the normal cells. Chemometric analysis revealed high content of essential oils, hydrocarbons, fatty acids, esters, and aromatic sesquiterpenes in B2. This study highlights the therapeutic potential of SC-CO2 extracts of cat's whiskers in targeting prostate carcinoma. PMID:25276215

  8. Synthesis and Assessment of DNA/Silver Nanoclusters Probes for Optimal and Selective Detection of Tristeza Virus Mild Strains.

    PubMed

    Shokri, Ehsan; Hosseini, Morteza; Faridbod, Farnoush; Rahaie, Mahdi

    2016-09-01

    Citrus Tristeza virus (CTV) is one of the most destructive pathogens worldwide and exists as a mixture of damaging (Severe) and tolerable (Mild) strains. Mild strains of CTV can be used to protect healthy plants against damage from more Severe strains. Recently, innovative methods based on the fluorescent properties of DNA/silver nanoclusters have been developed for molecular detection purposes. In this study, a simple procedure was followed to create a more active DNA/AgNCs probe for accurate and selective detection of Tristeza Mild-RNA. To this end, four distinct DNA emitter scaffolds (C12, Red, Green, Yellow) were tethered to the Mild capture sequence and investigated in various buffers in order to find highly emissive combinations. Then, to achieve specific and reliable results, several chemical additives, including organic solvents, PEG and organo-soluble salts, were used to enhance control fluorescence signals and optimize the hybridization solution. The data showed that, under the adjusted conditions, the target sensitivity was enhanced by a factor of five and high discrimination between Mild and Severe RNAs was obtained. The emission of the DNA/AgNCs dropped in the presence of target RNAs, and the I0/I intensity ratio was linear from 1.5 × 10⁻⁸ M to 1.8 × 10⁻⁶ M with a detection limit of 4.3 × 10⁻⁹ M. PMID:27349801

  9. Optimization of Cat's Whiskers Tea (Orthosiphon stamineus) Using Supercritical Carbon Dioxide and Selective Chemotherapeutic Potential against Prostate Cancer Cells.

    PubMed

    Al-Suede, Fouad Saleih R; Khadeer Ahamed, Mohamed B; Abdul Majid, Aman S; Baharetha, Hussin M; Hassan, Loiy E A; Kadir, Mohd Omar A; Nassar, Zeyad D; Abdul Majid, Amin M S

    2014-01-01

    Cat's whiskers (Orthosiphon stamineus) leaf extracts were prepared using supercritical CO2 (SC-CO2) with a full factorial design to determine the optimum extraction parameters. Nine extracts were obtained by varying pressure, temperature, and time. The extracts were analysed using FTIR, UV-Vis, and GC-MS. Cytotoxicity of the extracts was evaluated on human (colorectal, breast, and prostate) cancer and normal fibroblast cells. Moderate pressure (31.1 MPa) and temperature (60°C) were recorded as optimum extraction conditions, with a high yield (1.74%) of the extract (B2) at 60 min extraction time. The optimized extract (B2) displayed selective cytotoxicity against prostate cancer (PC3) cells (IC50 28 µg/mL) and significant antioxidant activity (IC50 42.8 µg/mL). Elevated levels of caspases 3/7 and 9 in B2-treated PC3 cells suggest the induction of apoptosis through nuclear and mitochondrial pathways. Hoechst and rhodamine assays confirmed the nuclear condensation and disruption of mitochondrial membrane potential in the cells. B2 also demonstrated inhibitory effects on motility and colonies of PC3 cells at its subcytotoxic concentrations. It is noteworthy that B2 displayed negligible toxicity against the normal cells. Chemometric analysis revealed high content of essential oils, hydrocarbons, fatty acids, esters, and aromatic sesquiterpenes in B2. This study highlights the therapeutic potential of SC-CO2 extracts of cat's whiskers in targeting prostate carcinoma. PMID:25276215

  10. An ECG signal compressor based on the selection of optimal threshold levels of discrete wavelet transform coefficients.

    PubMed

    Al-Ajlouni, A F; Abo-Zahhad, M; Ahmed, S M; Schilling, R J

    2008-01-01

    Compression of electrocardiography (ECG) is necessary for efficient storage and transmission of the digitized ECG signals. Discrete wavelet transform (DWT) has recently emerged as a powerful technique for ECG signal compression due to its multi-resolution signal decomposition and locality properties. This paper presents an ECG compressor based on the selection of optimum threshold levels of DWT coefficients in different subbands that achieve maximum data volume reduction while preserving the significant signal morphology features upon reconstruction. First, the ECG is wavelet transformed into m subbands and the wavelet coefficients of each subband are thresholded using an optimal threshold level. Thresholding removes excessively small features and replaces them with zeroes. The threshold levels are defined for each signal so that the bit rate is minimized for a target distortion or, alternatively, the distortion is minimized for a target compression ratio. After thresholding, the resulting significant wavelet coefficients are coded using multi embedded zero tree (MEZW) coding technique. In order to assess the performance of the proposed compressor, records from the MIT-BIH Arrhythmia Database were compressed at different distortion levels, measured by the percentage rms difference (PRD), and compression ratios (CR). The method achieves good CR values with excellent reconstruction quality that compares favourably with various classical and state-of-the-art ECG compressors. Finally, it should be noted that the proposed method is flexible in controlling the quality of the reconstructed signals and the volume of the compressed signals by establishing a target PRD and a target CR a priori, respectively. PMID:19005960
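
    The core steps (wavelet-decompose the signal, zero coefficients below a per-subband threshold, reconstruct, and report compression ratio and PRD) can be sketched with PyWavelets as below. The synthetic waveform and fixed thresholds are assumptions; the paper instead optimises the threshold levels per signal and applies MEZW coding.

    ```python
    # Sketch: subband thresholding of DWT coefficients with CR / PRD metrics;
    # a toy signal and fixed thresholds stand in for the paper's optimised
    # levels and MEZW coding (assumptions).
    import numpy as np
    import pywt

    t = np.linspace(0, 2, 1440)
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)  # toy "ECG"

    coeffs = pywt.wavedec(ecg, "db4", level=5)
    thresholds = [0.0] + [0.05] * 5          # keep approximation, threshold details

    kept, total, new_coeffs = 0, 0, []
    for c, thr in zip(coeffs, thresholds):
        c2 = pywt.threshold(c, thr, mode="hard")   # zero excessively small features
        new_coeffs.append(c2)
        kept += np.count_nonzero(c2)
        total += c2.size

    recon = pywt.waverec(new_coeffs, "db4")[: ecg.size]
    prd = 100 * np.sqrt(np.sum((ecg - recon) ** 2) / np.sum(ecg ** 2))
    print("compression ratio (nonzero coeffs): %.1f" % (total / kept),
          "PRD: %.2f %%" % prd)
    ```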

  11. Self-Regulation among Youth in Four Western Cultures: Is There an Adolescence-Specific Structure of the Selection-Optimization-Compensation (SOC) Model?

    ERIC Educational Resources Information Center

    Gestsdottir, Steinunn; Geldhof, G. John; Paus, Tomáš; Freund, Alexandra M.; Adalbjarnardottir, Sigrun; Lerner, Jacqueline V.; Lerner, Richard M.

    2015-01-01

    We address how to conceptualize and measure intentional self-regulation (ISR) among adolescents from four cultures by assessing whether ISR (conceptualized by the SOC model of Selection, Optimization, and Compensation) is represented by three factors (as with adult samples) or as one "adolescence-specific" factor. A total of 4,057 14-…

  12. Genome-wide characterization and selection of expressed sequence tag simple sequence repeat primers for optimized marker distribution and reliability in peach

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Expressed sequence tag (EST) simple sequence repeats (SSRs) in Prunus were mined, and flanking primers designed and used for genome-wide characterization and selection of primers to optimize marker distribution and reliability. A total of 12,618 contigs were assembled from 84,727 ESTs, along with 34...

  13. Selection, Optimization, and Compensation: The Structure, Reliability, and Validity of Forced-Choice versus Likert-Type Measures in a Sample of Late Adolescents

    ERIC Educational Resources Information Center

    Geldhof, G. John; Gestsdottir, Steinunn; Stefansson, Kristjan; Johnson, Sara K.; Bowers, Edmond P.; Lerner, Richard M.

    2015-01-01

    Intentional self-regulation (ISR) undergoes significant development across the life span. However, our understanding of ISR's development and function remains incomplete, in part because the field's conceptualization and measurement of ISR vary greatly. A key sample case involves how Baltes and colleagues' Selection, Optimization,…

  14. ATAQS: A computational software tool for high throughput transition optimization and validation for selected reaction monitoring mass spectrometry

    PubMed Central

    2011-01-01

    Background Since its inception, proteomics has essentially operated in a discovery mode with the goal of identifying and quantifying the maximal number of proteins in a sample. Increasingly, proteomic measurements are also supporting hypothesis-driven studies, in which a predetermined set of proteins is consistently detected and quantified in multiple samples. Selected reaction monitoring (SRM) is a targeted mass spectrometric technique that supports the detection and quantification of specific proteins in complex samples at high sensitivity and reproducibility. Here, we describe ATAQS, an integrated software platform that supports all stages of targeted, SRM-based proteomics experiments including target selection, transition optimization and post acquisition data analysis. This software will significantly facilitate the use of targeted proteomic techniques and contribute to the generation of highly sensitive, reproducible and complete datasets that are particularly critical for the discovery and validation of targets in hypothesis-driven studies in systems biology. Result We introduce a new open source software pipeline, ATAQS (Automated and Targeted Analysis with Quantitative SRM), which consists of a number of modules that collectively support the SRM assay development workflow for targeted proteomic experiments (project management and generation of protein, peptide and transitions and the validation of peptide detection by SRM). ATAQS provides a flexible pipeline for end-users by allowing the workflow to start or end at any point of the pipeline, and for computational biologists, by enabling the easy extension of java algorithm classes for their own algorithm plug-in or connection via an external web site. This integrated system supports all steps in a SRM-based experiment and provides a user-friendly GUI that can be run by any operating system that allows the installation of the Mozilla Firefox web browser. Conclusions Targeted proteomics via SRM is a powerful

  15. A computational strategy to select optimized protein targets for drug development toward the control of cancer diseases.

    PubMed

    Carels, Nicolas; Tilli, Tatiana; Tuszynski, Jack A

    2015-01-01

    In this report, we describe a strategy for the optimized selection of protein targets suitable for drug development against neoplastic diseases, taking the particular case of breast cancer as an example. We combined human interactome and transcriptome data from malignant and control cell lines because highly connected proteins that are up-regulated in malignant cell lines are expected to be suitable protein targets for chemotherapy with a lower rate of undesirable side effects. We normalized transcriptome data and applied a statistical treatment to objectively extract the sub-networks of down- and up-regulated genes whose proteins effectively interact. We chose the most connected ones that act as protein hubs, most being in the signaling network. We show that the protein targets effectively identified by the combination of protein connectivity and differential expression are known as suitable targets for the successful chemotherapy of breast cancer. Interestingly, we found additional proteins, not generally targeted by drug treatments, which might justify the extension of existing formulations by the addition of inhibitors designed against these proteins, with the consequence of improving therapeutic outcomes. The molecular alterations observed in breast cancer cell lines represent either driver events and/or driver pathways that are necessary for breast cancer development or progression. However, it is clear that the signaling mechanisms of the luminal A, B and triple negative subtypes are different. Furthermore, the up- and down-regulated networks predicted subtype-specific drug targets and possible compensation circuits between up- and down-regulated genes. We believe these results may have significant clinical implications in the personalized treatment of cancer patients, allowing an objective approach to the recycling of the arsenal of available drugs to the specific case of each breast cancer given their distinct qualitative and quantitative molecular traits. PMID:25625699
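
    The selection rule described above (intersect up-regulated genes with highly connected interactome nodes) reduces to a few lines; the toy edge list and fold changes below are placeholders assumed for illustration, not the human interactome or the authors' cell-line data.

    ```python
    # Sketch: rank up-regulated genes by interactome connectivity to nominate
    # hub targets; the edge list and fold changes are toy placeholders.
    import networkx as nx

    edges = [("EGFR", "GRB2"), ("EGFR", "SHC1"), ("GRB2", "SOS1"),
             ("SHC1", "GRB2"), ("TP53", "MDM2"), ("EGFR", "PIK3CA"),
             ("PIK3CA", "AKT1"), ("AKT1", "MTOR")]
    log2_fold_change = {"EGFR": 2.1, "GRB2": 0.2, "SHC1": 1.4, "SOS1": -0.1,
                        "TP53": -1.5, "MDM2": 0.8, "PIK3CA": 1.9,
                        "AKT1": 1.1, "MTOR": 0.3}

    G = nx.Graph(edges)
    up = {g for g, fc in log2_fold_change.items() if fc > 1.0}   # up-regulated set

    # Candidate targets: up-regulated genes ranked by degree (connectivity).
    candidates = sorted(up, key=lambda g: G.degree(g), reverse=True)
    for g in candidates:
        print(g, "degree =", G.degree(g), "log2FC =", log2_fold_change[g])
    ```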

  16. A Computational Strategy to Select Optimized Protein Targets for Drug Development toward the Control of Cancer Diseases

    PubMed Central

    Carels, Nicolas; Tilli, Tatiana; Tuszynski, Jack A.

    2015-01-01

    In this report, we describe a strategy for the optimized selection of protein targets suitable for drug development against neoplastic diseases, taking the particular case of breast cancer as an example. We combined human interactome and transcriptome data from malignant and control cell lines because highly connected proteins that are up-regulated in malignant cell lines are expected to be suitable protein targets for chemotherapy with a lower rate of undesirable side effects. We normalized transcriptome data and applied a statistical treatment to objectively extract the sub-networks of down- and up-regulated genes whose proteins effectively interact. We chose the most connected ones that act as protein hubs, most being in the signaling network. We show that the protein targets effectively identified by the combination of protein connectivity and differential expression are known as suitable targets for the successful chemotherapy of breast cancer. Interestingly, we found additional proteins, not generally targeted by drug treatments, which might justify the extension of existing formulations by the addition of inhibitors designed against these proteins, with the consequence of improving therapeutic outcomes. The molecular alterations observed in breast cancer cell lines represent either driver events and/or driver pathways that are necessary for breast cancer development or progression. However, it is clear that the signaling mechanisms of the luminal A, B and triple negative subtypes are different. Furthermore, the up- and down-regulated networks predicted subtype-specific drug targets and possible compensation circuits between up- and down-regulated genes. We believe these results may have significant clinical implications in the personalized treatment of cancer patients, allowing an objective approach to the recycling of the arsenal of available drugs to the specific case of each breast cancer given their distinct qualitative and quantitative molecular traits. PMID:25625699

  17. Pulse-fluence-specified optimal control simulation with applications to molecular orientation and spin-isomer-selective molecular alignment

    SciTech Connect

    Yoshida, Masataka; Nakashima, Kaoru; Ohtsuki, Yukiyoshi

    2015-12-31

    We propose an optimal control simulation with specified pulse fluence and amplitude. The simulation is applied to the orientation control of CO molecules to examine the optimal combination of THz and laser pulses, and to discriminate nuclear-spin isomers of ¹⁴N₂ as spatially anisotropic distributions.

  18. A hybrid approach identifies metabolic signatures of high-producers for chinese hamster ovary clone selection and process optimization.

    PubMed

    Popp, Oliver; Müller, Dirk; Didzus, Katharina; Paul, Wolfgang; Lipsmeier, Florian; Kirchner, Florian; Niklas, Jens; Mauch, Klaus; Beaucamp, Nicola

    2016-09-01

    In-depth characterization of high-producer cell lines and bioprocesses is vital to ensure robust and consistent production of recombinant therapeutic proteins in high quantity and quality for clinical applications. This requires applying appropriate methods during bioprocess development to enable meaningful characterization of CHO clones and processes. Here, we present a novel hybrid approach for supporting comprehensive characterization of metabolic clone performance. The approach combines metabolite profiling with multivariate data analysis and fluxomics to enable a data-driven mechanistic analysis of key metabolic traits associated with desired cell phenotypes. We applied the methodology to quantify and compare metabolic performance in a set of 10 recombinant CHO-K1 producer clones and a host cell line. The comprehensive characterization enabled us to derive an extended set of clone performance criteria that not only captured growth and product formation, but also incorporated information on intracellular clone physiology and on metabolic changes during the process. These criteria served to establish a quantitative clone ranking and allowed us to identify metabolic differences between high-producing CHO-K1 clones yielding comparably high product titers. Through multivariate data analysis of the combined metabolite and flux data we uncovered common metabolic traits characteristic of high-producer clones in the screening setup. This included high intracellular rates of glutamine synthesis, low cysteine uptake, reduced excretion of aspartate and glutamate, and low intracellular degradation rates of branched-chain amino acids and of histidine. Finally, the above approach was integrated into a workflow that enables standardized high-content selection of CHO producer clones in a high-throughput fashion. In conclusion, the combination of quantitative metabolite profiling, multivariate data analysis, and mechanistic network model simulations can identify metabolic

  19. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.

  20. Demonstration optimization analyses of pumping from selected Arapahoe aquifer municipal wells in the west-central Denver Basin, Colorado, 2010–2109

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.

    2012-01-01

    Declining water levels caused by withdrawals of water from wells in the west-central part of the Denver Basin bedrock-aquifer system have raised concerns with respect to the ability of the aquifer system to sustain production. The Arapahoe aquifer in particular is heavily used in this area. Two optimization analyses were conducted to demonstrate approaches that could be used to evaluate possible future pumping scenarios intended to prolong the productivity of the aquifer and to delay excessive loss of saturated thickness. These analyses were designed as demonstrations only, and were not intended as a comprehensive optimization study. Optimization analyses were based on a groundwater-flow model of the Denver Basin developed as part of a recently published U.S. Geological Survey groundwater-availability study. For each analysis an optimization problem was set up to maximize total withdrawal rate, subject to withdrawal-rate and hydraulic-head constraints, for 119 selected municipal water-supply wells located in 96 model cells. The optimization analyses were based on 50- and 100-year simulations of groundwater withdrawals. The optimized total withdrawal rate for all selected wells for a 50-year simulation time was about 58.8 cubic feet per second. For an analysis in which the simulation time and head-constraint time were extended to 100 years, the optimized total withdrawal rate for all selected wells was about 53.0 cubic feet per second, demonstrating that a reduction in withdrawal rate of about 10 percent may extend the time before the hydraulic-head constraints are violated by 50 years, provided that pumping rates are optimally distributed. Analysis of simulation results showed that initially, the pumping produces water primarily by release of water from storage in the Arapahoe aquifer. However, because confining layers between the Denver and Arapahoe aquifers are thin, in less than 5 years, most of the water removed by managed-flows pumping likely would be supplied
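
    Once drawdown responses per unit pumping rate are tabulated, the demonstration problem (maximize total withdrawal subject to head constraints and per-well bounds) is a linear program; the sketch below uses a made-up three-well response matrix, not the Denver Basin groundwater-flow model.

    ```python
    # Sketch: maximize total pumping subject to drawdown limits, as a linear
    # program over a made-up unit-response matrix (not the Denver Basin model).
    import numpy as np
    from scipy.optimize import linprog

    # response[i, j] = drawdown (ft) at control point i per 1 ft^3/s pumped at well j
    response = np.array([[2.0, 0.5, 0.1],
                         [0.4, 1.8, 0.3],
                         [0.1, 0.6, 1.5]])
    max_drawdown = np.array([40.0, 35.0, 30.0])     # head constraints (ft)
    well_capacity = [(0.0, 15.0)] * 3               # per-well rate bounds (ft^3/s)

    # linprog minimizes, so negate the objective to maximize total withdrawal.
    res = linprog(c=-np.ones(3), A_ub=response, b_ub=max_drawdown,
                  bounds=well_capacity, method="highs")
    print("optimal rates (ft^3/s):", np.round(res.x, 2),
          "total:", round(-res.fun, 2))
    ```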

  1. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J.; Kapteyn, Henry C.

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  2. Magnetron tuner has locking feature

    NASA Technical Reports Server (NTRS)

    Martucci, V. J.

    1969-01-01

    Magnetron tuning arrangement features a means of moving a tuning ring axially within an anode cavity by a system of reduction gears engaging a threaded tuning shaft, or lead screw. The shaft positions the tuning ring for the desired magnetron output frequency, and a washer prevents backlash.

  3. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of dose response is also assumed. The optimum spacing and number of points necessary for the minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
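
    The design question can be illustrated with a small Monte Carlo comparison, assuming the same linear dose response with normally distributed noise: the spread of the back-extrapolated zero-signal dose is computed for an evenly spaced design and for a design that concentrates spectra near zero dose with a single high-dose point. The true dose, slope, noise level and candidate designs below are invented for illustration.

```python
# Monte Carlo comparison of two additive-dose designs for a linear dose response.
import numpy as np

rng = np.random.default_rng(2)
true_dose, slope, sigma = 5.0, 1.0, 0.5          # accrued dose (Gy), response slope, signal noise (invented)

def dose_se(added_doses, n_trials=5000):
    """Standard deviation of the estimated accrued dose for a given set of added doses."""
    x = np.asarray(added_doses, float)
    est = np.empty(n_trials)
    for k in range(n_trials):
        y = slope * (true_dose + x) + rng.normal(0, sigma, size=x.size)
        b, a = np.polyfit(x, y, 1)               # fitted slope b and intercept a
        est[k] = a / b                           # back-extrapolated dose estimate
    return est.std()

even   = np.linspace(0, 50, 10)                  # 10 spectra spread evenly over the dose range
skewed = np.array([0.0] * 9 + [50.0])            # 9 spectra at zero added dose, 1 at a single high dose
print("evenly spaced design SE:", round(dose_se(even), 3))
print("skewed design SE:      ", round(dose_se(skewed), 3))
```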

  4. Salicornia as a crop plant in temperate regions: selection of genetically characterized ecotypes and optimization of their cultivation conditions

    PubMed Central

    Singh, Devesh; Buhmann, Anne K.; Flowers, Tim J.; Seal, Charlotte E.; Papenbrock, Jutta

    2014-01-01

    Rising sea levels and salinization of groundwater due to global climate change result in fast-dwindling sources of freshwater. Therefore, it is important to find alternatives to grow food crops and vegetables. Halophytes are naturally evolved salt-tolerant plants that are adapted to grow in environments that inhibit the growth of most glycophytic crop plants substantially. Members of the Salicornioideae are promising candidates for saline agriculture due to their high tolerance to salinity. Our aim was to develop genetically characterized lines of Salicornia and Sarcocornia for further breeding and to determine optimal cultivation conditions. To obtain a large and diverse genetic pool, seeds were collected from different countries and ecological conditions. The external transcribed spacer (ETS) sequence of 62 Salicornia and Sarcocornia accessions was analysed: ETS sequence data showed a clear distinction between the two genera and between different Salicornia taxa. However, in some cases the ETS was not sufficiently variable to resolve morphologically distinct species. For the determination of optimal cultivation conditions, experiments on germination, seedling establishment and growth to a harvestable size were performed using different accessions of Salicornia spp. Experiments revealed that the percentage germination was greatest at lower salinities and with temperatures of 20/10 °C (day/night). Salicornia spp. produced more harvestable biomass in hydroponic culture than in sand culture, but the nutrient concentration requires optimization as hydroponically grown plants showed symptoms of stress. Salicornia ramosissima produced more harvestable biomass than Salicornia dolichostachya in artificial sea water containing 257 mM NaCl. Based on preliminary tests on ease of cultivation, gain in biomass, morphology and taste, S. dolichostachya was investigated in more detail, and the optimal salinity for seedling establishment was found to be 100 mM. Harvesting of S

  5. An effective hybrid approach of gene selection and classification for microarray data based on clustering and particle swarm optimization.

    PubMed

    Han, Fei; Yang, Shanxiu; Guan, Jian

    2015-01-01

    In this paper, a hybrid approach based on clustering and Particle Swarm Optimisation (PSO) is proposed to perform gene selection and classification for microarray data. In the new method, firstly, genes are partitioned into a predetermined number of clusters by the K-means method. Since the genes in each cluster have much redundancy, the Max-Relevance Min-Redundancy (mRMR) strategy is used to reduce redundancy of the clustered genes. Then, PSO is used to perform further gene selection from the remaining clustered genes. Because of its better generalisation performance and much faster convergence rate than other learning algorithms for neural networks, an Extreme Learning Machine (ELM) is chosen to evaluate candidate gene subsets selected by PSO and to perform sample classification in this study. The proposed method selects less redundant genes and increases prediction accuracy; its efficiency and effectiveness are verified by extensive comparisons with other classical methods on three open microarray datasets. PMID:26547970
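
    A minimal sketch of the filtering stage (clustering followed by an mRMR-style score) is given below, using synthetic expression data; the cluster count, the number of genes kept per cluster and the use of absolute correlation as a redundancy proxy are illustrative simplifications, and the subsequent PSO/ELM stage is omitted.

```python
# Hedged sketch of K-means gene clustering plus a simple mRMR-style filter.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 500))          # 60 samples x 500 genes (synthetic stand-in for microarray data)
y = rng.integers(0, 2, size=60)         # binary class labels

relevance = mutual_info_classif(X, y, random_state=0)                 # gene-to-class relevance
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X.T)

selected = []
for c in range(10):
    idx = np.where(clusters == c)[0]
    if len(idx) < 2:
        selected.extend(idx.tolist())
        continue
    corr = np.abs(np.corrcoef(X[:, idx], rowvar=False))               # within-cluster redundancy proxy
    redundancy = (corr.sum(axis=0) - 1.0) / (len(idx) - 1)
    score = relevance[idx] - redundancy                               # mRMR-style criterion
    selected.extend(idx[np.argsort(score)[::-1][:3]].tolist())        # keep the top 3 genes per cluster

print("candidate genes passed to the PSO/ELM stage:", sorted(selected)[:15], "...")
```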

  6. Optimal Wavelengths Selection Using Hierarchical Evolutionary Algorithm for Prediction of Firmness and Soluble Solids Content in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...

  7. Optimal selection of autoregressive model coefficients for early damage detectability with an application to wind turbine blades

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2016-03-01

    Data-driven vibration-based damage detection techniques can be competitive because of their lower instrumentation and data analysis costs. The use of autoregressive model coefficients (ARMCs) as damage sensitive features (DSFs) is one such technique. So far, like with other DSFs, either full sets of coefficients or subsets selected by trial-and-error have been used, but this can lead to suboptimal composition of multivariate DSFs and decreased damage detection performance. This study enhances the selection of ARMCs for statistical hypothesis testing for damage presence. Two approaches for systematic ARMC selection, based on either adding or eliminating the coefficients one by one or on using a genetic algorithm (GA), are proposed. The methods are applied to a numerical model of an aerodynamically excited large composite wind turbine blade with disbonding damage. The GA outperforms the other selection methods and enables building multivariate DSFs that markedly enhance early damage detectability and are insensitive to measurement noise.
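
    The feature-extraction step can be sketched under simple assumptions: AR coefficients are estimated by ordinary least squares from a (here synthetic) vibration record and stacked into the DSF vector from which a GA or add/eliminate search would then pick a subset; the model order and the example subset indices below are arbitrary.

```python
# Hedged sketch: least-squares AR(p) coefficients as a damage-sensitive feature vector.
import numpy as np

def ar_coefficients(signal, order=8):
    """Estimate AR(order) coefficients by ordinary least squares."""
    x = np.asarray(signal, float)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])  # lagged regressors
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

rng = np.random.default_rng(4)
record = np.sin(0.3 * np.arange(2000)) + 0.1 * rng.normal(size=2000)   # synthetic vibration record
coeffs = ar_coefficients(record)
print("full ARMC feature vector:", np.round(coeffs, 3))
print("example subset (indices a search might pick):", np.round(coeffs[[0, 2, 5]], 3))
```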

  8. Systematic optimization of an engineered hydrogel allows for selective control of human neural stem cell survival and differentiation after transplantation in the stroke brain.

    PubMed

    Moshayedi, Pouria; Nih, Lina R; Llorente, Irene L; Berg, Andrew R; Cinkornpumin, Jessica; Lowry, William E; Segura, Tatiana; Carmichael, S Thomas

    2016-10-01

    Stem cell therapies have shown promise in promoting recovery in stroke but have been limited by poor cell survival and differentiation. We have developed a hyaluronic acid (HA)-based self-polymerizing hydrogel that serves as a platform for adhesion of structural motifs and a depot release for growth factors to promote transplant stem cell survival and differentiation. We took an iterative approach in optimizing the complex combination of mechanical, biochemical and biological properties of an HA cell scaffold. First, we optimized stiffness for a minimal reaction of adjacent brain to the transplant. Next, hydrogel crosslinkers sensitive to matrix metalloproteinases (MMP) were incorporated as they promoted vascularization. Finally, candidate adhesion motifs and growth factors were systematically varied in vitro using a design of experiment approach to optimize stem cell survival or proliferation. The optimized HA hydrogel, tested in vivo, promoted survival of encapsulated human neural progenitor cells (iPS-NPCs) after transplantation into the stroke core and differentially tuned transplanted cell fate through the promotion of glial, neuronal or immature/progenitor states. This HA hydrogel can be tracked in vivo with MRI. A hydrogel can serve as a therapeutic adjunct in a stem cell therapy through selective control of stem cell survival and differentiation in vivo. PMID:27521617

  9. Optimization of the ITER Ion Cyclotron Heating Antenna Array

    NASA Astrophysics Data System (ADS)

    Ryan, P. M.; Swain, D. W.; Carter, M. D.; Taylor, D. J.; Bosia, G.; D'Ippolito, D. A.; Myra, J. R.

    1996-11-01

    The present design of the ITER ICH antenna array comprises two poloidal by four toroidal current elements in each of four ports. Each current element forms a resonant double loop (RDL) with power fed to a pretuned matchpoint on the strap; the matching is accomplished using slow-wave transmission lines as adjustable shorted-stub tuners on either end of the current strap. The power requirement is 12.5 MW per port over the frequency range of 40--70 MHz, with extended operation to 80 MHz desirable. The antenna design optimization process includes (1) strap shaping to minimize strap voltages and rf E-fields along B-field lines, (2) frame/Faraday shield geometry design to improve plasma coupling, wave spectrum directivity, and phase control, and (3) Faraday shield/bumper geometry to minimize rf sheath-induced structure heating and impurity generation.

  10. The selection of optimal ICA algorithm parameters for robust AEP component estimates using 3 popular ICA algorithms.

    PubMed

    Castañeda-Villa, N; James, C J

    2008-01-01

    Many authors have used Auditory Evoked Potential (AEP) recordings to evaluate the performance of their ICA algorithms and have demonstrated that this procedure can remove the typical EEG artifacts in these recordings (i.e. blinking, muscle noise, line noise, etc.). However, there is little work in the literature about the optimal parameters of each of those algorithms for estimating the AEP components, so as to reliably recover both the auditory response and the specific artifacts generated by the normal function of a Cochlear Implant (CI), a device used for the rehabilitation of deaf people. In this work we determine the optimal parameters of three ICA algorithms, each based on different independence criteria, and assess the resulting estimations of both the auditory response and the CI artifact. We show that an algorithm utilizing temporal structure, such as TDSEP-ICA, is better in estimating the components of the auditory response, in recordings contaminated by CI artifacts, than higher order statistics based algorithms. PMID:19163893

  11. Highly Selective Bioconversion of Ginsenoside Rb1 to Compound K by the Mycelium of Cordyceps sinensis under Optimized Conditions.

    PubMed

    Wang, Wei-Nan; Yan, Bing-Xiong; Xu, Wen-Di; Qiu, Ye; Guo, Yun-Long; Qiu, Zhi-Dong

    2015-01-01

    Compound K (CK), a highly active and bioavailable derivative obtained from protopanaxadiol ginsenosides, displays a wide variety of pharmacological properties, especially antitumor activity. However, the inadequacy of natural sources limits its application in the pharmaceutical industry. In this study, we first discovered that Cordyceps sinensis was a potent biocatalyst for the biotransformation of ginsenoside Rb1 into CK. After a series of investigations on the biotransformation parameters, an optimal composition of the biotransformation culture was found to be lactose, soybean powder and MgSO₄ without pH control. An optimum temperature of 30 °C for the biotransformation process was identified within the tested range of 25-50 °C. Then, a biotransformation pathway of Rb1→Rd→F2→CK was established using high performance liquid chromatography/quadrupole time-of-flight mass spectrometry (HPLC-Q-TOF-MS). Our results demonstrated that the molar bioconversion rate of Rb1 to CK was more than 82% and the purity of CK produced by C. sinensis under the optimized conditions was more than 91%. In conclusion, the combination of C. sinensis and the optimized conditions is applicable for the industrial preparation of CK for medicinal purposes. PMID:26512632

  12. Toward automatic field selection and planning using Monte Carlo-based direct aperture optimization in modulated electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew; DeBlois, François; Seuntjens, Jan

    2010-08-01

    Modulated electron radiotherapy (MERT) has been proven to produce optimal plans for shallow tumors. This study investigates automated approaches to the field determination process in generating optimal MERT plans for few-leaf electron collimator (FLEC)-based MERT, by generating a large database of pre-calculated beamlets stored as phase-space files. Beamlets can be used in an overlapping feathered pattern to reduce the effect of abutting fields, which can contribute to dose inhomogeneities within the target. Beamlet dose calculation was performed by Monte Carlo (MC) simulations prior to direct aperture optimization (DAO). The second part of the study examines a preliminary clinical comparison between FLEC-based MERT and helical TomoTherapy. A MERT plan for spinal irradiation was not able to conform to the PTV dose constraints as closely as the TomoTherapy plan, although the TomoTherapy plan was taken as is, i.e. not Monte Carlo re-calculated. Despite the remaining gradients in the PTV, the MERT plan was superior in reducing the low-dose bath typical of TomoTherapy plans. In conclusion, the FLEC-based MERT planning techniques developed within the study produced promising MERT plans with minimal user input. The phase-space database reduces the MC calculation time and the feathered field pattern improves target homogeneity. With further investigations, FLEC-based MERT will find an important niche in clinical radiation therapy.

  13. Construction of a novel selection system for endoglucanases exhibiting carbohydrate-binding modules optimized for biomass using yeast cell-surface engineering

    PubMed Central

    2012-01-01

    To permit direct cellulose degradation and ethanol fermentation, Saccharomyces cerevisiae BY4741 (Δsed1) codisplaying 3 cellulases (Trichoderma reesei endoglucanase II [EG], T. reesei cellobiohydrolase II [CBH], and Aspergillus aculeatus β-glucosidase I [BG]) was constructed by yeast cell-surface engineering. The EG used in this study consists of a family 1 carbohydrate-binding module (CBM) and a catalytic module. A comparison with family 1 CBMs revealed conserved amino acid residues and flexible amino acid residues. The flexible amino acid residues were at positions 18, 23, 26, and 27, through which the degrading activity for various cellulose structures in each biomass may have been optimized. To select the optimal combination of CBMs of EGs, a yeast mixture with comprehensively mutated CBM was constructed. The mixture consisted of yeasts codisplaying EG with mutated CBMs, in which 4 flexible residues were comprehensively mutated, CBH, and BG. The yeast mixture was inoculated in selection medium with newspaper as the sole carbon source. The surviving yeast consisted of RTSH yeast (the mutant sequence of CBM: N18R, S23T, S26S, and T27H) and wild-type yeast (CBM was the original) in a ratio of 1:46. The mixture (1 RTSH yeast and 46 wild-type yeasts) had a fermentation activity that was 1.5-fold higher than that of wild-type yeast alone in the early phase of saccharification and fermentation, which indicates that the yeast mixture with comprehensively mutated CBM could be used to select the optimal combination of CBMs suitable for the cellulose of each biomass. PMID:23092441

  14. Development and Optimization of Piperidyl-1,2,3-Triazole Ureas as Selective Chemical Probes of Endocannabinoid Biosynthesis

    PubMed Central

    Hsu, Ku-Lung; Tsuboi, Katsunori; Whitby, Landon R.; Speers, Anna E.; Pugh, Holly; Inloes, Jordon; Cravatt, Benjamin F.

    2014-01-01

    We have previously shown that 1,2,3-triazole ureas (1,2,3-TUs) act as a versatile class of irreversible serine hydrolase inhibitors that can be tuned to create selective probes for diverse members of this large enzyme class, including diacylglycerol lipase-β (DAGLβ), a principal biosynthetic enzyme for the endocannabinoid 2-arachidonoylglycerol (2-AG). Here, we provide a detailed account of the discovery, synthesis, and structure-activity relationship (SAR) of (2-substituted)-piperidyl-1,2,3-TUs that selectively inactivate DAGLβ in living systems. Key to success was the use of activity-based protein profiling (ABPP) with broad-spectrum and tailored activity-based probes to guide our medicinal chemistry efforts. We also describe an expanded repertoire of DAGL-tailored activity-based probes that includes biotinylated and alkyne agents for enzyme enrichment coupled with mass spectrometry-based proteomics and assessment of proteome-wide selectivity. Our findings highlight the broad utility of 1,2,3-TUs for serine hydrolase inhibitor development and their application to create selective probes of endocannabinoid biosynthetic pathways. PMID:24152245

  15. A Study of the Relationship between Cognitive Emotion Regulation, Optimism, and Perceived Stress among Selected Teachers in Lutheran Schools

    ERIC Educational Resources Information Center

    Gliebe, Sudi Kate

    2012-01-01

    Problem: The problem of this study was to determine the relationship between perceived stress, as measured by the Perceived Stress Scale (PSS), and a specific set of predictor variables among selected teachers in Lutheran schools in the United States. These variables were cognitive emotion regulation strategies (positive reappraisal and…

  16. iVAX: An integrated toolkit for the selection and optimization of antigens and the design of epitope-driven vaccines

    PubMed Central

    Moise, Leonard; Gutierrez, Andres; Kibria, Farzana; Martin, Rebecca; Tassone, Ryan; Liu, Rui; Terry, Frances; Martin, Bill; De Groot, Anne S

    2015-01-01

    Computational vaccine design, also known as computational vaccinology, encompasses epitope mapping, antigen selection and immunogen design using computational tools. The iVAX toolkit is an integrated set of tools that has been in development since 1998 by De Groot and Martin. It comprises a suite of immunoinformatics algorithms for triaging candidate antigens, selecting immunogenic and conserved T cell epitopes, eliminating regulatory T cell epitopes, and optimizing antigens for immunogenicity and protection against disease. iVAX has been applied to vaccine development programs for emerging infectious diseases, cancer antigens and biodefense targets. Several iVAX vaccine design projects have had success in pre-clinical studies in animal models and are progressing toward clinical studies. The toolkit now incorporates a range of immunoinformatics tools for infectious disease and cancer immunotherapy vaccine design. This article will provide a guide to the iVAX approach to computational vaccinology. PMID:26155959

  17. Optimization of Potent and Selective Quinazolinediones: Inhibitors of Respiratory Syncytial Virus That Block RNA-Dependent RNA-Polymerase Complex Activity

    PubMed Central

    2015-01-01

    A quinazolinedione-derived screening hit 2 was discovered with cellular antiviral activity against respiratory syncytial virus (CPE EC50 = 2.1 μM), moderate efficacy in reducing viral progeny (4.2 log at 10 μM), and marginal cytotoxic liability (selectivity index, SI ∼ 24). Scaffold optimization delivered analogs with improved potency and selectivity profiles. Most notable were compounds 15 and 19 (EC50 = 300–500 nM, CC50 > 50 μM, SI > 100), which significantly reduced viral titer (>400,000-fold), and several analogs were shown to block the activity of the RNA-dependent RNA-polymerase complex of RSV. PMID:25399509

  18. End-to-end sensor simulation for spectral band selection and optimization with application to the Sentinel-2 mission.

    PubMed

    Segl, Karl; Richter, Rudolf; Küster, Theres; Kaufmann, Hermann

    2012-02-01

    An end-to-end sensor simulation is a proper tool for the prediction of the sensor's performance over a range of conditions that cannot be easily measured. In this study, such a tool has been developed that enables the assessment of the optimum spectral resolution configuration of a sensor based on key applications. It employs the spectral molecular absorption and scattering properties of materials that are used for the identification and determination of the abundances of surface and atmospheric constituents and their interdependence on spatial resolution and signal-to-noise ratio as a basis for the detailed design and consolidation of spectral bands for the future Sentinel-2 sensor. The developed tools allow the computation of synthetic Sentinel-2 spectra that form the frame for the subsequent twofold analysis of bands in the atmospheric absorption and window regions. One part of the study comprises the assessment of optimal spatial and spectral resolution configurations for those bands used for atmospheric correction, optimized with regard to the retrieval of aerosols, water vapor, and the detection of cirrus clouds. The second part of the study presents the optimization of thematic bands, mainly driven by the spectral characteristics of vegetation constituents and minerals. The investigation is performed for different wavelength ranges because most remote sensing applications require the use of specific band combinations rather than single bands. The results from the important "red-edge" and the "short-wave infrared" domains are presented. The recommended optimum spectral design predominantly confirms the sensor parameters given by the European Space Agency. The system is capable of retrieving atmospheric and geobiophysical parameters with enhanced quality compared to existing multispectral sensors. Minor spectral changes of single bands are discussed in the context of typical remote sensing applications, supplemented by the recommendation of a few new bands for

  19. Optimization of an extraction protocol for organic matter from soils and sediments using high resolution mass spectrometry: selectivity and biases

    NASA Astrophysics Data System (ADS)

    Chu, R. K.; Tfaily, M. M.; Tolic, N.; Kyle, J. E.; Robinson, E. R.; Hess, N. J.; Paša-Tolić, L.

    2015-12-01

    Soil organic matter (SOM) is a complex mixture of above- and belowground plant litter and microbial residues, and is a key reservoir for carbon (C) and nutrient biogeochemical cycling in different ecosystems. A limited understanding of the molecular composition of SOM limits the ability to routinely decipher chemical processes within soil and to predict how terrestrial C fluxes will respond to changing climatic conditions. Here, we show that the choice of solvent can be used to selectively extract different compositional fractions from SOM, either to target a specific class of compounds or to gain a better understanding of the entire composition of the soil sample, using 12T Fourier transform ion cyclotron resonance mass spectrometry. Specifically, we found that hexane and chloroform were selective for lipid-like compounds with very low O:C ratios; water was selective for carbohydrates with high O:C ratios; acetonitrile preferentially extracts lignin, condensed structures, and tannin polyphenolic compounds with O:C > 0.5; and methanol has higher selectivity towards lignin and lipid compounds characterized by relatively low O:C < 0.5. Together, hexane, chloroform, methanol, acetonitrile and water increase the number and types of organic molecules extracted from soil across a broader range of chemically diverse soil types. Since each solvent extracts a selective group of compounds, using a suite of solvents with varying polarity results in a more comprehensive representation of the diversity of organic molecules present in soil and a better representation of the whole spectrum of available substrates for microorganisms. Moreover, we have developed a sequential extraction protocol that permits sampling diverse classes of organic compounds while minimizing ionization competition during ESI, while increasing sample throughput and decreasing sample volume. This allowed us to hypothesize about possible chemical reactions relating classes of organic molecules that reflect abiotic

  20. Optimization of cell line development in the GS-CHO expression system using a high-throughput, single cell-based clone selection system.

    PubMed

    Nakamura, Tsuyoshi; Omasa, Takeshi

    2015-09-01

    Therapeutic antibodies are commonly produced by high-expressing, clonal and recombinant Chinese hamster ovary (CHO) cell lines. Currently, CHO cells dominate as a commercial production host because of their ease of use, established regulatory track record, and safety profile. CHO-K1SV is a suspension, protein-free-adapted CHO-K1-derived cell line employing the glutamine synthetase (GS) gene expression system (GS-CHO expression system). The selection of high-producing mammalian cell lines is a crucial step in process development for the production of therapeutic antibodies. In general, cloning by the limiting dilution method is used to isolate high-producing monoclonal CHO cells. However, the limiting dilution method is time consuming and has a low probability of monoclonality. To minimize the duration and increase the probability of obtaining high-producing clones with high monoclonality, an automated single cell-based clone selector, the ClonePix FL system, is available. In this study, we applied the high-throughput ClonePix FL system for cell line development using CHO-K1SV cells and investigated efficient conditions for single cell-based clone selection. CHO-K1SV cell growth at the pre-picking stage was improved by optimizing the formulation of semi-solid medium. The efficiency of picking and cell growth at the post-picking stage was improved by optimization of the plating time without decreasing the diversity of clones. The conditions for selection, including the medium formulation, were the most important factors for the single cell-based clone selection system to construct a high-producing CHO cell line. PMID:25792187

  1. A distributed multichannel demand-adaptive P2P VoD system with optimized caching and neighbor-selection

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Chen, Minghua; Parekh, Abhay; Ramchandran, Kannan

    2011-09-01

    We design a distributed multi-channel P2P Video-on-Demand (VoD) system using "plug-and-play" helpers. Helpers are heterogeneous "micro-servers" with limited storage, bandwidth and number of users they can serve simultaneously. Our proposed system has the following salient features: (1) it jointly optimizes over helper-user connection topology, video storage distribution and transmission bandwidth allocation; (2) it minimizes server load, and is adaptable to varying supply and demand patterns across multiple video channels irrespective of video popularity; and (3) it is fully distributed and requires little or no maintenance overhead. The combinatorial nature of the problem and the system demand for distributed algorithms makes the problem uniquely challenging. By utilizing Lagrangian decomposition and Markov chain approximation based arguments, we address this challenge by designing two distributed algorithms running in tandem: a primal-dual storage and bandwidth allocation algorithm and a "soft-worst-neighbor-choking" topology-building algorithm. Our scheme provably converges to a near-optimal solution, and is easy to implement in practice. Packet-level simulation results show that the proposed scheme achieves minimum server load under highly heterogeneous combinations of supply and demand patterns, and is robust to system dynamics of user/helper churn, user/helper asynchrony, and random delays in the network.

  2. Development of functional beverages from blends of Hibiscus sabdariffa extract and selected fruit juices for optimal antioxidant properties.

    PubMed

    Ogundele, Oluwatoyin M A; Awolu, Olugbenga O; Badejo, Adebanjo A; Nwachukwu, Ifeanyi D; Fagbemi, Tayo N

    2016-09-01

    The demand for functional foods and drinks with health benefits is on the increase. The synergistic effect from mixing two or more of such drinks cannot be overemphasized. This study was carried out to formulate and investigate the effects of blends of two or more of pineapple juice, orange juice, carrot juice, and Hibiscus sabdariffa extract (HSE) on the antioxidant properties of the juice formulations, in order to obtain a combination with optimal antioxidant properties. Experimental design was carried out using the optimal mixture model of response surface methodology, which generated twenty experimental runs with antioxidant properties as the responses. The DPPH (1,1-diphenyl-2-picrylhydrazyl) and ABTS [2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid)] radical scavenging abilities, ferric reducing antioxidant potential (FRAP), vitamin C, total phenolics, and total carotenoids contents of the formulations were evaluated as tests of antioxidant property. In all the mixtures, formulations having HSE as part of the mixture showed the highest antioxidant potential. The statistical analyses, however, showed that the formulation containing pineapple, carrot, orange, and HSE at 40.00, 16.49, 17.20, and 26.30%, respectively, produced optimum antioxidant potential and was acceptable to a research laboratory guidance panel, making it a viable ingredient for the production of functional beverages possessing important antioxidant properties with potential health benefits. PMID:27625770

  3. Using multi-criteria decision making for selection of the optimal strategy for municipal solid waste management.

    PubMed

    Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica

    2016-09-01

    Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, i.e. SAW (simple additive weighting method) and TOPSIS (technique for order preference by similarity to ideal solution), respectively, were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies. Contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management treatment strategies and they were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. Then, the proposed strategies were ranked in form of tables and diagrams obtained based on both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. PMID:27354012
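
    The TOPSIS step itself is compact; a minimal sketch is shown below, with a placeholder decision matrix, weights and benefit/cost flags standing in for the IWM2-derived strategy parameters.

```python
# Minimal TOPSIS ranking sketch with placeholder inputs.
import numpy as np

def topsis(matrix, weights, benefit):
    """Return relative-closeness scores (higher means closer to the ideal solution)."""
    M = np.asarray(matrix, float)
    V = (M / np.sqrt((M ** 2).sum(axis=0))) * weights        # vector-normalized, weighted matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

strategies = [[420, 3.1, 12], [380, 2.7, 15], [450, 2.2, 9]]  # rows: candidate WMS (placeholder scores)
weights    = np.array([0.5, 0.3, 0.2])
benefit    = np.array([False, False, True])                   # e.g. lower emissions/energy, higher recovery
scores = topsis(strategies, weights, benefit)
print("closeness scores:", np.round(scores, 3))
print("ranking (best strategy first):", np.argsort(scores)[::-1] + 1)
```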

  4. Bioeconomic factors of beef heifer maturity to consider when establishing criteria to optimally select and/or retain herd replacements.

    PubMed

    Stockton, M C; Wilson, R K; Feuz, D M; Stalker, L A; Funston, R N

    2014-10-01

    Understanding the biology of heifer maturity and its relationship to calving difficulty and subsequent breeding success is a vital step in building a bioeconomic model to identify optimal production and profitability. A limited dependent variable probit model is used to quantify the effects of heifer maturity, measured by a maturity index (MI), on dystocia and second pregnancy. The MI accounts for heifer age, birth BW, prebreeding BW, nutrition level, and dam size and age, and is found to be inversely related to dystocia occurrence. On average there is a 2.2% increase in the probability of dystocia with every 1 point drop in the MI between MI scores of 50 and 70. Statistically, MI does not directly alter second pregnancy rate; however, dystocia does. The presence of dystocia reduced second pregnancy rates by 10.67%. Using the probability of dystocia predicted from the MI in the sample, it is found that, on average, every 1 point increase in MI added 0.62% to the probability of a second pregnancy over the range represented by the data. Relationships among MI, dystocia, and second pregnancy are nonlinear and exhibit diminishing marginal effects. These relationships indicate that optimal production and profitability occur at varying maturities, which are altered by animal type, economic environment, production system, and management regime. With these captured relationships, any single group of heifers may be ranked by profitability given their physical characteristics and the applicable production, management, and economic conditions. PMID:25149330
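
    A hedged illustration of the modeling approach, not the paper's data or coefficients, is sketched below: a probit model of dystocia on a simulated maturity index, with the average marginal effect reported in the same percentage-points-per-MI-point form as the findings above.

```python
# Illustrative probit with simulated data; the latent relationship is invented.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(5)
mi = rng.uniform(40, 80, size=2000)                 # simulated maturity index scores
p_true = norm.cdf(2.0 - 0.05 * mi)                  # invented latent dystocia probability
dystocia = rng.binomial(1, p_true)

fit = sm.Probit(dystocia, sm.add_constant(mi)).fit(disp=0)
print(fit.get_margeff().summary())                  # average marginal effect of MI on P(dystocia)
```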

  5. Selection and optimization of transfection enhancer additives for increased virus-like particle production in HEK293 suspension cell cultures.

    PubMed

    Cervera, Laura; Fuenmayor, Javier; González-Domínguez, Irene; Gutiérrez-Granados, Sonia; Segura, Maria Mercedes; Gòdia, Francesc

    2015-12-01

    The manufacturing of biopharmaceuticals in mammalian cells typically relies on the use of stable producer cell lines. However, in recent years, transient gene expression has emerged as a suitable technology for rapid production of biopharmaceuticals. Transient gene expression is particularly well suited for early developmental phases, where several potential therapeutic targets need to be produced and tested in vivo. As a relatively new bioprocessing modality, a number of opportunities exist for improving cell culture productivity upon transient transfection. For instance, several compounds have shown positive effects on transient gene expression. These transfection enhancers either facilitate entry of PEI/DNA transfection complexes into the cell or nucleus or increase levels of gene expression. In this work, the potential of combining transfection enhancers to increase Gag-based virus-like particle production levels upon transfection of suspension-growing HEK 293 cells is evaluated. Using a Plackett-Burman design of experiments, the effect of eight transfection enhancers is first tested: trichostatin A, valproic acid, sodium butyrate, dimethyl sulfoxide (DMSO), lithium acetate, caffeine, hydroxyurea, and nocodazole. An optimal combination of compounds exhibiting the highest effect on gene expression levels was subsequently identified using a response surface experimental design. The optimal combination consisted of the addition of 20 mM lithium acetate, 3.36 mM valproic acid, and 5.04 mM caffeine, which increased VLP production levels 3.8-fold while maintaining cell culture viability at 94%. PMID:26278533

  6. Identification of a novel selective H1-antihistamine with optimized pharmacokinetic properties for clinical evaluation in the treatment of insomnia.

    PubMed

    Moree, Wilna J; Li, Bin-Feng; Zamani-Kord, Said; Yu, Jinghua; Coon, Timothy; Huang, Charles; Marinkovic, Dragan; Tucci, Fabio C; Malany, Siobhan; Bradbury, Margaret J; Hernandez, Lisa M; Wen, Jianyun; Wang, Hua; Hoare, Samuel R J; Petroski, Robert E; Jalali, Kayvon; Yang, Chun; Sacaan, Aida; Madan, Ajay; Crowe, Paul D; Beaton, Graham

    2010-10-01

    Analogs of the known H(1)-antihistamine R-dimethindene with suitable selectivity for key GPCRs, P450 enzymes and hERG channel were assessed for metabolism profile and in vivo properties. Several analogs were determined to exhibit diverse metabolism. One of these compounds, 10a, showed equivalent efficacy in a rat EEG/EMG model to a previously identified clinical candidate and a potentially superior pharmacokinetic profile as determined from a human microdose study. PMID:20800486

  7. Optimizing limbic selective D2/D3 receptor occupancy by risperidone: a [123I]-epidepride SPET study.

    PubMed

    Bressan, Rodrigo A; Erlandsson, Kjell; Jones, Hugh M; Mulligan, Rachel S; Ell, Peter J; Pilowsky, Lyn S

    2003-02-01

    Selective action at limbic cortical dopamine D2-like receptors is a putative mechanism of atypical antipsychotic efficacy with few extrapyramidal side effects. Although risperidone is an atypical antipsychotic with high affinity for D2 receptors, low-dose risperidone treatment is effective without inducing extrapyramidal symptoms. The objective was to test the hypothesis that treatment with low-dose risperidone results in 'limbic selective' D2/D3 receptor blockade in vivo. Dynamic single photon emission tomography (SPET) sequences were obtained over 5 hours after injection of [123I]-epidepride (approximately 150 MBq), using a high-resolution triple-headed brain scanner (Marconi Prism 3000XP). Kinetic modelling was performed using the simplified reference region model to obtain binding potential values. Estimates of receptor occupancy were made relative to a normal volunteer control group (n = 5). Six patients treated with low-dose risperidone (mean = 2.6 mg) showed moderate levels of D2/D3 occupancy in striatum (49.9%), but higher levels of D2/D3 occupancy in thalamus (70.8%) and temporal cortex (75.2%). Occupancy values in striatum were significantly different from thalamus (F (1,4) = 26.3, p < 0.01) and from temporal cortex (F (1,4) = 53.4, p < 0.01). This is the first study to evaluate striatal and extrastriatal occupancy of risperidone. Low dose treatment with risperidone achieves a similar selectivity of limbic cortical over striatal D2/D3 receptor blockade to that of atypical antipsychotics with lower D2/D3 affinity such as clozapine, olanzapine and quetiapine. This finding is consistent with the relevance of 'limbic selective' D2/D3 receptor occupancy to the therapeutic efficacy of atypical antipsychotic drugs. PMID:12544369

  8. Rationale and design for an investigation to optimize selective serotonin reuptake inhibitor treatment for pregnant women with depression.

    PubMed

    Avram, M J; Stika, C S; Rasmussen-Torvik, L J; Ciolino, J D; Pinheiro, E; George, A L; Wisner, K L

    2016-07-01

    The physiological changes of pregnancy can affect the pharmacokinetics of a drug, thereby affecting its dose requirements. Because pharmacokinetic (PK) studies in pregnant women have rarely been conducted, evidence-based dosing adjustments are seldom available. In particular, despite the fact that the use of antidepressants has become increasingly common, pregnancy-associated PK changes of the selective serotonin reuptake inhibitors (SSRIs) are largely unknown. PMID:27037844

  9. Optimizing liquid waste treatment processing in PWRs: focus on modeling of the variation of ion-exchange resins selectivity coefficients

    SciTech Connect

    Gressier, Frederic; Van der Lee, Jan; Schneider, Helene; Bachet, Martin; Catalette, Hubert

    2007-07-01

    A bibliographic survey has highlighted the essential role of selectivity on resin efficiency, especially the variation of selectivity coefficients as a function of the resin saturation state and the operating conditions. This phenomenon has been experimentally confirmed but is not yet implemented into an ion-exchange model specific for resins. This paper reviews the state of the art in predicting the sorption capacity of ion-exchange resins. Different models accounting for ion activities inside the resin phase are available. Moreover, a comparison between the values found in the literature and our results has been made. The results of sorption experiments of cobalt chloride on a strong cationic gel-type resin used in French PWRs are presented. The graph describing the variation of the selectivity coefficient with respect to cobalt equivalent fraction is drawn. The parameters determined by the analysis of this graph are injected into a new physico-chemical law. Implementation of this model in the chemical speciation simulation code CHESS enables study of the overall effect of this approach for batch sorption. (authors)

  10. Discovery and optimization of 1,7-disubstituted-2,2-dimethyl-2,3-dihydroquinazolin-4(1H)-ones as potent and selective PKCθ inhibitors.

    PubMed

    Katoh, Taisuke; Takai, Takafumi; Yukawa, Takafumi; Tsukamoto, Tetsuya; Watanabe, Etsurou; Mototani, Hideyuki; Arita, Takeo; Hayashi, Hiroki; Nakagawa, Hideyuki; Klein, Michael G; Zou, Hua; Sang, Bi-Ching; Snell, Gyorgy; Nakada, Yoshihisa

    2016-06-01

    A high-throughput screening campaign helped us to identify an initial lead compound (1) as a protein kinase C-θ (PKCθ) inhibitor. Using the docking model of compound 1 bound to PKCθ as a model, structure-based drug design was employed and two regions were identified that could be explored for further optimization, i.e., (a) a hydrophilic region around Thr442, unique to PKC family, in the inner part of the hinge region, and (b) a lipophilic region at the forefront of the ethyl moiety. Optimization of the hinge binder led us to find 1,3-dihydro-2H-imidazo[4,5-b]pyridin-2-one as a potent and selective hinge binder, which resulted in the discovery of compound 5. Filling the lipophilic region with a suitable lipophilic substituent boosted PKCθ inhibitory activity and led to the identification of compound 10. The co-crystal structure of compound 10 bound to PKCθ confirmed that both the hydrophilic and lipophilic regions were fully utilized. Further optimization of compound 10 led us to compound 14, which demonstrated an improved pharmacokinetic profile and inhibition of IL-2 production in a mouse. PMID:27117263

  11. Pharmaceutical Optimization of Peptide Toxins for Ion Channel Targets: Potent, Selective, and Long-Lived Antagonists of Kv1.3.

    PubMed

    Murray, Justin K; Qian, Yi-Xin; Liu, Benxian; Elliott, Robin; Aral, Jennifer; Park, Cynthia; Zhang, Xuxia; Stenkilsson, Michael; Salyers, Kevin; Rose, Mark; Li, Hongyan; Yu, Steven; Andrews, Kristin L; Colombero, Anne; Werner, Jonathan; Gaida, Kevin; Sickmier, E Allen; Miu, Peter; Itano, Andrea; McGivern, Joseph; Gegg, Colin V; Sullivan, John K; Miranda, Les P

    2015-09-10

    To realize the medicinal potential of peptide toxins, naturally occurring disulfide-rich peptides, as ion channel antagonists, more efficient pharmaceutical optimization technologies must be developed. Here, we show that the therapeutic properties of multiple cysteine toxin peptides can be rapidly and substantially improved by combining direct chemical strategies with high-throughput electrophysiology. We applied whole-molecule, brute-force, structure-activity analoging to ShK, a peptide toxin from the sea anemone Stichodactyla helianthus that inhibits the voltage-gated potassium ion channel Kv1.3, to effectively discover critical structural changes for 15× selectivity against the closely related neuronal ion channel Kv1.1. Subsequent site-specific polymer conjugation resulted in an exquisitely selective Kv1.3 antagonist (>1000× over Kv1.1) with picomolar functional activity in whole blood and a pharmacokinetic profile suitable for weekly administration in primates. The pharmacological potential of the optimized toxin peptide was demonstrated by potent and sustained inhibition of cytokine secretion from T cells, a therapeutic target for autoimmune diseases, in cynomolgus monkeys. PMID:26288216

  12. Experimental parameters optimization of instrumental neutron activation analysis in order to determine selected elements in some industrial soils in Turkey

    NASA Astrophysics Data System (ADS)

    Haciyakupoglu, Sevilay; Nur Esen, Ayse; Erenturk, Sema

    2014-08-01

    The purpose of this study is to optimize the experimental parameters for analysis of the soil matrix by instrumental neutron activation analysis and to quantitatively determine barium, cerium, lanthanum, rubidium, scandium and thorium in soil samples collected from industrialized urban areas near Istanbul. Samples were irradiated in the TRIGA MARK II Research Reactor of Istanbul Technical University. Two types of reference materials were used to check the accuracy of the applied method. The achieved results were found to be in compliance with the certified values of the reference materials. The calculated En numbers for the mentioned elements were found to be less than 1. The presented data on element concentrations in soil samples will help to trace pollution as an impact of urbanization and industrialization, as well as provide a database for future studies.

  13. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    PubMed Central

    Ramyachitra, D.; Sofia, M.; Manikandan, P.

    2015-01-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. Thus the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  15. Alcohol from whey permeate: strain selection, temperature, and medium optimization. [Candida pseudotropicalis, Kluyveromyces fragilis, and K. lactis]

    SciTech Connect

    Vienne, P.; Von Stockar, U.

    1983-01-01

    A comparative study of shaken flask cultures of some yeast strains capable of fermenting lactose showed no significant differences in alcohol yield among the four best strains. Use of whey permeate concentrated three times did not affect the yields. An optimal growth temperature of 38 °C was determined for K. fragilis NRRL 665. Elemental analysis of both the permeate and the dry cell mass of two strains indicated the possibility of a stoichiometric limitation by nitrogen. Batch cultures in laboratory fermentors confirmed this finding and revealed in addition the presence of a limitation due to growth factors. Both types of limitations could be overcome by adding yeast extract. The maximum productivity of continuous cultures could thus be improved to 5.1 g/l-h. The maximum specific growth rate was of the order of 0.310 h⁻¹. 15 references, 10 figures, 9 tables.

  16. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    NASA Astrophysics Data System (ADS)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of metal end products. Controlling manufacturing-process-induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and to improve the surface integrity of the manufactured parts by utilizing physics-based process simulations and other computational methods. Two different manufacturing processes, one conventional and one advanced, are studied: machining of titanium and nickel-based alloys, and selective laser melting of nickel-based powder alloys. 3D Finite Element (FE) process simulations are developed and experimental data that validate these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining-induced microstructure, including: i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, and iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of
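
    For reference, the JMAK relation cited above takes the form X = 1 - exp(-k t^n); the short sketch below evaluates it with placeholder values of the rate constant k and Avrami exponent n, which are not fitted values from this work.

```python
# JMAK recrystallized-fraction relation with placeholder constants.
import numpy as np

def jmak_fraction(t, k=0.8, n=2.0):
    """Recrystallized volume fraction X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * np.power(t, n))

for t in (0.25, 0.5, 1.0, 2.0):
    print(f"t = {t:4.2f}  ->  recrystallized fraction = {jmak_fraction(t):.3f}")
```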

  17. Optimal dose selection of fluticasone furoate nasal spray for the treatment of seasonal allergic rhinitis in adults and adolescents.

    PubMed

    Martin, Bruce G; Ratner, Paul H; Hampel, Frank C; Andrews, Charles P; Toler, Tom; Wu, Wei; Faris, Melissa A; Philpot, Edward E

    2007-01-01

    Efficacy and safety of fluticasone furoate nasal spray, administered using a unique side-actuated device, were evaluated in patients ≥12 years of age with seasonal allergic rhinitis to determine the optimal dose. A randomized, double-blind, parallel-group, placebo-controlled, dose-ranging study was performed on 641 patients who received placebo (n=128) or fluticasone furoate, 55 μg (n=127), 110 μg (n=127), 220 μg (n=129), or 440 μg (n=130), once daily for 2 weeks. Fluticasone furoate was significantly more effective than placebo for mean changes from baseline over the 2-week treatment period in daily reflective total nasal symptom score (primary end point; p < 0.001 each dose vs. placebo), morning predose instantaneous total nasal symptom score (p < 0.001 each dose versus placebo), daily reflective total ocular symptom score (p ≤ 0.013 each dose versus placebo), and morning predose instantaneous total ocular symptom score (p ≤ 0.019 for the three highest doses versus placebo). The onset of action for fluticasone furoate nasal spray versus placebo was observed 8 hours after the first dose of study medication in the 110 and 440 μg treatment groups (p ≤ 0.032). The incidence of adverse events, results of clinical laboratory tests, and changes in 24-hour urinary cortisol values were similar between active treatment groups and placebo. The preliminary profile of fluticasone furoate is that of a rapidly effective therapy that confers 24-hour efficacy for both nasal and ocular symptoms with once-daily dosing. The 110-μg dose was chosen for phase III development because it achieved statistically significant and clinically meaningful results for all efficacy end points and provided the optimal risk-benefit ratio. PMID:17479608

  18. Molecular design and structural optimization of potent peptide hydroxamate inhibitors to selectively target human ADAM metallopeptidase domain 17.

    PubMed

    Wang, Zhengting; Wang, Lei; Fan, Rong; Zhou, Jie; Zhong, Jie

    2016-04-01

    Human ADAMs (a disintegrin and metalloproteinases) have been established as an attractive therapeutic target of inflammatory disorders such as inflammatory bowel disease (IBD). The ADAM metallopeptidase domain 17 (ADAM17 or TACE) and its close relative ADAM10 are two of the most important ADAM members that share high conservation in sequence, structure and function, but exhibit subtle differences in the regulation of downstream cell signaling events. Here, we describe a systematic protocol that combines computational modeling and experimental assays to discover novel peptide hydroxamate derivatives as potent and selective inhibitors of ADAM17 over ADAM10. In the procedure, a virtual combinatorial library of peptide hydroxamate compounds was generated by exploiting intermolecular interactions involved in crystal and modeled structures. The library was examined in detail to identify a few promising candidates with both high affinity to ADAM17 and low affinity to ADAM10, which were then tested in vitro with an enzyme inhibition assay. Consequently, two peptide hydroxamates, Hxm-Phe-Ser-Asn and Hxm-Phe-Arg-Gln, were found to exhibit potent inhibition against ADAM17 (Ki = 92 and 47 nM, respectively) and strong selectivity for ADAM17 over ADAM10 (∼7-fold and ∼5-fold, S = 0.86 and 0.71, respectively). The structural basis and energetic properties of ADAM17 and ADAM10 interactions with the designed inhibitors were also investigated systematically. It is found that the exquisite network of nonbonded interactions involving the side chains of the peptide hydroxamates is primarily responsible for inhibitor selectivity, while the coordination interactions and hydrogen bonds formed by the hydroxamate moiety and backbone of the peptide hydroxamates confer high affinity to inhibitor binding. PMID:26709988

  19. Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame

    NASA Technical Reports Server (NTRS)

    Le Bail, Karine; Gordon, David

    2010-01-01

    Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they are a good compromise among different criteria, such as statistical stability and sky distribution, as well as having a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and the network are some of the causes, with the CRF showing better stability over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily correct for some sources.
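
    As a minimal illustration of the stability metric, the sketch below computes a non-overlapping Allan variance of a simulated, evenly sampled source-coordinate series; the ICRF2 analysis additionally handles uneven sampling and apparent linear motions, which are omitted here.

```python
# Non-overlapping Allan variance of a simulated source-position time series.
import numpy as np

def allan_variance(series, tau):
    """Allan variance for averaging blocks of length tau (samples)."""
    x = np.asarray(series, float)
    n_blocks = len(x) // tau
    means = x[:n_blocks * tau].reshape(n_blocks, tau).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(6)
stable   = rng.normal(0, 0.1, size=512)             # white-noise-like coordinate residuals (mas)
drifting = stable + 0.002 * np.arange(512)          # same noise plus a slow apparent drift
for tau in (4, 16, 64):
    print(f"tau={tau:3d}  stable={allan_variance(stable, tau):.5f}  drifting={allan_variance(drifting, tau):.5f}")
```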

  20. PC2D simulation and optimization of the selective emitter solar cells fabricated by screen printing phosphoric paste method

    NASA Astrophysics Data System (ADS)

    Jia, Xiaojie; Ai, Bin; Deng, Youjun; Xu, Xinxiang; Peng, Hua; Shen, Hui

    2015-08-01

    On the basis of an accurate PC2D simulation fit to the measured current density versus voltage (J-V) curve of the best selective emitter (SE) solar cell fabricated by the CSG Company using the screen printing phosphoric paste method, we systematically investigated the effect of the gridline, base, selective emitter, back surface field (BSF) layer and surface recombination rate parameters on the performance of the SE solar cell. Among these parameters, we identified the base minority carrier lifetime, the front and back surface recombination rates, and the ratio of the sheet resistances of the heavily and lightly doped regions as the four factors that affect efficiency most strongly. If all the parameters take ideal values, an SE solar cell fabricated on a p-type monocrystalline silicon wafer can reach an efficiency of 20.45%. In addition, the simulation shows that combining finer gridlines with denser gridline spacing, and increasing the number of bus bars while keeping the area ratio low, offers further ways to improve the efficiency.

  1. Polarimetric SAR decomposition parameter subset selection and their optimal dynamic range evaluation for urban area classification using Random Forest

    NASA Astrophysics Data System (ADS)

    Hariharan, Siddharth; Tirodkar, Siddhesh; Bhattacharya, Avik

    2016-02-01

    Urban area classification is important for monitoring ever-increasing urbanization and studying its environmental impact. Two NASA JPL UAVSAR L-band (wavelength: 23 cm) datasets were used in this study for urban area classification. The two datasets differ in their urban area structures, building patterns, and the geometric shapes and sizes of the buildings. In these datasets, some urban areas appear oriented with respect to the radar line of sight (LOS) while other areas appear non-oriented. Roll-invariant polarimetric SAR decomposition parameters were used to classify these urban areas. Random Forest (RF), an ensemble decision tree learning technique, was used; RF performs parameter subset selection as part of its classification procedure. Parameter subsets were obtained and analyzed to infer scattering mechanisms useful for urban area classification. The Cloude-Pottier α, the Touzi dominant scattering amplitude αs1 and the anisotropy A were among the top six important parameters selected for both datasets, although these parameters were ranked differently for the two datasets. The urban area classification using RF was compared with the Support Vector Machine (SVM) and the Maximum Likelihood Classifier (MLC) for both datasets: RF outperforms SVM by 4% and MLC by 12% on Dataset 1, and outperforms SVM and MLC by 3.5% and 11%, respectively, on Dataset 2.
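
    As an illustration of the RF-based parameter ranking described above, the following hedged sketch trains a scikit-learn random forest on synthetic per-pixel features and reports impurity-based importances; the feature names and synthetic labels are stand-ins for the actual polarimetric decomposition parameters and urban/non-urban classes, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative feature names only; the actual study uses roll-invariant polarimetric
# decomposition parameters such as the Cloude-Pottier alpha, Touzi alpha_s1 and anisotropy A.
feature_names = ["alpha", "alpha_s1", "anisotropy_A", "entropy_H", "span_dB", "tau_m"]

rng = np.random.default_rng(42)
X = rng.normal(size=(500, len(feature_names)))          # stand-in for per-pixel parameters
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # urban / non-urban

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

# Rank parameters by mean decrease in impurity, the importance measure RF exposes directly.
ranking = sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1])
print("OOB accuracy:", rf.oob_score_)
for name, importance in ranking:
    print(f"{name:14s} {importance:.3f}")
```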

  2. Effective adsorption of Cr(VI) on mesoporous Fe-functionalized Akadama clay: Optimization, selectivity, and mechanism

    NASA Astrophysics Data System (ADS)

    Ji, Min; Su, Xiao; Zhao, Yingxin; Qi, Wenfang; Wang, Yue; Chen, Guanyi; Zhang, Zhenya

    2015-07-01

    A Japanese volcanic soil, Akadama clay, was functionalized with metal salts (FeCl3, AlCl3, CaCl2, MgCl2, MnCl2) and tested for Cr(VI) removal from aqueous solution. FeCl3 was selected as the most efficient activation agent. To quantitatively investigate the independent and interactive contributions of the influencing factors (solution pH, contact time, adsorbent dose, and initial concentration) to Cr(VI) adsorption onto Fe-functionalized Akadama clay (FFAC), a factorial experimental design was applied. Initial concentration contributed most to the adsorption capacity for Cr(VI) (53.17%), followed by adsorbent dosage (45.15%), contact time (1.12%) and the interaction between adsorbent dosage and contact time (0.37%). The adsorption showed little dependence on solution pH from 2 to 8. Adsorption selectivity for Cr(VI) was evaluated by analyzing the distribution coefficient, electrical double layer theory, and the valence and Pauling ionic radii of co-existing anions (Cl-, SO42-, and PO43-). EDX and XPS analyses demonstrated that the adsorption mechanism of Cr(VI) onto FFAC includes electrostatic attraction, ligand exchange, and redox reaction. Improved treatment of tannery wastewater demonstrates the potential of FFAC as a cost-effective adsorbent for Cr(VI) removal.
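
    For the factorial-design contribution analysis mentioned above, the sketch below shows how percent contributions can be derived from main-effect sums of squares in a two-level full factorial; the factor names, coded design and synthetic response are assumptions for illustration only and do not reproduce the study's measurements.

```python
import itertools
import numpy as np

# Two-level full factorial in coded units (-1/+1) for four factors; the response is synthetic
# and only illustrates how percent contributions are computed from sums of squares.
factors = ["pH", "contact_time", "dose", "initial_conc"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

rng = np.random.default_rng(1)
response = (10 + 4.0 * design[:, 3] + 3.5 * design[:, 2] + 0.3 * design[:, 1]
            + rng.normal(scale=0.2, size=len(design)))

n = len(design)
ss_total = np.sum((response - response.mean()) ** 2)
for name, column in zip(factors, design.T):
    effect = response[column > 0].mean() - response[column < 0].mean()
    ss = n * effect ** 2 / 4.0          # sum of squares of a main effect in a 2-level factorial
    print(f"{name:13s} contribution ~{100 * ss / ss_total:.1f} %")
```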

  3. Theoretical-physics approach to selected problems in engineering electromagnetics: Evolutionary optimization and low-dimensional nanostructures

    NASA Astrophysics Data System (ADS)

    Mikki, Said M.

    Although electromagnetism was developed originally as a branch of theoretical physics, the widespread proliferation of wireless communications and other applications since the turn of the 20th century quickly transformed the field into a well-defined discipline standing by itself as an autonomous part of engineering. This in turn accelerated the growth of both numerical techniques and practical designs, all aiming to improve technology. However, one drawback was the increasing isolation between the practicality of engineering electromagnetism and the depth and sophistication of the tools that had been developed within electromagnetic theory as a branch of theoretical physics. In this dissertation, we propose a new look at engineering electromagnetism from the perspective of theoretical physics. We show that techniques usually associated with abstract physical models in theoretical physics can be successfully employed to enhance our understanding of problems in engineering electromagnetism, and that such adaptations of theoretical methods allow new kinds of applications to be invented. The dissertation is organized in two main parts. Part I is concerned with the particle swarm optimization (PSO) method. We first construct a physical theory for particle swarm optimization and show how this opens the door not just to a deeper understanding of the algorithm itself, but also to new techniques for improving the performance of the method when applied to engineering electromagnetics problems. Inspired by this wider physics-derived perspective, we apply quantum effects to the basic (classical) PSO and derive a new general quantum PSO (QPSO) algorithm suitable for engineering electromagnetism; the new method is shown to be superior to its classical counterpart on some practical problems. A detailed case study formulated extensively in our work is the infinitesimal dipole model (IDM), which can simulate arbitrary antennas
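
    As background for the PSO discussion, the following minimal sketch implements the classical global-best PSO update on a toy objective; it is a generic illustration only, and the physics-inspired and quantum (QPSO) variants developed in the dissertation would replace the velocity-update line marked below.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal classical particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Classical velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy sphere objective; an antenna/IDM cost function would replace it in practice.
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=3)
print(best_x, best_f)
```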

  4. Sea ice concentration from satellite passive microwave algorithms: inter-comparison, validation and selection of an optimal algorithm

    NASA Astrophysics Data System (ADS)

    Ivanova, Natalia; Pedersen, Leif T.; Lavergne, Thomas; Tonboe, Rasmus T.; Saldo, Roberto; Mäkynen, Marko; Heygster, Georg; Rösel, Anja; Kern, Stefan; Dybkjær, Gorm; Sørensen, Atle; Brucker, Ludovic; Shokr, Mohammed; Korosov, Anton; Hansen, Morten W.

    2015-04-01

    Sea ice concentration (SIC) has been derived globally from satellite passive microwave observations since the 1970s by a multitude of algorithms. However, existing datasets and algorithms, although agreeing in the large-scale picture, differ substantially in the details and have disadvantages in summer and fall due to the presence of melt ponds and thin ice. There is thus a need to understand the causes of these differences and to identify the most suitable method to retrieve SIC. Therefore, during the ESA Climate Change Initiative effort, 30 algorithms were implemented, inter-compared and validated against a standardized reference dataset. The algorithms were evaluated over low and high sea ice concentrations and thin ice. Based on the findings, an optimal approach to retrieve sea ice concentration globally for climate purposes was suggested and validated. The algorithm was implemented with atmospheric correction and dynamical tie points in order to produce the final sea ice concentration dataset with per-pixel uncertainties. The issue of melt ponds was addressed in particular because they are interpreted as open water by the algorithms, so SIC can be underestimated by up to 40%. To improve understanding of this issue, melt-pond signatures in AMSR2 images were investigated based on their physical properties with the help of melt pond fraction observations from optical (MODIS and MERIS) and active microwave (SAR) satellite measurements.

  5. Line selection and parameter optimization for trace analysis of uranium in glass matrices by laser-induced breakdown spectroscopy (LIBS).

    PubMed

    Choi, Inhee; Chan, George C-Y; Mao, Xianglei; Perry, Dale L; Russo, Richard E

    2013-11-01

    Laser-induced breakdown spectroscopy (LIBS) has been evaluated for the determination of uranium in real-world samples such as uraninite. NIST Standard Reference Materials were used to evaluate the spectral interferences on detection of uranium. The study addresses the detection limit of LIBS for several uranium lines and their relationship to non-uranium lines, with emphasis on spectral interferences. The data are discussed in the context of optimizing the choice of emission lines for both qualitative and quantitative analyses from a complex spectrum of uranium in the presence of other elements. Temporally resolved spectral emission intensities, line width, and line shifts were characterized to demonstrate the parameter influence on these measurements. The measured uranium line width demonstrates that LIBS acquired with moderately high spectral resolution (e.g., by a 1.25 m spectrometer with a 2400 grooves/mm grating) can be utilized for isotope shift measurements in air at atmospheric pressure with single to tens of parts per million (ppm) level detection limits, as long as an appropriate transition is chosen for analysis. PMID:24160879

  6. Preparation of Mn-Based Selective Catalytic Reduction Catalysts by Three Methods and Optimization of Process Conditions

    PubMed Central

    Xing, Yi; Hong, Chen; Cheng, Bei; Zhang, Kun

    2013-01-01

    Mn-based catalysts enable high NOx conversion in the selective catalytic reduction of NOx with NH3. Three catalyst-production methods, namely co-precipitation, impregnation, and sol-gel, were used in this study to determine the optimum method and parameters. The maximum catalytic activity was found for the catalyst prepared by the sol-gel method with a 0.5 Mn/Ti ratio. The denitrification efficiency using this catalyst was >90%, higher than those of catalysts prepared by the two other methods. The critical temperature of catalytic activity was 353 K. The optimum manganese acetate concentration and weathering time were 0.10 mol and 24 h, respectively. The gas hourly space velocity and O2 concentration were determined to be 12000 h(-1) and 3%, respectively. PMID:24023841

  7. Spatial Niche Segregation of Sympatric Stone Marten and Pine Marten--Avoidance of Competition or Selection of Optimal Habitat?

    PubMed

    Wereszczuk, Anna; Zalewski, Andrzej

    2015-01-01

    Coexistence of ecologically similar species relies on differences in one or more dimensions of their ecological niches, such as space, time and resources on diel and/or seasonal scales. However, niche differentiation may also result from other mechanisms, such as avoidance of high predation pressure or the different adaptations or requirements of ecologically similar species. Stone marten (Martes foina) and pine marten (Martes martes) occur sympatrically over a large area in Central Europe and utilize similar habitats and food; their coexistence is therefore expected to require differentiation in at least one of their niche dimensions or in the mechanisms through which these dimensions are used. To test this hypothesis, we used differences in the species' activity patterns and habitat selection, estimated with a resource selection function (RSF), to predict the relative probability of occurrence of the two species within a large forest complex in the northern part of the stone marten's geographic range. Stone martens were significantly heavier and had longer bodies and better body condition than pine martens. We found weak evidence for temporal niche segregation between the species. Stone and pine martens were both primarily nocturnal, but pine martens were active more frequently during the day and significantly reduced their duration of activity during autumn-winter. Stone and pine martens utilized different habitats and almost completely separated their habitat niches. Stone marten strongly preferred developed areas and avoided meadows and coniferous or deciduous forests. Pine marten preferred deciduous forest and small patches covered by trees, and avoided developed areas and meadows. We conclude that complete habitat segregation of the two marten species facilitates sympatric coexistence in this area. However, spatial niche segregation between these species was more likely due to differences in adaptation to cold climate, avoidance of high predator pressure and/or food
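
    Resource selection functions of the kind used here are commonly fitted as used-versus-available logistic regressions. The sketch below illustrates that general approach on synthetic data; the habitat covariate names and coefficients are hypothetical and merely echo the selection/avoidance patterns described above, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic used (1) vs available (0) locations with hypothetical habitat covariates;
# coefficient signs then indicate habitat selection (+) or avoidance (-) in an RSF.
rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=(n, 3))                      # e.g. [developed_area, deciduous_forest, meadow]
logit = 1.2 * X[:, 0] - 0.8 * X[:, 2]            # stone-marten-like: prefer developed, avoid meadow
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

rsf = LogisticRegression().fit(X, y)
for name, beta in zip(["developed_area", "deciduous_forest", "meadow"], rsf.coef_[0]):
    print(f"{name:17s} beta = {beta:+.2f}")

# Relative probability of occurrence for a new location (exponential RSF form, up to a constant).
w = np.exp(rsf.coef_[0] @ np.array([1.0, 0.0, -0.5]))
print("relative selection weight:", round(float(w), 2))
```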

  9. Optimization of bioenergy crop selection and placement based on a stream health indicator using an evolutionary algorithm.

    PubMed

    Herman, Matthew R; Nejadhashemi, A Pouyan; Daneshvar, Fariborz; Abouali, Mohammad; Ross, Dennis M; Woznicki, Sean A; Zhang, Zhen

    2016-10-01

    The emission of greenhouse gases continues to amplify the impacts of global climate change. This has led to an increased focus on renewable energy sources, such as biofuels, due to their lower environmental impact. However, the production of biofuels can still have negative impacts on water resources. This study introduces a new strategy to optimize bioenergy landscapes while improving stream health for the region. To accomplish this, several hydrological models, including the Soil and Water Assessment Tool, the Hydrologic Integrity Tool, and an Adaptive Neuro-Fuzzy Inference System, were linked to develop stream health predictor models. These models are capable of estimating stream health scores based on the Index of Biological Integrity. The coupled models were used to guide a genetic algorithm that designs watershed-scale bioenergy landscapes. Thirteen bioenergy managements were considered, based on their high probability of adoption by farmers in the study area. Results from two thousand runs identified an optimal placement of bioenergy crops that maximized stream health for the Flint River Watershed in Michigan. The final overall stream health score was 50.93, improved from the current stream health score of 48.19; this improvement is significant at the 1% level. For this final bioenergy landscape, the most frequently used management was miscanthus (27.07%), followed by corn-soybean-rye (19.00%), corn stover-soybean (18.09%), and corn-soybean (16.43%). The technique introduced in this study can be readily modified for use in other regions and can be used by stakeholders and decision makers to develop bioenergy landscapes that maximize stream health in the area of interest. PMID:27420165
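
    The coupling of a watershed model with a genetic algorithm can be pictured with the toy sketch below: each candidate landscape assigns one of the managements to each field unit and is scored by a stand-in fitness function. The field count, scoring table and GA settings are illustrative assumptions, not the SWAT-based stream health predictors used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
N_FIELDS, N_MANAGEMENTS = 50, 13        # hypothetical field units and bioenergy managements

# Stand-in scoring table: in the study this role is played by SWAT-driven stream health
# predictors; here each (field, management) pair just gets a random contribution.
score_table = rng.random((N_FIELDS, N_MANAGEMENTS))

def fitness(layout: np.ndarray) -> float:
    """Mean stream-health-like score of a landscape (one management index per field)."""
    return float(score_table[np.arange(N_FIELDS), layout].mean())

def genetic_algorithm(pop_size=60, generations=200, mut_rate=0.02):
    pop = rng.integers(0, N_MANAGEMENTS, size=(pop_size, N_FIELDS))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]              # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_FIELDS)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mutate = rng.random(N_FIELDS) < mut_rate                 # random-reset mutation
            child[mutate] = rng.integers(0, N_MANAGEMENTS, mutate.sum())
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=fitness)
    return best, fitness(best)

layout, score = genetic_algorithm()
print("best landscape score:", round(score, 3))
```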

  10. Optimal body balance disturbance tolerance skills as a methodological basis for selection of firefighters to solve difficult rescue tasks.

    PubMed

    Jagiełło, Władysław; Wójcicki, Zbigniew; Barczyński, Bartłomiej J; Litwiniuk, Artur; Kalina, Roman Maciej

    2014-01-01

    The aim of this study is a methodology for the optimal selection of firefighters to solve difficult rescue tasks. Twenty-seven firefighters, aged 22-50 years and with 2-27 years of work experience, were analyzed. Body balance disturbance tolerance skills (BBDTS), measured by the 'Rotational Test' (RT), and the time to traverse (back and forth) a 4-meter beam located 3 meters above the ground, the criterion for a simulated rescue task (SRT), were recorded. The RT and SRT were carried out first in a sports tracksuit and then in protective clothing. The four resulting RT and SRT scores form the substantive basis of four rankings. The correlations of the RT and SRT results with 3 criteria for estimating BBDTS and 2 categories ranged from 0.478 (p<0.01) to 0.884 (p<0.01), and with the SRT results reached 0.911 (p<0.01). The basic ranking correlated very highly with the SRT indicators (0.860 and 0.844), whereas only 2 of the 6 RT indicators correlated with it (0.396 and 0.381; p<0.05). There was no simple correlation between the RT and SRT results, but there was an important partial correlation of these variables once the effect was stabilized. The Rotational Test is a simple and easy-to-use tool for measuring body balance disturbance tolerance skills, and the BBDTS typology, together with the results of accurate motor simulations, is an adequate criterion for forecasting the periodic ability of firefighters to solve the most difficult rescue tasks. PMID:24738515

  11. An optimal baseline selection methodology for data-driven damage detection and temperature compensation in acousto-ultrasonics

    NASA Astrophysics Data System (ADS)

    Torres-Arredondo, M.-A.; Sierra-Pérez, Julián; Cabanes, Guénaël

    2016-05-01

    The process of measuring and analysing data from a sensor network distributed over a structural system in order to quantify its condition is known as structural health monitoring (SHM). For the design of a trustworthy health monitoring system, a great deal of information regarding the inherent physical characteristics of the sources and their propagation and interaction across the structure is crucial. Moreover, any SHM system expected to transition to field operation must take into account the influence of environmental and operational changes, which cause modifications in the stiffness and damping of the structure and consequently modify its dynamic behaviour. Accordingly, special attention is paid in this paper to the development of an efficient SHM methodology in which robust signal processing and pattern recognition techniques are integrated for the correct interpretation of complex ultrasonic waves within the context of damage detection and identification. The methodology is based on an acousto-ultrasonics technique in which the discrete wavelet transform is evaluated for feature extraction and selection, linear principal component analysis for data-driven modelling, and self-organising maps for two-level clustering under the principle of local density. Finally, the methodology is demonstrated experimentally, and the results show that all damage cases were detectable and identifiable.
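
    A minimal sketch of such a pipeline, under assumed settings, is shown below: wavelet-band energies as features, linear PCA for the data-driven model, and a clustering stage (k-means is used here only as a simpler stand-in for the paper's self-organising maps). The synthetic signals and the `dwt_energy_features` helper are illustrative, not the authors' implementation.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic records standing in for acousto-ultrasonic signals from the sensor network;
# the second half carries an extra tone to mimic a "damaged" condition.
rng = np.random.default_rng(0)
signals = rng.normal(size=(120, 1024))
signals[60:] += 0.5 * np.sin(np.linspace(0, 40 * np.pi, 1024))

def dwt_energy_features(x, wavelet="db4", level=5):
    """Relative energy of each wavelet decomposition band, a common DWT feature vector."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

features = np.array([dwt_energy_features(s) for s in signals])

# Linear PCA for data-driven modelling, then clustering of the scores.
scores = PCA(n_components=3).fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```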

  12. The optimization of FA/O barrier slurry with respect to removal rate selectivity on patterned Cu wafers

    NASA Astrophysics Data System (ADS)

    Yi, Hu; Yan, Li; Yuling, Liu; Yangang, He

    2016-02-01

    Because the polishing of different materials is required in barrier chemical mechanical planarization (CMP) processes, the development of a barrier slurry with improved removal rate selectivity for Cu/barrier/TEOS would reduce erosion and dishing defects on patterned Cu wafers. In this study, we developed a new benzotriazole-free barrier slurry, named FA/O barrier slurry, containing 20 mL/L of the chelating agent FA/O, 5 mL/L surfactant, and a 1:5 concentration of abrasive particles. By controlling the polishing slurry ingredients, the removal rates of the different materials could be controlled. For process integration considerations, the effect of the FA/O barrier slurry on the dielectric layer of the patterned Cu wafer was investigated. After CMP processing with the FA/O barrier slurry, the characteristics of the dielectric material were tested. The results showed that the dielectric characteristics met the demands of industrial production: the leakage current was on the pA scale, the resistance and capacitance were 2.4 kΩ and 2.3 pF, respectively, and the dishing and erosion defects were both below 30 nm. CMP-processed wafers using this barrier slurry can therefore meet industrial production demands. Project supported by the Special Project Items No. 2 in National Long-Term Technology Development Plan (No. 2009ZX02308), the Natural Science Foundation of Hebei Province (No. F2012202094), and the Doctoral Program Foundation of Xinjiang Normal University Plan (No. XJNUBS1226).

  13. Optimal site selection for siting a solar park using multi-criteria decision analysis and geographical information systems

    NASA Astrophysics Data System (ADS)

    Georgiou, Andreas; Skarlatos, Dimitrios

    2016-07-01

    Among renewable power sources, solar power is rapidly becoming popular because it is inexhaustible, clean, and dependable. It has also become more efficient as the power conversion efficiency of photovoltaic solar cells has increased. Following these trends, solar power will become more affordable in the years to come and considerable investments are to be expected. Regardless of the size of a solar plant, the siting procedure is a crucial factor for its efficiency and financial viability. Many aspects influence such a decision: legal, environmental, technical, and financial, to name a few. This paper describes a general integrated framework to evaluate land suitability for the optimal placement of photovoltaic solar power plants, based on a combination of a geographic information system (GIS), remote sensing techniques, and multi-criteria decision-making methods. An application of the proposed framework to the Limassol district in Cyprus is further illustrated. The combination of a GIS and multi-criteria methods produces an excellent analysis tool that creates an extensive database of spatial and non-spatial data, used to simplify the problem and to support decisions involving multiple criteria. A set of environmental, economic, social, and technical constraints, based on recent Cypriot legislation, European Union policies, and expert advice, identifies the potential sites for solar park installation. The pairwise comparison method in the context of the analytic hierarchy process (AHP) is applied to estimate the criteria weights in order to establish their relative importance in site evaluation. In addition, four different methods to combine the information layers and check their sensitivity were used. The first considered all the criteria as equally important and assigned them equal weight, whereas the others grouped the criteria and graded them according to their perceived importance. The overall suitability of the study
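
    The AHP weighting step can be sketched as follows: a pairwise comparison matrix on Saaty's 1-9 scale is reduced to its principal eigenvector, and a consistency ratio checks the judgments. The criteria names and matrix entries below are hypothetical examples, not the weights derived in the study.

```python
import numpy as np

# Illustrative AHP pairwise comparison matrix for four criteria (Saaty 1-9 scale);
# the actual study weights environmental, economic, social and technical criteria.
criteria = ["solar_irradiance", "slope", "distance_to_grid", "land_use"]
A = np.array([
    [1.0,   3.0,   5.0,   7.0],
    [1/3,   1.0,   3.0,   5.0],
    [1/5,   1/3,   1.0,   3.0],
    [1/7,   1/5,   1/3,   1.0],
])

# Principal-eigenvector weights and Saaty's consistency ratio.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = len(A)
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90                               # random index RI = 0.90 for n = 4
for name, w in zip(criteria, weights):
    print(f"{name:18s} weight = {w:.3f}")
print("consistency ratio:", round(cr, 3))    # below ~0.10 is conventionally acceptable
```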

  14. Development of an optimized dose for coformulation of zidovudine with drugs that select for the K65R mutation using a population pharmacokinetic and enzyme kinetic simulation model.

    PubMed

    Hurwitz, Selwyn J; Asif, Ghazia; Kivel, Nancy M; Schinazi, Raymond F

    2008-12-01

    In vitro selection studies and data from large genotype databases from clinical studies have demonstrated that tenofovir disoproxil fumarate and abacavir sulfate select for the K65R mutation in the human immunodeficiency virus type 1 polymerase region. Furthermore, other novel non-thymine nucleoside reverse transcriptase (RT) inhibitors also select for this mutation in vitro. Studies performed in vitro and in humans suggest that viruses containing the K65R mutation remained susceptible to zidovudine (ZDV) and other thymine nucleoside antiretroviral agents. Therefore, ZDV could be coformulated with these agents as a "resistance repellent" agent for the K65R mutation. The approved ZDV oral dose is 300 mg twice a day (b.i.d.) and is commonly associated with bone marrow toxicity thought to be secondary to ZDV-5'-monophosphate (ZDV-MP) accumulation. A simulation study was performed in silico to optimize the ZDV dose for b.i.d. administration with K65R-selecting antiretroviral agents in virtual subjects using the population pharmacokinetic and cellular enzyme kinetic parameters of ZDV. These simulations predicted that a reduction in the ZDV dose from 300 to 200 mg b.i.d. should produce similar amounts of ZDV-5'-triphosphate (ZDV-TP) associated with antiviral efficacy (>97% overlap) and reduced plasma ZDV and cellular amounts of ZDV-MP associated with toxicity. The simulations also predicted reduced peak and trough amounts of cellular ZDV-TP after treatment with 600 mg ZDV once a day (q.d.) rather than 300 or 200 mg ZDV b.i.d., indicating that q.d. dosing with ZDV should be avoided. These in silico predictions suggest that 200 mg ZDV b.i.d. is an efficacious and safe dose that could delay the emergence of the K65R mutation. PMID:18838591

  15. Multi-residue method for determination of selected neonicotinoid insecticides in honey using optimized dispersive liquid-liquid microextraction combined with liquid chromatography-tandem mass spectrometry.

    PubMed

    Jovanov, Pavle; Guzsvány, Valéria; Franko, Mladen; Lazić, Sanja; Sakač, Marijana; Šarić, Bojana; Banjac, Vojislav

    2013-07-15

    The objective of this study was to develop an analytical method based on optimized dispersive liquid-liquid microextraction (DLLME) as a pretreatment procedure, combined with reversed-phase liquid chromatographic separation on a C18 column with isocratic elution, for the simultaneous MS/MS determination of selected neonicotinoid insecticides in honey. The LC-MS/MS parameters were optimized to provide good chromatographic separation and low detection (LOD, 0.5-1.0 μg kg(-1)) and quantification (LOQ, 1.5-2.5 μg kg(-1)) limits for acetamiprid, clothianidin, thiamethoxam, imidacloprid, dinotefuran, thiacloprid and nitenpyram in honey samples. By varying the type (chloroform, dichloromethane) and volume (0.5-3.0 mL) of the extraction solvent and the volume of the dispersive solvent (acetonitrile; 0.0-1.0 mL), and by mathematical modeling, it was possible to establish the optimal sample preparation procedure. Matrix-matched calibration and blank honey samples spiked in the concentration range of LOQ-100.0 μg kg(-1) were used to compensate for the matrix effect and to fulfill the requirements of SANCO/12495/2011 for the accuracy (R 74.3-113.9%) and precision (expressed in terms of repeatability (RSD 2.74-11.8%) and within-laboratory reproducibility (RSD 6.64-16.2%)) of the proposed method. The rapid (retention times 1.5-9.9 min), sensitive and low-solvent-consumption procedure described in this work provides a reliable, simultaneous, and quantitative method applicable to the routine laboratory analysis of seven neonicotinoid residues in real honey samples. PMID:23622535

  16. Strategies for selecting optimal sampling and work-up procedures for analysing alkylphenol polyethoxylates in effluents from non-activated sludge biofilm reactors.

    PubMed

    Stenholm, Ake; Holmström, Sara; Hjärthag, Sandra; Lind, Ola

    2012-01-01

    Trace-level analysis of alkylphenol polyethoxylates (APEOs) in wastewater containing sludge requires prior removal of contaminants and preconcentration. In this study, the effects of the types of alkylphenols present, their degree of ethoxylation, the biofilm wastewater treatment and the sample matrix on the optimal work-up procedure were investigated. The sampling spot for APEO-containing specimens from an industrial wastewater treatment plant was optimized, including a box surrounding the tubing outlet carrying the wastewater to prevent sedimented sludge from contaminating the collected samples. Following these changes, the sampling precision (in terms of dry matter content) at a point just under the tubing leading from the biofilm reactors was 0.7% RSD. The findings were applied to develop a work-up procedure, for use prior to a high-performance liquid chromatography-fluorescence detection analysis method, capable of quantifying nonylphenol polyethoxylates (NPEOs) and the poorly investigated dinonylphenol polyethoxylates (DNPEOs) at low μg L(-1) concentrations in effluents from non-activated sludge biofilm reactors. The selected multi-step work-up procedure includes lyophilization and pressurized fluid extraction (PFE) followed by strong ion exchange solid phase extraction (SPE). The yields of the combined procedure, according to tests with NP10EO-spiked effluent from a wastewater treatment plant, were in the 62-78% range. PMID:22519096

  17. Determination of Nicotine in Tobacco by Chemometric Optimization and Cation-Selective Exhaustive Injection in Combination with Sweeping-Micellar Electrokinetic Chromatography

    PubMed Central

    Lin, Yi-Hui; Feng, Chia-Hsien; Wang, Shih-Wei; Ko, Po-Yun; Lee, Ming-Hsun; Chen, Yen-Ling

    2015-01-01

    Nicotine is a potent chemical that stimulates the central nervous system and refreshes people. It is also physically addictive and causes dependence. To reduce the harm of tobacco products for smokers, a law was introduced requiring tobacco product containers to be marked with the amounts of nicotine and tar. In this paper, an online stacking capillary electrophoresis (CE) method with cation-selective exhaustive injection sweeping-micellar electrokinetic chromatography (CSEI-sweeping-MEKC) is proposed for the optimized analysis of nicotine in tobacco. A higher-conductivity buffer zone (160 mM phosphate buffer, pH 3) was injected into the capillary, allowing the analytes to be electrokinetically injected at a voltage of 15 kV for 15 min. Using 50 mM sodium dodecyl sulfate and 25% methanol in the sweeping buffer, nicotine was detected with high sensitivity. The optimized conditions, derived from a chemometric approach, provided a 6000-fold increase in nicotine detection sensitivity with the CSEI-sweeping-MEKC method compared to normal CZE. The limit of detection was 0.5 nM for nicotine. The stacking method, in combination with direct injection in which matrix components do not interfere with assay performance, was successfully applied to the detection of nicotine in tobacco samples. PMID:26101695

  18. An efficient variable selection method based on the use of external memory in ant colony optimization. Application to QSAR/QSPR studies.

    PubMed

    Shamsipur, Mojtaba; Zare-Shahabadi, Vali; Hemmateenejad, Bahram; Akhond, Morteza

    2009-07-30

    A novel approach to the use of external memory in an ant colony optimization strategy for solving the descriptor selection problem in quantitative structure-activity/property relationship studies is described. In this approach, several ant colony system algorithms are run to build an external memory containing a number of elite ants. In the next step, all of the elite ants in the external memory are allowed to update the pheromones. The external memory is then emptied, and the updated pheromones are used again by several ant colony system algorithms to build a new external memory. These steps are run iteratively for a certain number of iterations; at the end, the memory contains several top solutions to the problem. The proposed approach was applied to the variable selection problem in quantitative structure-activity/property relationship studies of the rate constants of o-methylation of 36 phenol derivatives and the activities of 31 antifilarial antimycin compounds. The results revealed that both the speed and the solution quality are improved compared to conventional ant colony system algorithms. PMID:19523554
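
    A minimal sketch of this external-memory scheme, under assumed settings, is given below: several small colony runs fill a memory of elite ants, only those elites deposit pheromone, and the cycle repeats. The descriptor count, subset size, scoring function and evaporation rate are illustrative stand-ins for a real QSAR/QSPR objective such as a cross-validated model fit.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DESC, N_SELECT = 40, 6
true_subset = {2, 7, 11, 19, 23, 31}                  # descriptors a toy "model" rewards

def score(subset):
    """Stand-in objective to maximize; a real run would fit and cross-validate a QSAR model."""
    return len(set(int(i) for i in subset) & true_subset) + rng.normal(scale=0.05)

def run_colony(pheromone, n_ants=20):
    """One ant colony system run: each ant picks a descriptor subset by pheromone-biased roulette."""
    p = pheromone / pheromone.sum()
    ants = []
    for _ in range(n_ants):
        subset = rng.choice(N_DESC, size=N_SELECT, replace=False, p=p)
        ants.append((float(score(subset)), tuple(int(i) for i in subset)))
    return ants

pheromone = np.ones(N_DESC)
for _ in range(30):                                   # outer iterations
    memory = []                                       # external memory of elite ants
    for _ in range(5):                                # several independent colony runs
        memory.extend(sorted(run_colony(pheromone), reverse=True)[:3])
    pheromone *= 0.8                                  # evaporation
    for s, subset in memory:                          # only elite ants deposit pheromone
        pheromone[list(subset)] += max(s, 0.0)

best_score, best_subset = sorted(memory, reverse=True)[0]
print("best subset in final memory:", sorted(best_subset), "score:", round(best_score, 2))
```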

  19. Selection of Specific Protein Binders for Pre-Defined Targets from an Optimized Library of Artificial Helicoidal Repeat Proteins (alphaRep)

    PubMed Central

    Chevrel, Anne; Graille, Marc; Fourati-Kammoun, Zaineb; Desmadril, Michel; van Tilbeurgh, Herman; Minard, Philippe

    2013-01-01

    We previously designed a new family of artificial proteins named αRep, based on a subgroup of thermostable helicoidal HEAT-like repeats. We have now assembled a large optimized αRep library. In this library, the side chains at each variable position are not fully randomized but instead encoded by a distribution of codons based on the natural frequency of side chains in the natural repeats family. The library construction is based on a polymerization of micro-genes and therefore results in a distribution of proteins with a variable number of repeats. We improved the library construction process using a “filtration” procedure to retain only fully coding modules, which were recombined to recreate sequence diversity. The final library, named Lib2.1, contains 1.7×10^9 independent clones. Here, we used phage display to select, from the previously described library or from the new library, new specific αRep proteins binding to four different non-related predefined protein targets. Specific binders were selected in each case. The results show that binders of various sizes are selected, including relatively long sequences with up to 7 repeats. ITC-measured affinities vary, with Kd values ranging from the micromolar to the nanomolar range. The formation of complexes is associated with a significant thermal stabilization of the bound target protein. The crystal structures of two complexes between αRep proteins and their cognate targets were solved and show that the new interfaces are established by the variable surfaces of the repeated modules, as well as by the variable N-cap residues. These results suggest that the αRep library is a new and versatile source of tight and specific binding proteins with favorable biophysical properties. PMID:24014183

  20. Optimization of 4D vessel-selective arterial spin labeling angiography using balanced steady-state free precession and vessel-encoding.

    PubMed

    Okell, Thomas W; Schmitt, Peter; Bi, Xiaoming; Chappell, Michael A; Tijssen, Rob H N; Sheerin, Fintan; Miller, Karla L; Jezzard, Peter

    2016-06-01

    Vessel-selective dynamic angiograms provide a wealth of useful information about the anatomical and functional status of arteries, including information about collateral flow and blood supply to lesions. Conventional x-ray techniques are invasive and carry some risks to the patient, so non-invasive alternatives are desirable. Previously, non-contrast dynamic MRI angiograms based on arterial spin labeling (ASL) have been demonstrated using both spoiled gradient echo (SPGR) and balanced steady-state free precession (bSSFP) readout modules, but no direct comparison has been made, and bSSFP optimization over a long readout period has not been fully explored. In this study bSSFP and SPGR are theoretically and experimentally compared for dynamic ASL angiography. Unlike SPGR, bSSFP was found to have a very low ASL signal attenuation rate, even when a relatively large flip angle and short repetition time were used, leading to a threefold improvement in the measured signal-to-noise ratio (SNR) efficiency compared with SPGR. For vessel-selective applications, SNR efficiency can be further improved over single-artery labeling methods by using a vessel-encoded pseudo-continuous ASL (VEPCASL) approach. The combination of a VEPCASL preparation with a time-resolved bSSFP readout allowed the generation of four-dimensional (4D; time-resolved three-dimensional, 3D) vessel-selective cerebral angiograms in healthy volunteers with 59 ms temporal resolution. Good quality 4D angiograms were obtained in all subjects, providing comparable structural information to 3D time-of-flight images, as well as dynamic information and vessel selectivity, which was shown to be high. A rapid 1.5 min dynamic two-dimensional version of the sequence yielded similar image features and would be suitable for a busy clinical protocol. Preliminary experiments with bSSFP that included the extracranial vessels showed signal loss in regions of poor magnetic field homogeneity. However, for intracranial vessel-selective