Science.gov

Sample records for optimal tuner selection

  1. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
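
    For readers who want a concrete picture of the search loop described above, here is a minimal sketch, assuming a toy linear engine model with four health parameters and two sensors. The matrices, the random-walk tuner dynamics, and the exhaustive subset enumeration are illustrative assumptions only; the paper's routine searches over general linear combinations of health parameters with an iterative multi-variable procedure rather than plain subsets.

      # Sketch: score candidate tuner subsets by the theoretical steady-state
      # Kalman filter error (toy model, not the NASA implementation).
      from itertools import combinations
      import numpy as np
      from scipy.linalg import solve_discrete_are

      rng = np.random.default_rng(0)
      n_health, n_sensors = 4, 2
      H = rng.normal(size=(n_sensors, n_health))  # sensor sensitivities to health params
      R = 0.01 * np.eye(n_sensors)                # measurement noise covariance
      Qh = np.diag([4.0, 1.0, 2.0, 0.5]) * 1e-4   # drift rates of the health parameters

      best_subset, best_mse = None, np.inf
      for subset in combinations(range(n_health), n_sensors):
          V = np.eye(n_health)[:, subset]         # tuner vector q = V.T @ h
          A = np.eye(n_sensors)                   # tuners modeled as a random walk
          C = H @ V                               # measurement matrix seen by the filter
          Q = V.T @ Qh @ V                        # process noise of the tuner states
          # Steady-state filter error covariance from the (dual) discrete Riccati eq.
          P = solve_discrete_are(A.T, C.T, Q, R)
          # Score = filter variance of the reconstructed health vector plus a crude
          # bias proxy for the drift of parameters the tuners cannot capture.
          mse = np.trace(V @ P @ V.T) + np.trace(Qh @ (np.eye(n_health) - V @ V.T))
          if mse < best_mse:
              best_subset, best_mse = subset, mse

      print(f"selected tuner subset {best_subset}, theoretical MSE proxy {best_mse:.4g}")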

  2. Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay

    2012-01-01

    An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.

  3. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  4. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.

  5. Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay

    2012-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides a tighter control bound on thrust over the entire life cycle of the engine than is achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing the tighter thrust control of the MBEC approach to be verified. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented; this could provide benefits over the simple acceleration schedule that is currently used in engine control architectures.
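
    As a minimal sketch of the MBEC idea, the loop below closes the control on an estimated thrust produced by an on-board model, rather than on a measured proxy such as fan speed. The one-state engine, the thrust gain, and the integral-only controller are invented for illustration; CMAPSS40k and the OTKF are far richer, and the OTKF sensor correction of the on-board model is omitted here.

      # Toy MBEC loop: control estimated thrust from an on-board model.
      a, b = 0.95, 0.05         # one-state engine: x[k+1] = a*x[k] + b*u[k]
      thrust_gain = 2.0         # on-board model: thrust_hat = thrust_gain * x_hat
      target_thrust = 1.0
      ki = 0.5                  # integral gain on the thrust error

      x, x_hat, u, integ = 0.0, 0.0, 0.0, 0.0
      for k in range(200):
          x = a * x + b * u                    # "real" engine
          x_hat = a * x_hat + b * u            # on-board model (an OTKF would also
                                               # correct x_hat from sensor residuals)
          thrust_hat = thrust_gain * x_hat
          integ += target_thrust - thrust_hat  # integrate the thrust error
          u = ki * integ                       # command drives thrust_hat to target

      print(f"estimated thrust after 200 steps: {thrust_gain * x_hat:.3f}")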

  6. Model-Based Control of a Nonlinear Aircraft Engine Simulation using an Optimal Tuner Kalman Filter Approach

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob

    2013-01-01

    This paper covers the development of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations using the MBEC to provide a stall margin limit for the controller protection logic are presented that could provide benefits over a simple acceleration schedule that is currently used in traditional engine control architectures.

  7. Test of a coaxial blade tuner at HTS FNAL

    SciTech Connect

    Pischalnikov, Y.; Barbanotti, S.; Harms, E.; Hocker, A.; Khabiboulline, T.; Schappert, W.; Bosotti, A.; Pagani, C.; Paparella, R.; /LASA, Segrate

    2011-03-01

    A coaxial blade tuner has been selected for the 1.3 GHz SRF cavities of the Fermilab SRF Accelerator Test Facility. Results from tuner cold tests in the Fermilab Horizontal Test Stand are presented. Fermilab is constructing the SRF Accelerator Test Facility, a facility for accelerator physics research and development. This facility will contain a total of six cryomodules, each containing eight 1.3 GHz nine-cell elliptical cavities. Each cavity will be equipped with a Slim Blade Tuner designed by INFN Milan. The blade tuner incorporates both a stepper motor and piezo actuators to allow for both slow and fast cavity tuning. The stepper motor allows the cavity frequency to be statically tuned over a range of 500 kHz with an accuracy of several Hz. The piezos provide up to 2 kHz of dynamic tuning for compensation of Lorentz force detuning and variations in the He bath pressure. The first eight blade tuners were built at INFN Milan, but the remainder are being manufactured commercially following the INFN design. To date, more than 40 of the commercial tuners have been delivered.
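
    A sketch of how such a slow/fast tuner pair might divide the work: the stepper motor removes the static offset in quantized steps, and the piezos absorb the fast residual within their limited authority. The Hz-per-step constant and the ±1 kHz piezo authority used below are assumed placeholders, not INFN calibration figures (real tuners also exhibit hysteresis, ignored here).

      HZ_PER_STEP = 1.5          # hypothetical stepper calibration, Hz per motor step
      PIEZO_RANGE_HZ = 1000.0    # assumed fast-tuner authority, +/-1 kHz

      def split_correction(detuning_hz):
          """Return (motor_steps, piezo_hz) that together cancel the detuning."""
          steps = round(detuning_hz / HZ_PER_STEP)       # coarse, quantized part
          residual = detuning_hz - steps * HZ_PER_STEP   # what the stepper cannot reach
          piezo = max(-PIEZO_RANGE_HZ, min(PIEZO_RANGE_HZ, residual))
          return steps, piezo

      print(split_correction(12345.6))   # e.g. a ~12.3 kHz static detuning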

  8. Electromagnetic SCRF Cavity Tuner

    SciTech Connect

    Kashikhin, V.; Borissov, E.; Foster, G.W.; Makulski, A.; Pischalnikov, Y.; Khabiboulline, T.; /Fermilab

    2009-05-01

    A novel prototype of an SCRF cavity tuner is being designed and tested at Fermilab. This is a superconducting C-type iron-dominated magnet having a 10 mm gap, axial symmetry, and a 1 Tesla field. Inside the gap is mounted a superconducting coil capable of moving ±1 mm and producing a longitudinal force up to ±1.5 kN. The static force applied to the RF cavity flanges provides long-term tuning of the cavity geometry to a nominal frequency. The same coil, powered by a fast AC current pulse, delivers a mechanical perturbation for fast cavity tuning. This fast mechanical perturbation could be used to compensate dynamic RF cavity detuning caused by cavity Lorentz forces and microphonics. A special magnet system configuration was designed and tested.

  9. LEB tuner made out of titanium alloy

    SciTech Connect

    Goren, Y.; Campbell, B.

    1991-09-01

    A proposed design of a closed-shell tuner for the LEB cavity is presented. The tuner is made of a Ti alloy, which has high electrical resistivity as well as very good mechanical strength. Using this alloy results in a substantial reduction in eddy current heating and allows for faster frequency control.

  10. Inductive tuners for microwave driven discharge lamps

    SciTech Connect

    Simpson, J.E.

    1999-11-02

    An RF-powered electrodeless lamp utilizes an inductive tuner in the waveguide that couples the RF power to the lamp cavity, reducing reflected RF power and causing the lamp to operate efficiently.

  11. Inductive tuners for microwave driven discharge lamps

    DOEpatents

    Simpson, James E.

    1999-01-01

    An RF-powered electrodeless lamp utilizes an inductive tuner in the waveguide that couples the RF power to the lamp cavity, reducing reflected RF power and causing the lamp to operate efficiently.

  12. Enhanced production of electron cyclotron resonance plasma by exciting selective microwave mode on a large-bore electron cyclotron resonance ion source with permanent magnet

    SciTech Connect

    Kimura, Daiju; Kurisu, Yosuke; Nozaki, Dai; Yano, Keisuke; Imai, Youta; Kumakura, Sho; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2014-02-15

    We are constructing a tandem-type ECRIS. The first stage is large-bore with a cylindrically comb-shaped magnet. We optimize the ion beam current and ion saturation current with a mobile plate tuner. Both vary with the position of the plate tuner for 2.45 GHz, 11–13 GHz, and multi-frequency operation. Their peaks occur close to positions where the microwave mode forms a standing wave between the plate tuner and the extractor. The absorbed powers are estimated for each mode. We show a new guiding principle: the mode number of the efficient microwave mode should be selected to match the multipole number of the comb-shaped magnets. We obtained excitation of the selected modes using the new mobile plate tuner to enhance ECR efficiency.
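
    The standing-wave picture can be checked with textbook waveguide arithmetic: resonances occur when the plate-to-extractor spacing equals an integer number of half guide wavelengths. The sketch below assumes a smooth circular guide carrying the TE11 mode and an illustrative chamber radius, a simplification of the real comb-magnet bore.

      import numpy as np

      c = 2.998e8
      f = 2.45e9                   # heating frequency, Hz
      a = 0.08                     # assumed chamber radius, m
      x11 = 1.8412                 # first root of J1' (TE11 cutoff)
      fc = x11 * c / (2 * np.pi * a)
      lam_g = c / (f * np.sqrt(1 - (fc / f) ** 2))   # guide wavelength
      print(f"TE11 cutoff {fc / 1e9:.2f} GHz, guide wavelength {lam_g * 100:.1f} cm")
      print("resonant plate positions (cm):",
            [round(n * lam_g / 2 * 100, 1) for n in range(1, 5)])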

  13. Methods to optimize selective hyperthermia

    NASA Astrophysics Data System (ADS)

    Cowan, Thomas M.; Bailey, Christopher A.; Liu, Hong; Chen, Wei R.

    2003-07-01

    Laser immunotherapy, a novel therapy for breast cancer, utilizes selective photothermal interaction to raise the temperature of tumor tissue above the cell damage threshold. Photothermal interaction is achieved with intratumoral injection of a laser absorbing dye followed by non-invasive laser irradiation. When tumor heating is used in combination with immunoadjuvant to stimulate an immune response, anti-tumor immunity can be achieved. In our study, gelatin phantom simulations were used to optimize therapy parameters such as laser power, laser beam radius, and dye concentration to achieve maximum heating of target tissue with the minimum heating of non-targeted tissue. An 805-nm diode laser and indocyanine green (ICG) were used to achieve selective photothermal interactions in a gelatin phantom. Spherical gelatin phantoms containing ICG were used to simulate the absorption-enhanced target tumors, which were embedded inside gelatin without ICG to simulate surrounding non-targeted tissue. Different laser powers and dye concentrations were used to treat the gelatin phantoms. The temperature distributions in the phantoms were measured, and the data were used to determine the optimal parameters used in selective hyperthermia (laser power and dye concentration for this case). The method involves an optimization coefficient, which is proportional to the difference between temperatures measured in targeted and non-targeted gel. The coefficient is also normalized by the difference between the most heated region of the target gel and the least heated region. A positive optimization coefficient signifies a greater temperature increase in targeted gelatin when compared to non-targeted gelatin, and therefore, greater selectivity. Comparisons were made between the optimization coefficients for varying laser powers in order to demonstrate the effectiveness of this method in finding an optimal parameter set. Our experimental results support the proposed use of an optimization

  14. Superconducting cavity tuner performance at CEBAF

    SciTech Connect

    Marshall, J.; Preble, J.; Schneider, W.

    1993-06-01

    At the Continuous Electron Beam Accelerator Facility (CEBAF), a 4 GeV, multipass CW electron beam is to be accelerated by 338 SRF, 5-cell niobium cavities operating at a resonant frequency of 1497 MHz. Eight cavities arranged as four pairs comprise a cryomodule, a cryogenically isolated linac subdivision. The frequency is controlled by a mechanical tuner attached to the first and fifth cells of the cavity, which elastically deforms the cavity and thereby alters its resonant frequency. The tuner is driven by a stepper motor mounted external to the cryomodule that transfers torque through two rotary feedthroughs. A linear variable differential transformer (LVDT) mounted on the tuner monitors the displacement, and two limit switches interlock the movement beyond a 400 kHz bandwidth. Since the cavity has a loaded Q of 6.6 × 10^6, the control system must maintain the frequency of the cavity to within ±50 Hz of the drive frequency for efficient coupling. This requirement is somewhat difficult to achieve since the difference in thermal contractions of the cavity and the tuner creates a frequency hysteresis of approximately 10 kHz. The cavity is also subject to frequency shifts due to pressure fluctuations of the helium bath as well as radiation pressure. This requires that each cavity be characterized in terms of frequency change as a function of applied motor steps to allow proper tuning operations. This paper describes the electrical and mechanical performance of the cavity tuner during the commissioning and operation of the cryomodules manufactured to date.
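
    The ±50 Hz specification can be put in context with the textbook relation between loaded Q and cavity bandwidth, and the delivered-power fraction cos²ψ with tan ψ = 2·Q_L·δf/f₀ for a critically coupled cavity. Only the numbers quoted in the abstract are used below; the formula is standard, not taken from the paper.

      f0 = 1497e6          # resonant frequency, Hz
      QL = 6.6e6           # loaded quality factor
      bw = f0 / QL         # loaded bandwidth ~ 227 Hz, so +/-50 Hz is a tight window
      for df in (10.0, 50.0, 113.0):
          tan_psi = 2 * QL * df / f0
          frac = 1.0 / (1.0 + tan_psi ** 2)
          print(f"detuning {df:6.1f} Hz  (bandwidth {bw:.0f} Hz)  power coupled {frac:.1%}")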

  15. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source.

    PubMed

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-01

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). The first stage can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I(FC) with the mobile plate tuner. I(FC) is affected by the position of the mobile plate tuner in the chamber, which acts like a circular cavity resonator. We aim to clarify the relation between I(FC) and the ion saturation current in the ECRIS as a function of the position of the mobile plate tuner. We found that the variation of the plasma density contributes largely to the variation of I(FC) when the position of the mobile plate tuner is changed. PMID:22380157

  16. Dependence of ion beam current on position of mobile plate tuner in multi-frequencies microwaves electron cyclotron resonance ion source

    SciTech Connect

    Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya; Nozaki, Dai; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2012-02-15

    We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). The first stage can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I_FC with the mobile plate tuner. I_FC is affected by the position of the mobile plate tuner in the chamber, which acts like a circular cavity resonator. We aim to clarify the relation between I_FC and the ion saturation current in the ECRIS as a function of the position of the mobile plate tuner. We found that the variation of the plasma density contributes largely to the variation of I_FC when the position of the mobile plate tuner is changed.

  17. Fast Tuner R&D for RIA

    SciTech Connect

    Rusnak, B; Shen, S

    2003-08-19

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Where most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. System studies indicate that for the low-β driver linac alone, running the cavities with no fast tuner may cost 50% or more than an RF system employing a voltage controlled reactance (VCX) or other type of fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab, is presented.

  18. Feedback controlled hybrid fast ferrite tuners

    SciTech Connect

    Remsen, D.B.; Phelps, D.A.; deGrassie, J.S.; Cary, W.P.; Pinsker, R.I.; Moeller, C.P.; Arnold, W.; Martin, S.; Pivit, E.

    1993-09-01

    A low power ANT-Bosch fast ferrite tuner (FFT) was successfully tested into (1) the lumped circuit equivalent of an antenna strap with dynamic plasma loading, and (2) a plasma loaded antenna strap in DIII-D. When the FFT accessible mismatch range was phase-shifted to encompass the plasma-induced variation in reflection coefficient, the 50 Ω source was matched (to within the desired 1.4:1 voltage standing wave ratio). The time required to achieve this match (i.e., the response time) was typically a few hundred milliseconds, mostly due to a relatively slow network analyzer-computer system. The response time for the active components of the FFT was 10 to 20 msec, or much faster than the present state-of-the-art for dynamic stub tuners. Future FFT tests are planned that will utilize the DIII-D computer (capable of submillisecond feedback control), as well as several upgrades to the active control circuit, to produce an FFT feedback control system with a response time approaching 1 msec.
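
    The 1.4:1 VSWR target translates directly into a reflection-coefficient and reflected-power budget via the standard transmission-line relation |Γ| = (S − 1)/(S + 1):

      def max_reflection(vswr):
          # Standard relation between VSWR and the reflection-coefficient magnitude.
          return (vswr - 1.0) / (vswr + 1.0)

      s = 1.4
      gamma = max_reflection(s)
      print(f"VSWR {s}:1 -> |Gamma| <= {gamma:.3f}, reflected power <= {gamma ** 2:.1%}")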

  19. Characterization of CNRS Fizeau wedge laser tuner

    NASA Astrophysics Data System (ADS)

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  20. Characterization of CNRS Fizeau wedge laser tuner

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.

  1. Optimizing calcium selective fluorimetric nanospheres.

    PubMed

    Kisiel, Anna; Kłucińska, Katarzyna; Gniadek, Marianna; Maksymiuk, Krzysztof; Michalska, Agata

    2015-11-01

    Recently it was shown that optical nanosensors based on alternating polymers, e.g. poly(maleic anhydride-alt-1-octadecene), were characterized by a linear dependence of emission intensity on the logarithm of concentration over a range of a few orders of magnitude. In this work we focus on the material used to prepare calcium-selective nanosensors. It is shown that alternating-polymer nanosensors offer competitive performance in the absence of a calcium ionophore, due to interaction of the nanosphere building blocks with analyte ions. The emission increase corresponds to increasing calcium ion content in the sample within the range from 10(-4) to 10(-1) M. Further improvement in sensitivity (from 10(-6) to 10(-1) M) and selectivity can be achieved by incorporating calcium ionophore in the nanospheres. The optimal results were obtained for core-shell nanospheres, where the core was prepared from poly(styrene-co-maleic anhydride) and the outer layer from poly(maleic anhydride-alt-1-octadecene). The chemosensors thus obtained showed a linear dependence of emission on the logarithm of calcium ion concentration within the range from 10(-7) to 10(-1) M. PMID:26452839
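
    A minimal sketch of the log-linear calibration such nanosensors rely on, using synthetic stand-in intensities: fit emission against log10 of concentration, then invert the fit to turn a reading into a concentration estimate.

      import numpy as np

      conc = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1])      # mol/L
      emission = np.array([0.11, 0.19, 0.32, 0.41, 0.52, 0.60, 0.71])  # synthetic, a.u.

      slope, intercept = np.polyfit(np.log10(conc), emission, 1)
      pred = slope * np.log10(conc) + intercept
      r2 = 1 - np.sum((emission - pred) ** 2) / np.sum((emission - emission.mean()) ** 2)
      print(f"emission ~ {slope:.3f} * log10(C) + {intercept:.3f}, R^2 = {r2:.3f}")

      def estimate_conc(intensity):
          # Invert the calibration line to recover a concentration.
          return 10 ** ((intensity - intercept) / slope)

      print(f"reading 0.45 a.u. -> ~{estimate_conc(0.45):.2e} M")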

  2. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2011-03-01

    Analysis and modeling are presented for a fast microwave tuner operating at 700 MHz which incorporates ferroelectric elements whose dielectric permittivity can be rapidly altered by application of an external voltage. This tuner could be used to correct unavoidable fluctuations in the resonant frequency of superconducting cavities in accelerator structures, thereby greatly reducing the RF power needed to drive the cavities. A planar test version of the tuner has been tested at low levels of RF power, but at 1300 MHz to minimize the physical size of the test structure. This test version comprises one-third of the final version. The tests show performance in good agreement with simulations, but with losses in the ferroelectric elements that are too large for practical use, and with issues in bonding of ferroelectric elements to the metal walls of the tuner structure.

  3. Quasi-optical equivalent of waveguide slide screw tuner

    NASA Technical Reports Server (NTRS)

    Kurpis, G. P.

    1970-01-01

    The tuner utilizes a metal-plated dielectric grid inserted into the cross-sectional plane of an oversized waveguide. It provides both variable susceptance and variable longitudinal position along the waveguide, giving a wide matching range.

  4. A wideband RF amplifier for satellite tuners

    NASA Astrophysics Data System (ADS)

    Xueqing, Hu; Zheng, Gong; Yin, Shi; Foster, Dai Fa

    2011-11-01

    This paper presents the design and measured performance of a wideband amplifier for a direct conversion satellite tuner. It is composed of a wideband low noise amplifier (LNA) and a two-stage RF variable gain amplifier (VGA) with dB-linear gain and temperature compensation schemes. To meet the system linearity requirement, an improved distortion compensation technique and a bypass mode are applied to the LNA to deal with large input signals. Wideband matching is achieved by resistive feedback and an off-chip LC-ladder matching network. A large gain control range (over 80 dB) is achieved by the VGA with process, voltage, and temperature compensation and dB linearization. In total, the amplifier consumes up to 26 mA from a 3.3 V power supply. It is fabricated in a 0.35-μm SiGe BiCMOS technology and occupies a silicon area of 0.25 mm².

  5. Selected optimal shuttle entry computations

    NASA Technical Reports Server (NTRS)

    Sullivan, H. C.

    1974-01-01

    Parameterization and the Davidon-Fletcher-Powell method are used to study the characteristics of optimal shuttle entry trajectories. Two problems of thermal protective system weight minimization are considered: roll modulation and roll plus an angle-of-attack modulation. Both problems are targeted for the edges of the entry footprint. Results consistent with constraints on loads and control bounds are particularly well-behaved and strongly support 'energy' approximation results obtained for the case of symmetric flight by Kelley and Sullivan (1973). Furthermore, results indicate that optimal shuttle entry trajectories should be easy to duplicate and to analyze by using simple techniques.
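
    Since the study leans on the Davidon-Fletcher-Powell quasi-Newton method, here is a compact DFP iteration on a stand-in quadratic objective; the entry-trajectory parameterization itself is not reproduced.

      import numpy as np

      A = np.array([[3.0, 0.5], [0.5, 1.0]])   # Hessian of the quadratic test objective
      b = np.array([1.0, -2.0])

      def grad(x):
          return A @ x - b                     # gradient of 0.5*x'Ax - b'x

      x, H = np.zeros(2), np.eye(2)            # iterate and inverse-Hessian estimate
      for _ in range(20):
          g = grad(x)
          if np.linalg.norm(g) < 1e-12:
              break
          s = -0.5 * (H @ g)                   # damped quasi-Newton step
          y = grad(x + s) - g
          x = x + s
          # DFP inverse-Hessian update (rank-two correction)
          H += np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ (H @ y))

      print("minimizer ~", x, " exact:", np.linalg.solve(A, b))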

  6. Fast Ferroelectric L-Band Tuner for ILC Cavities

    SciTech Connect

    Hirshfield, Jay L

    2010-03-15

    Design, analysis, and low-power tests are described for a 1.3 GHz ferroelectric tuner that could find application in the International Linear Collider or in Project X at Fermi National Accelerator Laboratory. The tuner configuration utilizes a three-deck sandwich imbedded in a WR-650 waveguide, in which ferroelectric bars are clamped between conducting plates that allow the tuning bias voltage to be applied. Use of a reduced one-third structure allowed tests of critical parameters of the configuration, including phase shift, loss, and switching speed. Issues that were revealed that require improvement include reducing the loss tangent in the ferroelectric material, developing a reliable means of brazing ferroelectric elements to copper parts of the tuner, and simplifying the mechanical design of the configuration.

  7. Fast Ferroelectric L-Band Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-07-03

    Design, analysis, and low-power tests are described for a ferroelectric tuner concept that could be used for controlling external coupling to RF cavities for the superconducting Energy Recovery Linac (ERL) in the electron cooler of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner configuration utilizes several small donut-shaped ferroelectric assemblies, which allow the design to be simpler and more flexible as compared to previous designs. Design parameters for 704 and 1300 MHz versions of the tuner are given. Simulation results point to efficient performance that could reduce by a factor of ten the RF power levels required for driving superconducting cavities in the BNL ERL.

  8. Fast 704 MHz Ferroelectric Tuner for Superconducting Cavities

    SciTech Connect

    Jay L. Hirshfield

    2012-04-12

    The Omega-P SBIR project described in this report has as its goal the development, test, and evaluation of a fast electrically-controlled L-band tuner for the BNL Energy Recovery Linac (ERL) in the Electron Ion Collider (EIC) upgrade of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The tuner, which employs an electrically controlled ferroelectric component, is to allow fast compensation of cavity resonance changes. In ERLs, there are several factors which significantly affect the amount of power required from the wall-plug to provide the RF power level necessary for operation. When beam loading is small, the power requirements are determined by (i) ohmic losses in cavity walls, (ii) fluctuations in amplitude and/or phase of beam currents, and (iii) microphonics. These factors typically require a substantial change in the coupling between the cavity and the feeding line, which results in an intentional broadening of the cavity bandwidth, which in turn demands a significant amount of additional RF power. If beam loading is not small, there is a variety of beam-drive phase instabilities to be managed, and microphonics will still remain an issue, so there remain requirements for additional power. Moreover, ERL performance is sensitive to changes in beam arrival time, since any such change is equivalent to phase instability with its vigorous demands for additional power. In this report, we describe the new modular coaxial tuner, with specifications suitable for the 704 MHz ERL application. The device would allow changing the RF coupling during the cavity filling process in order to effect significant RF power savings, and also will provide rapid compensation for beam imbalance and allow for fast stabilization against phase fluctuations caused by microphonics, beam-driven instabilities, etc. The tuner is predicted to allow a reduction of about ten times in the required power from the RF source, as compared to a compensation system

  9. Broadband power amplifier tube: Klystron tube 5K70SK-WBT and step tuner VA-1470S

    NASA Technical Reports Server (NTRS)

    Cox, H. R.; Johnson, J. O.

    1974-01-01

    The design concept, the fabrication, and the acceptance testing of a wide-band klystron tube and remotely controlled step tuner for channel selection are discussed. The equipment was developed for the modification of an existing 20 kW Power Amplifier System which was provided to the contractor as GFE. The replacement klystron covers a total frequency range of 2025 to 2120 MHz and is tunable to six (6) channels, each with a bandwidth of 22 MHz or greater. A 5 MHz overlap is provided between channels. Channels are selected at the control panel located in the front of the klystron magnet or from one of three remote control stations connected in parallel with the step tuner. Included in this final report are the results of acceptance tests conducted at the vendor's plant and of the integrated system tests.

  10. Optimizing secondary tailgate support selection

    SciTech Connect

    Harwood, C.; Karmis, M.; Haycocks, C.; Luo, J.

    1996-12-01

    A model was developed to facilitate secondary tailgate support selection based on analysis of over 100 case studies, compiled from two different surveys of operating longwall coal mines in the United States. The ALPS (Analysis of Longwall Pillar Stability) program was used to determine adequacy of pillar design for the successful longwall case histories. A relationship was developed between the secondary support density necessary to maintain a stable tailgate entry during mining and the CMRR (Coal Mine Roof Rating). This relationship defines the lower bound of secondary support density currently used in longwall mines. The model used only successful tailgate case history data, with adequate ALPS SF according to the CMRR for each case. This model facilitates mine design by predicting secondary support density required for a tailgate entry depending on the ALPS SF and CMRR, which can result in significant economic benefits.

  11. Waveguide Stub Tuner Analysis for CEBAF Machine Application

    SciTech Connect

    Haipeng Wang; Michael Tiefenback

    2004-08-01

    Three-stub WR650 waveguide tuners have been used on the CEBAF superconducting cavities for two changes of the external quality factor (Qext): increasing Qext from 3.4-7.6 × 10^6 to 8 × 10^6 on 5-cell cavities to reduce klystron power at operating gradients, and decreasing Qext from 1.7-2.4 × 10^7 to 8 × 10^6 on 7-cell cavities to simplify control of Lorentz force detuning. To understand the reactive tuning effects in machine operations with beam current and mechanical tuning, a network analysis model was developed. The S parameters of the stub tuner were simulated by MAFIA and measured on the bench. We used this stub tuner model to study tuning range, sensitivity, and frequency pulling, as well as cold waveguide (WG) and window heating problems. Detailed experimental results are compared against this model. Pros and cons of this stub tuner application are summarized.
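
    Why the Qext changes save klystron power can be seen from a standard generator-power expression for a beam-loaded, detuned superconducting cavity. The cavity voltage, R/Q, beam current, and detuning below are illustrative values, not CEBAF operating numbers.

      import numpy as np

      Vc = 5e6         # cavity voltage, V (illustrative)
      RoQ = 960.0      # R/Q, ohms (illustrative)
      Ib = 400e-6      # beam current, A
      df = 25.0        # detuning, Hz
      f0 = 1497e6      # RF frequency, Hz

      def p_gen(qext, phi=0.0):
          # Textbook generator power with beam-loading and detuning terms.
          beam = RoQ * qext * Ib / Vc
          tune = 2.0 * qext * df / f0
          return Vc ** 2 / (4 * RoQ * qext) * (
              (1 + beam * np.cos(phi)) ** 2 + (beam * np.sin(phi) + tune) ** 2)

      for qext in (3.4e6, 8e6, 2e7):
          print(f"Qext = {qext:.1e}  ->  generator power ~ {p_gen(qext) / 1e3:.1f} kW")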

  12. Self-extinction through optimizing selection

    PubMed Central

    Parvinen, Kalle; Dieckmann, Ulf

    2013-01-01

    Evolutionary suicide is a process in which selection drives a viable population to extinction. So far, such selection-driven self-extinction has been demonstrated in models with frequency-dependent selection. This is not surprising, since frequency-dependent selection can disconnect individual-level and population-level interests through environmental feedback. Hence it can lead to situations akin to the tragedy of the commons, with adaptations that serve the selfish interests of individuals ultimately ruining a population. For frequency-dependent selection to play such a role, it must not be optimizing. Together, all published studies of evolutionary suicide have created the impression that evolutionary suicide is not possible with optimizing selection. Here we disprove this misconception by presenting and analyzing an example in which optimizing selection causes self-extinction. We then take this line of argument one step further by showing, in a further example, that selection-driven self-extinction can occur even under frequency-independent selection. PMID:23583808

  13. Feature Selection via Modified Gravitational Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2015-03-01

    Feature selection is the process of selecting a subset of relevant and most informative features, which efficiently represents the input data. We propose a feature selection algorithm based on an n-dimensional gravitational optimization algorithm (NGOA), which is based on the principle of gravitational fields. The objective function of the optimization algorithm is a non-linear function of variables, which are called masses and defined based on extracted features. The forces between the masses, as well as their new locations, are calculated using the value of the objective function and the values of the masses. We extracted a variety of features by applying different wavelet transforms and statistical methods to FLAIR and T1-weighted MR brain images. There are two classes: normal and abnormal tissue. Extracted features are divided into groups of five features. The best feature in each group is selected using the n-dimensional gravitational optimization algorithm and a support vector machine classifier; the selected features from each group then form new groups of five, and so on, until the desired number of features is selected. The advantage of the NGOA algorithm is that the possibility of being drawn into a local optimal solution is very low. The experimental results show that our method outperforms some standard feature selection algorithms on both real and simulated brain tumor data.
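
    A simplified sketch of the grouped selection scheme on a synthetic dataset: the gravitational optimizer is replaced here by exhaustive per-group scoring (tractable for groups of five), with an SVM scoring each candidate as in the paper.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=25, n_informative=6,
                                 random_state=0)

      def best_in_group(idx):
          # Keep the single feature with the best cross-validated SVM score.
          scores = [cross_val_score(SVC(), X[:, [i]], y, cv=3).mean() for i in idx]
          return idx[int(np.argmax(scores))]

      remaining = list(range(X.shape[1]))
      while len(remaining) > 5:                    # stop once 5 features remain
          groups = [remaining[i:i + 5] for i in range(0, len(remaining), 5)]
          remaining = [best_in_group(g) for g in groups]

      print("selected features:", sorted(remaining))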

  14. Optimizing Site Selection for HEDS

    NASA Astrophysics Data System (ADS)

    Marshall, J. R.

    1999-01-01

    MSP 2001 will be conducting environmental assessment for the Human Exploration and Development of Space (HEDS) Program in order to safeguard future human exploration of the planet, in addition to geological studies being addressed by the APEX payload. In particular, the MECA experiment (see other abstracts, this volume) will address chemical toxicity of the soil, the presence of adhesive or abrasive soil dust components, and the geoelectrical-triboelectrical character of the surface environment. The attempt will be to quantify hazards to humans and machinery deriving from compounds that poison, corrode, abrade, invade (lungs or machinery), contaminate, or electrically interfere with the human presence. The DART experiment will also address the size and electrical nature of airborne dust. Photo-imaging of the local scene with RAC and Pancam will be able to assess dust-raising events such as local thermal vorticity-driven dust devils. The need to introduce discussion of HEDS landing site requirements stems from potential conflict, but also potential synergism, with other '01 site requirements. In-situ Resource Utilization (ISRU) mission components desire as much solar radiation as possible, with some very limited amount of dust available; the planetary-astrobiology mission component desires sufficient rock abundance without inhibiting rover activities (and an interesting geological niche if available); the radiation component may again have special requirements, as will the engineers concerned with mission safety and mission longevity. The '01 mission affords an excellent opportunity to emphasize HEDS landing site requirements, given the constraint that both recent missions (Pathfinder, Mars '98) and future missions (MSP '03 & '05) have had or will have strong geological science drivers in the site selection process. What type of landing site best facilitates investigation of the physical, chemical, and behavioral properties of soil and dust? There are

  15. Feature Selection via Chaotic Antlion Optimization

    PubMed Central

    Zawbaa, Hossam M.; Emary, E.; Grosan, Crina

    2016-01-01

    Background Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine learning task. In biology, for instance, the advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the more informative markers along with performing a high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate the feature selection problem as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the data training fitting) while minimizing the number of features used. Results We propose an optimization approach for the feature selection problem that considers a “chaotic” version of the antlion optimizer method, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I that limits the random walk range of the ants/prey. This variable is increased iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. The quasi-linear decrease in the variable I may lead to immature convergence in some cases and trapping in local minima in other cases. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics. PMID:26963715
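
    The single change the chaotic variant makes can be shown in isolation: the boundary-shrink parameter I that narrows the ants' random walk is modulated by a chaotic (logistic) map instead of growing quasi-linearly. The schedules below are illustrative; the full antlion optimizer is omitted.

      def linear_I(t, T, base=1.0, gain=100.0):
          # Quasi-linear schedule: I grows with iteration, shrinking the walk range.
          return base + gain * t / T

      def chaotic_I(t, T, x0=0.7, base=1.0, gain=100.0):
          x = x0
          for _ in range(t):              # iterate the logistic map t times
              x = 4.0 * x * (1.0 - x)     # fully chaotic regime (r = 4)
          return base + gain * (t / T) * x

      T = 100
      for t in (1, 25, 50, 75, 99):
          print(f"t={t:3d}  linear I={linear_I(t, T):7.2f}  chaotic I={chaotic_I(t, T):7.2f}")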

  16. Optimizing Clinical Research Participant Selection with Informatics

    PubMed Central

    Weng, Chunhua

    2015-01-01

    Clinical research participants are often not reflective of the real-world patients due to overly restrictive eligibility criteria. Meanwhile, unselected participants introduce confounding factors and reduce research efficiency. Biomedical Informatics, especially Big Data increasingly made available from electronic health records, offers promising aids to optimize research participant selection through data-driven transparency. PMID:26549161

  17. A SiGe BiCMOS multi-band tuner for mobile TV applications

    NASA Astrophysics Data System (ADS)

    Xueqing, Hu; Zheng, Gong; Jinxin, Zhao; Lei, Wang; Peng, Yu; Yin, Shi

    2012-04-01

    This paper presents the circuit design and measured performance of a multi-band tuner for mobile TV applications. The tuner RFIC is composed of a wideband front-end, an analog baseband, a fully integrated fractional-N synthesizer, and an I2C digital interface. To meet the stringent adjacent channel rejection (ACR) requirements of mobile TV standards while keeping low power consumption and low cost, a direct conversion architecture with a local AGC scheme is adopted in this design. Eighth-order elliptic active-RC filters with large stop-band attenuation and a sharp transition band are chosen as the channel-selection filters to further improve the ACR performance. The chip is fabricated in a 0.35-μm SiGe BiCMOS technology and occupies a silicon area of 5.5 mm². It draws 50 mA current from a 3.0 V power supply. In the CMMB application, it achieves a sensitivity of -97 dBm with 1/2-coding QPSK signal input and over 40 dB ACR.

  18. A hydrogen maser with cavity auto-tuner for timekeeping

    NASA Technical Reports Server (NTRS)

    Lin, C. F.; He, J. W.; Zhai, Z. C.

    1992-01-01

    A hydrogen maser frequency standard for timekeeping has been developed at the Shanghai Observatory. The maser employs a fast cavity auto-tuner, which can detect and compensate for the frequency drift of the high-Q resonant cavity with a short time constant by means of a signal-injection method, so that the long-term frequency stability of the maser standard is greatly improved. The cavity auto-tuning system and some maser data obtained from the atomic time comparison are described.

  19. Optimal probe selection in diagnostic search

    NASA Technical Reports Server (NTRS)

    Bhandari, Inderpal S.; Simon, Herbert A.; Siewiorek, Daniel P.

    1990-01-01

    Probe selection (PS) in machine diagnosis is viewed as a collection of models that apply under specific conditions. This makes it possible for three polynomial-time optimal algorithms to be developed for simplified PS models that allow different probes to have different costs. The work is compared with the research of Simon and Kadane (1975), who developed a collection of models for optimal problem-solving search. The relationship between these models and the three newly developed algorithms for PS is explored. Two of the algorithms are unlike the ones discussed by Simon and Kadane. The third cannot be related to the problem-solving models.
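
    The classical single-fault result underlying this line of work (Simon and Kadane, 1975) is that expected search cost is minimized by probing in decreasing order of success probability per unit cost. A sketch with an invented probe table:

      probes = {"voltage_check":  (0.30, 2.0),   # name: (prob. fault is here, cost)
                "ROM_checksum":   (0.25, 1.0),
                "bus_scope":      (0.35, 5.0),
                "visual_inspect": (0.10, 0.5)}

      order = sorted(probes, key=lambda k: probes[k][0] / probes[k][1], reverse=True)
      print("probe order:", order)

      # Expected cost: each probe's cost is paid only if earlier probes failed.
      expected, remaining = 0.0, 1.0
      for name in order:
          p, c = probes[name]
          expected += remaining * c
          remaining -= p
      print(f"expected diagnosis cost: {expected:.3f}")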

  20. Testing of the new tuner design for the CEBAF 12 GeV upgrade SRF cavities

    SciTech Connect

    Edward Daly; G. Davis; William Hicks

    2005-05-01

    The new tuner design for the 12 GeV Upgrade SRF cavities consists of a coarse mechanical tuner and a fine piezoelectric tuner. The mechanism provides a 30:1 mechanical advantage, is pre-loaded at room temperature and tunes the cavities in tension only. All of the components are located in the insulating vacuum space and attached to the helium vessel, including the motor, harmonic drive and piezoelectric actuators. The requirements and detailed design are presented. Measurements of range and resolution of the coarse tuner are presented and discussed.

  1. Optimal selection theory for superconcurrency. Technical document

    SciTech Connect

    Freund, R.F.

    1989-10-01

    This paper describes a mathematical programming approach to finding an optimal, heterogeneous suite of processors to solve supercomputing problems. This technique, called superconcurrency, works best when the computational requirements are diverse and significant portions of the code are not tightly-coupled. It is also dependent on new methods of benchmarking and code profiling, as well as eventual use of AI techniques for intelligent management of the selected superconcurrent suite.

  2. Optimal remediation policy selection under general conditions

    SciTech Connect

    Wang, M.; Zheng, C.

    1997-09-01

    A new simulation-optimization model has been developed for the optimal design of ground-water remediation systems under a variety of field conditions. The model couples a genetic algorithm (GA), a global search technique inspired by biological evolution, with MODFLOW and MT3D, two commonly used ground-water flow and solute transport codes. The model allows for multiple management periods in which optimal pumping/injection rates vary with time to reflect the changes in the flow and transport conditions during the remediation process. The objective function of the model incorporates multiple cost terms including the drilling cost, the installation cost, and the costs to extract and treat the contaminated ground water. The simulation-optimization model is first applied to a typical two-dimensional pump-and-treat example with one and three management periods to demonstrate the effectiveness and robustness of the new model. The model is then applied to a large-scale three-dimensional field problem to determine the minimum pumping needed to contain an existing contaminant plume. The optimal solution as determined in this study is compared with a previous solution based on trial-and-error selection.
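
    A toy version of this simulation-optimization coupling, with a stand-in function playing the role of the MODFLOW/MT3D simulation; the containment rule, cost coefficients, and GA settings are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      N_WELLS = 3

      def plume_escapes(rates):
          # Stub for the flow/transport simulation: containment needs enough
          # total pumping, weighted toward well 0 near the plume axis.
          return (2.0 * rates[0] + rates[1] + rates[2]) < 10.0

      def cost(rates):
          drill = 50.0 * np.count_nonzero(rates > 0.1)     # fixed cost per active well
          treat = 8.0 * rates.sum()                        # extraction and treatment
          penalty = 1e4 if plume_escapes(rates) else 0.0   # containment constraint
          return drill + treat + penalty

      pop = rng.uniform(0.0, 10.0, size=(40, N_WELLS))     # candidate pumping rates
      for gen in range(200):
          fit = np.array([cost(ind) for ind in pop])
          parents = pop[np.argsort(fit)[:20]]              # truncation selection
          kids = (parents[rng.integers(20, size=20)] +
                  parents[rng.integers(20, size=20)]) / 2.0   # arithmetic crossover
          kids += rng.normal(0.0, 0.3, size=kids.shape)    # mutation
          pop = np.vstack([parents, np.clip(kids, 0.0, 10.0)])

      best = pop[np.argmin([cost(ind) for ind in pop])]
      print("pumping rates:", np.round(best, 2), " cost:", round(cost(best), 1))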

  3. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
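
    S4's merit metric is not reproduced here, but the flavor of the selection can be sketched with a greedy search that scores candidate sensor suites by the log-determinant of a toy Fisher information matrix, one plausible stand-in for a diagnostic merit function.

      import numpy as np

      rng = np.random.default_rng(2)
      n_sensors, n_faults = 8, 3
      S = rng.normal(size=(n_sensors, n_faults))   # sensor sensitivity to fault modes
      noise = rng.uniform(0.5, 2.0, n_sensors)     # per-sensor noise variance

      def merit(suite):
          A = S[suite] / np.sqrt(noise[suite])[:, None]    # noise-whitened rows
          info = A.T @ A                                   # Fisher information matrix
          sign, logdet = np.linalg.slogdet(info + 1e-9 * np.eye(n_faults))
          return logdet

      suite, budget = [], 4
      while len(suite) < budget:
          candidates = [i for i in range(n_sensors) if i not in suite]
          suite.append(max(candidates, key=lambda i: merit(suite + [i])))
      print("selected sensor suite:", suite)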

  4. Separator profile selection for optimal battery performance

    NASA Astrophysics Data System (ADS)

    Whear, J. Kevin

    Battery performance, depending on the application, is normally defined by power delivery, electrical capacity, cycling regime, and life in service. In order to meet the various performance goals, the battery design engineer can vary grid alloys, paste formulations, the number of plates, and methods of construction. Another design option available for optimizing battery performance is the separator profile. The goal of this paper is to demonstrate how separator profile selection can be utilized to optimize battery performance and manufacturing efficiencies. Time is also given to exploring novel separator profiles that may bring even greater benefits in the future. All major lead-acid applications are considered, including automotive, motive power, and stationary.

  5. Optimal Portfolio Selection Under Concave Price Impact

    SciTech Connect

    Ma Jin; Song Qingshuo; Xu Jing; Zhang Jianfeng

    2013-06-15

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  6. DESIGN CONSIDERATIONS FOR THE MECHANICAL TUNER OF THE RHIC ELECTRON COOLER RF CAVITY.

    SciTech Connect

    RANK, J.; BEN-ZVI, I.; HAHN, G.; MCINTYRE, G.; DALY, E.; PREBLE, J.

    2005-05-16

    The ECX Project, Brookhaven Lab's predecessor to the RHIC e-Cooler, includes a prototype RF tuner mechanism capable of both coarse and fast tuning. This tuner concept, adapted originally from a DESY design, has longer stroke and significantly higher loads attributable to the very stiff ECX cavity shape. Structural design, kinematics, controls, thermal and RF issues are discussed and certain improvements are proposed.

  7. Selectively-informed particle swarm optimization.

    PubMed

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
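
    A minimal sketch of the SIPSO update rule on a random stand-in network (the paper uses scale-free topologies): hub particles average the personal bests of all their neighbors, while low-degree particles follow only their best-performing neighbor. The degree threshold and coefficients below are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      N, D, HUB_DEG = 30, 5, 6
      adj = rng.random((N, N)) < 0.15              # random stand-in topology
      adj = np.triu(adj, 1)
      adj = adj | adj.T

      def sphere(x):
          return float(np.sum(x * x))              # benchmark objective

      x = rng.uniform(-5, 5, (N, D))
      v = np.zeros((N, D))
      pbest = x.copy()
      pval = np.array([sphere(p) for p in x])
      for it in range(300):
          for i in range(N):
              nbrs = np.flatnonzero(adj[i])
              if nbrs.size == 0:
                  continue
              if nbrs.size >= HUB_DEG:             # hub: fully informed by neighbors
                  target = pbest[nbrs].mean(axis=0)
              else:                                # non-hub: best neighbor only
                  target = pbest[nbrs[np.argmin(pval[nbrs])]]
              v[i] = (0.7 * v[i]
                      + 1.5 * rng.random(D) * (pbest[i] - x[i])
                      + 1.5 * rng.random(D) * (target - x[i]))
              x[i] += v[i]
              f = sphere(x[i])
              if f < pval[i]:
                  pbest[i], pval[i] = x[i].copy(), f
      print(f"best value found: {pval.min():.3e}")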

  8. Selectively-informed particle swarm optimization

    PubMed Central

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315

  9. Selectively-informed particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-03-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors.

  10. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  11. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1x10^9 to 1x10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.
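
    The selection mechanism described in both records above, simple absolute-magnitude-dependent redshift cuts that flatten the target number density, can be sketched as follows; the cut coefficients and the synthetic catalog are illustrative assumptions, not the actual NSA-based MaNGA cuts.

```python
# Illustrative magnitude-dependent redshift cut on a synthetic catalog;
# coefficients a, b are invented, not the NSA-based MaNGA values.
import numpy as np

rng = np.random.default_rng(1)
M_i = rng.uniform(-23.0, -17.0, 100_000)      # absolute magnitudes
z = rng.uniform(0.01, 0.15, 100_000)          # redshifts

def z_max(M, a=-0.03, b=-0.57):
    # brighter (more massive) galaxies are kept out to higher redshift,
    # which flattens the number density across stellar mass
    return np.clip(a * M + b, 0.01, 0.15)

primary_like = z < z_max(M_i)
print(f"selected {primary_like.sum()} of {z.size} galaxies")
```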

  12. Tests of a tuner for a 325 MHz SRF spoke resonator

    SciTech Connect

    Pishchalnikov, Y.; Borissov, E.; Khabiboulline, T.; Madrak, R.; Pilipenko, R.; Ristori, L.; Schappert, W.; /Fermilab

    2011-03-01

    Fermilab is developing 325 MHz SRF spoke cavities for the proposed Project X. A compact fast/slow tuner has been developed for final tuning of the resonance frequency of the cavity after cool-down to operating temperature and to compensate for microphonics and Lorentz force detuning [2]. The modified tuner design and results of 4.5K tests of the first prototype are presented. The performance of lever tuners for the SSR1 spoke resonator prototype has been measured during recent CW and pulsed tests in the Fermilab SCTF. The tuner met or exceeded all design goals and has been used successfully to: (1) bring the cold cavity to the operating frequency; (2) compensate for dynamic Lorentz force detuning; and (3) compensate for frequency detuning of the cavity due to changes in the He bath pressure.

  13. Proof-of-principle Experiment of a Ferroelectric Tuner for the 1.3 GHz Cavity

    SciTech Connect

    Choi,E.M.; Hahn, H.; Shchelkunov, S. V.; Hirshfield, J.; Kazakov, S.

    2009-01-01

    A novel tuner has been developed by the Omega-P company to achieve fast control of the accelerator RF cavity frequency. The tuner is based on a ferroelectric material whose dielectric constant varies as a function of applied voltage. Tests using a Brookhaven National Laboratory (BNL) 1.3 GHz electron gun cavity have been carried out as a proof-of-principle experiment for the ferroelectric tuner. Two different methods were used to determine the frequency change achieved with the ferroelectric tuner (FT). The first method is based on an S11 measurement at the tuner port to find the reactive impedance change when the voltage is applied. The reactive impedance change is then used to estimate the cavity frequency shift. The second method is a direct S21 measurement of the frequency shift in the cavity with the tuner connected. The estimated frequency change from the reactive impedance measurement due to 5 kV is in the range between 3.2 kHz and 14 kHz, while 9 kHz is the result from the direct measurement. The two methods are in reasonable agreement. A detailed description of the experiment and the analysis is given in the paper.

  14. Optimized source selection for intracavitary low dose rate brachytherapy

    SciTech Connect

    Nurushev, T.; Kim, Jinkoo

    2005-05-01

    A procedure has been developed for automating the optimal selection of sources from an available inventory for low dose rate brachytherapy, as a replacement for the conventional trial-and-error approach. The method of optimized constrained ratios was applied to clinical source selection for intracavitary Cs-137 implants using Varian BRACHYVISION software as the initial interface; the method can, however, be easily extended to other systems with isodose scaling and shaping capabilities. Our procedure provides optimal source selection results independent of user experience and in a short amount of time. This method also generates statistics on frequently requested ideal source strengths, aiding in the ordering of clinically relevant sources.

  15. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
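
    The paper's conclusion, that an FIR model together with an impulse at the start of the observation interval is optimal, has a simple numerical illustration: with an impulse input, the output samples read off the FIR taps directly, so the identification error reduces to the measurement noise. The channel and noise level below are invented for the sketch.

```python
# Sketch: with an FIR model and an impulse input, the output samples
# reveal the impulse-response taps directly.
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.9, 0.5, 0.2, 0.05])       # unknown FIR channel (assumed)
N = 16                                    # observation interval

u = np.zeros(N); u[0] = 1.0               # impulse at the start
y = np.convolve(u, h)[:N] + 0.01 * rng.standard_normal(N)

h_hat = y[:len(h)]                        # taps appear directly in the output
print("estimated taps:", np.round(h_hat, 3))
```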

  16. Perpendicularly Biased YIG Tuners for the Fermilab Recycler 52.809 MHz Cavities

    SciTech Connect

    Madrak, R.; Kashikhin, V.; Makarov, A.; Wildman, D.

    2013-09-13

    For NOvA and future experiments requiring high intensity proton beams, Fermilab is in the process of upgrading the existing accelerator complex for increased proton production. One such improvement is to reduce the Main Injector cycle time, by performing slip stacking, previously done in the Main Injector, in the now repurposed Recycler Ring. Recycler slip stacking requires new tuneable RF cavities, discussed separately in these proceedings. These are quarter wave cavities resonant at 52.809 MHz with a 10 kHz tuning range. The 10 kHz range is achieved by use of a tuner which has an electrical length of approximately one half wavelength at 52.809 MHz. The tuner is constructed from 3⅛″ diameter rigid coaxial line, with 5 inches of its length containing perpendicularly biased, Al doped Yttrium Iron Garnet (YIG). The tuner design, measurements, and high power test results are presented.

  17. Design and test of frequency tuner for a CAEP high power THz free-electron laser

    NASA Astrophysics Data System (ADS)

    Mi, Zheng-Hui; Zhao, Dan-Yang; Sun, Yi; Pan, Wei-Min; Lin, Hai-Ying; Lu, Xiang-Yang; Quan, Sheng-Wen; Luo, Xing; Li, Ming; Yang, Xing-Fan; Wang, Guang-Wei; Dai, Jian-Ping; Li, Zhong-Quan; Ma, Qiang; Sha, Peng

    2015-02-01

    Peking University is developing a 1.3 GHz superconducting accelerating section for a high-power THz free-electron laser for the China Academy of Engineering Physics (CAEP). A compact fast/slow tuner has been developed by the Institute of High Energy Physics (IHEP) for the accelerating section to control Lorentz detuning and to compensate for beam loading effects, microphonics, and liquid helium pressure fluctuations. The tuner design and the warm and cold tests of the first prototype are presented, which provide guidance for the manufacture of the final tuner and for cryomodule assembly. Supported by the 500 MHz superconducting cavity electromechanical tuning system (Y190KFEOHD), NSAF (11176003) and National Major Scientific Instrument and Equipment Development projects (2011YQ130018)

  18. Optimization of ultrasonic transducers for selective guided wave actuation

    NASA Astrophysics Data System (ADS)

    Miszczynski, Mateusz; Packo, Pawel; Zbyrad, Paulina; Stepinski, Tadeusz; Uhl, Tadeusz; Lis, Jerzy; Wiatr, Kazimierz

    2016-04-01

    The application of guided waves using surface-bonded piezoceramic transducers for nondestructive testing (NDT) and Structural Health Monitoring (SHM) has shown great potential. However, due to the difficulty of identifying individual wave modes, which results from their dispersive and multi-modal nature, selective mode excitation methods are highly desired. The presented work focuses on an optimization-based approach to the design of a piezoelectric transducer for selective guided wave generation. The concept of the presented framework involves a Finite Element Method (FEM) model in the optimization process. The material of the transducer is optimized in the topological sense with the aim of tuning the piezoelectric properties for actuation of specific guided wave modes.

  19. Optimizing of selective laser sintering method

    SciTech Connect

    Guo Suiyan

    1996-12-31

    In an SLS process, a computer-controlled laser scanner moves the laser beam spot across a flat powder bed, and the laser beam heats the powder to cause sintering in a specific area. A series of such flat planes is linked together to construct a 3D object. SLS is a complex process that involves many process parameters. The laser beam properties, such as beam profile, intensity, and wavelength, as well as its scanning speed and scanning path, are very important parameters. Laser properties, powder properties, and the sintering environment together determine whether SLS is successful. The objective of SLS is to make a part that has the same size as the CAD data. The accuracy of the final part is affected by the many parameters mentioned above, and controlling these parameters is key to producing an acceptable final part. Much effort has been devoted by other researchers to parametric analysis, material properties, and the processing environment for SLS. The focus of this paper is to optimize the laser parameters and scanning path to improve the quality of the SLS part and the processing speed. A scanning method is discussed to improve quality and speed together.

  20. Digital logic optimization using selection operators

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor); Cameron, Eric G. (Inventor); Gambles, Jody W. (Inventor)

    2004-01-01

    According to the invention, a digital design method for manipulating a digital circuit netlist is disclosed. In one step, a first netlist is loaded. The first netlist is comprised of first basic cells that are comprised of first kernel cells. The first netlist is manipulated to create a second netlist. The second netlist is comprised of second basic cells that are comprised of second kernel cells. A percentage of the first and second kernel cells are selection circuits. There is less chip area consumed in the second basic cells than in the first basic cells. The second netlist is stored. In various embodiments, the percentage could be 2% or more, 5% or more, 10% or more, 20% or more, 30% or more, or 40% or more.

  1. Optimized Selective Coatings for Solar Collectors

    NASA Technical Reports Server (NTRS)

    Mcdonald, G.; Curtis, H. B.

    1967-01-01

    The spectral reflectance properties of black nickel electroplated over stainless steel and of black copper produced by oxidation of copper sheet were measured for various plating times of black nickel and for various oxidation times of the copper sheet, and compared to black chrome over nickel and to converted zinc. It was determined that there was an optimum time both for the plating of black nickel and for the oxidation of black copper, at which the solar selective properties show high absorptance in the solar spectrum and low emittance in the infrared. The conditions for producing optimum optical properties are compared for black nickel, black copper, black chrome, and two black zinc conversions, which under these conditions had absorptances of 0.84, 0.90, 0.95, 0.84, and 0.92, respectively, and emittances of 0.18, 0.08, 0.09, 0.10, and 0.08, respectively.
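
    A quick way to compare the reported coatings is the standard solar-selectivity figure of merit, absorptance over emittance (α/ε), computed here from the values quoted in the abstract:

```python
# Compare the reported coatings by the common figure of merit alpha/epsilon
# (solar absorptance over infrared emittance); values from the abstract.
coatings = {
    "black nickel":  (0.84, 0.18),
    "black copper":  (0.90, 0.08),
    "black chrome":  (0.95, 0.09),
    "black zinc #1": (0.84, 0.10),
    "black zinc #2": (0.92, 0.08),
}
for name, (alpha, eps) in coatings.items():
    print(f"{name:13s} alpha/eps = {alpha / eps:5.2f}")
```

    By this measure black copper, black chrome, and the second black zinc conversion stand out, consistent with the comparison above.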

  2. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correc tly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this corre ct selection probability, we derive an asymptotically optimal allocat ion and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocatio ns are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedur e has the potential to enhance computational efficiency for simulation optimization.
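
    The flavor of such a sequential allocation can be sketched as follows; this is a simplified heuristic in the OCBA spirit (spend extra replications on designs whose means sit close to the top-m boundary relative to their noise), not the paper's exact asymptotically optimal rule.

```python
# Simplified sequential budget allocation for top-m subset selection.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([1.0, 1.2, 1.5, 1.6, 2.0, 2.4, 2.5, 3.0])
k, m, n0, total = len(true_means), 3, 10, 800   # select top m (smallest mean)

def simulate(i, n):                  # noisy samples of design i
    return true_means[i] + rng.standard_normal(n)

samples = [list(simulate(i, n0)) for i in range(k)]
spent = k * n0
while spent < total:
    mean = np.array([np.mean(s) for s in samples])
    var = np.array([np.var(s, ddof=1) for s in samples])
    order = np.argsort(mean)
    boundary = 0.5 * (mean[order[m - 1]] + mean[order[m]])  # top-m cutoff
    score = var / np.maximum((mean - boundary) ** 2, 1e-9)  # criticality
    i = int(np.argmax(score))        # most budget-worthy design
    samples[i].extend(simulate(i, 5))
    spent += 5

best = np.argsort([np.mean(s) for s in samples])[:m]
print("selected designs:", sorted(best.tolist()))
```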

  3. Opposing selection and environmental variation modify optimal timing of breeding.

    PubMed

    Tarwater, Corey E; Beissinger, Steven R

    2013-09-17

    Studies of evolution in wild populations often find that the heritable phenotypic traits of individuals producing the most offspring do not increase proportionally in the population. This paradox may arise when phenotypic traits influence both fecundity and viability and when there is a tradeoff between these fitness components, leading to opposing selection. Such tradeoffs are the foundation of life history theory, but they are rarely investigated in selection studies. Timing of breeding is a classic example of a heritable trait under directional selection that does not result in an evolutionary response. Using a 22-y study of a tropical parrot, we show that opposing viability and fecundity selection on the timing of breeding is common and affects optimal breeding date, defined by maximization of fitness. After accounting for sampling error, the directions of viability (positive) and fecundity (negative) selection were consistent, but the magnitude of selection fluctuated among years. Environmental conditions (rainfall and breeding density) primarily and breeding experience secondarily modified selection, shifting optimal timing among individuals and years. In contrast to other studies, viability selection was as strong as fecundity selection, late-born juveniles had greater survival than early-born juveniles, and breeding later in the year increased fitness under opposing selection. Our findings provide support for life history tradeoffs influencing selection on phenotypic traits, highlight the need to unify selection and life history theory, and illustrate the importance of monitoring survival as well as reproduction for understanding phenological responses to climate change. PMID:24003118

  4. Selecting optimal partitioning schemes for phylogenomic datasets

    PubMed Central

    2014-01-01

    Background Partitioning involves estimating independent models of molecular evolution for different subsets of sites in a sequence alignment, and has been shown to improve phylogenetic inference. Current methods for estimating best-fit partitioning schemes, however, are only computationally feasible with datasets of fewer than 100 loci. This is a problem because datasets with thousands of loci are increasingly common in phylogenetics. Methods We develop two novel methods for estimating best-fit partitioning schemes on large phylogenomic datasets: strict and relaxed hierarchical clustering. These methods use information from the underlying data to cluster together similar subsets of sites in an alignment, and build on clustering approaches that have been proposed elsewhere. Results We compare the performance of our methods to each other, and to existing methods for selecting partitioning schemes. We demonstrate that while strict hierarchical clustering has the best computational efficiency on very large datasets, relaxed hierarchical clustering provides scalable efficiency and returns dramatically better partitioning schemes as assessed by common criteria such as AICc and BIC scores. Conclusions These two methods provide the best current approaches to inferring partitioning schemes for very large datasets. We provide free open-source implementations of the methods in the PartitionFinder software. We hope that the use of these methods will help to improve the inferences made from large phylogenomic datasets. PMID:24742000

  5. High-power RF testing of a 352-MHZ fast-ferrite RF cavity tuner at the Advanced Photon Source.

    SciTech Connect

    Horan, D.; Cherbak, E.; Accelerator Systems Division

    2006-01-01

    A 352-MHz fast-ferrite rf cavity tuner, manufactured by Advanced Ferrite Technology, was high-power tested on a single-cell copper rf cavity at the Advanced Photon Source. These tests measured the fast-ferrite tuner performance in terms of power handling capability, tuning bandwidth, tuning speed, stability, and rf losses. The test system comprises a single-cell copper rf cavity fitted with two identical coupling loops, one for input rf power and the other for coupling the fast-ferrite tuner to the cavity fields. The fast-ferrite tuner rf circuit consists of a cavity coupling loop, a 6-1/8-inch EIA coaxial line system with directional couplers, and an adjustable 360° mechanical phase shifter in series with the fast-ferrite tuner. A bipolar DC bias supply, controlled by a low-level rf cavity tuning loop consisting of an rf phase detector and a PID amplifier, is used to provide a variable bias current to the tuner ferrite material to maintain the test cavity at resonance. Losses in the fast-ferrite tuner are calculated from cooling water calorimetry. Test data will be presented.
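
    The calorimetric loss measurement mentioned at the end is the standard cooling-water power balance, P = ṁ·c_p·ΔT. A minimal helper with illustrative numbers:

```python
# Cooling-water calorimetry: P = m_dot * c_p * dT. Example values invented.
def calorimetric_power(flow_lpm, dT_kelvin):
    rho, c_p = 0.998, 4186.0        # kg/L and J/(kg*K) for water near 25 C
    m_dot = rho * flow_lpm / 60.0   # mass flow, kg/s
    return m_dot * c_p * dT_kelvin  # dissipated power, W

print(f"{calorimetric_power(10.0, 2.5):.0f} W")  # ~1.7 kW at 10 L/min, 2.5 K rise
```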

  6. Optimal Selection of Parameters for Nonuniform Embedding of Chaotic Time Series Using Ant Colony Optimization.

    PubMed

    Shen, Meie; Chen, Wei-Neng; Zhang, Jun; Chung, Henry Shu-Hung; Kaynak, Okyay

    2013-04-01

    The optimal selection of parameters for time-delay embedding is crucial to the analysis and the forecasting of chaotic time series. Although various parameter selection techniques have been developed for conventional uniform embedding methods, the study of parameter selection for nonuniform embedding has progressed at a slow pace. In nonuniform embedding, which enables different dimensions to have different time delays, the selection of time delays for different dimensions presents a difficult optimization problem with combinatorial explosion. To solve this problem efficiently, this paper proposes an ant colony optimization (ACO) approach. Taking advantage of the characteristic of incremental solution construction of the ACO, the proposed ACO for nonuniform embedding (ACO-NE) divides the solution construction procedure into two phases, i.e., selection of embedding dimension and selection of time delays. In this way, both the embedding dimension and the time delays can be optimized, along with the search process of the algorithm. To accelerate search speed, we extract useful information from the original time series to define heuristics to guide the search direction of ants. Three geometry- or model-based criteria are used to test the performance of the algorithm. The optimal embeddings found by the algorithm are also applied in time-series forecasting. Experimental results show that the ACO-NE is able to yield good embedding solutions from both the viewpoints of optimization performance and prediction accuracy. PMID:23144038

  7. Development of a Movable Plunger Tuner for the High Power RF Cavity for the PEP II B Factory

    SciTech Connect

    Schwarz, H.D.; Fant, K.; Neubauer, Mark Stephen; Rimmer, R.A.; /LBL, Berkeley

    2011-08-26

    A 10 cm diameter by 5 cm travel plunger tuner was developed for the PEP-II RF copper cavity system. The single cell cavity including the tuner is designed to operate at up to 150 kW of dissipated RF power. Spring finger contacts to protect the bellows from RF power are specially placed 8.5 cm away from the inside wall of the cavity to avoid fundamental and higher order mode resonances. The spring fingers are made of dispersion-strengthened copper to accommodate relatively high heating. The design, alignment, testing, and performance of the tuner are described.

  8. Training set optimization under population structure in genomic selection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...

  9. Optimal Financial Aid Policies for a Selective University.

    ERIC Educational Resources Information Center

    Ehrenberg, Ronald G.; Sherman, Daniel R.

    1984-01-01

    This paper provides a model of optimal financial aid policies for a selective university. The model implies that the financial aid package to be offered to each category of admitted applicants depends on the elasticity of the fraction who accept offers of admission with respect to the financial aid package offered them. (Author/SSH)

  10. Training set optimization under population structure in genomic selection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determ...

  11. Multidimensional Adaptive Testing with Optimal Design Criteria for Item Selection

    ERIC Educational Resources Information Center

    Mulder, Joris; van der Linden, Wim J.

    2009-01-01

    Several criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the…

  12. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  13. Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs).

    PubMed

    Handlon, Anthony L; Schaller, Lee T; Leesnitzer, Lisa M; Merrihew, Raymond V; Poole, Chuck; Ulrich, John C; Wilson, Joseph W; Cadilla, Rodolfo; Turnbull, Philip

    2016-01-14

    A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%). PMID:26819671
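
    The LLE parameter used to steer this optimization is simply potency minus lipophilicity, LLE = pIC50 − cLogP. A one-function sketch with invented example values (not compound 43's data):

```python
# Lipophilic ligand efficiency: LLE = pIC50 - cLogP. Example values invented.
import math

def lle(ic50_nM, clogp):
    pic50 = -math.log10(ic50_nM * 1e-9)   # convert nM potency to pIC50
    return pic50 - clogp

print(f"LLE = {lle(5.0, 3.2):.1f}")       # 5 nM, cLogP 3.2 -> LLE ~ 5.1
```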

  14. Improved Clonal Selection Algorithm Combined with Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Wang, Wei; Dai, Hongwei; Li, Fangjia; Tang, Zheng

    Both the clonal selection algorithm (CSA) and ant colony optimization (ACO) are inspired by natural phenomena and are effective tools for solving complex problems. CSA can exploit and explore the solution space in parallel and effectively. However, it cannot make sufficient use of feedback information from the environment and thus performs a large amount of redundant search. On the other hand, ACO is based on the concept of an indirect cooperative foraging process via secreted pheromones. Its positive feedback ability is attractive, but its convergence speed is slow because of the small amount of initial pheromone. In this paper, we propose a pheromone-linker to combine these two algorithms. The proposed hybrid clonal selection and ant colony optimization (CSA-ACO) reasonably utilizes the superiorities of both algorithms and also overcomes their inherent disadvantages. Simulation results based on traveling salesman problems have demonstrated the merit of the proposed algorithm over some traditional techniques.

  15. Monte Carlo optimization for site selection of new chemical plants.

    PubMed

    Cai, Tianxing; Wang, Sujing; Xu, Qiang

    2015-11-01

    Geographic distribution of chemical manufacturing sites has a significant impact on the business sustainability of industrial development and on regional environmental sustainability as well. Common site selection rules include evaluating the air-quality impact of a newly constructed chemical manufacturing site on surrounding communities. To achieve this target, regional background air-quality information, the emissions of the new manufacturing site, and the statistical pattern of local meteorological conditions should be considered simultaneously. Based on this information, a risk assessment can be conducted for the potential air-quality impacts from candidate locations of a new chemical manufacturing site, and the final site selection can then be optimized by minimizing its air-quality impacts. This paper provides a systematic methodology for this purpose. There are two stages of modeling and optimization work: i) Monte Carlo simulation to identify the background pollutant concentration based on currently existing emission sources and regional statistical meteorological conditions; and ii) multi-objective Monte Carlo optimization (simultaneous minimization of both the peak pollutant concentration and the standard deviation of the pollutant concentration spatial distribution at air-quality concern regions) for optimal location selection of new chemical manufacturing sites according to their design data of potential emission. This study can be helpful both for determining the potential air-quality impact of the geographic distribution of multiple chemical plants with respect to regional statistical meteorological conditions, and for identifying an optimal site for each new chemical manufacturing site with minimal environmental impact on surrounding communities. The efficacy of the developed methodology has been demonstrated through case studies. PMID:26283263

  16. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
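
    The basic Hyperopt workflow the tutorial covers, define an objective, describe a search space, and minimize with TPE, looks like this (toy objective; the library calls are the standard documented API):

```python
# Basic Hyperopt usage: search space + TPE minimization.
from hyperopt import fmin, tpe, hp, STATUS_OK

def objective(args):
    x, y = args["x"], args["y"]
    return {"loss": (x - 1) ** 2 + (y + 2) ** 2, "status": STATUS_OK}

space = {
    "x": hp.uniform("x", -5, 5),
    "y": hp.uniform("y", -5, 5),
}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
print(best)   # e.g. {'x': ~1.0, 'y': ~-2.0}
```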

  17. A technique for monitoring fast tuner piezoactuator preload forces for superconducting rf cavities

    SciTech Connect

    Pischalnikov, Y.; Branlard, J.; Carcagno, R.; Chase, B.; Edwards, H.; Orris, D.; Makulski, A.; McGee, M.; Nehring, R.; Poloubotko, V.; Sylvester, C.; /Fermilab

    2007-06-01

    The technology for mechanically compensating Lorentz Force detuning in superconducting RF cavities has already been developed at DESY. One technique is based on commercial piezoelectric actuators and was successfully demonstrated on TESLA cavities [1]. Piezo actuators for fast tuners can operate in a frequency range up to several kHz; however, it is very important to maintain a constant static force (preload) on the piezo actuator in the range of 10 to 50% of its specified blocking force. Determining the preload force during cool-down, warm-up, or re-tuning of the cavity is difficult without instrumentation, and exceeding the specified range can permanently damage the piezo stack. A technique based on strain gauge technology for superconducting magnets has been applied to fast tuners for monitoring the preload on the piezoelectric assembly. The design and testing of piezo actuator preload sensor technology is discussed. Results from measurements of preload sensors installed on the tuner of the Capture Cavity II (CCII) [2] tested at FNAL are presented. These results include measurements during cool-down, warm-up, and cavity tuning, along with dynamic Lorentz force compensation.

  18. State-Selective Excitation of Quantum Systems via Geometrical Optimization.

    PubMed

    Chang, Bo Y; Shin, Seokmin; Sola, Ignacio R

    2015-09-01

    We lay out the foundations of a general method of quantum control via geometrical optimization. We apply the method to state-selective population transfer using ultrashort transform-limited pulses between manifolds of levels that may represent, e.g., state-selective transitions in molecules. Assuming that certain states can be prepared, we develop three implementations: (i) preoptimization, which implies engineering the initial state within the ground manifold or electronic state before the pulse is applied; (ii) postoptimization, which implies engineering the final state within the excited manifold or target electronic state, after the pulse; and (iii) double-time optimization, which uses both types of time-ordered manipulations. We apply the schemes to two important dynamical problems: To prepare arbitrary vibrational superposition states on the target electronic state and to select weakly coupled vibrational states. Whereas full population inversion between the electronic states only requires control at initial time in all of the ground vibrational levels, only very specific superposition states can be prepared with high fidelity by either pre- or postoptimization mechanisms. Full state-selective population inversion requires manipulating the vibrational coherences in the ground electronic state before the optical pulse is applied and in the excited electronic state afterward, but not during all times. PMID:26575896

  19. PDZ Domain Binding Selectivity Is Optimized Across the Mouse Proteome

    PubMed Central

    Stiffler, Michael A.; Chen, Jiunn R.; Grantcharova, Viara P.; Lei, Ying; Fuchs, Daniel; Allen, John E.; Zaslavskaia, Lioudmila A.; MacBeath, Gavin

    2009-01-01

    PDZ domains have long been thought to cluster into discrete functional classes defined by their peptide-binding preferences. We used protein microarrays and quantitative fluorescence polarization to characterize the binding selectivity of 157 mouse PDZ domains with respect to 217 genome-encoded peptides. We then trained a multidomain selectivity model to predict PDZ domain–peptide interactions across the mouse proteome with an accuracy that exceeds many large-scale, experimental investigations of protein-protein interactions. Contrary to the current paradigm, PDZ domains do not fall into discrete classes; instead, they are evenly distributed throughout selectivity space, which suggests that they have been optimized across the proteome to minimize cross-reactivity. We predict that focusing on families of interaction domains, which facilitates the integration of experimentation and modeling, will play an increasingly important role in future investigations of protein function. PMID:17641200

  20. Field of view selection for optimal airborne imaging sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Barnard, P. Werner; Fildis, Halidun; Erbudak, Mustafa; Senger, Tolga; Alpman, Mehmet E.

    2014-05-01

    The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has a major impact on the overall performance of the system. A market survey of published data on sensors used in stabilized airborne targeting systems shows a trend of ever-narrowing FOVs housed in smaller and lighter volumes. This approach pursues the ever-increasing geometric resolution provided by narrower FOVs, while seemingly ignoring the influence FOV selection has on the sensor's sensitivity, the effects of diffraction, the influence of sight-line jitter, and, collectively, the overall system performance. This paper presents a trade-off methodology to select the optimal FOV for an imaging sensor that is limited in aperture diameter by mechanical constraints (such as the space/volume available and window size) by balancing the influences FOV has on sensitivity and resolution, thereby optimizing the system's performance. The methodology may be applied to staring-array-based imaging sensors across all wavebands, from visible/day cameras through to long-wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given, highlighting the performance advantages that can be gained by maximizing the aperture diameter and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.
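
    The core of the trade-off can be sketched numerically: for a fixed aperture, narrowing the FOV lengthens the focal length and shrinks the geometric IFOV, while the diffraction-limited angular blur (≈2.44λ/D) stays fixed, so below some FOV the extra geometric resolution is wasted. All numbers below are illustrative assumptions.

```python
# FOV trade-off for a fixed aperture: detector IFOV shrinks with the FOV,
# the Airy blur angle does not. Values are illustrative.
import numpy as np

D = 0.10            # aperture diameter, m (mechanically constrained)
pitch = 15e-6       # detector pixel pitch, m
npix = 640          # pixels across the FOV
lam = 4e-6          # mid-wave IR wavelength, m

airy = 2.44 * lam / D                          # angular Airy diameter, rad
for fov_deg in (1.0, 2.0, 4.0, 8.0):
    f = pitch * npix / np.radians(fov_deg)     # focal length for this FOV
    ifov = pitch / f                           # geometric resolution, rad
    regime = "diffraction-limited" if ifov < airy / 2 else "detector-limited"
    print(f"FOV {fov_deg:3.0f} deg: IFOV {ifov*1e6:6.1f} urad "
          f"(Airy {airy*1e6:.1f} urad) -> {regime}")
```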

  1. Some useful upper bounds for the selection of optimal profiles

    NASA Astrophysics Data System (ADS)

    Daripa, Prabir

    2012-08-01

    In enhanced oil recovery by chemical flooding within tertiary oil recovery, it is often necessary to choose optimal viscous profiles of the injected displacing fluids that reduce growth rates of hydrodynamic instabilities the most, thereby substantially reducing the well-known fingering problem and improving oil recovery. Within the three-layer Hele-Shaw model, we show in this paper that selection of the optimal monotonic viscous profile of the middle-layer fluid based on the well known theoretical upper bound formula [P. Daripa, G. Pasa, A simple derivation of an upper bound in the presence of a viscosity gradient in three-layer Hele-Shaw flows, Journal of Statistical Mechanics (2006) 11. http://dx.doi.org/10.1088/1742-5468/2006/01/P01014] agrees very well with that based on the computation of maximum growth rate of instabilities from solving the linearized stability problem. Thus, this paper proposes a very simple, fast method for selection of the optimal monotonic viscous profiles of the displacing fluids in multi-layer flows.

  2. Implementing stationary-phase optimized selectivity in supercritical fluid chromatography.

    PubMed

    Delahaye, Sander; Lynen, Frédéric

    2014-12-16

    The performance of stationary-phase optimized selectivity liquid chromatography (SOS-LC) for improved separation of complex mixtures has been demonstrated before. A dedicated kit containing column segments of different lengths and packed with different stationary phases is commercially available together with algorithms capable of predicting and ranking isocratic and gradient separations over vast amounts of possible column combinations. Implementation in chromatographic separations involving compressible fluids, as is the case in supercritical fluid chromatography, had thus far not been attempted. The challenge of this approach is the dependency of solute retention with the mobile-phase density, complicating linear extrapolation of retention over longer or shorter columns segments, as is the case in conventional SOS-LC. In this study, the possibilities of performing stationary-phase optimized selectivity supercritical fluid chromatography (SOS-SFC) are demonstrated with typical low density mobile phases (94% CO2). The procedure is optimized with the commercially available column kit and with the classical isocratic SOS-LC algorithm. SOS-SFC appears possible without any density correction, although optimal correspondence between prediction and experiment is obtained when isopycnic conditions are maintained. As also the influence of the segment order appears significantly less relevant than expected, the use of the approach in SFC appears as promising as is the case in HPLC. Next to the classical use of SOS for faster baseline separation of all solutes in a mixture, the benefits of the approach for predicting as wide as possible separation windows around to-be-purified solutes in semipreparative SFC are illustrated, leading to significant production rate improvements in (semi)preparative SFC. PMID:25393519
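
    Under isocratic conditions the SOS prediction reduces to a simple length-weighted combination of retention factors measured on the individual phases; ranking candidate segment combinations is then a small search. The kit contents and k values below are invented for illustration.

```python
# Isocratic SOS sketch: combined retention factor = length-weighted mean of
# per-phase retention factors. Kit contents and k values are invented.
from itertools import combinations_with_replacement

k_measured = {                      # k of three solutes on each phase
    "C18":    [2.0, 2.3, 5.0],
    "phenyl": [1.5, 3.1, 4.2],
    "CN":     [0.8, 1.9, 2.5],
}
segments = [10, 20, 40]             # available segment lengths, mm (assumed)

def k_combined(combo):              # combo: list of (phase, length) pairs
    total = sum(L for _, L in combo)
    return [sum(L / total * k_measured[p][s] for p, L in combo)
            for s in range(3)]

def min_spacing(ks):                # crude selectivity proxy
    ks = sorted(ks)
    return min(b - a for a, b in zip(ks, ks[1:]))

options = [(p, L) for p in k_measured for L in segments]
combos = ([[o] for o in options]
          + [list(pair) for pair in combinations_with_replacement(options, 2)])
best = max(combos, key=lambda c: min_spacing(k_combined(c)))
print("best combination:", best,
      "-> k =", [round(k, 2) for k in k_combined(best)])
```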

  3. Optimal Selection of Threshold Value 'r' for Refined Multiscale Entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2015-12-01

    The refined multiscale entropy (RMSE) technique was introduced to evaluate the complexity of a time series over multiple scale factors 't'. Here the threshold value 'r' is updated as 0.15 times the SD of the filtered scaled time series. The use of a fixed threshold value 'r' in RMSE sometimes assigns very closely resembling entropy values to certain time series at certain temporal scale factors and is unable to distinguish different time series optimally. The present study aims to evaluate the RMSE technique by varying the threshold value 'r' from 0.05 to 0.25 times the SD of the filtered scaled time series and finding optimal 'r' values for each scale factor at which different time series can be distinguished more effectively. The proposed RMSE was evaluated over HRV time series of normal sinus rhythm subjects, patients suffering from sudden cardiac death, congestive heart failure, healthy adult male, healthy adult female and mid-aged female groups, as well as over a synthetic simulated database, for different data lengths 'N' of 3000, 3500 and 4000. The proposed RMSE results in improved discrimination among different time series. To enhance the computational capability, empirical mathematical equations have been formulated for the optimal selection of threshold values 'r' as a function of the SD of the filtered scaled time series and data length 'N' for each scale factor 't'. PMID:26577486

  4. Optimizing Hammermill Performance Through Screen Selection and Hammer Design

    SciTech Connect

    Neal A. Yancey; Tyler L. Westover; Christopher T. Wright

    2013-01-01

    Background: Mechanical preprocessing, which includes particle size reduction and mechanical separation, is one of the primary operations in the feedstock supply system for a lignocellulosic biorefinery. It is the means by which raw biomass from the field or forest is mechanically transformed into an on-spec feedstock with characteristics better suited for the fuel conversion process. Results: This work provides a general overview of the objectives and methodologies of mechanical preprocessing and then presents experimental results illustrating (1) improved size reduction via optimization of hammer mill configuration, (2) improved size reduction via pneumatic-assisted hammer milling, and (3) improved control of particle size and particle size distribution through proper selection of grinder process parameters. Conclusion: Optimal grinder configuration for maximal process throughput and efficiency is strongly dependent on feedstock type and properties, such as moisture content. Tests conducted using a HG200 hammer grinder indicate that increasing the tip speed, optimizing hammer geometry, and adding pneumatic assist can increase grinder throughput as much as 400%.

  5. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  6. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
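
    A minimal sketch of the idea described in the two records above, linearize, find the fastest mode, and set the subinterval to a fraction of its period, using a toy system matrix rather than the GSO 37-bus case:

```python
# Pick a subinterval from the fastest eigenvalue of the linearized dynamics.
import numpy as np

A = np.array([[0.0,    1.0,  0.0],
              [-150.0, -2.0, 10.0],
              [0.0,    5.0, -80.0]])      # toy linearized system x' = A x

eig = np.linalg.eigvals(A)
f_max = np.abs(eig).max() / (2 * np.pi)   # fastest mode, Hz
dt_sub = 1.0 / (f_max * 20.0)             # ~20 points per fastest period
print(f"fastest mode {f_max:.1f} Hz -> subinterval {dt_sub*1e3:.2f} ms")
```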

  7. Tuner and radiation shield for planar electron paramagnetic resonance microresonators

    SciTech Connect

    Narkowicz, Ryszard; Suter, Dieter

    2015-02-15

    Planar microresonators provide a large boost of sensitivity for small samples. They can be manufactured lithographically to a wide range of target parameters. The coupler between the resonator and the microwave feedline can be integrated into this design. To optimize the coupling and to compensate manufacturing tolerances, it is sometimes desirable to have a tuning element available that can be adjusted when the resonator is connected to the spectrometer. This paper presents a simple design that allows one to bring undercoupled resonators into the condition for critical coupling. In addition, it also reduces radiation losses and thereby increases the quality factor and the sensitivity of the resonator.

  8. Optimizing Site Selection in Urban Areas in Northern Switzerland

    NASA Astrophysics Data System (ADS)

    Plenkers, K.; Kraft, T.; Bethmann, F.; Husen, S.; Schnellmann, M.

    2012-04-01

    There is a need to observe weak seismic events (M<2) in areas close to potential nuclear-waste repositories or nuclear power plants, in order to analyze the underlying seismo-tectonic processes and estimate their seismic hazard. We are therefore densifying the existing Swiss Digital Seismic Network in northern Switzerland by additional 20 stations. The new network that will be in operation by the end of 2012, aims at observing seismicity in northern Switzerland with a completeness of M_c=1.0 and a location error < 0.5 km in epicenter and < 2 km in focal depth. Monitoring of weak seismic events in this region is challenging, because the area of interest is densely populated and geology is dominated by the Swiss molasse basin. A optimal network-design and a thoughtful choice for station-sites is, therefore, mandatory. To help with decision making we developed a step-wise approach to find the optimum network configuration. Our approach is based on standard network optimization techniques regarding the localization error. As a new feature, our approach uses an ambient noise model to compute expected signal-to-noise ratios for a given site. The ambient noise model uses information on land use and major infrastructures such as highways and train lines. We ran a series of network optimizations with increasing number of stations until the requirements regarding localization error and magnitude of completeness are reached. The resulting network geometry serves as input for the site selection. Site selection is done by using a newly developed multi-step assessment-scheme that takes into account local noise level, geology, infrastructure, and costs necessary to realize the station. The assessment scheme is weighting the different parameters and the most promising sites are identified. In a first step, all potential sites are classified based on information from topographic maps and site inspection. In a second step, local noise conditions are measured at selected sites. We

  9. Multiobjective Optimization for Model Selection in Kernel Methods in Regression

    PubMed Central

    You, Di; Benitez-Quiroz, C. Fabian; Martinez, Aleix M.

    2016-01-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-vs-variance trade-off. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared to methods in the state of the art. PMID:25291740

  10. Multiobjective optimization for model selection in kernel methods in regression.

    PubMed

    You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M

    2014-10-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740
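
    A compact sketch of the underlying model-selection idea in the two records above: fit a kernel regressor over a parameter grid and trade training error (bias proxy) against a roughness measure of the fitted function (complexity proxy). The scalarized weight here is an assumption; the paper uses a true multiobjective formulation.

```python
# Kernel ridge regression with a fit-vs-roughness selection criterion.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(60)

def krr_fit(gamma, lam):
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)   # Gaussian kernel
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    return K @ alpha                      # fitted values on the sample

best = None
for gamma in (1.0, 10.0, 100.0, 1000.0):
    for lam in (1e-3, 1e-2, 1e-1):
        f = krr_fit(gamma, lam)
        fit = np.mean((y - f) ** 2)                 # bias proxy
        rough = np.mean(np.diff(f, 2) ** 2)         # roughness/complexity
        score = fit + 5.0 * rough                   # assumed trade-off weight
        if best is None or score < best[0]:
            best = (score, gamma, lam)
print("selected (gamma, lambda):", best[1:])
```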

  11. Optimized bioregenerative space diet selection with crew choice

    NASA Technical Reports Server (NTRS)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
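
    The underlying diet problem is a small linear program: choose servings to minimize equivalent system mass (ESM) subject to nutrient bounds. A toy instance with invented food data, using scipy:

```python
# Toy diet LP: minimize kg ESM subject to nutrient bounds. Data invented.
import numpy as np
from scipy.optimize import linprog

# rows: [energy MJ, protein g, sodium mg] per serving; columns: four foods
A = np.array([[1.2,   0.8,   2.0,   0.6],
              [15.0,  30.0,  5.0,  10.0],
              [150.0, 250.0, 100.0, 50.0]])
esm = np.array([0.9, 1.4, 1.1, 0.5])          # kg ESM per serving (assumed)
lo = np.array([10.0, 90.0, 500.0])            # daily minima
hi = np.array([14.0, 150.0, 2400.0])          # daily maxima

res = linprog(c=esm,
              A_ub=np.vstack([-A, A]),        # encodes lo <= A x <= hi
              b_ub=np.concatenate([-lo, hi]),
              bounds=[(0, 8)] * 4,            # servings per food per day
              method="highs")
print("servings:", np.round(res.x, 2),
      "| cost:", round(res.fun, 2), "kg ESM/day")
```

    A sodium cap like the one above is also how the infeasibility noted in the abstract would be avoided in practice.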

  12. Theoretical Analysis of Triple Liquid Stub Tuner Impedance Matching for ICRH on Tokamaks

    NASA Astrophysics Data System (ADS)

    Du, Dan; Gong, Xueyu; Yin, Lan; Xiang, Dong; Li, Jingchun

    2015-12-01

    Impedance matching is crucial for continuous wave operation of ion cyclotron resonance heating (ICRH) antennae with high power injection into plasmas. A sudden increase in the reflected radio frequency power due to an impedance mismatch of the ICRH system is an issue that must be solved for present-day and future fusion reactors. This paper presents a method for the theoretical analysis of ICRH system impedance matching for a triple liquid stub tuner under plasma operational conditions. The relationship of the antenna input impedance with the plasma parameters and operating frequency is first obtained using a global solution. Then, the relations of the plasma parameters and operating frequency with the matching liquid heights are obtained indirectly through numerical simulation according to transmission line theory and the matching conditions. The method provides an alternative theoretical approach, rather than measurements, to study triple liquid stub tuner impedance matching for ICRH, which may be beneficial for the design of ICRH systems on tokamaks. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2014GB108002, 2013GB107001), National Natural Science Foundation of China (Nos. 11205086, 11205053, 11375085, and 11405082), the Construct Program of Fusion and Plasma Physics Innovation Team in Hunan Province, China (No. NHXTD03), and the Natural Science Foundation of Hunan Province, China (No. 2015JJ4044)
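
    The building block of such a stub-tuner analysis is the lossless transmission-line input-impedance relation, Zin = Z0(ZL + jZ0 tan βl)/(Z0 + jZL tan βl); a triple stub tuner composes three stub sections whose electrical lengths are set by the liquid heights. A minimal single-stub sketch with an assumed plasma-loaded antenna impedance:

```python
# Single-stub matching sketch from the lossless-line impedance relation;
# a triple (liquid) stub tuner generalizes this idea. Load is assumed.
import numpy as np

def z_line(z0, zl, bl):           # impedance through a line of length beta*l
    t = np.tan(bl)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

z0 = 50.0                         # feed line impedance, ohms
z_ant = 3.0 + 40.0j               # plasma-loaded antenna impedance (assumed)

best = (1.0, 0.0, 0.0)
for bl1 in np.linspace(0.01, np.pi - 0.01, 300):      # series line to stub
    y1 = 1.0 / z_line(z0, z_ant, bl1)
    for bl2 in np.linspace(0.01, np.pi - 0.01, 300):  # shorted shunt stub
        y = y1 + 1.0 / (1j * z0 * np.tan(bl2))
        gamma = abs((1 / y - z0) / (1 / y + z0))      # reflection coefficient
        if gamma < best[0]:
            best = (gamma, bl1, bl2)
print(f"min |Gamma| = {best[0]:.4f} at line {best[1]:.3f} rad, "
      f"stub {best[2]:.3f} rad")
```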

  13. ICRF antenna matching system with ferrite tuners for the Alcator C-Mod tokamak

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Binus, A.; Wukitch, S. J.; Koert, P.; Murray, R.; Pfeiffer, A.

    2015-12-01

    Real-time fast ferrite tuning (FFT) has been successfully implemented on the ICRF antennas on Alcator C-Mod. The former prototypical FFT system on the E-port 2-strap antenna has been upgraded using new ferrite tuners that have been designed specifically for the operational parameters of the Alcator C-Mod ICRF system (~80 MHz). Another similar FFT system, with two ferrite tuners and one fixed-length stub, has been installed on the transmission line of the D-port 2-strap antenna. These two systems share a Linux-server-based real-time controller. These FFT systems are able to achieve and maintain the reflected power to the transmitters to less than 1% in real time during the plasma discharges under almost all plasma conditions, and help ensure reliable high power operation of the antennas. The innovative field-aligned (FA) 4-strap antenna on J-port has been found to have an interesting feature of loading insensitivity vs. plasma conditions. This feature allows us to significantly improve the matching for the FA J-port antenna by installing carefully designed stubs on the two transmission lines. The reduction of the RF voltages in the transmission lines has enabled the FA J-port antenna to deliver 3.7 MW RF power to plasmas out of the 4 MW source power in high performance I-mode plasmas.

  14. A proof-of-principle experiment of the ferroelectric tuner for the 1.3 GHz gun cavity

    SciTech Connect

    Hahn, H.; Choi, E.; Shchelkunov, S. V.; Hirshfield, J.; Kazakov, S.

    2009-05-04

    A novel ferroelectric frequency tuner was developed by the Omega-P company and was tested at Brookhaven National Laboratory on a 1.3 GHz RF cavity at room temperature. The tuner is based on the ferroelectric property of having a permittivity variable with an applied electric field. The achievable frequency tuning range can be estimated from the reactive impedance change due to an applied voltage via an S11 measurement at the tuner port. The frequency shift can also be measured directly with an S21 measurement across the gun cavity with the tuner connected and activated. The frequency changes due to an applied 5 kV obtained from the two methods are in reasonable agreement: the reactive impedance measurement yields a value in the range between 3.2 kHz and 14 kHz, while the direct measurement gives 9 kHz. A detailed description of the experiment and the analysis is presented in the paper.

  15. Making the optimal decision in selecting protective clothing

    SciTech Connect

    Price, J. Mark

    2007-07-01

    Protective clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for decommissioning or for maintenance and repair work on radioactive systems, a number of interrelating factors must be considered, including protection, personnel contamination, cost, radwaste, comfort, convenience, logistics/rad material considerations, reject rate of laundered clothing, durability, security, personnel safety (including heat stress), and disposition of gloves and booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials or switching to single-use protective clothing (SUPC) from traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch; in other cases, after switching to SUPC for a period of time, plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelating factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  16. Applications of Optimal Building Energy System Selection and Operation

    SciTech Connect

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal; DeForest, Nicholas; Donadee, Jon; Bhattacharya, Prajesh; Lai, Judy

    2011-04-01

    Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy service requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimization problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by descriptions of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem, for which potential investments are also considered, at a University of California, Davis student dining hall. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
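
    As a flavor of the underlying formulation, the toy mixed-integer program below selects and sizes a technology fleet to meet a peak load at minimum annualized cost. The technology menu, costs, and load are invented for illustration, and this is not DER-CAM's actual model; the sketch assumes the PuLP library with its bundled CBC solver:

    ```python
    import pulp

    # Toy menu: (capital cost $/kW, operating cost $/kWh, max size kW) -- invented
    techs = {
        "pv":      (2000, 0.00, 500),
        "battery": (1500, 0.02, 200),
        "genset":  ( 800, 0.12, 400),
    }
    peak_load_kw = 350       # illustrative peak electric load
    hours = 8760             # hours per year, for the operating-cost term

    prob = pulp.LpProblem("toy_der_selection", pulp.LpMinimize)

    size = {t: pulp.LpVariable(f"size_{t}", 0, techs[t][2]) for t in techs}
    buy  = {t: pulp.LpVariable(f"buy_{t}", cat="Binary") for t in techs}

    # Objective: capital cost crudely annualized over 10 years, plus operating
    # cost at an assumed 30% capacity factor
    prob += pulp.lpSum(techs[t][0] / 10 * size[t]
                       + techs[t][1] * hours * 0.3 * size[t]
                       for t in techs)

    for t in techs:
        prob += size[t] <= techs[t][2] * buy[t]   # can't size a tech you don't buy

    prob += pulp.lpSum(size[t] for t in techs) >= peak_load_kw  # meet peak load

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for t in techs:
        print(t, pulp.value(size[t]))
    ```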

  17. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) have to be handled in main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is dominated not by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread-to-node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore, we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases; larger cases take ~4 s. BAS runs including FOs for 1000 different beam ensembles take ~15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
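
    The arithmetic of one FO iteration is essentially a matrix-vector product with the DID and a product with its transpose, which is why memory bandwidth dominates. A minimal dense-matrix sketch with invented sizes and a simple quadratic objective (the real DID is sparse and far larger):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_beamlets = 20000, 400          # toy sizes
    D = rng.random((n_voxels, n_beamlets))     # dose influence matrix (stand-in)
    d_presc = np.ones(n_voxels)                # prescribed dose per voxel
    w = np.full(n_beamlets, 1.0 / n_beamlets)  # fluence weights

    for it in range(50):
        d = D @ w                              # dose calculation (memory-bound)
        r = d - d_presc
        f = float(r @ r)                       # quadratic objective
        g = 2.0 * (D.T @ r)                    # gradient w.r.t. fluence weights
        w = np.maximum(w - 1e-8 * g, 0.0)      # projected gradient step, w >= 0

    print(f"objective after 50 iterations: {f:.3e}")
    ```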

  18. A dual molecular analogue tuner for dissecting protein function in mammalian cells.

    PubMed

    Brosh, Ran; Hrynyk, Iryna; Shen, Jessalyn; Waghray, Avinash; Zheng, Ning; Lemischka, Ihor R

    2016-01-01

    Loss-of-function studies are fundamental for dissecting gene function. Yet, methods to rapidly and effectively perturb genes in mammalian cells, and particularly in stem cells, are scarce. Here we present a system for simultaneous conditional regulation of two different proteins in the same mammalian cell. This system harnesses the plant auxin and jasmonate hormone-induced degradation pathways, and is deliverable with only two lentiviral vectors. It combines RNAi-mediated silencing of two endogenous proteins with the expression of two exogenous proteins whose degradation is induced by external ligands in a rapid, reversible, titratable and independent manner. By engineering molecular tuners for NANOG, CHK1, p53 and NOTCH1 in mammalian stem cells, we have validated the applicability of the system and demonstrated its potential to unravel complex biological processes. PMID:27230261

  19. A dual molecular analogue tuner for dissecting protein function in mammalian cells

    PubMed Central

    Brosh, Ran; Hrynyk, Iryna; Shen, Jessalyn; Waghray, Avinash; Zheng, Ning; Lemischka, Ihor R.

    2016-01-01

    Loss-of-function studies are fundamental for dissecting gene function. Yet, methods to rapidly and effectively perturb genes in mammalian cells, and particularly in stem cells, are scarce. Here we present a system for simultaneous conditional regulation of two different proteins in the same mammalian cell. This system harnesses the plant auxin and jasmonate hormone-induced degradation pathways, and is deliverable with only two lentiviral vectors. It combines RNAi-mediated silencing of two endogenous proteins with the expression of two exogenous proteins whose degradation is induced by external ligands in a rapid, reversible, titratable and independent manner. By engineering molecular tuners for NANOG, CHK1, p53 and NOTCH1 in mammalian stem cells, we have validated the applicability of the system and demonstrated its potential to unravel complex biological processes. PMID:27230261

  20. A quadratic weight selection algorithm. [for optimal flight control

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.

    1981-01-01

    A new numerical algorithm is presented which determines a positive semi-definite state weighting matrix in the linear-quadratic optimal control design problem. The algorithm chooses the weighting matrix by placing closed-loop eigenvalues and eigenvectors near desired locations using optimal feedback gains. A simplified flight control design example is used to illustrate the algorithm's capabilities.
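
    The general mechanism (choose a state weighting Q, solve the Riccati equation for the optimal gain, inspect where the closed-loop eigenvalues land, and iterate) can be illustrated in a few lines of SciPy. The plant matrices below are invented stand-ins, not the flight control example from the paper:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy second-order plant (illustrative): states [x1, x2], one input
    A = np.array([[-1.0,  1.0],
                  [-4.0, -1.4]])
    B = np.array([[0.0],
                  [-6.0]])
    R = np.array([[1.0]])

    # Scan a family of diagonal state weightings and inspect the closed-loop
    # eigenvalues each choice of Q produces -- the essence of picking Q to
    # place closed-loop modes near desired locations.
    for q1 in (0.1, 1.0, 10.0):
        Q = np.diag([q1, 1.0])
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain
        eigs = np.linalg.eigvals(A - B @ K)
        print(f"q1={q1:5.1f}  closed-loop eigenvalues: {np.round(eigs, 3)}")
    ```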

  1. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selection of an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using a genetic algorithm (GA), particle swarm optimization (PSO), and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples were selected from mammogram images of the DDSM database, and a total of 50 features extracted from these samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier; a support vector machine is used as the classifier. Experimental results show that the PSO-based and BBO-based algorithms select an optimal subset of features for classifying MCCs as benign or malignant better than the GA-based algorithm.
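
    A compact sketch of the GA variant of this search, using a synthetic stand-in for the MCC data and cross-validated SVM accuracy as the fitness function (population size, generation count, and mutation rate are arbitrary choices, not the paper's settings):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # Synthetic stand-in for the 380-sample, 50-feature MCC data set
    X, y = make_classification(n_samples=380, n_features=50, n_informative=10,
                               random_state=1)

    def fitness(mask):
        """Correct classification rate of an SVM on the selected features."""
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

    pop_size, n_gen, p_mut = 20, 30, 0.02
    pop = rng.random((pop_size, X.shape[1])) < 0.5    # random binary chromosomes

    for gen in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(X.shape[1]) < p_mut   # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print(f"selected {best.sum()} features, accuracy {fitness(best):.3f}")
    ```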

  2. 75 FR 39437 - Optimizing the Security of Biological Select Agents and Toxins in the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    Executive Order 13546 of July 2, 2010 -- Optimizing the Security of Biological Select Agents and Toxins in the United States. [FR Doc. 2010-16864 Filed 7-7-10; 11:15 am] Billing code 3195-W0-P

  3. To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom

    ERIC Educational Resources Information Center

    Ray, Darrell L.

    2010-01-01

    Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…

  4. Polyhedral Interpolation for Optimal Reaction Control System Jet Selection

    NASA Technical Reports Server (NTRS)

    Gefert, Leon P.; Wright, Theodore

    2014-01-01

    An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.

  5. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    ERIC Educational Resources Information Center

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  6. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  7. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-02-01

    Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
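
    One common flavor of such subset selection is to pick simulations so that the chosen few span the spread of the full ensemble. The greedy maximin sketch below, over invented two-dimensional climate-change signals, is one plausible selection heuristic and not necessarily either of the two methods compared in the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Invented ensemble: 100 simulations described by their change signals
    # (e.g., delta-temperature and delta-precipitation), standardized
    ensemble = rng.normal(size=(100, 2))

    def greedy_maximin(points, k):
        """Greedily pick k members, each farthest from those already chosen."""
        chosen = [int(np.argmax(np.linalg.norm(points - points.mean(0), axis=1)))]
        while len(chosen) < k:
            d = np.min([np.linalg.norm(points - points[c], axis=1)
                        for c in chosen], axis=0)
            chosen.append(int(np.argmax(d)))
        return chosen

    subset = greedy_maximin(ensemble, 10)
    print("selected simulations:", subset)
    # Compare the spread covered by the subset with the full ensemble
    print("full range:  ", ensemble.min(0), ensemble.max(0))
    print("subset range:", ensemble[subset].min(0), ensemble[subset].max(0))
    ```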

  8. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Lang, V. T.; Choi, S. B.

    2015-06-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass.
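
    For the disc-type configuration, the braking torque follows from integrating the Herschel-Bulkley shear stress over the active fluid annulus. A numerical sketch with invented geometry and fluid parameters, approximating the shear rate across the gap as ω·r divided by the gap width:

    ```python
    import numpy as np

    # Illustrative disc-type MRB parameters (not from the paper)
    tau_y = 40e3        # field-induced yield stress of the MR fluid (Pa)
    K, n = 0.8, 0.9     # Herschel-Bulkley consistency (Pa*s^n) and flow index
    gap = 1e-3          # fluid gap (m)
    omega = 100.0       # disc angular speed (rad/s)
    r_i, r_o = 0.02, 0.06   # inner/outer radius of the active annulus (m)

    r = np.linspace(r_i, r_o, 1000)
    gamma_dot = omega * r / gap                 # shear rate across the gap
    tau = tau_y + K * gamma_dot**n              # Herschel-Bulkley shear stress
    torque = 2 * np.trapz(2 * np.pi * tau * r**2, r)  # both faces of the disc

    print(f"braking torque ~ {torque:.1f} N*m")
    ```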

  9. Optimization of Swine Breeding Programs Using Genomic Selection with ZPLAN+.

    PubMed

    Lopez, B M; Kang, H S; Kim, T H; Viterbo, V S; Kim, H S; Na, C S; Seo, K S

    2016-05-01

    The objective of this study was to evaluate the present conventional selection program of a swine nucleus farm and compare it with a new selection strategy employing genomic enhanced breeding value (GEBV) as the selection criteria. The ZPLAN+ software was employed to calculate and compare the genetic gain, total cost, return and profit of each selection strategy. The first strategy reflected the current conventional breeding program, which was a progeny test system (CS). The second strategy was a selection scheme based strictly on genomic information (GS1). The third scenario was the same as GS1, but the selection by GEBV was further supplemented by the performance test (GS2). The last scenario was a mixture of genomic information and progeny tests (GS3). The results showed that the accuracy of the selection index of young boars of GS1 was 26% higher than that of CS. On the other hand, both GS2 and GS3 gave 31% higher accuracy than CS for young boars. The annual monetary genetic gain of GS1, GS2 and GS3 was 10%, 12%, and 11% higher, respectively, than that of CS. As expected, the discounted costs of genomic selection strategies were higher than those of CS. The costs of GS1, GS2 and GS3 were 35%, 73%, and 89% higher than those of CS, respectively, assuming a genotyping cost of $120. As a result, the discounted profit per animal of GS1 and GS2 was 8% and 2% higher, respectively, than that of CS while GS3 was 6% lower. Comparison among genomic breeding scenarios revealed that GS1 was more profitable than GS2 and GS3. The genomic selection schemes, especially GS1 and GS2, were clearly superior to the conventional scheme in terms of monetary genetic gain and profit. PMID:26954222

  10. Optimization of Swine Breeding Programs Using Genomic Selection with ZPLAN+

    PubMed Central

    Lopez, B. M.; Kang, H. S.; Kim, T. H.; Viterbo, V. S.; Kim, H. S.; Na, C. S.; Seo, K. S.

    2016-01-01

    The objective of this study was to evaluate the present conventional selection program of a swine nucleus farm and compare it with a new selection strategy employing genomic enhanced breeding value (GEBV) as the selection criteria. The ZPLAN+ software was employed to calculate and compare the genetic gain, total cost, return and profit of each selection strategy. The first strategy reflected the current conventional breeding program, which was a progeny test system (CS). The second strategy was a selection scheme based strictly on genomic information (GS1). The third scenario was the same as GS1, but the selection by GEBV was further supplemented by the performance test (GS2). The last scenario was a mixture of genomic information and progeny tests (GS3). The results showed that the accuracy of the selection index of young boars of GS1 was 26% higher than that of CS. On the other hand, both GS2 and GS3 gave 31% higher accuracy than CS for young boars. The annual monetary genetic gain of GS1, GS2 and GS3 was 10%, 12%, and 11% higher, respectively, than that of CS. As expected, the discounted costs of genomic selection strategies were higher than those of CS. The costs of GS1, GS2 and GS3 were 35%, 73%, and 89% higher than those of CS, respectively, assuming a genotyping cost of $120. As a result, the discounted profit per animal of GS1 and GS2 was 8% and 2% higher, respectively, than that of CS while GS3 was 6% lower. Comparison among genomic breeding scenarios revealed that GS1 was more profitable than GS2 and GS3. The genomic selection schemes, especially GS1 and GS2, were clearly superior to the conventional scheme in terms of monetary genetic gain and profit. PMID:26954222

  11. Self-Selection, Optimal Income Taxation, and Redistribution

    ERIC Educational Resources Information Center

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  12. Optimizing drilling performance using a selected drilling fluid

    DOEpatents

    Judzis, Arnis; Black, Alan D.; Green, Sidney J.; Robertson, Homer A.; Bland, Ronald G.; Curry, David Alexander; Ledgerwood, III, Leroy W.

    2011-04-19

    To improve drilling performance, a drilling fluid is selected based on one or more criteria and to have at least one target characteristic. Drilling equipment is used to drill a wellbore, and the selected drilling fluid is provided into the wellbore during drilling with the drilling equipment. The at least one target characteristic of the drilling fluid includes an ability of the drilling fluid to penetrate into formation cuttings during drilling to weaken the formation cuttings.

  13. Optimizing selection with several constraints in poultry breeding.

    PubMed

    Chapuis, H; Pincent, C; Colleau, J J

    2016-02-01

    Poultry breeding schemes permanently face the need to control the evolution of coancestry and of some critical traits while selecting for a main breeding objective. The main aims of this article are first to present an efficient selection algorithm adapted to this situation and then to measure how the severity of the constraints impacts the degree of loss on the main trait, compared to BLUP selection on the main trait without any constraint. Broiler dam and sire line schemes were mimicked by simulation over 10 generations, and selection was carried out on the main trait under constraints on coancestry and on another trait antagonistic to the main trait. The selection algorithm was a special variant of simulated annealing (adaptive simulated annealing, ASA). It was found to be rapid and able to meet constraints very accurately. A constraint on the second trait was found to induce an impact similar to or even greater than the impact of the constraint on coancestry. The family structure of selected poultry populations made it easy to control the evolution of coancestry at a reasonable cost, but was not as useful for reducing the cost of controlling the evolution of the antagonistic trait. Multiple constraints impacted almost additively on the genetic gain for the main trait. Adding constraints for several traits would therefore be justified in real-life breeding schemes, possibly after evaluating their impact through simulated annealing. PMID:26220593
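
    A skeletal version of constrained selection by simulated annealing, with invented EBVs, an invented coancestry matrix, and arbitrary penalty weights standing in for the paper's ASA and real data: candidate sets are perturbed by swapping one selected animal, and constraint violations are folded into the objective as penalties.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_candidates, n_select = 200, 20
    main_ebv = rng.normal(size=n_candidates)                 # main-trait EBVs
    second = rng.normal(size=n_candidates) - 0.5 * main_ebv  # antagonistic trait
    kinship = 0.05 + 0.1 * rng.random((n_candidates, n_candidates))
    kinship = (kinship + kinship.T) / 2                      # toy coancestry

    def penalized_merit(idx, max_coancestry=0.11, min_second=-0.2, w=50.0):
        """Mean main-trait EBV, penalized when constraints are violated."""
        co = kinship[np.ix_(idx, idx)].mean()
        sec = second[idx].mean()
        return (main_ebv[idx].mean()
                - w * max(0.0, co - max_coancestry)   # coancestry constraint
                - w * max(0.0, min_second - sec))     # second-trait constraint

    idx = rng.choice(n_candidates, n_select, replace=False)
    best, best_val = idx.copy(), penalized_merit(idx)
    T = 1.0
    for step in range(5000):
        new = idx.copy()
        out = rng.integers(n_select)                  # swap one selected animal
        pool = np.setdiff1d(np.arange(n_candidates), new)
        new[out] = rng.choice(pool)
        dv = penalized_merit(new) - penalized_merit(idx)
        if dv > 0 or rng.random() < np.exp(dv / T):   # Metropolis acceptance
            idx = new
        if penalized_merit(idx) > best_val:
            best, best_val = idx.copy(), penalized_merit(idx)
        T *= 0.999                                    # geometric cooling

    print(f"best penalized merit: {best_val:.3f}")
    ```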

  14. Optimal search-based gene subset selection for gene array cancer classification.

    PubMed

    Li, Jiexun; Su, Hua; Chen, Hsinchun; Futscher, Bernard W

    2007-07-01

    High dimensionality has been a major problem for gene array-based cancer classification, and it is critical to identify marker genes for cancer diagnosis. We developed a framework of gene selection methods based on previous studies. This paper focuses on optimal search-based subset selection methods because they evaluate the group performance of genes and help to pinpoint a globally optimal set of marker genes. Notably, this paper is the first to introduce tabu search (TS) to gene selection from high-dimensional gene array data. Our comparative study of gene selection methods demonstrated the effectiveness of optimal search-based gene subset selection to identify cancer marker genes. TS was shown to be a promising tool for gene subset selection. PMID:17674622

  15. Optimization of Metamaterial Selective Emitters for Use in Thermophotovoltaic Applications

    NASA Astrophysics Data System (ADS)

    Pfiester, Nicole A.

    The increasing costs of fossil fuels, both financial and environmental, have motivated many to look into sustainable energy sources. Thermophotovoltaics (TPVs), specialized photovoltaic cells focused on the infrared range, offer an opportunity to achieve both primary energy capture, similar to traditional photovoltaics, and secondary energy capture in the form of waste heat. However, to become a feasible energy source, TPV systems must become more efficient. One way to do this is through the development of selective emitters tailored to the bandgap of the TPV diode in question. This thesis proposes the use of metamaterial emitters as engineerable, highly selective emitters that can withstand the temperatures required to collect waste heat. Metamaterial devices made of platinum and a dielectric such as alumina or silicon nitride were initially designed and tested as perfect absorbers. High temperature robustness testing demonstrates the devices' ability to withstand the rigors of operating as selective emitters.

  16. Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation

    PubMed Central

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang were combined to explore the correlative relationships among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, while the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). These results indicate that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129
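
    The variable-screening step reduces to ranking candidate auxiliary variables by their correlation with the target attribute. A minimal sketch with invented layers (the variable names and the synthetic dependence are assumptions, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 670                                   # number of soil samples in the study
    # Invented stand-ins: candidate auxiliary variables and one target nutrient
    candidates = {
        "elevation": rng.normal(size=n),
        "slope":     rng.normal(size=n),
        "Zn":        rng.normal(size=n),
        "Fe":        rng.normal(size=n),
    }
    target = 0.7 * candidates["Zn"] + 0.3 * rng.normal(size=n)  # e.g. organic matter

    # Rank candidates by absolute Pearson correlation with the target attribute;
    # high-correlation variables become BAVs, low-correlation ones PAVs.
    corr = {k: abs(np.corrcoef(v, target)[0, 1]) for k, v in candidates.items()}
    for name, c in sorted(corr.items(), key=lambda kv: -kv[1]):
        print(f"{name:10s} |r| = {c:.2f}")
    ```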

  17. Selecting radiotherapy dose distributions by means of constrained optimization problems.

    PubMed

    Alfonso, J C L; Buttazzo, G; García-Archilla, B; Herrero, M A; Núñez, L

    2014-05-01

    The main steps in planning radiotherapy consist in selecting, for any patient diagnosed with a solid tumor, (i) a prescribed radiation dose on the tumor, (ii) bounds on the radiation side effects on nearby organs at risk and (iii) a fractionation scheme specifying the number and frequency of therapeutic sessions during treatment. The goal of any radiotherapy treatment is to deliver on the tumor a radiation dose as close as possible to that selected in (i), while at the same time conforming to the constraints prescribed in (ii). To this day, considerable uncertainties remain concerning the best manner in which such issues should be addressed. In particular, the choice of a prescription radiation dose is mostly based on clinical experience accumulated for the particular type of tumor considered, without any direct reference to quantitative radiobiological assessment. Interestingly, mathematical models for the effect of radiation on biological matter have existed for quite some time and are widely acknowledged by clinicians. However, the difficulty of obtaining accurate in vivo measurements of the radiobiological parameters involved has severely restricted their direct application in current clinical practice. In this work, we first propose a mathematical model to select radiation dose distributions as solutions (minimizers) of suitable variational problems, under the assumption that the key radiobiological parameters for the tumors and organs at risk involved are known. Second, by analyzing the dependence of such solutions on the parameters involved, we then discuss the manner in which the use of those minimizers can improve current decision-making processes to select clinical dosimetries when (as is generally the case) only partial information on model radiosensitivity parameters is available. A comparison of the proposed radiation dose distributions with those actually delivered in a number of clinical cases strongly suggests that solutions of our mathematical model can be

  18. Selection of optimal auxiliary soil nutrient variables for Cokriging interpolation.

    PubMed

    Song, Genxin; Zhang, Jing; Wang, Ke

    2014-01-01

    In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang were combined to explore the correlative relationships among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, while the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). These results indicate that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters for the application of Cokriging interpolation to soil nutrient attributes. PMID:24927129

  19. Optimizing selection of decentralized stormwater management strategies in urbanized regions

    NASA Astrophysics Data System (ADS)

    Yu, Z.; Montalto, F.

    2011-12-01

    A variety of decentralized stormwater options are available for implementation in urbanized regions. These strategies, which include bio-retention, porous pavement, green roofs, etc., vary in terms of cost, ability to reduce runoff, and site applicability. This paper explores the tradeoffs between different types of stormwater control measures that could be applied in a typical urban study area. A nested optimization strategy first identifies the most cost-effective (e.g., runoff reduction per life-cycle cost invested) options for individual land parcel typologies, and then scales up the results with detailed attention paid to uncertainty in adoption rates, life-cycle costs, and hydrologic performance. The study is performed with a custom-built stochastic rainfall-runoff model (Monte Carlo techniques are used to quantify uncertainties associated with phased implementation of different strategies and different land parcel typologies under synthetic precipitation ensembles). The results are presented as a comparison of cost-effectiveness over a time span of 30 years and identify an optimized strategy based on cumulative cost-effectiveness over that period.

  20. A high-speed mixed-signal down-scaling circuit for DAB tuners

    NASA Astrophysics Data System (ADS)

    Lu, Tang; Zhigong, Wang; Jiahui, Xuan; Yang, Yang; Jian, Xu; Yong, Xu

    2012-07-01

    A high-speed mixed-signal down-scaling circuit with low power consumption and low phase noise for use in digital audio broadcasting tuners has been realized and characterized. Some new circuit techniques are adopted to improve its performance. A dual-modulus prescaler (DMP) with low phase noise is realized with a kind of improved source-coupled logic (SCL) D-flip-flop (DFF) in the synchronous divider and a kind of improved complementary metal oxide semiconductor master-slave (CMOS MS)-DFF in the asynchronous divider. A new more accurate wire-load model is used to realize the pulse-swallow counter (PS counter). Fabricated in a 0.18-μm CMOS process, the total chip size is 0.6 × 0.2 mm2. The DMP in the proposed down-scaling circuit exhibits a low phase noise of -118.2 dBc/Hz at 10 kHz off the carrier frequency. At a supply voltage of 1.8 V, the power consumption of the down-scaling circuit's core part is only 2.7 mW.

  1. Update on RF System Studies and VCX Fast Tuner Work for the RIA Drive Linac

    SciTech Connect

    Rusnak, B; Shen, S

    2003-05-06

    The limited cavity beam loading conditions anticipated for the Rare Isotope Accelerator (RIA) create a situation where microphonic-induced cavity detuning dominates radio frequency (RF) coupling and RF system architecture choices in the linac design process. Where most superconducting electron and proton linacs have beam-loaded bandwidths that are comparable to or greater than typical microphonic detuning bandwidths on the cavities, the beam-loaded bandwidths for many heavy-ion species in the RIA driver linac can be as much as a factor of 10 less than the projected 80-150 Hz microphonic control window for the RF structures along the driver, making RF control problematic. While simply overcoupling the coupler to the cavity can mitigate this problem to some degree, system studies indicate that for the low-β driver linac alone, this approach may cost 50% or more compared to an RF system employing a voltage controlled reactance (VCX) fast tuner. An update of these system cost studies, along with the status of the VCX work being done at Lawrence Livermore National Lab, is presented here.

  2. Optimization of gene sequences under constant mutational pressure and selection

    NASA Astrophysics Data System (ADS)

    Kowalczuk, M.; Gierlik, A.; Mackiewicz, P.; Cebrat, S.; Dudek, M. R.

    1999-12-01

    We have analyzed the influence of constant mutational pressure and selection on the nucleotide composition of DNA sequences of various sizes, represented by the genes of the Borrelia burgdorferi genome. With the help of MC simulations we have found that longer DNA sequences accumulate far fewer base substitutions per unit sequence length than short sequences. This leads us to the conclusion that the accuracy of replication may determine the size of the genome.
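
    A toy Monte Carlo in the same spirit, though not the authors' model: substitutions arrive at a fixed per-base rate, while selection (modeled here as rejecting lineages whose accumulated changes exceed a fixed per-gene tolerance, independent of length) caps the accepted changes, so longer genes end up with fewer substitutions per base:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    mu = 1e-4            # substitutions per base per generation (invented)
    tolerance = 30       # max substitutions a gene survives (invented, length-free)
    generations = 2000

    for length in (300, 1000, 3000, 10000):
        counts = np.zeros(500)                        # 500 replicate genes
        for _ in range(generations):
            hits = rng.poisson(mu * length, size=counts.shape)
            trial = counts + hits
            accept = trial <= tolerance               # selection removes the rest
            counts = np.where(accept, trial, counts)  # rejected lineages revert
        per_base = counts.mean() / length
        print(f"length {length:6d}: {per_base:.4f} substitutions per base")
    ```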

  3. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2008-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, defendable sensor suite to address system health assessment requirements.

  4. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2007-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.

  5. A method to optimize selection on multiple identified quantitative trait loci

    PubMed Central

    Chakraborty, Reena; Moreau, Laurence; Dekkers, Jack CM

    2002-01-01

    A mathematical approach was developed to model and optimize selection on multiple known quantitative trait loci (QTL) and polygenic estimated breeding values in order to maximize a weighted sum of responses to selection over multiple generations. The model allows for linkage between QTL with multiple alleles and arbitrary genetic effects, including dominance, epistasis, and gametic imprinting. Gametic phase disequilibrium between the QTL and between the QTL and polygenes is modeled but polygenic variance is assumed constant. Breeding programs with discrete generations, differential selection of males and females and random mating of selected parents are modeled. Polygenic EBV obtained from best linear unbiased prediction models can be accommodated. The problem was formulated as a multiple-stage optimal control problem and an iterative approach was developed for its solution. The method can be used to develop and evaluate optimal strategies for selection on multiple QTL for a wide range of situations and genetic models. PMID:12081805

  6. Automated selection of appropriate pheromone representations in ant colony optimization.

    PubMed

    Montgomery, James; Randall, Marcus; Hendtlass, Tim

    2005-01-01

    Ant colony optimization (ACO) is a constructive metaheuristic that uses an analogue of ant trail pheromones to learn about good features of solutions. Critically, the pheromone representation for a particular problem is usually chosen intuitively rather than by following any systematic process. In some representations, distinct solutions appear multiple times, increasing the effective size of the search space and potentially misleading ants as to the true learned value of those solutions. In this article, we present a novel system for automatically generating appropriate pheromone representations, based on the characteristics of the problem model that ensures unique pheromone representation of solutions. This is the first stage in the development of a generalized ACO system that could be applied to a wide range of problems with little or no modification. However, the system we propose may be used in the development of any problem-specific ACO algorithm. PMID:16053571

  7. Selection of optimal composition-control parameters for friable materials

    SciTech Connect

    Pak, Yu.N.; Vdovkin, A.V.

    1988-05-01

    A method for composition analysis of coal and minerals is proposed which uses scattered gamma radiation and does away with preliminary sample preparation to ensure homogeneous particle density, surface area, and size. Reduction of the error induced by material heterogeneity has previously been achieved by rotation of the control object during analysis. A further refinement is proposed which addresses the necessity that the contribution of the radiation scattered from each individual surface to the total intensity be the same. This is achieved by providing a constant linear rate of travel for the irradiated spot through back-and-forth motion of the sensor. An analytical expression is given for the laws of motion for the sensor and test tube which provides for uniform irradiated area movement along a path analogous to the Archimedes spiral. The relationships obtained permit optimization of measurement parameters in analyzing friable materials which are not uniform in grain size.
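
    For an Archimedean spiral r = aθ, the arc-length rate is ds/dθ = a·sqrt(1 + θ²), so a constant linear spot speed v requires dθ/dt = v / (a·sqrt(1 + θ²)). A short numerical sketch with invented pitch and speed values:

    ```python
    import numpy as np

    a = 2.0e-3           # spiral pitch parameter (m/rad), illustrative
    v = 5.0e-3           # desired constant linear speed of the spot (m/s)
    dt = 1.0e-3          # integration time step (s)

    # Integrate dtheta/dt = v / (a*sqrt(1 + theta^2)) so that ds/dt stays at v
    theta, t, path = 0.0, 0.0, []
    while theta < 12 * np.pi:                 # six turns of the spiral
        theta += dt * v / (a * np.sqrt(1.0 + theta**2))
        t += dt
        r = a * theta
        path.append((t, r * np.cos(theta), r * np.sin(theta)))

    xs = np.array(path)
    seg = np.hypot(np.diff(xs[:, 1]), np.diff(xs[:, 2]))
    print(f"speed along path: {seg.mean()/dt*1e3:.3f} mm/s (target {v*1e3} mm/s)")
    ```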

  8. Determination of an Optimal Recruiting-Selection Strategy to Fill a Specified Quota of Satisfactory Personnel.

    ERIC Educational Resources Information Center

    Sands, William A.

    Managers of military and civilian personnel systems justifiably demand an estimate of the payoff in dollars and cents, which can be expected to result from the implementation of a proposed selection program. The Cost of Attaining Personnel Requirements (CAPER) Model provides an optimal recruiting-selection strategy for personnel decisions which…

  9. SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION

    PubMed Central

    Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.

    2015-01-01

    We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to $\min_{b \in \mathbb{R}^p} \tfrac{1}{2}\|y - Xb\|_{\ell_2}^2 + \lambda_1 |b|_{(1)} + \lambda_2 |b|_{(2)} + \cdots + \lambda_p |b|_{(p)}$, where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and $|b|_{(1)} \ge |b|_{(2)} \ge \cdots \ge |b|_{(p)}$ are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values $\lambda_{\mathrm{BH}}(i) = z(1 - i \cdot q/(2p))$, where q ∈ (0, 1) and z(α) is the α quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
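
    The penalty itself is simple to evaluate: sort the absolute coefficients in decreasing order and pair them with the decreasing λ sequence. A sketch of the BH-style λ sequence and the sorted-ℓ1 penalty (the choice of q and the test vector are arbitrary):

    ```python
    import numpy as np
    from scipy.stats import norm

    def bh_lambdas(p, q=0.1):
        """lambda_BH(i) = z(1 - i*q/(2p)), the BH-style critical values."""
        i = np.arange(1, p + 1)
        return norm.ppf(1 - i * q / (2 * p))

    def sorted_l1_penalty(b, lambdas):
        """Sorted-L1 norm: the largest |b| gets the largest lambda, and so on."""
        return float(np.sort(np.abs(b))[::-1] @ lambdas)

    p = 10
    lam = bh_lambdas(p, q=0.1)
    b = np.array([3.0, -0.5, 0.0, 1.2, 0.1, 0.0, -2.0, 0.0, 0.0, 0.4])
    print("lambdas:", np.round(lam, 3))
    print("penalty:", round(sorted_l1_penalty(b, lam), 3))
    ```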

  10. Selection for optimal crew performance - Relative impact of selection and training

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.

    1987-01-01

    An empirical study supporting Helmreich's (1986) theoretical work on the distinct manner in which training and selection impact crew coordination is presented. Training is capable of changing attitudes, while selection screens for stable personality characteristics. Training appears least effective for leadership, an area strongly influenced by personality. Selection is least effective for influencing attitudes about personal vulnerability to stress, which appear to be trained in resource management programs. Because personality correlates with attitudes before and after training, it is felt that selection may be necessary even with a leadership-oriented training curriculum.

  11. Optimizing landfill site selection by using land classification maps.

    PubMed

    Eskandari, M; Homaee, M; Mahmoodi, S; Pazira, E; Van Genuchten, M Th

    2015-05-01

    Municipal solid waste disposal is a major environmental concern throughout the world. Proper landfill siting involves many environmental, economic, technical, and sociocultural challenges. In this study, a new quantitative method for landfill siting that reduces the number of evaluation criteria, simplifies siting procedures, and enhances the utility of available land evaluation maps was proposed. The method is demonstrated by selecting a suitable landfill site near the city of Marvdasht in Iran. The approach involves two separate stages. First, necessary criteria for preliminary landfill siting using four constraints and eight factors were obtained from a land classification map initially prepared for irrigation purposes. Thereafter, the criteria were standardized using a rating approach and then weighted to obtain a suitability map for landfill siting, with ratings in a 0-1 domain and divided into five suitability classes. Results were almost identical to those obtained with a more traditional environmental landfill siting approach. Because of far fewer evaluation criteria, the proposed weighting method was much easier to implement while producing a more convincing database for landfill siting. The classification map also considered land productivity. In the second stage, the six best alternative sites were evaluated for final landfill siting using four additional criteria. Sensitivity analyses were furthermore conducted to assess the stability of the obtained ranking. Results indicate that the method provides a precise siting procedure that should convince all pertinent stakeholders. PMID:25666474
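
    The weighting stage amounts to a per-cell weighted sum of standardized factor ratings, masked by the binary constraints and then cut into suitability classes. A raster sketch with invented layers and weights (a real study would rasterize criteria from the land classification map):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    shape = (50, 50)                       # toy raster of the study area

    # Standardized factor ratings in [0, 1] (invented layers)
    factors = {k: rng.random(shape) for k in
               ("slope", "groundwater", "roads", "soil")}
    weights = {"slope": 0.3, "groundwater": 0.4, "roads": 0.2, "soil": 0.1}

    constraint = rng.random(shape) > 0.1   # 0/1 exclusion mask (e.g. wetlands out)

    suitability = sum(weights[k] * factors[k] for k in factors) * constraint
    classes = np.digitize(suitability, [0.2, 0.4, 0.6, 0.8]) + 1  # 5 classes
    print("cells per suitability class:",
          np.bincount(classes.ravel(), minlength=6)[1:])
    ```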

  12. Selection of optimal muscle set for 16-channel standing neuroprosthesis

    PubMed Central

    Gartman, Steven J.; Audu, Musa L.; Kirsch, Robert F.; Triolo, Ronald J.

    2009-01-01

    The Case Western Reserve University/Department of Veterans Affairs 8-channel lower-limb neuroprosthesis can restore standing to selected individuals with paraplegia by application of functional electrical stimulation. The second generation of this system will include 16 channels of stimulation and a closed-loop control scheme to provide automatic postural corrections. This study used a musculoskeletal model of the legs and trunk to determine which muscles to target with the new system in order to maximize the range of postures that can be statically maintained, which should increase the system’s ability to provide adequate support to maintain standing when the user’s posture moves away from a neutral stance, either by an external disturbance or a volitional change in posture by the user. The results show that the prime muscle targets should be the medial gastrocnemius, tibialis anterior, vastus lateralis, semimembranosus, gluteus maximus, gluteus medius, adductor magnus, and erector spinae. This set of 16 muscles supports 42 percent of the standing postures that are attainable by the nondisabled model. Coactivation of the lateral gastrocnemius and peroneus longus with the medial gastrocnemius and of the peroneus tertius with the tibialis anterior increased the percentage of feasible postures to 71 percent. PMID:16847793

  13. Plastic scintillation dosimetry: Optimal selection of scintillating fibers and scintillators

    SciTech Connect

    Archambault, Louis; Arsenault, Jean; Gingras, Luc; Sam Beddar, A.; Roy, Rene; Beaulieu, Luc

    2005-07-15

    Scintillation dosimetry is a promising avenue for evaluating dose patterns delivered by intensity-modulated radiation therapy plans or for the small fields involved in stereotactic radiosurgery. However, increasing the signal intensity has been the goal of many authors. In this paper, a comparison is made between plastic scintillating fibers and plastic scintillators. The collection of scintillation light was measured experimentally for four commercial models of scintillating fibers (BCF-12, BCF-60, SCSF-78, SCSF-3HF) and two models of plastic scintillators (BC-400, BC-408). The emission spectra of all six scintillators were obtained by using an optical spectrum analyzer and were compared with theoretical behavior. For scintillation in the blue region, the signal intensity of a singly clad scintillating fiber (BCF-12) was 120% of that of the plastic scintillator (BC-400). For the multiclad fiber (SCSF-78), the signal reached 144% of that of the plastic scintillator. The intensity of the green scintillating fibers was lower than that of the plastic scintillator: 47% for the singly clad fiber (BCF-60) and 77% for the multiclad fiber (SCSF-3HF). The collected light was studied as a function of the scintillator length and radius for a cylindrical probe. We found that symmetric detectors with nearly the same spatial resolution in each direction (2 mm in diameter by 3 mm in length) could be made with a signal equivalent to those of the more commonly used asymmetric scintillators. With improvement of the signal-to-noise ratio in mind, this paper presents a series of comparisons that should provide insight into the selection of a scintillator type and volume for the development of a medical dosimeter.

  14. Biomass selection for optimal anaerobic treatment of olive mill wastewater.

    PubMed

    Sabbah, I; Yazbak, A; Haj, J; Saliba, A; Basheer, S

    2005-01-01

    This research was conducted to identify the most efficient biomass out of five different types of biomass sources for anaerobic treatment of Olive Mill Wastewater (OMW). This study was first focused on examining the selected biomass in anaerobic batch systems with sodium acetate solutions (control study). Then, the different types of biomass were tested with raw OMW (water-diluted) and with pretreated OMW by coagulation-flocculation using Poly Aluminum Chloride (PACl) combined with hydrated lime (Ca(OH)2). Two types of biomass from wastewater treatment systems of a citrus juice producing company "PriGat" and from a citric acid manufacturing factory "Gadot", were found to be the most efficient sources of microorganisms to anaerobically treat both sodium acetate solution and OMW. Both types of biomass were examined under different concentration ranges (1-40 g l(-1)) of OMW in order to detect the maximal COD tolerance for the microorganisms. The results show that 70-85% of COD removal was reached using Gadot biomass after 8-10 days when the initial concentration of OMW was up to 5 g l(-1), while a similar removal efficiency was achieved using OMW of initial COD concentration of 10 g l(-1) in 2-4 days of contact time with the PriGat biomass. The physico-chemical pretreatment of OMW was found to enhance the anaerobic activity for the treatment of OMW with initial concentration of 20 g l(-1) using PriGat biomass. This finding is attributed to reducing the concentrations of polyphenols and other toxicants originally present in OMW upon the applied pretreatment process. PMID:15747599

  15. Dose selection for optimal treatment results and avoidance of complications.

    PubMed

    Nagano, Hisato; Nakayama, Satoshi; Shuto, Takashi; Asada, Hiroyuki; Inomori, Shigeo

    2009-01-01

    What is the optimal treatment for metastatic brain tumors (MBTs)? We present our experience with gamma knife (GK) treatments for patients with five or more MBTs. Our new formula for predicting patient survival time (ST), which was derived by combining the tumor control probability (TCP) calculated by Colombo's formula and the normal tissue complication probability (NTCP) estimated by Flickinger's integrated logistic formula, was also evaluated: ST = a·[(C − NTCP)·TCP] + b, where a, b, and C are constants. Forty-one patients (23 male, 18 female) with more than five MBTs were treated between March 1992 and February 2000. The tumors originated in the lung in 15 cases and in the breast in 8. Four patients had previously undergone whole brain irradiation (WBI). Ten patients were given concomitant WBI. Thirteen patients had additional extracranial metastatic lesions. TCP and NTCP were calculated using Excel add-in software. Cox's proportional hazards model was used to evaluate correlations between certain variables and ST. The independent variables evaluated were patient factors (age in years and performance status), tumor factors (total volume and number of tumors in each patient), treatment factors (TCP, NTCP and marginal dose) and the values of (C − NTCP)·TCP. The total tumor number was 403 (median 7, range 5-56). The median total tumor volume was 9.8 cm3 (range 0.8-111.8 cm3). The marginal dose ranged from 8 to 22 Gy (median 16.0 Gy), TCP from 0.0% to 83% (median 15%) and NTCP from 0.0% to 31% (median 6.0%). (0.39 − NTCP)·TCP ranged from 0.0 to 0.21 (median 0.055). Follow-up was 0.2 to 26.2 months, with a median of 5.4 months. Multiple-sample tests revealed no differences in STs among patients with MBTs of different origins (p=0.50). The 50% STs of patients with MBTs originating from the breast, lung and other sites were 5.9, 7.8 and 3.5 months, respectively. Only TCP and (0.39 − NTCP)·TCP were statistically significant covariates (p=0.014, 0.001, respectively), and the latter was a more important predictor of
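
    Numerically, the proposed predictor is a one-liner; with the reported median values TCP = 0.15 and NTCP = 0.06 and C = 0.39, the product (C − NTCP)·TCP comes out near the reported median of 0.055. The constants a and b below are placeholders, not the paper's fitted values:

    ```python
    def survival_time(tcp, ntcp, a=1.0, b=0.0, c=0.39):
        """ST = a*[(C - NTCP)*TCP] + b; a and b would be fit to survival data,
        and the values used here are placeholders, not the paper's fit."""
        return a * ((c - ntcp) * tcp) + b

    # Median reported values: TCP = 0.15, NTCP = 0.06
    print(f"(C - NTCP)*TCP = {(0.39 - 0.06) * 0.15:.4f}")  # ~0.05, cf. median 0.055
    print(f"toy ST = {survival_time(0.15, 0.06):.3f}")
    ```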

  16. Optimal neural network architecture selection: effects on computer-aided detection of mammographic microcalcifications

    NASA Astrophysics Data System (ADS)

    Gurcan, Metin N.; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Petrick, Nicholas; Helvie, Mark A.

    2002-05-01

    We evaluated the effectiveness of an optimal convolution neural network (CNN) architecture selected by simulated annealing for improving the performance of a computer-aided diagnosis (CAD) system designed for the detection of microcalcification clusters on digitized mammograms. The performances of the CAD programs with manually and optimally selected CNNs were compared using an independent test set. This set included 472 mammograms and contained 253 biopsy-proven malignant clusters. Free-response receiver operating characteristic (FROC) analysis was used for evaluation of the detection accuracy. At a false positive (FP) rate of 0.7 per image, the film-based sensitivity was 84.6% with the optimized CNN, in comparison with 77.2% with the manually selected CNN. If clusters having images in both craniocaudal and mediolateral oblique views were analyzed together and a cluster was considered to be detected when it was detected in one or both views, at 0.7 FPs/image, the sensitivity was 93.3% with the optimized CNN and 87.0% with the manually selected CNN. This study indicates that classification of true positive and FP signals is an important step of the CAD program and that the detection accuracy of the program can be considerably improved by optimizing this step with an automated optimization algorithm.

  17. Modeling Network Intrusion Detection System Using Feature Selection and Parameters Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Dong Seong; Park, Jong Sou

    Previous approaches to modeling Intrusion Detection Systems (IDS) have been twofold: improving detection models in terms of (i) feature selection of audit data through wrapper and filter methods and (ii) parameter optimization of the detection model design, based on classification, clustering algorithms, etc. In this paper, we present three approaches to modeling IDS in the context of feature selection and parameter optimization. First, we present the Fusion of Genetic Algorithm (GA) and Support Vector Machines (SVM) (FuGAS), which combines GA and SVM through genetic operations and is capable of building an optimal detection model with only the selected important features and optimal parameter values. Second, we present Correlation-based Hybrid Feature Selection (CoHyFS), which utilizes a filter method in conjunction with a GA for feature selection in order to reduce the long training time. Third, we present Simultaneous Intrinsic Model Identification (SIMI), which adopts Random Forest (RF) and achieves better intrusion detection rates and feature selection results without additional computational overhead. We show experimental results and analysis of the three approaches on the KDD 1999 intrusion detection datasets.

  18. Debris Selection and Optimal Path Planning for Debris Removal on the SSO: Impulsive-Thrust Option

    NASA Astrophysics Data System (ADS)

    Olympio, J. T.; Frouvelle, N.

    2013-08-01

    The current paper deals with the mission design of a generic active space debris removal spacecraft. The debris considered are all on a sun-synchronous orbit. A perturbed Lambert's problem, modelling the transfer between two debris, is devised to take the J2 perturbation into account and to quickly evaluate mission scenarios. A robust approach, using global optimisation techniques, is followed to find the optimal debris sequence and mission strategy. Manoeuvre optimisation is then performed to refine the selected trajectory scenarios.

  19. selectSNP – An R package for selecting SNPs optimal for genetic evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    There has been a huge increase in the number of SNPs in the public repositories. This has made it a challenge to design low and medium density SNP panels, which requires careful selection of available SNPs considering many criteria, such as map position, allelic frequency, possible biological functi...

  20. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of the data, so a band/feature selection method is needed to select an optimal subset of the original data bands. This study examined the efficiency of a genetic algorithm (GA) in band selection for remote sensing classification. A GA-based band selection algorithm was designed in which a Bhattacharyya distance index, indicating the separability between the classes of interest, is used as the fitness function. A binary string chromosome is used in which each gene has a value of 1, representing a band being included, or 0, representing a band being excluded. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test it. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
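
    The fitness function named above can be made concrete with the standard Bhattacharyya distance between two Gaussian class models restricted to the selected bands. A sketch of such a GA fitness, assuming multivariate normal class distributions as the Bhattacharyya index commonly does:

```python
import numpy as np

def bhattacharyya(x1: np.ndarray, x2: np.ndarray) -> float:
    """Bhattacharyya distance between two classes of band-subset samples
    (rows = samples), assuming multivariate Gaussian class distributions."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    s1 = np.cov(x1, rowvar=False)
    s2 = np.cov(x2, rowvar=False)
    s = (s1 + s2) / 2.0
    dm = m1 - m2
    term1 = 0.125 * dm @ np.linalg.solve(s, dm)
    term2 = 0.5 * np.log(np.linalg.det(s) / np.sqrt(np.linalg.det(s1) * np.linalg.det(s2)))
    return float(term1 + term2)

def fitness(chromosome: np.ndarray, class1: np.ndarray, class2: np.ndarray) -> float:
    """GA fitness: class separability over the selected bands
    (chromosome is a binary vector, 1 = band included)."""
    idx = np.flatnonzero(chromosome)
    if idx.size < 2:        # need at least two bands for a covariance matrix
        return 0.0
    return bhattacharyya(class1[:, idx], class2[:, idx])
```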

  1. Ant Colony Optimization Based Feature Selection Method for QEEG Data Classification

    PubMed Central

    Ozekes, Serhat; Gultekin, Selahattin; Tarhan, Nevzat

    2014-01-01

    Objective Many applications, such as biomedical signal processing, require selecting a subset of the input features in order to represent the whole set of features. A feature selection algorithm has recently been proposed as a new approach for feature subset selection. Methods Feature selection using ant colony optimization (ACO) on 6-channel pre-treatment electroencephalogram (EEG) data from the theta and delta frequency bands was combined with a back propagation neural network (BPNN) classification method for 147 major depressive disorder (MDD) subjects. Results The BPNN classified subjects with 91.83% overall accuracy and 95.55% MDD detection sensitivity. The area under the ROC curve (AUC) after feature selection increased from 0.8531 to 0.911. The features selected by the optimization algorithm were Fp1, Fp2, F7, F8, and F3 for the theta frequency band; the algorithm reduced the feature subset from 12 to 5 features. Conclusion The ACO feature selection algorithm improves the classification accuracy of the BPNN. Comparing the performance of other feature selection algorithms or classifiers remains important to establish the validity and versatility of the designed combination. PMID:25110496

  2. [The Near Infrared Spectral Bands Optimal Selection in the Application of Liquor Fermented Grains Composition Analysis].

    PubMed

    Xiong, Ya-ting; Li, Zong-peng; Wang, Jian; Zhang, Ying; Wang, Shu-jun; Yin, Jian-jun; Song, Quan-hou

    2016-01-01

    In order to improve the technical level of rapid detection of liquor fermented grains, near-infrared spectroscopy was used in this paper for the quantitative analysis of the moisture, starch, acidity and alcohol content of liquor fermented grains. CARS, iPLS and the uninformative variable elimination method (UVE) were used to select characteristic spectral bands, and multiple scattering correction (MSC), derivative and standard normal variate (SNV) pretreatment methods were applied to optimize the models. Quantitative models of the fermented grains were established by PLS and, in order to select the best modeling method, evaluated using R2, RMSEP and the optimal number of principal factors. The results showed that band selection is vital for optimizing the model and that CARS had the most significant optimization effect. R2 for moisture, starch, acidity and alcohol was 0.885, 0.915, 0.951 and 0.885, respectively, and the corresponding RMSEP was 0.630, 0.519, 0.228 and 0.234. After optimization, the prediction performance of the models is good; they satisfy the requirements of rapid detection of liquor fermented grains and have practical reference value. PMID:27228746

  3. Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection.

    PubMed

    Sun, Jun; Fang, Wei; Wu, Xiaojun; Palade, Vasile; Xu, Wenbo

    2012-01-01

    Quantum-behaved particle swarm optimization (QPSO), motivated by concepts from quantum mechanics and particle swarm optimization (PSO), is a probabilistic optimization algorithm belonging to the bare-bones PSO family. Although it has been shown to perform well in finding the optimal solutions for many optimization problems, there has so far been little analysis on how it works in detail. This paper presents a comprehensive analysis of the QPSO algorithm. In the theoretical analysis, we analyze the behavior of a single particle in QPSO in terms of probability measure. Since the particle's behavior is influenced by the contraction-expansion (CE) coefficient, which is the most important parameter of the algorithm, the goal of the theoretical analysis is to find out the upper bound of the CE coefficient, within which the value of the CE coefficient selected can guarantee the convergence or boundedness of the particle's position. In the experimental analysis, the theoretical results are first validated by stochastic simulations for the particle's behavior. Then, based on the derived upper bound of the CE coefficient, we perform empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient, in order to obtain generally good algorithmic performance in real world applications. Finally, a further performance comparison between QPSO and other variants of PSO on the benchmarks is made to show the efficiency of the QPSO algorithm with the proposed parameter control and selection methods. PMID:21905841
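
    For reference, the QPSO position update that the CE coefficient α enters is short. A sketch of one iteration, following the commonly published form of the algorithm (array shapes and the sampling of the local attractor are assumptions of this sketch):

```python
import numpy as np

def qpso_step(positions, pbest, gbest, alpha):
    """One QPSO iteration: each particle is drawn toward a random point p
    between its personal best and the global best, with a jump scaled by the
    contraction-expansion (CE) coefficient alpha and the distance to the
    mean best position. positions and pbest are (n, d); gbest is (d,)."""
    n, d = positions.shape
    mbest = pbest.mean(axis=0)                      # mean of personal bests
    phi = np.random.rand(n, d)
    p = phi * pbest + (1.0 - phi) * gbest           # local attractor
    u = np.random.rand(n, d)
    sign = np.where(np.random.rand(n, d) < 0.5, 1.0, -1.0)
    return p + sign * alpha * np.abs(mbest - positions) * np.log(1.0 / u)
```

    The paper's theoretical contribution is precisely the upper bound on alpha below which this update keeps the particle positions convergent or bounded.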

  4. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. Methods: An image dataset including 1600 regions of interest (ROIs), of which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories, including shape, texture, contrast, isodensity, spiculation, local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a high-performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Results: Among these four methods, SFFS had the highest efficacy: it required only 3%–5% of the computational time of the GA approach and yielded the highest performance level, with an area under the receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that, except when using the GA, including the new texture features computed from the dilated mass segments improved the AUC

  5. Selection, Optimization, and Compensation: An Action-Related Approach to Work and Partnership.

    ERIC Educational Resources Information Center

    Wiese, Bettina S.; Baltes, Paul B.; Freund, Alexandra M.

    2000-01-01

    Data from German professionals (n=206) were used to test selective optimization with compensation (SOC)--goal setting in career and partnership domains and use of means to achieve goals. A positive relationship was found between SOC behaviors and successful life management; it was more predictive for the partnership domain. (Contains 82…

  6. Optimization of a series of potent and selective ketone histone deacetylase inhibitors.

    PubMed

    Pescatore, Giovanna; Kinzel, Olaf; Attenni, Barbara; Cecchetti, Ottavia; Fiore, Fabrizio; Fonsi, Massimiliano; Rowley, Michael; Schultz-Fademrecht, Carsten; Serafini, Sergio; Steinkühler, Christian; Jones, Philip

    2008-10-15

    Histone deacetylase (HDAC) inhibitors offer a promising strategy for cancer therapy and the first generation HDAC inhibitors are currently in the clinic. Herein we describe the optimization of a series of ketone small molecule HDAC inhibitors leading to potent and selective class I HDAC inhibitors with good dog PK. PMID:18809328

  7. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2014-08-01

    3D scene analysis by automatically assigning 3D points a semantic label has become an issue of major interest in recent years. Whereas the tasks of feature extraction and classification have been the focus of research, the idea of using only relevant and more distinctive features extracted from optimal 3D neighborhoods has only rarely been addressed in 3D lidar data processing. In this paper, we focus on the interleaved issues of extracting relevant but not redundant features and increasing their distinctiveness by considering the respective optimal 3D neighborhood of each individual 3D point. We present a new, fully automatic and versatile framework consisting of four successive steps: (i) optimal neighborhood size selection, (ii) feature extraction, (iii) feature selection, and (iv) classification. In a detailed evaluation involving 5 different neighborhood definitions, 21 features, 6 approaches for feature subset selection and 2 different classifiers, we demonstrate that optimal neighborhoods for individual 3D points significantly improve the results of scene interpretation and that the selection of adequate feature subsets may further increase the quality of the derived results.
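
    Step (i) can be illustrated with one criterion commonly used in this line of work: choosing the neighborhood size k that minimizes the Shannon entropy of the normalized covariance eigenvalues ("eigenentropy"). A sketch, with an arbitrary candidate range for k:

```python
import numpy as np

def optimal_neighborhood_size(points, query, k_candidates=range(10, 101, 10)):
    """Pick the neighborhood size k minimizing the eigenentropy of the local
    3D covariance matrix around 'query' -- one common criterion for optimal
    neighborhood selection in point cloud analysis. points is (n, 3)."""
    d2 = np.sum((points - query) ** 2, axis=1)
    order = np.argsort(d2)                     # neighbors sorted by distance
    best_k, best_h = None, np.inf
    for k in k_candidates:
        nbrs = points[order[:k]]
        cov = np.cov(nbrs, rowvar=False)
        ev = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
        ev = ev / ev.sum()                     # normalized eigenvalues
        h = -np.sum(ev * np.log(ev))           # eigenentropy
        if h < best_h:
            best_k, best_h = k, h
    return best_k
```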

  8. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    ERIC Educational Resources Information Center

    Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.

    2016-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…

  9. Subjective Career Success and Emotional Well-Being: Longitudinal Predictive Power of Selection, Optimization, and Compensation.

    ERIC Educational Resources Information Center

    Wiese, Bettina S.; Freund, Alexandra M.; Baltes, Paul B.

    2002-01-01

    A 3-year study of 82 young professionals found that work-related well-being was predicted by selection (commitment to personal goals), optimization (application of goal-related skills), and compensation (maintaining goals in the face of loss). The degree of compensation predicted emotional well-being and job satisfaction 3 years later. (Contains…

  10. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2015-03-01

    Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo method based uncertainty and reliability analysis. The reliability of the scanning paths is established using cumulative probability distribution functions for process output criteria such as sample density, thermal homogeneity, etc. A customized genetic algorithm is used along with the simulation model to generate optimized cellular scanning strategies and processing parameters, with the objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared.

  11. Optimization of a Dibenzodiazepine Hit to a Potent and Selective Allosteric PAK1 Inhibitor

    PubMed Central

    2015-01-01

    The discovery of inhibitors targeting novel allosteric kinase sites is very challenging. Such compounds, however, once identified could offer exquisite levels of selectivity across the kinome. Herein we report our structure-based optimization strategy of a dibenzodiazepine hit 1, discovered in a fragment-based screen, yielding highly potent and selective inhibitors of PAK1 such as 2 and 3. Compound 2 was cocrystallized with PAK1 to confirm binding to an allosteric site and to reveal novel key interactions. Compound 3 modulated PAK1 at the cellular level and due to its selectivity enabled valuable research to interrogate biological functions of the PAK1 kinase. PMID:26191365

  12. Exploring the optimal performances of irreversible single resonance energy selective electron refrigerators

    NASA Astrophysics Data System (ADS)

    Zhou, Junle; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2016-05-01

    Applying finite-time thermodynamics (FTT) and electronic transport theory, the optimal performance of an irreversible single resonance energy selective electron (ESE) refrigerator is analyzed. The effects of heat leakage between the two electron reservoirs on optimal performance are discussed. The influences of the system operating parameters on cooling load, coefficient of performance (COP), figure of merit and ecological function are demonstrated using numerical examples. Comparative performance analyses among the different objective functions show that the performance characteristics at maximum ecological function and maximum figure of merit are of great practical significance. Combining the two optimization objectives of maximum ecological function and maximum figure of merit, more specific optimal ranges of cooling load and COP are obtained. The results can provide guidance for the design of practical electronic machine systems.

  13. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model had changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) at which to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
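
    The Expected Improvement criterion at the heart of EGO has a closed form for a Gaussian surrogate. A sketch for a minimization problem; the multi-start modification described above would then take several local maxima of this function as the next batch of runs:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected Improvement for minimization, given the surrogate's predictive
    mean mu and standard deviation sigma at candidate points and the best
    objective value f_best observed so far."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Points the surrogate is certain about (sigma = 0) offer no improvement.
    return np.where(sigma > 0, ei, 0.0)
```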

  14. The effect of genomic information on optimal contribution selection in livestock breeding programs

    PubMed Central

    2013-01-01

    Background Long-term benefits in animal breeding programs require that increases in genetic merit be balanced with the need to maintain diversity (lost due to inbreeding). This can be achieved by using optimal contribution selection. The availability of high-density DNA marker information enables the incorporation of genomic data into optimal contribution selection but this raises the question about how this information affects the balance between genetic merit and diversity. Methods The effect of using genomic information in optimal contribution selection was examined based on simulated and real data on dairy bulls. We compared the genetic merit of selected animals at various levels of co-ancestry restrictions when using estimated breeding values based on parent average, genomic or progeny test information. Furthermore, we estimated the proportion of variation in estimated breeding values that is due to within-family differences. Results Optimal selection on genomic estimated breeding values increased genetic gain. Genetic merit was further increased using genomic rather than pedigree-based measures of co-ancestry under an inbreeding restriction policy. Using genomic instead of pedigree relationships to restrict inbreeding had a significant effect only when the population consisted of many large full-sib families; with a half-sib family structure, no difference was observed. In real data from dairy bulls, optimal contribution selection based on genomic estimated breeding values allowed for additional improvements in genetic merit at low to moderate inbreeding levels. Genomic estimated breeding values were more accurate and showed more within-family variation than parent average breeding values; for genomic estimated breeding values, 30 to 40% of the variation was due to within-family differences. Finally, there was no difference between constraining inbreeding via pedigree or genomic relationships in the real data. Conclusions The use of genomic estimated breeding
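
    The core trade-off described here, maximizing merit while capping co-ancestry, can be written as a small constrained optimization. A sketch under stated assumptions: the solver choice and the toy relationship matrix are illustrative, and A may be pedigree- or genomics-based, which is exactly the comparison made in the study:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_contributions(ebv, relationship, max_coancestry):
    """Sketch of optimal contribution selection: maximize genetic merit c'g
    subject to a mean-relationship constraint c'Ac/2 <= theta, with
    contributions c >= 0 summing to 1."""
    n = len(ebv)
    cons = [
        {"type": "eq", "fun": lambda c: c.sum() - 1.0},
        {"type": "ineq", "fun": lambda c: max_coancestry - 0.5 * c @ relationship @ c},
    ]
    res = minimize(
        lambda c: -(c @ ebv), np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP",
    )
    return res.x

# Tiny illustrative example: two full sibs and one unrelated animal.
A = np.array([[1.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(np.round(optimal_contributions(np.array([1.2, 1.1, 0.9]), A, 0.3), 3))
```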

  15. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    SciTech Connect

    Kim, Minsun Stewart, Robert D.; Phillips, Mark H.

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T_d), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D_mean ≤ 45 Gy), lungs (D_mean ≤ 20 Gy), cord (D_max ≤ 45 Gy), esophagus (D_max ≤ 63 Gy), and unspecified tissues (D_05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D_95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T_d (3–100 days), tumor lag-time (T_k = 0–10 days), and the size of tumors on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D_95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the T_d and T_k used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating
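
    The quantity being maximized is the linear-quadratic BED. A sketch of the standard formula with a repopulation correction, swept over the number of fractions at fixed total dose; all parameter values below are illustrative defaults, not those of the study:

```python
import numpy as np

def tumor_bed(n, d, alpha_beta=10.0, alpha=0.3, t_d=3.0, t_k=0.0, days_per_fx=1.4):
    """Linear-quadratic BED for n fractions of size d (Gy), with the standard
    repopulation correction once the overall treatment time exceeds the lag
    time t_k. alpha (1/Gy), t_d (doubling time, days) and the treatment
    calendar are illustrative assumptions."""
    t = n * days_per_fx                          # overall treatment time, days
    bed = n * d * (1.0 + d / alpha_beta)
    if t > t_k:
        bed -= (np.log(2.0) / alpha) * (t - t_k) / t_d
    return bed

# Sweep the number of fractions at a fixed total dose of 60 Gy:
for n in (5, 10, 20, 30):
    print(n, round(tumor_bed(n, 60.0 / n), 1))
```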

  16. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512
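
    The weighted binary matrix sampling step can be sketched compactly: variables are drawn into random sub-models with individual inclusion probabilities, and those probabilities are updated from the best-performing sub-models. A sketch under stated assumptions; the score callable (e.g., cross-validated RMSE of a PLS sub-model) is left to the caller:

```python
import numpy as np

def wbms_iteration(x, y, weights, score, n_models=200):
    """One VISSA-style iteration (a sketch): draw binary sub-models with
    per-variable inclusion probabilities (weighted binary matrix sampling),
    score each sub-model, and return updated weights as the inclusion
    frequency of each variable among the better half of the sub-models.
    score(x_subset, y) should return a value where smaller is better."""
    n_vars = x.shape[1]
    m = (np.random.rand(n_models, n_vars) < weights).astype(int)
    m[m.sum(axis=1) == 0, 0] = 1                     # avoid empty sub-models
    scores = np.array([score(x[:, row == 1], y) for row in m])
    top = m[np.argsort(scores)[: n_models // 2]]     # best-scoring half
    return top.mean(axis=0)                          # new inclusion weights
```

    Iterating this update shrinks the variable space, since weights drift toward 0 or 1, which mirrors the two rules highlighted in the abstract.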

  17. Collimator Width Optimization in X-Ray Luminescent Computed Tomography (XLCT) with Selective Excitation Scheme

    PubMed Central

    Mishra, S.; Kappiyoor, R.

    2015-01-01

    X-ray luminescent computed tomography (XLCT) is a promising new functional imaging modality based on computed tomography (CT). This imaging technique uses X-ray excitable nanophosphors to illuminate objects of interest in the visible spectrum. Though there are several validations of the underlying technology, none of them has addressed the issue of performance optimality for a given design of the imaging system. This study addresses the issue of obtaining the best image quality by optimizing the collimator width to balance the signal-to-noise ratio (SNR) and resolution. The results can be generalized to any XLCT system employing a selective excitation scheme. PMID:25642356

  18. Space debris selection and optimal guidance for removal in the SSO with low-thrust propulsion

    NASA Astrophysics Data System (ADS)

    Olympio, J. T.; Frouvelle, N.

    2014-06-01

    The current paper deals with the mission design of a generic active space debris removal spacecraft. The space debris considered are all on sun-synchronous orbits. A perturbed Lambert's problem, modelling the transfer between two space debris, is devised to take the J2 perturbation into account and to quickly evaluate mission scenarios. A robust approach, using global optimisation techniques, is followed to find the optimal space debris sequence and mission strategy. Low-thrust optimisation is then performed to turn the bi-impulse transfers into optimal low-thrust transfers and to refine the selected scenarios.

  19. Optimum selection of mechanism type for heavy manipulators based on particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Chen, Genliang; Wang, Hao; Lin, Zhongqin

    2013-07-01

    The mechanism type plays a decisive role in the mechanical performance of robotic manipulators. Feasible mechanism types can be obtained by applying appropriate type synthesis theory, but there is still a lack of effective and efficient methods for the optimum selection among different types of mechanism candidates. This paper presents a new strategy for optimum mechanism type selection based on a modified particle swarm optimization method. The concept of a sub-swarm is introduced to represent the different mechanisms generated by the type synthesis, and a competitive mechanism is employed between the sub-swarms to reassign their population sizes according to the relative performances of the mechanism candidates to implement the optimization. Combined with a modular modeling approach for fast calculation of the performance index of the potential candidates, the proposed method is applied to determine the optimum mechanism type among the potential candidates for the desired manipulator. The effectiveness and efficiency of the proposed method are demonstrated through a case study on the optimum selection of the mechanism type of a heavy manipulator, where six feasible candidates are considered with force capability as the specific performance index. The optimization result shows that the fitness of the optimum mechanism type for the considered heavy manipulator can be up to 0.5785. This research provides guidance for the optimum selection of mechanism types for robotic manipulators.

  20. Natural selection fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes.

    PubMed

    Clune, Jeff; Misevic, Dusan; Ofria, Charles; Lenski, Richard E; Elena, Santiago F; Sanjuán, Rafael

    2008-01-01

    The rate of mutation is central to evolution. Mutations are required for adaptation, yet most mutations with phenotypic effects are deleterious. As a consequence, the mutation rate that maximizes adaptation will be some intermediate value. Here, we used digital organisms to investigate the ability of natural selection to adjust and optimize mutation rates. We assessed the optimal mutation rate by empirically determining what mutation rate produced the highest rate of adaptation. Then, we allowed mutation rates to evolve, and we evaluated the proximity to the optimum. Although we chose conditions favorable for mutation rate optimization, the evolved rates were invariably far below the optimum across a wide range of experimental parameter settings. We hypothesized that the reason that mutation rates evolved to be suboptimal was the ruggedness of fitness landscapes. To test this hypothesis, we created a simplified landscape without any fitness valleys and found that, in such conditions, populations evolved near-optimal mutation rates. In contrast, when fitness valleys were added to this simple landscape, the ability of evolving populations to find the optimal mutation rate was lost. We conclude that rugged fitness landscapes can prevent the evolution of mutation rates that are optimal for long-term adaptation. This finding has important implications for applied evolutionary research in both biological and computational realms. PMID:18818724

  1. Natural Selection Fails to Optimize Mutation Rates for Long-Term Adaptation on Rugged Fitness Landscapes

    PubMed Central

    Clune, Jeff; Misevic, Dusan; Ofria, Charles; Lenski, Richard E.; Elena, Santiago F.; Sanjuán, Rafael

    2008-01-01

    The rate of mutation is central to evolution. Mutations are required for adaptation, yet most mutations with phenotypic effects are deleterious. As a consequence, the mutation rate that maximizes adaptation will be some intermediate value. Here, we used digital organisms to investigate the ability of natural selection to adjust and optimize mutation rates. We assessed the optimal mutation rate by empirically determining what mutation rate produced the highest rate of adaptation. Then, we allowed mutation rates to evolve, and we evaluated the proximity to the optimum. Although we chose conditions favorable for mutation rate optimization, the evolved rates were invariably far below the optimum across a wide range of experimental parameter settings. We hypothesized that the reason that mutation rates evolved to be suboptimal was the ruggedness of fitness landscapes. To test this hypothesis, we created a simplified landscape without any fitness valleys and found that, in such conditions, populations evolved near-optimal mutation rates. In contrast, when fitness valleys were added to this simple landscape, the ability of evolving populations to find the optimal mutation rate was lost. We conclude that rugged fitness landscapes can prevent the evolution of mutation rates that are optimal for long-term adaptation. This finding has important implications for applied evolutionary research in both biological and computational realms. PMID:18818724

  2. Optoelectronic optimization of mode selective converter based on liquid crystal on silicon

    NASA Astrophysics Data System (ADS)

    Wang, Yongjiao; Liang, Lei; Yu, Dawei; Fu, Songnian

    2016-03-01

    We carry out a comprehensive optoelectronic optimization of a mode selective converter used for mode division multiplexing, based on liquid crystal on silicon (LCOS) in binary mode. The digital-to-analog conversion (DAC) error is investigated quantitatively for the purpose of driving the LCOS in mode selective conversion. Results indicate that the DAC must have a resolution of 8 bits in order to achieve a high mode extinction ratio (MER) of 28 dB. On the other hand, both the fast-axis position error of the half-wave plate (HWP) and the rotation angle error of the Faraday rotator (FR) negatively influence the performance of mode selective conversion. However, commercial products provide enough angle error tolerance for the LCOS-based mode selective converter, taking both insertion loss (IL) and MER into account.

  3. In Vitro Selection of Optimal DNA Substrates for Ligation by a Water-Soluble Carbodiimide

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1994-01-01

    We have used in vitro selection to investigate the sequence requirements for efficient template-directed ligation of oligonucleotides at 0 °C using a water-soluble carbodiimide as the condensing agent. We find that only 2 bp on each side of the ligation junction are needed. We also studied chemical ligation of substrate ensembles that we had previously selected as optimal by RNA ligase or by DNA ligase. As anticipated, we find that substrates selected with DNA ligase ligate efficiently with a chemical ligating agent, and vice versa. Substrates selected using RNA ligase are not ligated by the chemical condensing agent, and vice versa. The implications of these results for prebiotic chemistry are discussed.

  4. A fully integrated direct-conversion digital satellite tuner in 0.18 μm CMOS

    NASA Astrophysics Data System (ADS)

    Si, Chen; Zengwang, Yang; Mingliang, Gu

    2011-04-01

    A fully integrated direct-conversion digital satellite tuner for DVB-S/S2 and ABS-S applications is presented. A broadband noise-canceling Balun-LNA and passive quadrature mixers provide a high-linearity, low-noise RF front-end, while the synthesizer integrates the loop filter to reduce solution cost and system debug time. Fabricated in 0.18 μm CMOS, the chip achieves a noise figure of less than 7.6 dB over the 900-2150 MHz L-band, while the measured sensitivity for the 4.42 MS/s QPSK-3/4 mode is -91 dBm at the PCB connector. The fully integrated integer-N synthesizer, operating from 2150 to 4350 MHz, achieves less than 1° of integrated phase error. The chip consumes about 145 mA from a 3.3 V supply with internal integrated LDOs.

  5. Optimal precursor ion selection for LC-MALDI MS/MS

    PubMed Central

    2013-01-01

    Background Liquid chromatography mass spectrometry (LC-MS) maps in shotgun proteomics are often too complex to select every detected peptide signal for fragmentation by tandem mass spectrometry (MS/MS). Standard methods for precursor ion selection, commonly based on data dependent acquisition, select highly abundant peptide signals in each spectrum. However, these approaches produce redundant information and are biased towards high-abundance proteins. Results We present two algorithms for inclusion list creation that formulate precursor ion selection as an optimization problem. Given an LC-MS map, the first approach maximizes the number of selected precursors given constraints such as a limited number of acquisitions per RT fraction. Second, we introduce a protein sequence-based inclusion list that can be used to monitor proteins of interest. Given only the protein sequences, we create an inclusion list that optimally covers the whole protein set. Additionally, we propose an iterative precursor ion selection that aims at reducing the redundancy obtained with data dependent LC-MS/MS. We overcome the risk of erroneous assignments by including methods for retention time and proteotypicity predictions. We show that our method identifies a set of proteins requiring fewer precursors than standard approaches. Thus, it is well suited for precursor ion selection in experiments with limited sample amount or analysis time. Conclusions We present three approaches to precursor ion selection with LC-MALDI MS/MS. Using a well-defined protein standard and a complex human cell lysate, we demonstrate that our methods outperform standard approaches. Our algorithms are implemented as part of OpenMS and are available under http://www.openms.de. PMID:23418672
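
    The protein sequence-based inclusion list is essentially a covering problem: choose few precursors so that every protein of interest is represented. The paper formulates it as an optimization; a greedy approximation conveys the idea (the data structures below are hypothetical):

```python
def greedy_inclusion_list(peptides_per_protein, budget):
    """Greedy sketch of protein-coverage-based precursor selection: repeatedly
    pick the candidate peptide (precursor) that covers the most proteins not
    yet represented, until the acquisition budget is exhausted.
    peptides_per_protein: dict protein -> set of candidate peptide ids."""
    uncovered = set(peptides_per_protein)
    proteins_per_peptide = {}
    for prot, peps in peptides_per_protein.items():
        for pep in peps:
            proteins_per_peptide.setdefault(pep, set()).add(prot)
    selected = []
    while uncovered and len(selected) < budget:
        pep = max(proteins_per_peptide,
                  key=lambda p: len(proteins_per_peptide[p] & uncovered))
        gain = proteins_per_peptide[pep] & uncovered
        if not gain:          # no peptide covers any remaining protein
            break
        selected.append(pep)
        uncovered -= gain
    return selected
```

    The paper's actual formulation additionally weighs retention time and proteotypicity predictions; the greedy loop above only illustrates the coverage objective.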

  6. Parallel medicinal chemistry approaches to selective HDAC1/HDAC2 inhibitor (SHI-1:2) optimization.

    PubMed

    Kattar, Solomon D; Surdi, Laura M; Zabierek, Anna; Methot, Joey L; Middleton, Richard E; Hughes, Bethany; Szewczak, Alexander A; Dahlberg, William K; Kral, Astrid M; Ozerova, Nicole; Fleming, Judith C; Wang, Hongmei; Secrist, Paul; Harsch, Andreas; Hamill, Julie E; Cruz, Jonathan C; Kenific, Candia M; Chenard, Melissa; Miller, Thomas A; Berk, Scott C; Tempest, Paul

    2009-02-15

    The successful application of both solid and solution phase library synthesis, combined with tight integration into the medicinal chemistry effort, resulted in the efficient optimization of a novel structural series of selective HDAC1/HDAC2 inhibitors by the MRL-Boston Parallel Medicinal Chemistry group. An initial lead from a small parallel library was found to be potent and selective in biochemical assays. Advanced compounds were the culmination of iterative library design and possess excellent biochemical and cellular potency, as well as acceptable PK and efficacy in animal models. PMID:19138845

  7. Discovery of GSK2656157: An Optimized PERK Inhibitor Selected for Preclinical Development.

    PubMed

    Axten, Jeffrey M; Romeril, Stuart P; Shu, Arthur; Ralph, Jeffrey; Medina, Jesús R; Feng, Yanhong; Li, William Hoi Hong; Grant, Seth W; Heerding, Dirk A; Minthorn, Elisabeth; Mencken, Thomas; Gaul, Nathan; Goetz, Aaron; Stanley, Thomas; Hassell, Annie M; Gampe, Robert T; Atkins, Charity; Kumar, Rakesh

    2013-10-10

    We recently reported the discovery of GSK2606414 (1), a selective first in class inhibitor of protein kinase R (PKR)-like endoplasmic reticulum kinase (PERK), which inhibited PERK activation in cells and demonstrated tumor growth inhibition in a human tumor xenograft in mice. In continuation of our drug discovery program, we applied a strategy to decrease inhibitor lipophilicity as a means to improve physical properties and pharmacokinetics. This report describes our medicinal chemistry optimization culminating in the discovery of the PERK inhibitor GSK2656157 (6), which was selected for advancement to preclinical development. PMID:24900593

  8. On the non-stationarity of financial time series: impact on optimal portfolio selection

    NASA Astrophysics Data System (ADS)

    Livan, Giacomo; Inoue, Jun-ichi; Scalas, Enrico

    2012-07-01

    We investigate the possible drawbacks of employing the standard Pearson estimator to measure correlation coefficients between financial stocks in the presence of non-stationary behavior, and we provide empirical evidence against the well-established common knowledge that using longer price time series provides better, more accurate, correlation estimates. Then, we investigate the possible consequences of instabilities in empirical correlation coefficient measurements on optimal portfolio selection. We rely on previously published works which provide a framework allowing us to take into account possible risk underestimations due to the non-optimality of the portfolio weights being used in order to distinguish such non-optimality effects from risk underestimations genuinely due to non-stationarities. We interpret such results in terms of instabilities in some spectral properties of portfolio correlation matrices.
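
    The instability of windowed Pearson estimates is easy to reproduce. A sketch with synthetic returns sharing a common factor; the window length and factor loading are arbitrary choices of this illustration:

```python
import numpy as np

def rolling_pearson(r1, r2, window):
    """Pearson correlation of two return series over sliding windows; the
    spread of these estimates illustrates the instability discussed above."""
    out = []
    for t in range(len(r1) - window + 1):
        a, b = r1[t:t + window], r2[t:t + window]
        out.append(np.corrcoef(a, b)[0, 1])
    return np.array(out)

rng = np.random.default_rng(1)
common = rng.standard_normal(2000)
x = 0.5 * common + rng.standard_normal(2000)   # true correlation ~ 0.2
y = 0.5 * common + rng.standard_normal(2000)
est = rolling_pearson(x, y, window=250)
print(est.mean().round(3), est.std().round(3))  # mean near 0.2, sizable spread
```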

  9. A new approach to optimal selection of services in health care organizations.

    PubMed

    Adolphson, D L; Baird, M L; Lawrence, K D

    1991-01-01

    A new reimbursement policy adopted by Medicare in 1983 caused financial difficulties for many hospitals and health care organizations. Several organizations responded to these difficulties by developing systems to carefully measure their costs of providing services. The purpose of such systems was to provide relevant information about the profitability of hospital services. This paper presents a new method of making hospital service selection decisions: it is based on an optimization model that avoids arbitrary cost allocations as a basis for computing the costs of offering a given service. The new method provides more reliable information about which services are profitable or unprofitable, and it provides an accurate measure of the degree to which a service is profitable or unprofitable. The new method also provides useful information about the sensitivity of the optimal decision to changes in costs and revenues. Specialized algorithms for the optimization model lead to very efficient implementation of the method, even for the largest health care organizations. PMID:10111676

  10. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
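
    The flavor of the ILP can be conveyed with a minimal covering model: choose a cheapest set of polled switches such that every flow's route contains at least one of them. This is a simplified stand-in for the paper's joint model, sketched with the PuLP library; the topology and costs are invented:

```python
# Minimal covering-style ILP sketch: pick a minimum-cost set of switches so
# that every flow traverses at least one polled switch. Not the paper's full
# joint routing model -- routes are fixed here for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

switches = ["s1", "s2", "s3"]
flows = {"f1": ["s1", "s2"], "f2": ["s2", "s3"], "f3": ["s1", "s3"]}  # flow -> route
cost = {"s1": 2, "s2": 1, "s3": 2}                  # illustrative polling costs

prob = LpProblem("polling_switch_selection", LpMinimize)
x = {s: LpVariable(f"x_{s}", cat=LpBinary) for s in switches}
prob += lpSum(cost[s] * x[s] for s in switches)     # total polling cost
for f, route in flows.items():
    prob += lpSum(x[s] for s in route) >= 1, f"cover_{f}"
prob.solve()
print({s: int(x[s].value()) for s in switches})
```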

  11. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  12. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    NASA Astrophysics Data System (ADS)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  13. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors.

    PubMed

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the classification of a set of ginsengs using a homemade metal-oxide sensor-based electronic nose with linear discriminant analysis. In total, 315 samples of nine kinds of ginseng were measured using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum number of sensors needed to discriminate each sample set with optimal classification performance was defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212
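
    With only 12 sensors, the subset search can be exhaustive. A sketch of finding the minimum sensor subset that reaches a target cross-validated LDA classification rate; the scikit-learn names are real, but the one-feature-column-per-sensor data layout is an assumption of this sketch:

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def lda_rate(features, labels, idx):
    """5-fold cross-validated LDA accuracy on the selected sensor columns."""
    return cross_val_score(LinearDiscriminantAnalysis(),
                           features[:, list(idx)], labels, cv=5).mean()

def min_sensor_subset(features, labels, n_sensors=12, target_rate=0.95):
    """Return the smallest sensor subset reaching the target rate, searching
    subset sizes from 1 upward (exhaustive search is feasible for 12 sensors)."""
    rate = 0.0
    for size in range(1, n_sensors + 1):
        best = max(combinations(range(n_sensors), size),
                   key=lambda idx: lda_rate(features, labels, idx))
        rate = lda_rate(features, labels, best)
        if rate >= target_rate:
            return best, rate
    return tuple(range(n_sensors)), rate
```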

  14. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors

    PubMed Central

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the classification of a set of ginsengs using a homemade metal-oxide sensor-based electronic nose with linear discriminant analysis. In total, 315 samples of nine kinds of ginseng were measured using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum number of sensors needed to discriminate each sample set with optimal classification performance was defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212

  15. Imaging multicellular specimens with real-time optimized tiling light-sheet selective plane illumination microscopy

    PubMed Central

    Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang

    2016-01-01

    Despite the progress made in selective plane illumination microscopy, high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to respond to the challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying left-right symmetry breaking behaviour of C. elegans embryos. PMID:27004937

  16. Optimal Needle Grasp Selection for Automatic Execution of Suturing Tasks in Robotic Minimally Invasive Surgery

    PubMed Central

    Liu, Taoming; Çavuşoğlu, M. Cenk

    2015-01-01

    This paper presents algorithms for the optimal selection of needle grasp for autonomous robotic execution of the minimally invasive surgical suturing task. In order to minimize tissue trauma during the suturing motion, the best practices of needle path planning used by surgeons are applied to autonomous robotic surgical suturing tasks. Once an optimal needle trajectory in a well-defined suturing scenario is chosen, another critical issue for suturing is the choice of needle grasp for the robotic system. An inappropriate needle grasp increases operating time by requiring multiple re-grasps to complete the desired task. The proposed methods use manipulability, dexterity and torque metrics for needle grasp selection. A simulation demonstrates the proposed methods and recommends a variety of grasps. A realistic demonstration then compares the performance of the manipulator using the different grasps. PMID:26413382
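
    The manipulability metric mentioned here is typically Yoshikawa's w = sqrt(det(J Jᵀ)). A sketch of scoring candidate grasps by their worst-case manipulability along the planned suture path; the jacobian_along_path callable is hypothetical, standing in for the robot's kinematics model:

```python
import numpy as np

def manipulability(jacobian: np.ndarray) -> float:
    """Yoshikawa manipulability w = sqrt(det(J J^T)); larger means the arm is
    farther from singular configurations at that point of the motion."""
    jjt = jacobian @ jacobian.T
    return float(np.sqrt(max(np.linalg.det(jjt), 0.0)))

def best_grasp(grasp_candidates, jacobian_along_path):
    """Score each candidate grasp by its worst-case manipulability along the
    planned needle trajectory and return the best one. jacobian_along_path(g)
    is a hypothetical callable yielding the robot Jacobians for grasp g."""
    scores = {
        g: min(manipulability(j) for j in jacobian_along_path(g))
        for g in grasp_candidates
    }
    return max(scores, key=scores.get)
```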

  17. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  18. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1992-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  19. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    NASA Astrophysics Data System (ADS)

    Wallace, G. M.; Fitzgerald, E.; Hillairet, J.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.

    2014-02-01

    Active stub tuning with a fast ferrite tuner (FFT) allows for the system to respond dynamically to changes in the plasma impedance such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ2 in the "forbidden region." The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma provided by a linear coupling model with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.
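
    The iterative solution procedure can be imitated with a toy model: a fixed forward drive per column, a plasma scattering matrix coupling the columns, and a residual tuner reflection re-launching part of the returned wave. This sketch is not the C-Mod coupling model; every matrix value below is illustrative:

```python
import numpy as np

# Toy cascaded-reflection iteration: each column is driven by a fixed forward
# wave a0; the plasma scattering matrix S couples the columns; the stub tuners
# re-reflect a fraction rho of the returned wave back toward the plasma.
n = 4
a0 = np.exp(1j * np.pi / 2 * np.arange(n))         # 90 deg phasing per column
rng = np.random.default_rng(0)
S = 0.2 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
rho = 0.3 + 0j                                     # residual tuner reflection

f = a0.copy()
for it in range(100):                              # advance iteratively, as in the real system
    f_new = a0 + rho * (S @ f)
    if np.max(np.abs(f_new - f)) < 1e-10:
        break
    f = f_new

# Phase perturbation per column relative to the intended drive:
print(it, np.angle(f, deg=True) - np.angle(a0, deg=True))
```

    The printed phase perturbations illustrate the effect the abstract describes: cross-coupled reflections shift the relative phasing between columns, which the low power drive must compensate to preserve the launched n∥ spectrum.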

  20. Analysis of double stub tuner control stability in a many element phased array antenna with strong cross-coupling

    SciTech Connect

    Wallace, G. M.; Fitzgerald, E.; Johnson, D. K.; Kanojia, A. D.; Koert, P.; Lin, Y.; Murray, R.; Shiraiwa, S.; Terry, D. R.; Wukitch, S. J.; Hillairet, J.

    2014-02-12

    Active stub tuning with a fast ferrite tuner (FFT) allows for the system to respond dynamically to changes in the plasma impedance such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the “forbidden region.” The relative phase shift between adjacent columns of a LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma provided by a linear coupling model with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.

  1. Enhanced selectivity and search speed for method development using one-segment-per-component optimization strategies.

    PubMed

    Tyteca, Eva; Vanderlinden, Kim; Favier, Maxime; Clicq, David; Cabooter, Deirdre; Desmet, Gert

    2014-09-01

    Linear gradient programs are very frequently used in reversed phase liquid chromatography to enhance the selectivity compared to isocratic separations. Multi-linear gradient programs, on the other hand, are only scarcely used, despite their intrinsically larger separation power. Because the gradient-conformity of the latest generation of instruments has greatly improved, a renewed interest in more complex multi-segment gradient liquid chromatography can be expected in the future, raising the need for better performing gradient design algorithms. We explored the possibilities of a new type of multi-segment gradient optimization algorithm, the so-called "one-segment-per-group-of-components" optimization strategy. In this gradient design strategy, the slope is adjusted after the elution of each individual component of the sample, letting the retention properties of the different analytes auto-guide the course of the gradient profile. Applying this method experimentally to four randomly selected test samples, the separation time could on average be reduced by about 40% compared to the best single linear gradient. Moreover, the newly proposed approach performed equally well or better than the multi-segment optimization mode of a commercial software package. In an extensive in silico study, the experimentally observed advantage could also be generalized over a statistically significant number of different 10- and 20-component samples. In addition, the newly proposed gradient optimization approach enables much faster searches than the traditional multi-step gradient design methods. PMID:25039066

  2. Ant-cuckoo colony optimization for feature selection in digital mammogram.

    PubMed

    Jona, J B; Nagaveni, N

    2014-01-15

    Digital mammography is the only effective screening method for detecting breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram, but not all of these features are essential for classification; identifying the relevant features is therefore the aim of this work. Feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that ants tend to follow paths where the pheromone density is high, which slows the whole process; CS is therefore employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used together with the ACO to separate normal from abnormal mammograms. Experiments are conducted on the mini-MIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms. The results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques. PMID:24783812
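
    As a rough, hedged illustration of the hybrid search outlined above — pheromone-guided subset construction with a cuckoo-style jump around the best ant — the Python sketch below substitutes sklearn's breast-cancer features for GLCM mammogram features and uses 3-fold cross-validated SVM-RBF accuracy as the fitness; population sizes, inclusion probabilities and the jump size are simplified placeholders rather than the authors' settings.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Stand-in data: sklearn's breast-cancer features, not GLCM textures.
        X, y = load_breast_cancer(return_X_y=True)
        n = X.shape[1]

        def fitness(mask):
            """3-fold SVM-RBF accuracy of a feature subset (the ants' objective)."""
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

        rng = np.random.default_rng(0)
        tau, rho = np.ones(n), 0.2                 # pheromone trail, evaporation
        best, best_f = np.zeros(n, dtype=bool), -1.0
        for _ in range(15):
            # ACO step: each ant includes feature i with pheromone-driven probability.
            ants = rng.random((6, n)) < tau / (2.0 * tau.max())
            scores = np.array([fitness(a) for a in ants])
            elite, elite_f = ants[scores.argmax()], scores.max()
            # Cuckoo-style local search: a random multi-bit jump around the elite ant.
            cuckoo = elite.copy()
            cuckoo[rng.choice(n, size=3, replace=False)] ^= True
            cf = fitness(cuckoo)
            if cf > elite_f:
                elite, elite_f = cuckoo, cf
            tau *= 1.0 - rho                       # evaporation
            tau[elite] += elite_f                  # reinforce the best subset found
            if elite_f > best_f:
                best, best_f = elite, elite_f
        print(f"{int(best.sum())} features selected, CV accuracy {best_f:.3f}")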

  3. Selection of optimal threshold to construct recurrence plot for structural operational vibration measurements

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Ren, Wei-Xin; Hu, Yi-Ding; Li, Dan

    2015-08-01

    Structural health monitoring (SHM) involves sampling operational vibration measurements over time so that structural features can be extracted from them. The recurrence plot (RP) and the corresponding recurrence quantification analysis (RQA) have become useful tools in various fields due to their efficiency. Threshold selection is one of the key issues in making sure that the constructed recurrence plot contains enough recurrence points, and different signals naturally call for different threshold values. This paper presents an approach to determine the optimal threshold for operational vibration measurements of civil engineering structures. The surrogate technique and the Taguchi loss function are proposed to generate reliable data and to locate the optimal discrimination power point at which the threshold is optimum. The impact of the selected recurrence threshold on different signals is discussed. It is demonstrated that the proposed method for identifying the optimal threshold is applicable to operational vibration measurements, and it provides a way to find the optimal threshold for the best RP construction of structural vibration measurements under operational conditions.
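
    For readers unfamiliar with the construction: a recurrence plot marks pairs of time indices whose states lie within a threshold ε of each other, R(i, j) = Θ(ε − ∥x_i − x_j∥). The Python sketch below is illustrative only; it does not reproduce the paper's surrogate-data generation or Taguchi loss scoring. It builds the plot for a scalar series and sweeps candidate thresholds, reporting the recurrence rate of each.

        import numpy as np

        def recurrence_plot(x, eps):
            """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| <= eps."""
            d = np.abs(x[:, None] - x[None, :])    # distances for a scalar series
            return (d <= eps).astype(int)

        # Sweep candidate thresholds and report the recurrence rate of each.
        # Yang et al. instead score thresholds with surrogate data and a
        # Taguchi loss function, which this sketch does not reproduce.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 20 * np.pi, 500)
        signal = np.sin(t) + 0.1 * rng.standard_normal(t.size)
        dists = np.abs(signal[:, None] - signal[None, :])
        for eps in np.percentile(dists, [5, 10, 20]):
            rr = recurrence_plot(signal, eps).mean()
            print(f"eps = {eps:.3f}   recurrence rate = {rr:.3f}")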

  4. Pipe degradation investigations for optimization of flow-accelerated corrosion inspection location selection

    SciTech Connect

    Chandra, S.; Habicht, P.; Chexal, B.; Mahini, R.; McBrine, W.; Esselman, T.; Horowitz, J.

    1995-12-01

    A large amount of piping in a typical nuclear power plant is susceptible to Flow-Accelerated Corrosion (FAC) wall thinning to varying degrees. A typical FAC monitoring program includes wall thickness measurements of a select number of components in order to judge the structural integrity of entire systems. In order to appropriately allocate resources and maintain an adequate FAC program, it is necessary to optimize the selection of components for inspection by focusing on those components which provide the best indication of system susceptibility to FAC. A better understanding of system FAC predictability and the types of FAC damage encountered can provide some of the insight needed to better focus and optimize the inspection plan for an upcoming refueling outage. Laboratory examination of FAC-damaged components removed from service at Northeast Utilities' (NU) nuclear power plants provides a better understanding of the damage mechanisms involved and contributing causes. Selected results of this ongoing study are presented with specific conclusions which will help NU to better focus inspections and thus optimize the ongoing FAC inspection program.

  5. A Topography Analysis Incorporated Optimization Method for the Selection and Placement of Best Management Practices

    PubMed Central

    Shen, Zhenyao; Chen, Lei; Xu, Liang

    2013-01-01

    Best Management Practices (BMPs) are one of the most effective methods to control nonpoint source (NPS) pollution at a watershed scale. In this paper, the use of a topography analysis incorporated optimization method (TAIOM) was proposed, which integrates topography analysis with cost-effective optimization. The surface status, slope and the type of land use were evaluated as inputs for the optimization engine. A genetic algorithm program was coded to obtain the final optimization. The TAIOM was validated in conjunction with the Soil and Water Assessment Tool (SWAT) in the Yulin watershed in Southwestern China. The results showed that the TAIOM was more cost-effective than traditional optimization methods. The distribution of selected BMPs throughout landscapes comprising relatively flat plains and gentle slopes, suggests the need for a more operationally effective scheme, such as the TAIOM, to determine the practicability of BMPs before widespread adoption. The TAIOM developed in this study can easily be extended to other watersheds to help decision makers control NPS pollution. PMID:23349917

  6. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for constructing a model of the evolution operator. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selecting the optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. In fact, finding the optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find the optimal structure of the model together with the vector of phase variables. In this paper we propose to use the minimum description length principle (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to the optimization of an Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al. 2005, Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDE) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, P 046207, 2009; Kravtsov S, Kondrashov D, Ghil M, 2005: Multilevel regression
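
    A minimal illustration of complexity selection by description length, using the common two-part approximation DL ≈ (N/2) ln(RSS/N) + (k/2) ln N for a model with k parameters fitted to N points; the exact criterion of Molkov et al. (2009) also codes the model structure and is not reproduced here. The toy Python below selects a polynomial order, where an EMR application would instead sweep the number of EOFs, the polynomial order and the number of levels.

        import numpy as np

        def mdl_score(y, y_hat, k):
            """Two-part MDL approximation: data code length + parameter cost."""
            N = len(y)
            rss = np.sum((y - y_hat) ** 2)
            return 0.5 * N * np.log(rss / N) + 0.5 * k * np.log(N)

        rng = np.random.default_rng(1)
        t = np.linspace(-1, 1, 200)
        y = 1.0 - 2.0 * t + 3.0 * t ** 3 + 0.2 * rng.standard_normal(t.size)

        # Fit polynomials of increasing order; the MDL minimum balances fit
        # quality against the number of parameters (k = order + 1).
        scores = {order: mdl_score(y, np.polyval(np.polyfit(t, y, order), t), order + 1)
                  for order in range(1, 10)}
        print("selected order:", min(scores, key=scores.get))   # typically 3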

  7. Near optimal energy selective x-ray imaging system performance with simple detectors

    SciTech Connect

    Alvarez, Robert E.

    2010-02-15

    Purpose: This article describes a method to achieve near optimal performance with low energy resolution detectors. Tapiovaara and Wagner [Phys. Med. Biol. 30, 519-529 (1985)] showed that an energy selective x-ray system using a broad spectrum source can produce images with a larger signal to noise ratio (SNR) than conventional systems using energy integrating or photon counting detectors. They showed that there is an upper limit to the SNR and that it can be achieved by measuring full spectrum information and then using an optimal energy dependent weighting. Methods: A performance measure is derived by applying statistical detection theory to an abstract vector space of the line integrals of the basis set coefficients of the two function approximation to the x-ray attenuation coefficient. The approach produces optimal results that utilize all the available energy dependent data. The method can be used with any energy selective detector and is applied not only to detectors using pulse height analysis (PHA) but also to a detector that simultaneously measures the total photon number and integrated energy, as discussed by Roessl et al. [Med. Phys. 34, 959-966 (2007)]. A generalization of this detector that improves the performance is introduced. A method is described to compute images with the optimal SNR using projections in a "whitened" vector space transformed so the noise is uncorrelated and has unit variance in both coordinates. Material canceled images with optimal SNR can also be computed by projections in this space. Results: The performance measure is validated by showing that it provides the Tapiovaara-Wagner optimal results for a detector with full energy information and also a conventional detector. The performance with different types of detectors is compared to the ideal SNR as a function of x-ray tube voltage and subject thickness. A detector that combines two bin PHA with a simultaneous measurement of integrated photon energy provides near ideal

  8. New approach for automatic recognition of melanoma in profilometry: optimized feature selection using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.

    1998-06-01

    A new approach to computer-supported recognition of melanoma and naevocytic naevi based on high resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 × 4 mm² at a resolution of 125 sample points per mm with a laser profilometer at a vertical resolution of 0.1 μm. Using image analysis algorithms, Haralick's texture parameters, Fourier features and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are optimized for the feature selection process, and results of different approaches are compared. As a quality measure for feature subsets, the error rate of the nearest neighbor classifier estimated with the leave-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed-forward neural networks with error back-propagation are evaluated. Classification performance of the neural classifier is optimized using different topologies, learning parameters and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best result overall, an error rate of 2.3%, was obtained with the nearest neighbor classifier.
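
    The fitness function driving the genetic search — the leave-one-out error rate of a nearest-neighbour classifier on a candidate feature subset — is easy to state in code. The hedged sketch below uses scikit-learn with a stand-in dataset (sklearn's breast-cancer features, not the profilometry features of the paper) and a single-parent mutation loop in place of a full GA population.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        # Stand-in data: sklearn's breast-cancer features, not laser profilometry.
        X, y = load_breast_cancer(return_X_y=True)

        def loo_error(mask):
            """Leave-one-out 1-NN error rate for a boolean feature mask (GA fitness)."""
            if not mask.any():
                return 1.0
            acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                                  X[:, mask], y, cv=LeaveOneOut()).mean()
            return 1.0 - acc

        # A single-parent mutation loop standing in for a full GA population.
        rng = np.random.default_rng(0)
        best = rng.random(X.shape[1]) < 0.5
        best_err = loo_error(best)
        for _ in range(15):
            child = best.copy()
            child[rng.integers(X.shape[1])] ^= True      # flip one feature bit
            err = loo_error(child)
            if err < best_err:
                best, best_err = child, err
        print(f"{int(best.sum())} features, LOO 1-NN error {best_err:.3f}")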

  9. Optimal Selection of Predictor Variables in Statistical Downscaling Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Goly, A.; Teegavarapu, R. S. V.

    2014-12-01

    Statistical downscaling models developed for precipitation rely heavily on the predictors chosen and on accurate relationships between the regional-scale predictand and GCM-scale predictors for providing future precipitation projections at different spatial and temporal scales. This study provides two new screening methods for the selection of predictor variables for use in downscaling methods based on predictand-predictor relationships. Methods to characterize predictand-predictor relationships via rigid and flexible functional forms, using a mixed integer nonlinear programming (MINLP) model with binary variables and artificial neural network (ANN) models respectively, are developed and evaluated in this study. In addition to these two methods, a stepwise regression (SWR) and two models that do not use any pre-screening of variables are also evaluated. A two-step process is used to downscale precipitation data, with optimal selection of predictors followed by their use in a statistical downscaling model based on the support vector machine (SVM) approach. Experiments with the two proposed methods and with additional methods, one based on the correlation between predictors and predictand and another based on principal component analysis, are evaluated in this study. Results suggest that optimal selection of variables using the MINLP model (albeit with a linear relationship) and the ANN method provided improved performance and error measures compared to the two models that did not use these screening methods. Of the three screening methods tested in this study, the SWR method selected the fewest variables and also ranked lowest based on several performance measures.

  10. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from different portfolios containing a large number of securities. The past record of each security alone does not guarantee its future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming formulations. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE is used for illustration.
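
    To make the LPP reduction concrete, here is a hedged sketch of a crisp mean semi-absolute deviation portfolio as a linear program, with SciPy's linprog standing in for the paper's ACO solver; the fuzzy-random λ machinery and expert opinions are omitted, and the simulated returns are placeholders. Auxiliary variables d_t linearize the downside deviation max(0, −(r_t − μ)·w).

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(42)
        T, n = 250, 5                      # 250 return observations, 5 securities
        returns = 0.001 + 0.02 * rng.standard_normal((T, n))
        mu = returns.mean(axis=0)
        target = mu.mean()                 # required expected portfolio return

        # Variables x = [w_1..w_n, d_1..d_T]; minimize the mean downside deviation.
        c = np.concatenate([np.zeros(n), np.ones(T) / T])

        # d_t >= -(r_t - mu) @ w   rewritten as   -(r_t - mu) @ w - d_t <= 0
        A_ub = np.hstack([-(returns - mu), -np.eye(T)])
        b_ub = np.zeros(T)
        A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])  # mu @ w >= target
        b_ub = np.append(b_ub, -target)

        A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]     # sum(w) = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + T))                   # long-only, d >= 0
        print("weights:", res.x[:n].round(3))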

  11. Optimal part and module selection for synthetic gene circuit design automation.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2014-08-15

    An integral challenge in synthetic circuit design is the selection of optimal parts to populate a given circuit topology, so that the resulting circuit behavior best approximates the desired one. In some cases, it is also possible to reuse multi-part constructs or modules that have already been built and experimentally characterized. Efficient part and module selection algorithms are essential to systematically search the solution space, and their significance will only increase in the coming years due to the projected explosion in part libraries and circuit complexity. Here, we address this problem by introducing a structured abstraction methodology and a dynamic programming-based algorithm that guarantees optimal part selection. In addition, we provide three extensions that are based on symmetry checks, information look-ahead and branch-and-bound techniques, to reduce the running time and space requirements. We have evaluated the proposed methodology with a benchmark of 11 circuits, a database of 73 parts and 304 experimentally constructed modules with encouraging results. This work represents a fundamental departure from traditional heuristic-based methods for part and module selection and is a step toward maximizing efficiency in synthetic circuit design and construction. PMID:24933033

  12. Temporal artifact minimization in sonoelastography through optimal selection of imaging parameters.

    PubMed

    Torres, Gabriela; Chau, Gustavo R; Parker, Kevin J; Castaneda, Benjamin; Lavarello, Roberto J

    2016-07-01

    Sonoelastography is an ultrasonic technique that uses Kasai's autocorrelation algorithms to generate qualitative images of tissue elasticity using external mechanical vibrations. In the absence of synchronization between the mechanical vibration device and the ultrasound system, the random initial phase and finite ensemble length of the data packets result in temporal artifacts in the sonoelastography frames and, consequently, in degraded image quality. In this work, the analytic derivation of an optimal selection of acquisition parameters (i.e., pulse repetition frequency, vibration frequency, and ensemble length) is developed in order to minimize these artifacts, thereby eliminating the need for complex device synchronization. The proposed rule was verified through experiments with heterogeneous phantoms, where the use of optimally selected parameters increased the average contrast-to-noise ratio (CNR) by more than 200% and reduced the CNR standard deviation by 400% when compared to the use of arbitrarily selected imaging parameters. Therefore, the results suggest that the rule for specific selection of acquisition parameters becomes an important tool for producing high quality sonoelastography images. PMID:27475192

  13. Selection of optimal artificial boundary condition (ABC) frequencies for structural damage identification

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Lu, Yong

    2016-07-01

    In this paper, the sensitivities of artificial boundary condition (ABC) frequencies to damage are investigated, and the optimal ABC frequencies are selected to provide reliable structural damage identification. Sensitivity expressions for one-pin and two-pin ABC frequencies, which are the natural frequencies of structures with one and two constraints, respectively, added to the original boundary conditions, are derived. Based on these expressions, the contributions of the underlying mode shapes to the ABC frequencies can be calculated and used to select the more sensitive ABC frequencies. Selection criteria are then defined for different conditions, and their performance in structural damage identification is examined in numerical studies. Conclusions are drawn from the findings.

  14. Achieving diverse and monoallelic olfactory receptor selection through dual-objective optimization design.

    PubMed

    Tian, Xiao-Jun; Zhang, Hang; Sannerud, Jens; Xing, Jianhua

    2016-05-24

    Multiple-objective optimization is common in biological systems. In the mammalian olfactory system, each sensory neuron stochastically expresses only one out of up to thousands of olfactory receptor (OR) gene alleles; at the organism level, the types of expressed ORs need to be maximized. Existing models focus only on monoallele activation, and cannot explain recent observations in mutants, especially the reduced global diversity of expressed ORs in G9a/GLP knockouts. In this work we integrated existing information on OR expression, and constructed a comprehensive model that has all its components based on physical interactions. Analyzing the model reveals an evolutionarily optimized three-layer regulation mechanism, which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop that mechanistically differs from previous theoretical proposals, and a previously unidentified enhancer competition step. This model not only recapitulates monoallelic OR expression, but also elucidates how the olfactory system maximizes and maintains the diversity of OR expression, and has multiple predictions validated by existing experimental results. Through making an analogy to a physical system with thermally activated barrier crossing and comparative reverse engineering analyses, the study reveals that the olfactory receptor selection system is optimally designed, and particularly underscores cooperativity and synergy as a general design principle for multiobjective optimization in biology. PMID:27162367

  15. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    PubMed

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    Load-bearing bone implant materials should have sufficient stiffness and large porosity; these properties interact, since larger porosity leads to lower mechanical properties. This paper seeks the maximum-stiffness architecture under the constraint of a specified volume fraction using a topology optimization approach; that is, maximum porosity can be achieved for predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; in addition, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7% were fabricated by the Selective Laser Melting (SLM) process and tested in compression. Results showed that the computed effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were about 76% lower than the computed ones. The combination of the topology optimization approach and the SLM process should be useful for the development of titanium implant materials in consideration of both porosity and mechanical stiffness. PMID:23988713

  16. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

    Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In cancer-type classification research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good selection method for genes relevant for sample classification is needed to improve the predictive accuracy, and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose to combine tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and a support vector machine with one-versus-rest serve as evaluators of the TS and BPSO. The proposed method is applied to 11 classification problems taken from the literature and compared to other feature selection methods. Experimental results show that our method simplifies features effectively and either obtains higher classification accuracy or uses fewer features compared to other feature selection methods. PMID:20047491
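
    The BPSO inner loop that the TS wrapper repeatedly invokes can be sketched compactly: real-valued velocities follow the usual PSO update and are mapped through a sigmoid to bit-flip probabilities. The Python below is a generic binary PSO on a toy objective, not the authors' implementation; in their setting fitness(mask) would be the LOOCV K-nearest neighbor or SVM accuracy on the selected genes, and the tabu search layer is omitted.

        import numpy as np

        def sigmoid(v):
            return 1.0 / (1.0 + np.exp(-v))

        def bpso(fitness, n_feats, n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal binary PSO: velocities are real, positions are 0/1 masks."""
            rng = np.random.default_rng(seed)
            X = (rng.random((n_particles, n_feats)) < 0.5).astype(int)
            V = np.zeros((n_particles, n_feats))
            pbest = X.copy()
            pbest_f = np.array([fitness(x) for x in X])
            gbest = pbest[pbest_f.argmax()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(V.shape), rng.random(V.shape)
                V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
                X = (rng.random(V.shape) < sigmoid(V)).astype(int)   # stochastic bits
                f = np.array([fitness(x) for x in X])
                improved = f > pbest_f
                pbest[improved], pbest_f[improved] = X[improved], f[improved]
                gbest = pbest[pbest_f.argmax()].copy()
            return gbest

        # Toy objective: recover a hidden pattern of "informative" bits.
        target = (np.arange(30) % 3 == 0).astype(int)
        mask = bpso(lambda m: (m == target).sum(), n_feats=30)
        print("bits recovered:", int((mask == target).sum()), "of 30")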

  17. Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS

    NASA Astrophysics Data System (ADS)

    Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.

    2013-02-01

    Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with dynamic nuclear polarization hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by providing single-metabolite signals from an anatomically shaped region of interest (ROI), which can be used for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of quickly and iteratively providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.

  18. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
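
    The two-stage pipeline — SVM-RFE ranking followed by (C, γ) tuning — maps directly onto scikit-learn primitives. The sketch below uses the iris data as a stand-in for the Dermatology and Zoo datasets and an exhaustive grid in place of the paper's Taguchi orthogonal array; the linear kernel in the RFE stage is required for coefficient-based ranking.

        from sklearn.datasets import load_iris   # stand-in for Dermatology/Zoo data
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.svm import SVC

        X, y = load_iris(return_X_y=True)

        # 1) Rank features with SVM-RFE (linear kernel gives the coef_ ranking).
        rfe = RFE(SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
        X_sel = X[:, rfe.support_]

        # 2) Tune C and gamma for the RBF classifier on the reduced feature set;
        #    the paper uses a Taguchi orthogonal array, a grid is the simple stand-in.
        grid = GridSearchCV(SVC(kernel="rbf"),
                            {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                            cv=5).fit(X_sel, y)
        print("best params:", grid.best_params_)
        print("CV accuracy:", cross_val_score(grid.best_estimator_, X_sel, y, cv=5).mean())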

  19. New indicator for optimal preprocessing and wavelength selection of near-infrared spectra.

    PubMed

    Skibsted, E T S; Boelens, H F M; Westerhuis, J A; Witte, D T; Smilde, A K

    2004-03-01

    Preprocessing of near-infrared spectra to remove unwanted (i.e., non-related) spectral variation and the selection of informative wavelengths are considered crucial steps prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing techniques and selecting different wavelengths is to compare prediction statistics computed with an independent set of data not used to make the actual calibration model. When the errors of the reference values are large, when no such values are available at all, or when only a limited number of samples are available, other methods are needed to evaluate the preprocessing method and wavelength selection. In this work we present a new indicator (SE) that only requires blank sample spectra, i.e., spectra of samples that are mixtures of the interfering constituents (everything except the analyte), a pure analyte spectrum, or alternatively, a sample spectrum where the analyte is present. The indicator is based on computing the net analyte signal of the analyte and the total error, i.e., instrumental noise and bias. By comparing the indicator values when different preprocessing techniques and wavelength selections are applied to the spectra, the optimal preprocessing technique and the optimal wavelength selection can be determined without knowledge of reference values, i.e., it minimizes the non-related spectral variation. The SE indicator is compared to two other indicators that also use net analyte signal computations. To demonstrate the feasibility of the SE indicator, two near-infrared spectral data sets from the pharmaceutical industry were used, i.e., diffuse reflectance spectra of powder samples and transmission spectra of tablets. Especially in pharmaceutical spectroscopic applications, it is expected beforehand that the non-related spectral variation is rather large and it is important to remove it. The indicator gave excellent results with respect to wavelength selection and optimal

  20. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  1. Impact of cultivar selection and process optimization on ethanol yield from different varieties of sugarcane

    PubMed Central

    2014-01-01

    Background The development of ‘energycane’ varieties of sugarcane is underway, targeting the use of both sugar juice and bagasse for ethanol production. The current study evaluated a selection of such ‘energycane’ cultivars for the combined ethanol yields from juice and bagasse, by optimization of dilute acid pretreatment of bagasse for sugar yields. Method A central composite design under response surface methodology was used to investigate the effects of dilute acid pretreatment parameters followed by enzymatic hydrolysis on the combined sugar yield of bagasse samples. The pressed slurry generated from optimum pretreatment conditions (maximum combined sugar yield) was used as the substrate during batch and fed-batch simultaneous saccharification and fermentation (SSF) processes at different solid loadings and enzyme dosages, aiming to reach an ethanol concentration of at least 40 g/L. Results Significant variations were observed in sugar yields (xylose, glucose and combined sugar yield) from pretreatment-hydrolysis of bagasse from different cultivars of sugarcane. Up to a 33% difference in combined sugar yield between the best performing varieties and industrial bagasse was observed at optimal pretreatment-hydrolysis conditions. Significant improvement in overall ethanol yield after SSF of the pretreated bagasse was also observed for the best performing varieties (84.5 to 85.6%) compared to industrial bagasse (74.5%). The ethanol concentration showed an inverse correlation with lignin content and the ratio of xylose to arabinose, but a positive correlation with glucose yield from pretreatment-hydrolysis. The overall assessment of the cultivars showed greater improvement in the final ethanol concentration (26.9 to 33.9%) and combined ethanol yields per hectare (83 to 94%) for the best performing varieties with respect to industrial sugarcane. Conclusions These results suggest that the selection of sugarcane variety to optimize ethanol

  2. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  3. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model

  4. Optimal selection of space transportation fleet to meet multi-mission space program needs

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Montoya, Alex J.

    1989-01-01

    A space program that spans several decades will comprise a collection of missions such as a low earth orbital space station, a polar platform, a geosynchronous space station, a lunar base, a Mars astronaut mission, and a Mars base. The optimal selection of a fleet of several recoverable and expendable launch vehicles, upper stages, and interplanetary spacecraft necessary to logistically establish and support these space missions can be examined by means of a linear integer programming optimization model. Such a selection must be made because the economies of scale that come from producing large quantities of a few standard vehicle types, rather than many, will be needed to provide learning-curve effects to reduce the overall cost of space transportation if these future missions are to be affordable. Optimization model inputs come from data and from vehicle designs. Each launch vehicle currently in existence has a launch history, giving rise to statistical estimates of launch reliability. For future, not-yet-developed launch vehicles, theoretical reliabilities corresponding to the maturity of the launch vehicles' technology and the degree of design redundancy must be estimated. Also, each such launch vehicle has a certain historical or estimated development cost, tooling cost, and variable cost. The cost of a launch used in this paper includes the variable cost plus an amortized portion of the fixed and development costs. The integer linear programming model has several constraint equations based on assumptions of mission mass requirements, volume requirements, and the number of astronauts needed. The model minimizes launch vehicle logistic support cost and selects the most desirable launch vehicle fleet.
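
    A toy version of the integer program described above, assuming invented per-launch cost, payload-mass and payload-volume figures for three hypothetical vehicles: choose a non-negative integer number of launches of each type that meets aggregate program requirements at minimum amortized cost. Reliability, astronaut-count constraints and learning-curve effects are omitted; SciPy's milp (SciPy >= 1.9) is the solver.

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        # Invented per-launch figures for three hypothetical vehicles:
        # amortized cost ($M), payload mass (t), payload volume (m^3).
        cost = np.array([90.0, 150.0, 60.0])
        mass = np.array([20.0, 45.0, 12.0])
        volume = np.array([150.0, 300.0, 90.0])

        need = np.array([500.0, 3200.0])      # program-wide mass and volume needs
        A = np.vstack([mass, volume])

        res = milp(c=cost,
                   constraints=LinearConstraint(A, lb=need),
                   integrality=np.ones(3),    # whole launches only
                   bounds=Bounds(0, np.inf))
        print("launches per vehicle:", res.x.round().astype(int),
              " total cost ($M):", res.fun)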

  5. Optimal Spectral Domain Selection for Maximizing Archaeological Signatures: Italy Case Studies

    PubMed Central

    Cavalli, Rosa Maria; Pascucci, Simone; Pignatti, Stefano

    2009-01-01

    Different landscape elements, including archaeological remains, can be automatically classified when their spectral characteristics are different, but major difficulties occur when extracting and classifying archaeological spectral features, as archaeological remains do not have unique shape or spectral characteristics. The spectral anomaly characteristics due to buried remains depend strongly on vegetation cover and/or soil types, which can make feature extraction more complicated. For crop areas, such as the test sites selected for this study, soil and moisture changes within near-surface archaeological deposits can influence surface vegetation patterns creating spectral anomalies of various kinds. In this context, this paper analyzes the usefulness of hyperspectral imagery, in the 0.4 to 12.8 μm spectral region, to identify the optimal spectral range for archaeological prospection as a function of the dominant land cover. MIVIS airborne hyperspectral imagery acquired in five different archaeological areas located in Italy has been used. Within these archaeological areas, 97 test sites with homogenous land cover and characterized by a statistically significant number of pixels related to the buried remains have been selected. The archaeological detection potential for all MIVIS bands has been assessed by applying a Separability Index on each spectral anomaly-background system of the test sites. A scatterplot analysis of the SI values vs. the dominant land cover fractional abundances, as retrieved by spectral mixture analysis, was performed to derive the optimal spectral ranges maximizing the archaeological detection. This work demonstrates that whenever we know the dominant land cover fractional abundances in archaeological sites, we can a priori select the optimal spectral range to improve the efficiency of archaeological observations performed by remote sensing data. PMID:22573985

  6. Hyperspectral band selection based on parallel particle swarm optimization and impurity function band prioritization schemes

    NASA Astrophysics Data System (ADS)

    Chang, Yang-Lang; Liu, Jin-Nan; Chen, Yen-Lin; Chang, Wen-Yen; Hsieh, Tung-Ju; Huang, Bormin

    2014-01-01

    In recent years, satellite imaging technologies have resulted in an increased number of bands acquired by hyperspectral sensors, greatly advancing the field of remote sensing. Accordingly, owing to the increasing number of bands, band selection in hyperspectral imagery for dimension reduction is important. This paper presents a framework for band selection in hyperspectral imagery that uses two techniques, referred to as particle swarm optimization (PSO) band selection and the impurity function band prioritization (IFBP) method. With the PSO band selection algorithm, highly correlated bands of hyperspectral imagery can first be grouped into modules to coarsely reduce high-dimensional datasets. Then, these highly correlated band modules are analyzed with the IFBP method to finely select the most important feature bands from the hyperspectral imagery dataset. However, PSO band selection is a time-consuming procedure when the number of hyperspectral bands is very large. Hence, this paper proposes a parallel computing version of PSO, namely parallel PSO (PPSO), using a modern graphics processing unit (GPU) architecture with NVIDIA's compute unified device architecture technology to improve the computational speed of PSO processes. The natural parallelism of the proposed PPSO lies in the fact that each particle can be regarded as an independent agent. Parallel computation benefits the algorithm by providing each agent with a parallel processor. The intrinsic parallel characteristics embedded in PPSO are, therefore, suitable for parallel computation. The effectiveness of the proposed PPSO is evaluated through the use of airborne visible/infrared imaging spectrometer hyperspectral images. The performance of PPSO is validated using the supervised K-nearest neighbor classifier. The experimental results demonstrate that the proposed PPSO/IFBP band selection method can not only improve computational speed, but also offer a satisfactory classification performance.

  7. Selection of a site for the DUMAND detector with optimal water transparency

    NASA Astrophysics Data System (ADS)

    Karabashev, G. S.; Kuleshov, A. F.

    1989-06-01

    With reference to selecting a site for the DUMAND detector with optimal water transparency, measurements were made of the spectral distribution of light attenuation coefficients in samples from different bodies of water, including Lake Baikal, the Atlantic Ocean, and the Mediterranean Sea. Results on the detection efficiency of Cerenkov radiation by the DUMAND detector indicate that not only the abyssal waters of the open ocean but also depressions of land-locked seas have optical properties suitable for the operation of the DUMAND detector.

  8. Screening and selection of synthetic peptides for a novel and optimized endotoxin detection method.

    PubMed

    Mujika, M; Zuzuarregui, A; Sánchez-Gómez, S; Martínez de Tejada, G; Arana, S; Pérez-Lorenzo, E

    2014-09-30

    The current validated endotoxin detection methods, in spite of being highly sensitive, present several drawbacks in terms of reproducibility, handling and cost. Therefore novel approaches are being carried out in the scientific community to overcome these difficulties. Remarkable efforts are focused on the development of endotoxin-specific biosensors. The key feature of these solutions relies on the proper definition of the capture protocol, especially of the bio-receptor or ligand. The aim of the presented work is the screening and selection of a synthetic peptide specifically designed for LPS detection, as well as the optimization of a procedure for its immobilization onto gold substrates for further application to biosensors. PMID:25034430

  9. Optimization of the excitation light sheet in selective plane illumination microscopy.

    PubMed

    Gao, Liang

    2015-03-01

    Selective plane illumination microscopy (SPIM) allows rapid 3D live fluorescence imaging on biological specimens with high 3D spatial resolution, good optical sectioning capability and minimal photobleaching and phototoxic effect. SPIM gains its advantage by confining the excitation light near the detection focal plane, and its performance is determined by the ability to create a thin, large and uniform excitation light sheet. Several methods have been developed to create such an excitation light sheet for SPIM. However, each method has its own strengths and weaknesses, and tradeoffs must be made among different aspects in SPIM imaging. In this work, we present a strategy to select the excitation light sheet among the latest SPIM techniques, and to optimize its geometry based on spatial resolution, field of view, optical sectioning capability, and the sample to be imaged. Besides the light sheets discussed in this work, the proposed strategy is also applicable to estimate the SPIM performance using other excitation light sheets. PMID:25798312

  10. Induction motor fault diagnosis based on the k-NN and optimal feature selection

    NASA Astrophysics Data System (ADS)

    Nguyen, Ngoc-Tu; Lee, Hong-Hee

    2010-09-01

    The k-nearest neighbour (k-NN) rule is applied to diagnose the conditions of induction motors. The features are extracted from the time-domain vibration signals, while the optimal features are selected by a genetic algorithm based on a distance criterion. A weight value is assigned to each feature to help select the best-quality features. To improve the classification performance of the k-NN rule, each of the k neighbours is weighted by a factor based on its distance to the test pattern. The proposed k-NN is compared to the conventional k-NN and to support vector machine classification to verify its performance for induction motor fault diagnosis.
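
    A bare-bones version of the classifier described above — k-NN with per-feature weights stretching the distance metric and a distance-based weight on each neighbour's vote — is sketched below in Python. The data, weights and constants are illustrative placeholders; the genetic algorithm that would supply the feature weights is not shown.

        import numpy as np

        def weighted_knn_predict(X_train, y_train, x, k=5, feat_w=None):
            """Distance-weighted k-NN with per-feature weights.

            Feature weights (from a GA in the paper) stretch informative axes;
            each neighbour then votes with weight 1 / (distance + eps).
            """
            if feat_w is None:
                feat_w = np.ones(X_train.shape[1])
            d = np.sqrt((((X_train - x) ** 2) * feat_w).sum(axis=1))
            votes = {}
            for i in np.argsort(d)[:k]:
                votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + 1e-12)
            return max(votes, key=votes.get)

        # Toy usage: two noisy clusters standing in for healthy/faulty classes.
        rng = np.random.default_rng(3)
        X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
        y = np.array([0] * 50 + [1] * 50)
        print(weighted_knn_predict(X, y, np.full(4, 1.8)))   # expected: class 1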

  11. Optimization of 1,2,5-Thiadiazole Carbamates as Potent and Selective ABHD6 Inhibitors

    PubMed Central

    Patel, Jayendra Z.; Nevalainen, Tapio J.; Savinainen, Juha R.; Adams, Yahaya; Laitinen, Tuomo; Runyon, Robert S.; Vaara, Miia; Ahenkorah, Stephen; Kaczor, Agnieszka A.; Navia-Paldanius, Dina; Gynther, Mikko; Aaltonen, Niina; Joharapurkar, Amit A.; Jain, Mukul R.; Haka, Abigail S.; Maxfield, Frederick R.; Laitinen, Jarmo T.; Parkkari, Teija

    2015-01-01

    At present, inhibitors of α/β-hydrolase domain 6 (ABHD6) are viewed as a promising approach to treat inflammation and metabolic disorders. This article describes the optimization of 1,2,5-thiadiazole carbamates as ABHD6 inhibitors. Altogether, 34 compounds were synthesized and their inhibitory activity was tested using lysates of HEK293 cells transiently expressing human ABHD6 (hABHD6). Among the compound series, 4-morpholino-1,2,5-thiadiazol-3-yl cyclooctyl(methyl)carbamate (JZP-430, 55) potently and irreversibly inhibited hABHD6 (IC50 44 nM) and showed good selectivity (∼230 fold) over fatty acid amide hydrolase (FAAH) and lysosomal acid lipase (LAL), the main off-targets of related compounds. Additionally, activity-based protein profiling (ABPP) indicated that compound 55 (JZP-430) displayed good selectivity among the serine hydrolases of mouse brain membrane proteome. PMID:25504894

  12. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.

  13. Strategies to optimize shock wave lithotripsy outcome: Patient selection and treatment parameters

    PubMed Central

    Semins, Michelle Jo; Matlaga, Brian R

    2015-01-01

    Shock wave lithotripsy (SWL) was introduced in 1980, modernizing the treatment of upper urinary tract stones, and quickly became the most commonly utilized technique to treat kidney stones. Over the past 5-10 years, however, use of SWL has been declining because it is not as reliably effective as more modern technology. SWL success rates vary considerably and there is abundant literature predicting outcome based on patient- and stone-specific parameters. Herein we discuss the ways to optimize SWL outcomes by reviewing proper patient selection utilizing stone characteristics and patient features. Stone size, number, location, density, composition, and patient body habitus and renal anatomy are all discussed. We also review the technical parameters during SWL that can be controlled to improve results further, including type of anesthesia, coupling, shock wave rate, focal zones, pressures, and active monitoring. Following these basic principles and selection criteria will help maximize success rate. PMID:25949936

  14. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for detection of land and water features due to the information contained in reflectance signatures such as the bi-directional reflectance distribution function or BRDF. The reflectance signature directly shows the relative absorption and backscattering features of targets. These features can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. In real-time detection applications, processing of hyperspectral data can be an important tool, and optimal band selection is thus needed in order to select the essential bands using the absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of the contrast (using only one band out of all possible combinations available within a hyperspectral image) is generalized in order to consider all possible combinations of wavelength-dependent contrasts in hyperspectral images. The inflection (defined here as an approximation of the second derivative) is also used to enhance the variations in the reflectance spectra, as well as in the contrast spectra, in order to assist in optimal band selection. The results of the selection in terms of target detection (false alarms and missed detections) are also compared with a previous feature detection method, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel. The sensor is mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore
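
    A simplified rendering of the band-selection idea, on invented data: compute a contrast score between target and background pixel sets for every single band, then generalize to band pairs by scoring band-difference images. The cube, the "weathered oil" patch and the Michelson-style contrast definition are placeholder assumptions, and the inflection (second-derivative) enhancement is omitted.

        import numpy as np

        def band_contrast(target_px, background_px):
            """Michelson-style contrast per band between two pixel sets."""
            t, b = target_px.mean(axis=0), background_px.mean(axis=0)
            return np.abs(t - b) / (t + b + 1e-12)

        # Hypothetical cube: 100 x 100 pixels x 60 bands with a weak target patch.
        rng = np.random.default_rng(7)
        cube = 0.2 + 0.3 * rng.random((100, 100, 60))
        cube[40:45, 40:45, 25:30] += 0.15            # "weathered oil" signature
        target = cube[40:45, 40:45, :].reshape(-1, 60)
        bg = cube[:20, :20, :].reshape(-1, 60)

        c = band_contrast(target, bg)
        print("best single band:", int(c.argmax()))

        # Generalization over band pairs: score band-difference images instead.
        pair = np.abs((target[:, :, None] - target[:, None, :]).mean(axis=0)
                      - (bg[:, :, None] - bg[:, None, :]).mean(axis=0))
        i, j = np.unravel_index(pair.argmax(), pair.shape)
        print("best band pair:", int(i), int(j))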

  15. An experimental and theoretical investigation of a fuel system tuner for the suppression of combustion driven oscillations

    NASA Astrophysics Data System (ADS)

    Scarborough, David E.

    Manufacturers of commercial, power-generating, gas turbine engines continue to develop combustors that produce lower emissions of nitrogen oxides (NO x) in order to meet the environmental standards of governments around the world. Lean, premixed combustion technology is one technique used to reduce NOx emissions in many current power and energy generating systems. However, lean, premixed combustors are susceptible to thermo-acoustic oscillations, which are pressure and heat-release fluctuations that occur because of a coupling between the combustion process and the natural acoustic modes of the system. These pressure oscillations lead to premature failure of system components, resulting in very costly maintenance and downtime. Therefore, a great deal of work has gone into developing methods to prevent or eliminate these combustion instabilities. This dissertation presents the results of a theoretical and experimental investigation of a novel Fuel System Tuner (FST) used to damp detrimental combustion oscillations in a gas turbine combustor by changing the fuel supply system impedance, which controls the amplitude and phase of the fuel flowrate. When the FST is properly tuned, the heat release oscillations resulting from the fuel-air ratio oscillations damp, rather than drive, the combustor acoustic pressure oscillations. A feasibility study was conducted to prove the validity of the basic idea and to develop some basic guidelines for designing the FST. Acoustic models for the subcomponents of the FST were developed, and these models were experimentally verified using a two-microphone impedance tube. Models useful for designing, analyzing, and predicting the performance of the FST were developed and used to demonstrate the effectiveness of the FST. Experimental tests showed that the FST reduced the acoustic pressure amplitude of an unstable, model, gas-turbine combustor over a wide range of operating conditions and combustor configurations. Finally, combustor

  16. Small sample training and test selection method for optimized anomaly detection algorithms in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2012-01-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in HSI by considering specific image characteristics as noise variables within the context of RPD; these characteristics include Fisher's score, the ratio of target pixels and the number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but they still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small sample training and test selection method is contrasted with randomly selected training sets, as well as with training sets chosen by the CADEX and DUPLEX algorithms, for the well-known Reed-Xiaoli anomaly detector.

  17. Optimal sequence selection in proteins of known structure by simulated evolution.

    PubMed Central

    Hellinga, H W; Richards, F M

    1994-01-01

    Rational design of protein structure requires the identification of optimal sequences to carry out a particular function within a given backbone structure. A general solution to this problem requires that a potential function describing the energy of the system as a function of its atomic coordinates be minimized simultaneously over all available sequences and their three-dimensional atomic configurations. Here we present a method that explicitly minimizes a semiempirical potential function simultaneously in these two spaces, using a simulated annealing approach. The method takes the fixed three-dimensional coordinates of a protein backbone and stochastically generates possible sequences through the introduction of random mutations. The corresponding three-dimensional coordinates are constructed for each sequence by "redecorating" the backbone coordinates of the original structure with the corresponding side chains. These are then allowed to vary in their structure by random rotations around free torsional angles to generate a stochastic walk in configurational space. We have named this method protein simulated evolution, because, in loose analogy with natural selection, it randomly selects for allowed solutions in the sequence of a protein subject to the "selective pressure" of a potential function. Energies predicted by this method for sequences of a small group of residues in the hydrophobic core of the phage lambda cI repressor correlate well with experimentally determined biological activities. This "genetic selection by computer" approach has potential applications in protein engineering, rational protein design, and structure-based drug discovery. PMID:8016069
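
    A toy sketch of the simulated-evolution loop is given below (the energy function here is a stand-in; the paper minimizes a semiempirical atomic potential over redecorated side-chain coordinates). Each step proposes either a random mutation or a random torsional rotation and applies a Metropolis acceptance test under a cooling schedule:

        import math, random

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def toy_energy(seq, torsions):
            # Hypothetical stand-in for the semiempirical potential.
            return sum((ord(a) % 7) * 0.1 for a in seq) + sum(t * t for t in torsions)

        def simulated_evolution(seq, torsions, steps=10000, t0=10.0):
            e = toy_energy(seq, torsions)
            for step in range(steps):
                temp = t0 * (1.0 - step / steps) + 1e-3          # linear cooling
                s, t = list(seq), list(torsions)
                if random.random() < 0.5:                        # random mutation...
                    s[random.randrange(len(s))] = random.choice(AMINO_ACIDS)
                else:                                            # ...or torsion move
                    t[random.randrange(len(t))] += random.gauss(0.0, 0.3)
                e_new = toy_energy("".join(s), t)
                if e_new < e or random.random() < math.exp((e - e_new) / temp):
                    seq, torsions, e = "".join(s), t, e_new      # Metropolis accept
            return seq, torsions, e

        best_seq, best_torsions, best_e = simulated_evolution("ILVFAMW", [0.0] * 7)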

  18. An Ant Colony Optimization Based Feature Selection for Web Page Classification

    PubMed Central

    2014-01-01

    The increased popularity of the web has caused a huge amount of information to be added to it, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, in order to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods. PMID:25136678
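
    The following is a hedged sketch of ACO-style feature selection (the pheromone update rule and the black-box score() standing in for the C4.5 / naive Bayes / k-NN evaluation are assumptions, not the authors' implementation):

        import numpy as np

        def aco_feature_select(score, n_features, subset_size,
                               n_ants=20, n_iters=30, rho=0.1, seed=0):
            """score(subset) -> accuracy in [0, 1]; subset is an index array."""
            rng = np.random.default_rng(seed)
            pheromone = np.ones(n_features)
            best, best_s = None, -1.0
            for _ in range(n_iters):
                for _ant in range(n_ants):
                    p = pheromone / pheromone.sum()        # trail attractiveness
                    subset = rng.choice(n_features, size=subset_size,
                                        replace=False, p=p)
                    s = score(subset)
                    if s > best_s:
                        best, best_s = subset, s
                pheromone *= 1.0 - rho                     # evaporation
                pheromone[best] += rho * best_s            # reinforce best trail
            return best, best_s

        # Toy run: features 0-9 are "informative", the rest are noise.
        acc = lambda subset: float(np.mean(subset < 10))
        features, accuracy = aco_feature_select(acc, n_features=100, subset_size=10)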

  19. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused a huge amount of information to be added to it, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, in order to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods. PMID:25136678

  20. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection.

    PubMed

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J; Baufreton, Jérôme

    2016-07-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes' equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called 'prototypic' and 'arkypallidal' neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780
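
    The normalization term in question is the denominator of Bayes' rule, P(A_i | e) = P(e | A_i) P(A_i) / Σ_j P(e | A_j) P(A_j); a short numerical illustration (the action priors and likelihoods are hypothetical) follows:

        import numpy as np

        prior = np.array([0.25, 0.25, 0.50])        # hypothetical priors, 3 actions
        likelihood = np.array([0.90, 0.30, 0.10])   # hypothetical sensory evidence
        evidence = np.sum(likelihood * prior)       # normalization term attributed
        posterior = likelihood * prior / evidence   # to the GPe-STN feedback loop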

  1. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780

  2. Discovery of a potent class I selective ketone histone deacetylase inhibitor with antitumor activity in vivo and optimized pharmacokinetic properties.

    PubMed

    Kinzel, Olaf; Llauger-Bufi, Laura; Pescatore, Giovanna; Rowley, Michael; Schultz-Fademrecht, Carsten; Monteagudo, Edith; Fonsi, Massimiliano; Gonzalez Paz, Odalys; Fiore, Fabrizio; Steinkühler, Christian; Jones, Philip

    2009-06-11

    The optimization of a potent, class I selective ketone HDAC inhibitor is shown. It possesses optimized pharmacokinetic properties in preclinical species, has a clean off-target profile, and is negative in a microbial mutagenicity (Ames) test. In a mouse xenograft model it shows efficacy comparable to that of vorinostat at a 10-fold reduced dose. PMID:19441846

  3. Optimizing the StackSlide setup and data selection for continuous-gravitational-wave searches in realistic detector data

    NASA Astrophysics Data System (ADS)

    Shaltev, M.

    2016-02-01

    The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.
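
    A toy contrast between a greedy and a compact data selection rule is sketched below (purely illustrative: segments are scored by an assumed per-segment noise level, whereas the real algorithms work with the full sensitivity model):

        import numpy as np

        rng = np.random.default_rng(0)
        noise = rng.uniform(0.5, 2.0, size=100)       # noise floor of 100 segments

        def greedy_select(noise, n):
            return np.sort(np.argsort(noise)[:n])     # n quietest segments, gaps allowed

        def compact_select(noise, n):
            # Contiguous span of n segments with the lowest summed noise.
            sums = np.convolve(noise, np.ones(n), mode="valid")
            start = int(np.argmin(sums))
            return np.arange(start, start + n)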

  4. Optimization of the Dutch Matrix Test by Random Selection of Sentences From a Preselected Subset

    PubMed Central

    Dreschler, Wouter A.

    2015-01-01

    Matrix tests are available for speech recognition testing in many languages. For an accurate measurement, a steep psychometric function of the speech materials is required. For existing tests, it would be beneficial if it were possible to further optimize the available materials by increasing the function’s steepness. The objective is to show if the steepness of the psychometric function of an existing matrix test can be increased by selecting a homogeneous subset of recordings with the steepest sentence-based psychometric functions. We took data from a previous multicenter evaluation of the Dutch matrix test (45 normal-hearing listeners). Based on half of the data set, first the sentences (140 out of 311) with a similar speech reception threshold and with the steepest psychometric function (≥9.7%/dB) were selected. Subsequently, the steepness of the psychometric function for this selection was calculated from the remaining (unused) second half of the data set. The calculation showed that the slope increased from 10.2%/dB to 13.7%/dB. The resulting subset did not allow the construction of enough balanced test lists. Therefore, the measurement procedure was changed to randomly select the sentences during testing. Random selection may interfere with a representative occurrence of phonemes. However, in our material, the median phonemic occurrence remained close to that of the original test. This finding indicates that phonemic occurrence is not a critical factor. The work highlights the possibility that existing speech tests might be improved by selecting sentences with a steep psychometric function. PMID:25964195
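
    As an illustration of the selection criterion, the sketch below fits a sentence-level logistic psychometric function to assumed (SNR, proportion-correct) data and keeps the sentence if the slope at the SRT reaches the 9.7%/dB threshold:

        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(snr, srt, slope):
            # Logistic whose gradient at the 50% point equals 'slope' (1/dB).
            return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt)))

        snr = np.array([-12.0, -10.0, -8.0, -6.0, -4.0])
        p_correct = np.array([0.05, 0.20, 0.55, 0.85, 0.97])   # illustrative data
        (srt, slope), _ = curve_fit(psychometric, snr, p_correct, p0=(-8.0, 0.1))
        keep_sentence = slope * 100.0 >= 9.7                   # threshold in %/dB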

  5. Pareto archived dynamically dimensioned search with hypervolume-based selection for multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Masoud; Tolson, Bryan

    2013-12-01

    Pareto archived dynamically dimensioned search (PA-DDS) is a parsimonious multi-objective optimization algorithm with only one parameter to diminish the user's effort for fine-tuning algorithm parameters. This study demonstrates that hypervolume contribution (HVC) is a very effective selection metric for PA-DDS and Monte Carlo sampling-based HVC is very effective for higher dimensional problems (five objectives in this study). PA-DDS with HVC performs comparably to algorithms commonly applied to water resources problems (ɛ-NSGAII and AMALGAM under recommended parameter values). Comparisons on the CEC09 competition show that with sufficient computational budget, PA-DDS with HVC performs comparably to 13 benchmark algorithms and shows improved relative performance as the number of objectives increases. Lastly, it is empirically demonstrated that the total optimization runtime of PA-DDS with HVC is dominated (90% or higher) by solution evaluation runtime whenever evaluation exceeds 10 seconds/solution. Therefore, optimization algorithm runtime associated with the unbounded archive of PA-DDS is negligible in solving computationally intensive problems.
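
    A sketch of Monte Carlo hypervolume-contribution estimation for a non-dominated archive (minimization; the sample count and reference point are illustrative) is given below. Each member's HVC is the objective-space volume it alone dominates:

        import numpy as np

        def hvc_monte_carlo(archive, ref_point, n_samples=100_000, seed=0):
            rng = np.random.default_rng(seed)
            A = np.asarray(archive, float)
            ref = np.asarray(ref_point, float)
            lo = A.min(axis=0)
            pts = rng.uniform(lo, ref, size=(n_samples, A.shape[1]))
            dom = (pts[:, None, :] >= A[None, :, :]).all(axis=2)  # dominated by member?
            counts = dom.sum(axis=1)
            exclusive = dom & (counts == 1)[:, None]   # dominated by exactly one member
            box_vol = float(np.prod(ref - lo))
            return exclusive.mean(axis=0) * box_vol    # HVC estimate per member

        # Two-objective example: three trade-off solutions, reference point (1, 1).
        print(hvc_monte_carlo([(0.1, 0.8), (0.4, 0.4), (0.8, 0.1)], (1.0, 1.0)))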

  6. Optimization strategies for fast detection of positive selection on phylogenetic trees

    PubMed Central

    Valle, Mario; Schabauer, Hannes; Pacher, Christoph; Stockinger, Heinz; Stamatakis, Alexandros; Robinson-Rechavi, Marc; Salamin, Nicolas

    2014-01-01

    Motivation: The detection of positive selection is widely used to study gene and genome evolution, but its application remains limited by the high computational cost of existing implementations. We present a series of computational optimizations for more efficient estimation of the likelihood function on large-scale phylogenetic problems. We illustrate our approach using the branch-site model of codon evolution. Results: We introduce novel optimization techniques that substantially outperform both CodeML from the PAML package and our previously optimized sequential version SlimCodeML. These techniques can also be applied to other likelihood-based phylogeny software. Our implementation scales well for large numbers of codons and/or species. It can therefore analyse substantially larger datasets than CodeML. We evaluated FastCodeML on different platforms and measured average sequential speedups of FastCodeML (single-threaded) versus CodeML of up to 5.8, average speedups of FastCodeML (multi-threaded) versus CodeML on a single node (shared memory) of up to 36.9 for 12 CPU cores, and average speedups of the distributed FastCodeML versus CodeML of up to 170.9 on eight nodes (96 CPU cores in total). Availability and implementation: ftp://ftp.vital-it.ch/tools/FastCodeML/. Contact: selectome@unil.ch or nicolas.salamin@unil.ch PMID:24389654

  7. Selection of optimal oligonucleotide probes for microarrays using multiple criteria, global alignment and parameter estimation

    PubMed Central

    Li, Xingyuan; He, Zhili; Zhou, Jizhong

    2005-01-01

    The oligonucleotide specificity for microarray hybridization can be predicted by its sequence identity to non-targets, continuous stretch to non-targets, and/or binding free energy to non-targets. Most currently available programs only use one or two of these criteria, which may choose ‘false’ specific oligonucleotides or miss ‘true’ optimal probes in a considerable proportion. We have developed a software tool, called CommOligo using new algorithms and all three criteria for selection of optimal oligonucleotide probes. A series of filters, including sequence identity, free energy, continuous stretch, GC content, self-annealing, distance to the 3′-untranslated region (3′-UTR) and melting temperature (Tm), are used to check each possible oligonucleotide. A sequence identity is calculated based on gapped global alignments. A traversal algorithm is used to generate alignments for free energy calculation. The optimal Tm interval is determined based on probe candidates that have passed all other filters. Final probes are picked using a combination of user-configurable piece-wise linear functions and an iterative process. The thresholds for identity, stretch and free energy filters are automatically determined from experimental data by an accessory software tool, CommOligo_PE (CommOligo Parameter Estimator). The program was used to design probes for both whole-genome and highly homologous sequence data. CommOligo and CommOligo_PE are freely available to academic users upon request. PMID:16246912
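
    Schematically, the filter chain can be pictured as below (the thresholds are illustrative only; in the real tool, CommOligo_PE estimates the identity, stretch, and free-energy cutoffs from experimental data):

        def gc_content(seq):
            return (seq.count("G") + seq.count("C")) / len(seq)

        def passes_filters(probe, identity, stretch, free_energy, tm):
            return (identity < 0.75 and              # max identity to any non-target
                    stretch < 15 and                 # max contiguous match, bases
                    free_energy > -35.0 and          # kcal/mol against non-targets
                    0.40 <= gc_content(probe) <= 0.60 and
                    68.0 <= tm <= 75.0)              # melting-temperature interval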

  8. Design-Optimization and Material Selection for a Proximal Radius Fracture-Fixation Implant

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Xie, X.; Arakere, G.; Grujicic, A.; Wagner, D. W.; Vallejo, A.

    2010-11-01

    The problem of optimal size, shape, and placement of a proximal radius-fracture fixation-plate is addressed computationally using a combined finite-element/design-optimization procedure. To expand the set of physiological loading conditions experienced by the implant during normal everyday activities of the patient, beyond those typically covered by the pre-clinical implant-evaluation testing procedures, the case of a wheelchair push exertion is considered. Toward that end, a musculoskeletal multi-body inverse-dynamics analysis of a human propelling a wheelchair is carried out. The results obtained are used as input to a finite-element structural analysis for evaluation of the maximum stress and fatigue life of the parametrically defined implant design. While optimizing the design of the radius-fracture fixation-plate, realistic functional requirements pertaining to the attainment of the required level of the device safety factor and longevity/lifecycle were considered. It is argued that the type of analyses employed in the present work should be: (a) used to complement the standard experimental pre-clinical implant-evaluation tests (tests which normally include a limited number of daily-living physiological loading conditions and which rely on single pass/fail outcomes/decisions with respect to a set of lower-bound implant-performance criteria) and (b) integrated early in the implant design and material/manufacturing-route selection process.

  9. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    PubMed

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236
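
    A minimal binary-PSO sketch for the feature-selection part follows (assumed details: a black-box fitness(mask) returning classifier accuracy, and the standard sigmoid discretization of velocities; the classifier-parameter encoding is omitted):

        import numpy as np

        def binary_pso(fitness, n_features, n_particles=20, n_iters=50,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.integers(0, 2, size=(n_particles, n_features)).astype(float)
            V = rng.normal(0.0, 1.0, size=(n_particles, n_features))
            pbest = X.copy()
            pbest_f = np.array([fitness(x > 0.5) for x in X])
            g = pbest[np.argmax(pbest_f)].copy()
            for _ in range(n_iters):
                r1, r2 = rng.random(X.shape), rng.random(X.shape)
                V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
                X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
                f = np.array([fitness(x > 0.5) for x in X])
                improved = f > pbest_f
                pbest[improved], pbest_f[improved] = X[improved], f[improved]
                g = pbest[np.argmax(pbest_f)].copy()
            return g > 0.5, float(pbest_f.max())

        # Toy usage: reward masks that keep the first 5 features and stay small.
        fit = lambda m: float(m[:5].sum()) - 0.05 * float(m.sum())
        mask, best = binary_pso(fit, n_features=30)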

  10. Feature Selection and Classifier Parameters Estimation for EEG Signals Peak Detection Using Particle Swarm Optimization

    PubMed Central

    Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236

  11. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale-invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compound strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to learn each subregion's weight, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity. PMID:24683317
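
    The fusion step reduces to a weighted sum of subregion matching scores; a small sketch (fixed example weights standing in for the PSO-learned ones) follows:

        import numpy as np

        def fused_score(subregion_scores, weights):
            w = np.asarray(weights, float)
            w = w / w.sum()                        # normalize the learned weights
            return float(np.dot(w, subregion_scores))

        score = fused_score([0.82, 0.65, 0.91, 0.40], weights=[2.0, 1.0, 3.0, 0.5])
        match = score > 0.7                        # illustrative decision threshold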

  12. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale-invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compound strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to learn each subregion's weight, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity. PMID:24683317

  13. Optimal energy window selection of a CZT-based small-animal SPECT for quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Park, Su-Jin; Yu, A. Ram; Choi, Yun Young; Kim, Kyeong Min; Kim, Hee-Joung

    2015-05-01

    Cadmium zinc telluride (CZT)-based small-animal single-photon emission computed tomography (SPECT) has desirable characteristics such as superior energy resolution, but data acquisition for SPECT imaging has been widely performed with a conventional energy window. The aim of this study was to determine the optimal energy window settings for technetium-99m (99mTc) and thallium-201 (201Tl), the most commonly used isotopes in SPECT imaging, using CZT-based small-animal SPECT for quantitative accuracy. We experimentally investigated quantitative measurements with respect to primary count rate, contrast-to-noise ratio (CNR), and scatter fraction (SF) for various energy window settings using Triumph X-SPECT. Two types of energy window settings were considered: an on-peak window and an off-peak window. In the on-peak window setting, energy centers were set on the photopeaks. In the off-peak window setting, the ratio of the energy differences between the photopeak and the lower and upper thresholds varied from 4:6 to 3:7. In addition, the energy-window width for 99mTc varied from 5% to 20%, and that for 201Tl varied from 10% to 30%. The results of this study enabled us to determine the optimal energy windows for each isotope in terms of primary count rate, CNR, and SF. We selected the optimal energy window that increases the primary count rate and CNR while decreasing SF. For 99mTc SPECT imaging, the energy window of 138-145 keV with a 5% width and off-peak ratio of 3:7 was determined to be the optimal energy window. For 201Tl SPECT imaging, the energy window of 64-85 keV with a 30% width and off-peak ratio of 3:7 was selected as the optimal energy window. Our results demonstrated that the proper energy window should be carefully chosen based on quantitative measurements in order to take advantage of the desirable characteristics of CZT-based small-animal SPECT. These results provided valuable reference information for the establishment of new protocol for CZT
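
    The off-peak window arithmetic can be made explicit as below (the convention that the window width is a percentage of the photopeak energy, split lower:upper by the off-peak ratio, is an assumption; it reproduces the reported 138-145 keV window for 99mTc):

        def energy_window(photopeak_kev, width_pct, lower_frac, upper_frac):
            width = photopeak_kev * width_pct / 100.0
            total = lower_frac + upper_frac
            lo = photopeak_kev - width * lower_frac / total
            hi = photopeak_kev + width * upper_frac / total
            return lo, hi

        # 99mTc photopeak at 140.5 keV, 5% width, 3:7 split -> about (138.4, 145.4).
        print(energy_window(140.5, 5.0, 3, 7))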

  14. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    PubMed

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-09-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
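
    A toy two-dimensional correlated random walk of the kind evaluated here is easy to state (the turn-angle spread and speed distribution below are illustrative, not fitted values):

        import numpy as np

        def simulate_crw(n_steps, mean_speed=5.0, turn_sd=0.4, seed=0):
            rng = np.random.default_rng(seed)
            heading, pos = 0.0, np.zeros(2)
            track = [pos.copy()]
            for _ in range(n_steps):
                heading += rng.normal(0.0, turn_sd)    # persistence via small turns
                speed = rng.lognormal(np.log(mean_speed), 0.3)
                pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
                track.append(pos.copy())
            return np.array(track)

        track = simulate_crw(200)
        # Meandering index: net displacement divided by total path length.
        meander = np.linalg.norm(track[-1] - track[0]) / np.sum(
            np.linalg.norm(np.diff(track, axis=0), axis=1))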

  15. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    PubMed Central

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin labels pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624
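
    The placement rule can be sketched as follows (hypothetical helix spans; the real algorithm works from the actual secondary-structure assignment): for every pair of elements, take the residue pair with the largest sequence separation, which enforces pairwise connectivity while maximizing separation:

        from itertools import combinations

        # Hypothetical helix spans (residue ranges) for a small helical protein.
        helices = {"H1": range(1, 12), "H2": range(20, 33), "H3": range(40, 52)}

        restraints = []
        for (na, ra), (nb, rb) in combinations(helices.items(), 2):
            i, j = max(((i, j) for i in ra for j in rb),
                       key=lambda p: abs(p[0] - p[1]))
            restraints.append((na, i, nb, j))  # one max-separation pair per element pair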

  16. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods.

    PubMed

    Maximov, Ivan I; Vinding, Mads S; Tse, Desmond H Y; Nielsen, Niels Chr; Shah, N Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two are gradient-based, concurrent methods using first- and second-order derivatives, respectively; the third belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community. PMID:25863895

  17. Optimization methods for selecting founder individuals for captive breeding or reintroduction of endangered species.

    PubMed

    Miller, Webb; Wright, Stephen J; Zhang, Yu; Schuster, Stephan C; Hayes, Vanessa M

    2010-01-01

    Methods from genetics and genomics can be employed to help save endangered species. One potential use is to provide a rational strategy for selecting a population of founders for a captive breeding program. The hope is to capture most of the available genetic diversity that remains in the wild population, to provide a safe haven where representatives of the species can be bred, and eventually to release the progeny back into the wild. However, founders are often selected based on a random-sampling strategy whose validity rests on unrealistic assumptions. Here we outline an approach that starts by using cutting-edge genome sequencing and genotyping technologies to objectively assess the available genetic diversity. We show how combinatorial optimization methods can be applied to these data to guide the selection of the founder population. In particular, we develop a mixed-integer linear programming technique that identifies a set of animals whose genetic profile is as close as possible to specified abundances of alleles (i.e., genetic variants), subject to constraints on the number of founders and their genders and ages. PMID:19908356
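
    The core idea translates naturally into a small mixed-integer linear program; the sketch below (using the PuLP modeller, with the gender and age constraints omitted, so an assumption-laden simplification rather than the authors' formulation) picks founders whose pooled allele counts track target abundances:

        import pulp

        def select_founders(genotypes, targets, n_founders):
            # genotypes[i][k]: allele count (0/1/2) of animal i at locus k.
            n, m = len(genotypes), len(targets)
            prob = pulp.LpProblem("founder_selection", pulp.LpMinimize)
            pick = [pulp.LpVariable(f"pick_{i}", cat="Binary") for i in range(n)]
            dev = [pulp.LpVariable(f"dev_{k}", lowBound=0) for k in range(m)]
            prob += pulp.lpSum(dev)                    # total absolute deviation
            prob += pulp.lpSum(pick) == n_founders     # founder-count constraint
            for k in range(m):
                pooled = pulp.lpSum(genotypes[i][k] * pick[i] for i in range(n))
                prob += pooled - targets[k] <= dev[k]  # linearized |pooled - target|
                prob += targets[k] - pooled <= dev[k]
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            return [i for i in range(n) if pick[i].value() == 1]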

  18. Real-time 2D spatially selective MRI experiments: Comparative analysis of optimal control design methods

    NASA Astrophysics Data System (ADS)

    Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon

    2015-05-01

    There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two are gradient-based, concurrent methods using first- and second-order derivatives, respectively; the third belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach, as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.

  19. Resonance Raman enhancement optimization in the visible range by selecting different excitation wavelengths

    NASA Astrophysics Data System (ADS)

    Wang, Zhong; Li, Yuee

    2015-09-01

    Resonance enhancement of Raman spectroscopy (RS) has been used to significantly improve the sensitivity and selectivity of detection for specific components in complicated environments. Resonance RS gives more insight into biochemical structure and reactivity. In this field, selecting a proper excitation wavelength to achieve optimal resonance enhancement is vital for the study of an individual chemical/biological ingredient with a particular absorption characteristic. Raman spectra of three azo derivatives with absorption spectra in the visible range are studied under the same experimental conditions at 488, 532, and 633 nm excitations. General rules for the visible range are derived by analyzing the resonance Raman (RR) spectra of the samples. The long-wavelength edge of the absorption spectrum is a better choice for intense enhancement and the integrity of the Raman signal. The obtained results are valuable for applying RR to the selective detection of biochemical constituents whose electronic transitions take place at energies corresponding to the visible spectrum, which is much friendlier to biological samples than ultraviolet excitation.

  20. Model selection based on FDR-thresholding optimizing the area under the ROC-curve.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2009-01-01

    We evaluate variable selection by multiple tests controlling the false discovery rate (FDR) to build a linear score for prediction of clinical outcome in high-dimensional data. Quality of prediction is assessed by the receiver operating characteristic curve (ROC) for prediction in independent patients. Thus we try to combine both goals: prediction and controlled structure estimation. We show that the FDR-threshold which provides the ROC-curve with the largest area under the curve (AUC) varies largely over the different parameter constellations not known in advance. Hence, we investigated a new cross validation procedure based on the maximum rank correlation estimator to determine the optimal selection threshold. This procedure (i) allows choosing an appropriate selection criterion, (ii) provides an estimate of the FDR close to the true FDR and (iii) is simple and computationally feasible for rather moderate to small sample sizes. Low estimates of the cross validated AUC (the estimates generally being positively biased) and large estimates of the cross validated FDR may indicate a lack of sufficiently prognostic variables and/or too small sample sizes. The method is applied to an oncology dataset. PMID:19572830
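
    The selection step itself is standard Benjamini-Hochberg FDR thresholding; an implementation sketch (not the paper's code) is:

        import numpy as np

        def bh_select(p_values, fdr_level):
            p = np.asarray(p_values, float)
            m = len(p)
            order = np.argsort(p)
            thresh = fdr_level * np.arange(1, m + 1) / m    # step-up boundaries
            passed = p[order] <= thresh
            k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
            return order[:k]                                # selected variable indices

        selected = bh_select([0.001, 0.02, 0.04, 0.30, 0.75], fdr_level=0.05)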

  1. Optimal feature selection for automated classification of FDG-PET in patients with suspected dementia

    NASA Astrophysics Data System (ADS)

    Serag, Ahmed; Wenzel, Fabian; Thiele, Frank; Buchert, Ralph; Young, Stewart

    2009-02-01

    FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as Alzheimer's disease (AD), Lewy body dementia (LBD), and frontotemporal dementia (FTD), have been shown to induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with suspected dementia is not straightforward, since patients are imaged at different stages of progression of neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal subset of features for automated classification, in order to improve the classification accuracy of FDG-PET images in patients with suspected dementia. A novel feature selection method is proposed, and its performance is compared to existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection methods. This is demonstrated to provide high classification accuracy for FDG-PET brain images of normal controls and dementia patients, comparable with alternative approaches, while providing a compact set of selected features.

  2. Energetic optimization of ion conduction rate by the K+ selectivity filter

    NASA Astrophysics Data System (ADS)

    Morais-Cabral, João H.; Zhou, Yufeng; MacKinnon, Roderick

    2001-11-01

    The K+ selectivity filter catalyses the dehydration, transfer and rehydration of a K+ ion in about ten nanoseconds. This physical process is central to the production of electrical signals in biology. Here we show how nearly diffusion-limited rates are achieved, by analysing ion conduction and the corresponding crystallographic ion distribution in the selectivity filter of the KcsA K+ channel. Measurements with K+ and its slightly larger analogue, Rb+, lead us to conclude that the selectivity filter usually contains two K+ ions separated by one water molecule. The two ions move in a concerted fashion between two configurations, K+-water-K+-water (1,3 configuration) and water-K+-water-K+ (2,4 configuration), until a third ion enters, displacing the ion on the opposite side of the queue. For K+, the energy difference between the 1,3 and 2,4 configurations is close to zero, the condition of maximum conduction rate. The energetic balance between these configurations is a clear example of evolutionary optimization of protein function.

  3. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for selecting individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.

  4. X-ray backscatter imaging for radiography by selective detection and snapshot: Evolution, development, and optimization

    NASA Astrophysics Data System (ADS)

    Shedlock, Daniel

    Compton backscatter imaging (CBI) is a single-sided imaging technique that uses the penetrating power of radiation and unique interaction properties of radiation with matter to image subsurface features. CBI has a variety of applications that include non-destructive interrogation, medical imaging, security and military applications. Radiography by selective detection (RSD), lateral migration radiography (LMR) and shadow aperture backscatter radiography (SABR) are different CBI techniques that are being optimized and developed. Radiography by selective detection (RSD) is a pencil beam Compton backscatter imaging technique that falls between highly collimated and uncollimated techniques. Radiography by selective detection uses a combination of single- and multiple-scatter photons from a projected area below a collimation plane to generate an image. As a result, the image has a combination of first- and multiple-scatter components. RSD techniques offer greater subsurface resolution than uncollimated techniques, at speeds at least an order of magnitude faster than highly collimated techniques. RSD scanning systems have evolved from a prototype into near market-ready scanning devices for use in a variety of single-sided imaging applications. The design has changed to incorporate state-of-the-art detectors and electronics optimized for backscatter imaging with an emphasis on versatility, efficiency and speed. The RSD system has become more stable, about 4 times faster, and 60% lighter while maintaining or improving image quality and contrast over the past 3 years. A new snapshot backscatter radiography (SBR) CBI technique, shadow aperture backscatter radiography (SABR), has been developed from concept and proof-of-principle to a functional laboratory prototype. SABR radiography uses digital detection media and shaded aperture configurations to generate near-surface Compton backscatter images without scanning, similar to how transmission radiographs are taken. Finally, a

  5. [Study on optimal selection of structure of vaneless centrifugal blood pump with constraints on blood perfusion and on blood damage indexes].

    PubMed

    Hu, Zhaoyan; Pan, Youlian; Chen, Zhenglong; Zhang, Tianyi; Lu, Lijun

    2012-12-01

    This paper studies the optimal structural selection of a vaneless centrifugal blood pump. The optimization objective is determined according to the requirements of clinical use, and candidate schemes are worked out based on the structural features of vaneless centrifugal blood pumps. The optimal structure is then selected from the candidate schemes under constraints on blood perfusion and blood damage indexes, which allows the optimum scheme to be found efficiently. Numerical simulation of the optimized blood pump showed that constraining blood perfusion and blood damage in this way satisfies the requirements for selecting optimal blood pumps. PMID:23469557

  6. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of the fuzzy random variables in the objectives and constraints is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
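
    In symbols, the model has the general shape below (notation assumed: S denotes portfolio skewness, SV semivariance, E expected return, and x the asset weights):

        \begin{aligned}
        \max_{x}\quad & S(x) \\
        \text{s.t.}\quad & \mathrm{SV}(x) \le \sigma_{\max}, \qquad E(x) \ge r_{\min}, \\
        & \textstyle\sum_{i} x_i = 1, \qquad x_i \ge 0 .
        \end{aligned}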

  7. Field trials for corrosion inhibitor selection and optimization, using a new generation of electrical resistance probes

    SciTech Connect

    Ridd, B.; Blakset, T.J.; Queen, D.

    1998-12-31

    Even with today's availability of corrosion-resistant alloys, carbon steels protected by corrosion inhibitors still dominate the material selection for pipework in oil and gas production. Even though laboratory screening tests of corrosion inhibitor performance provide valuable data, the real performance of the chemical can only be studied through field trials, which provide the ultimate test to evaluate the effectiveness of an inhibitor under actual operating conditions. A new generation of electrical resistance probe has been developed, allowing highly sensitive and immediate response to changes in corrosion rates in the internal environment of production pipework. Because of its high sensitivity, the probe responds to small changes in the corrosion rate, and it provides the corrosion engineer with a highly effective method of optimizing the use of inhibitor chemicals, resulting in confidence in corrosion control and minimizing detrimental environmental effects.

  8. Testability requirement uncertainty analysis in the sensor selection and optimization model for PHM

    NASA Astrophysics Data System (ADS)

    Yang, S. M.; Qiu, J.; Liu, G. J.; Yang, P.; Zhang, Y.

    2012-05-01

    Prognostics and health management (PHM) has become an important part of guaranteeing the reliability and safety of complex systems. Design for testability (DFT), developed concurrently with system design, is considered a fundamental way to improve PHM performance, and sensor selection and optimization (SSO) is one of its important parts. To address the problem that testability requirement analysis in existing SSO models does not take the test uncertainty of actual scenarios into account, fault detection uncertainty is analyzed qualitatively from the viewpoints of fault attributes, sensor attributes, and fault-sensor matching attributes. A quantitative uncertainty analysis is then given, which assigns a rational confidence level to fault size. A case study of an electromechanical servo-controlled system demonstrates the proposed methodology, and application results show that the proposed approach is reasonable and feasible.

  9. Bone Mineral Density and Fracture Risk Assessment to Optimize Prosthesis Selection in Total Hip Replacement.

    PubMed

    Pétursson, Þröstur; Edmunds, Kyle Joseph; Gíslason, Magnús Kjartan; Magnússon, Benedikt; Magnúsdóttir, Gígja; Halldórsson, Grétar; Jónsson, Halldór; Gargiulo, Paolo

    2015-01-01

    The variability in patient outcome and propensity for surgical complications in total hip replacement (THR) necessitates the development of a comprehensive, quantitative methodology for prescribing the optimal type of prosthetic stem: cemented or cementless. The objective of the research presented herein was to describe a novel approach to this problem as a first step towards creating a patient-specific, presurgical application for determining the optimal prosthesis procedure. Finite element analysis (FEA) and bone mineral density (BMD) calculations were performed with ten voluntary primary THR patients to estimate the status of their operative femurs before surgery. A compilation model of the press-fitting procedure was generated to define a fracture risk index (FRI) from incurred forces on the periprosthetic femoral head. Comparing these values to patient age, sex, and gender elicited a high degree of variability between patients grouped by implant procedure, reinforcing the notion that age and gender alone are poor indicators for prescribing prosthesis type. Additionally, correlating FRI and BMD measurements indicated that at least two of the ten patients may have received nonideal implants. This investigation highlights the utility of our model as a foundation for presurgical software applications to assist orthopedic surgeons with selecting THR prostheses. PMID:26417376

  10. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
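
    For the unconstrained LQR baseline that such designs start from, the gain follows from the Algebraic Riccati Equation; a standard numerical sketch for a single mass-spring-damper (illustrative weights, not the paper's closed-form passive solution) is:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        m, k, c = 1.0, 4.0, 0.2                        # mass, stiffness, damping
        A = np.array([[0.0, 1.0], [-k / m, -c / m]])   # state: [position, velocity]
        B = np.array([[0.0], [1.0 / m]])
        Q = np.diag([10.0, 1.0])                       # state weighting (design choice)
        R = np.array([[1.0]])                          # control-effort weighting

        P = solve_continuous_are(A, B, Q, R)           # A'P + PA - PBR^-1B'P + Q = 0
        K = np.linalg.solve(R, B.T @ P)                # optimal feedback u = -Kx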