Kimura, Daiju; Kurisu, Yosuke; Nozaki, Dai; Yano, Keisuke; Imai, Youta; Kumakura, Sho; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki
2014-02-01
We are constructing a tandem-type ECRIS. The first stage has a large bore with a cylindrically comb-shaped magnet. We optimize the ion beam current and the ion saturation current with a mobile plate tuner; both vary with the tuner position for 2.45 GHz, 11-13 GHz, and multi-frequency operation. Their peaks occur close to the positions where the microwave mode forms a standing wave between the plate tuner and the extractor. The absorbed power is estimated for each mode. We propose a new guiding principle: the excited microwave mode should be selected so that its azimuthal mode number matches the multipole number of the comb-shaped magnets. Using the new mobile plate tuner, we obtained selective excitation of these modes to enhance ECR efficiency.
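The standing-wave condition described in this abstract can be sketched numerically: a mode above cutoff resonates when the tuner-to-extractor spacing is an integer number of half guide wavelengths. The cutoff frequency below is an illustrative assumption, not a value from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def guide_wavelength(f_hz, f_cutoff_hz):
    """Guide wavelength of a waveguide mode driven above its cutoff."""
    lam0 = C / f_hz
    return lam0 / math.sqrt(1.0 - (f_cutoff_hz / f_hz) ** 2)

def resonant_lengths(f_hz, f_cutoff_hz, max_len_m):
    """Tuner-to-extractor spacings supporting a standing wave: L = q * lambda_g / 2."""
    lam_g = guide_wavelength(f_hz, f_cutoff_hz)
    lengths = []
    q = 1
    while q * lam_g / 2.0 <= max_len_m:
        lengths.append(q * lam_g / 2.0)
        q += 1
    return lengths

# Illustrative numbers only: a 1.2 GHz mode cutoff is assumed for a
# large-bore chamber driven at 2.45 GHz, scanned over a 0.5 m tuner travel.
print([round(L, 4) for L in resonant_lengths(2.45e9, 1.2e9, 0.5)])
```

Each listed length is a plate-tuner position where that mode would peak, which is the qualitative behavior the abstract reports.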
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
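The core idea recurring in these tuner-selection abstracts, picking which parameters to estimate so that the *total* theoretical mean-squared error is minimized, can be illustrated with a deliberately simplified sketch. Here each health parameter is independent and observed by at most one sensor (the papers' multivariable iterative search over general tuner vectors is far richer than this); all numbers are hypothetical.

```python
import itertools

def posterior_var(prior_var, meas_var):
    """Scalar Bayesian update: variance after combining a prior with one measurement."""
    return 1.0 / (1.0 / prior_var + 1.0 / meas_var)

def best_tuner_subset(prior_vars, meas_vars, n_tuners):
    """Choose which health parameters to estimate (the 'tuners') so the total
    theoretical MSE over ALL parameters is minimal. Parameters left out of
    the tuner vector keep their prior variance."""
    n = len(prior_vars)
    best = None
    for subset in itertools.combinations(range(n), n_tuners):
        mse = 0.0
        for i in range(n):
            if i in subset:
                mse += posterior_var(prior_vars[i], meas_vars[i])
            else:
                mse += prior_vars[i]
        if best is None or mse < best[1]:
            best = (subset, mse)
    return best

# Toy setup (not from the paper): 5 health parameters, sensors allow 2 tuners.
prior = [4.0, 1.0, 9.0, 0.5, 2.0]   # prior variances of the health parameters
meas = [1.0, 1.0, 1.0, 1.0, 1.0]    # measurement noise variances
subset, mse = best_tuner_subset(prior, meas, 2)
print(subset, round(mse, 3))
```

The search naturally picks the parameters whose estimation buys the largest variance reduction, which is the intuition behind preferring an optimized tuner vector over an ad hoc subset of health parameters.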
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurisu, Yosuke; Kiriyama, Ryutaro; Takenaka, Tomoya
2012-02-15
We are constructing a tandem-type electron cyclotron resonance ion source (ECRIS). Its first stage can supply 2.45 GHz and 11-13 GHz microwaves to the plasma chamber individually and simultaneously. We optimize the beam current I_FC with the mobile plate tuner. I_FC is affected by the position of the mobile plate tuner in the chamber, which behaves like a circular cavity resonator. We aim to clarify the relation between I_FC and the ion saturation current in the ECRIS as a function of the mobile plate tuner position. We obtained the result that the variation of the plasma density contributes largely to the variation of I_FC when we change the position of the mobile plate tuner.
Test of a coaxial blade tuner at HTS FNAL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pischalnikov, Y.; Barbanotti, S.; Harms, E.
2011-03-01
A coaxial blade tuner has been selected for the 1.3 GHz SRF cavities of the Fermilab SRF Accelerator Test Facility. Results from tuner cold tests in the Fermilab Horizontal Test Stand are presented. Fermilab is constructing the SRF Accelerator Test Facility, a facility for accelerator physics research and development. This facility will contain a total of six cryomodules, each containing eight 1.3 GHz nine-cell elliptical cavities. Each cavity will be equipped with a Slim Blade Tuner designed by INFN Milan. The blade tuner incorporates both a stepper motor and piezo actuators to allow for both slow and fast cavity tuning. The stepper motor allows the cavity frequency to be statically tuned over a range of 500 kHz with an accuracy of several Hz. The piezos provide up to 2 kHz of dynamic tuning for compensation of Lorentz force detuning and variations in the He bath pressure. The first eight blade tuners were built at INFN Milan, but the remainder are being manufactured commercially following the INFN design. To date, more than 40 of the commercial tuners have been delivered.
Model-Based Engine Control Architecture with an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2016-01-01
This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
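The extended Kalman filter the abstract contrasts with the piece-wise linear OTKF can be shown in its simplest form: propagate the state through the nonlinear model, but linearize (take Jacobians) for the covariance and gain. The toy model below is hypothetical and stands in for the engine model only to make the mechanics concrete.

```python
import math

def ekf_step(x_est, p_est, u, z, q, r, f, f_jac, h, h_jac):
    """One predict/update cycle of a scalar extended Kalman filter."""
    # Predict through the nonlinear process model; linearize for covariance.
    x_pred = f(x_est, u)
    a = f_jac(x_est, u)
    p_pred = a * p_est * a + q
    # Update with the nonlinear measurement model, linearized at the prediction.
    c = h_jac(x_pred)
    k = p_pred * c / (c * p_pred * c + r)
    x_new = x_pred + k * (z - h(x_pred))
    p_new = (1.0 - k * c) * p_pred
    return x_new, p_new

# Hypothetical toy dynamics, not the C-MAPSS40k engine model: the state decays
# toward an input, and the sensor reads the square root of the state.
f = lambda x, u: 0.9 * x + 0.1 * u
f_jac = lambda x, u: 0.9
h = lambda x: math.sqrt(x)
h_jac = lambda x: 0.5 / math.sqrt(x)

x, p = 1.0, 1.0
for z in [1.05, 1.1, 1.15, 1.2]:   # a short run of measurements
    x, p = ekf_step(x, p, u=2.0, z=z, q=0.01, r=0.04,
                    f=f, f_jac=f_jac, h=h, h_jac=h_jac)
print(round(x, 3), round(p, 4))
```

Because the Jacobians are re-evaluated at every step, the filter tracks the model through operating-point changes, which is why the paper sees the largest accuracy gains during transients.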
Broadband power amplifier tube: Klystron tube 5K70SK-WBT and step tuner VA-1470S
NASA Technical Reports Server (NTRS)
Cox, H. R.; Johnson, J. O.
1974-01-01
The design concept, the fabrication, and the acceptance testing of a wide-band Klystron tube and a remotely controlled step tuner for channel selection are discussed. The equipment was developed for the modification of an existing 20 kW Power Amplifier System which was provided to the contractor as GFE. The replacement Klystron covers a total frequency range of 2025 to 2120 MHz and is tunable to six channels, each with a bandwidth of 22 MHz or greater. A 5 MHz overlap is provided between channels. Channels are selected at the control panel located on the front of the Klystron magnet or from one of three remote control stations connected in parallel with the step tuner. Included in this final report are the results of acceptance tests conducted at the vendor's plant and of the integrated system tests.
Cartridge Casing Catcher With Reduced Firearm Ejection Port Flash and Noise
2009-05-26
acoustic tuner structure comprises at least one of a quarter wave tuner, a Quincke tuner, and a Helmholtz tuner. The magnetic material comprises magnetic...of noise) will be attenuated. FIG. 2B illustrates a Herschel-Quincke (usually simply called Quincke) or interference tuner 10'. The Quincke tuner... Quincke tuner, and a Helmholtz resonator similar to the acoustic tuners illustrated in FIGS. 2(A-C), respectively. The acoustic tuner structure 240 of
ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amarasinghe, Saman
This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
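OpenTuner's key capability, running an ensemble of search techniques and dynamically giving more of the test budget to techniques that perform well, can be sketched without the framework itself. This is not OpenTuner's actual API; it is a minimal stand-in showing the allocation idea, with a hypothetical integer-tuning objective.

```python
import random

def ensemble_autotune(objective, techniques, budget, seed=0):
    """Allocate a fixed test budget across several search techniques,
    rewarding any technique whose proposal improves the best-so-far."""
    rng = random.Random(seed)
    scores = {name: 1.0 for name in techniques}   # optimistic initial credit
    best_cfg, best_val = None, float("inf")
    for _ in range(budget):
        # Sample a technique with probability proportional to its score.
        pick = rng.uniform(0, sum(scores.values()))
        for name, s in scores.items():
            pick -= s
            if pick <= 0:
                break
        cfg = techniques[name](rng, best_cfg)
        val = objective(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
            scores[name] += 1.0                   # reward the winner
        else:
            scores[name] = max(0.1, scores[name] * 0.95)
    return best_cfg, best_val

# Hypothetical tuning problem: minimize (x - 37)^2 over integers in [0, 100].
objective = lambda x: (x - 37) ** 2
techniques = {
    "random": lambda rng, best: rng.randrange(101),
    "mutate": lambda rng, best: min(100, max(0,
        (best if best is not None else 50) + rng.randrange(-5, 6))),
}
cfg, val = ensemble_autotune(objective, techniques, budget=200)
print(cfg, val)
```

Early on the random technique explores; once a decent configuration exists, the mutation technique's local moves win more often and absorb more of the budget, mirroring the adaptive allocation the abstract describes.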
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
Testing of the new tuner design for the CEBAF 12 GeV upgrade SRF cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Edward; Davis, G.; Hicks, William
2005-05-01
The new tuner design for the 12 GeV Upgrade SRF cavities consists of a coarse mechanical tuner and a fine piezoelectric tuner. The mechanism provides a 30:1 mechanical advantage, is pre-loaded at room temperature and tunes the cavities in tension only. All of the components are located in the insulating vacuum space and attached to the helium vessel, including the motor, harmonic drive and piezoelectric actuators. The requirements and detailed design are presented. Measurements of range and resolution of the coarse tuner are presented and discussed.
Proof-of-principle Experiment of a Ferroelectric Tuner for the 1.3 GHz Cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi,E.M.; Hahn, H.; Shchelkunov, S. V.
2009-01-01
A novel tuner has been developed by the Omega-P company to achieve fast control of the accelerator RF cavity frequency. The tuner is based on a ferroelectric material whose dielectric constant varies as a function of applied voltage. Tests using a Brookhaven National Laboratory (BNL) 1.3 GHz electron gun cavity have been carried out for a proof-of-principle experiment of the ferroelectric tuner. Two different methods were used to determine the frequency change achieved with the ferroelectric tuner (FT). The first method is based on an S11 measurement at the tuner port to find the reactive impedance change when the voltage is applied. The reactive impedance change then is used to estimate the cavity frequency shift. The second method is a direct S21 measurement of the frequency shift in the cavity with the tuner connected. The estimated frequency change from the reactive impedance measurement due to 5 kV is in the range between 3.2 kHz and 14 kHz, while 9 kHz is the result from the direct measurement. The two methods are in reasonable agreement. A detailed description of the experiment and the analysis is given in the paper.
1979-12-01
MLS-1, Direct ILS Replacement Tuner. MLS-2, Selectable Azimuth and Elevation Tuner/Selector. (Cockpit panel figure: channel, distance, and annunciator legends not recoverable.) Due to the age of the aircraft, the present autopilot is of an early vintage and is not recommended for use below 1,000 ft unless the controls
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Csank, Jeffrey Thomas; Chicatelli, Amy; Kilver, Jacob
2013-01-01
This paper covers the development of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model applied to an aircraft turbofan engine simulation. Here, the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) serves as the MBEC application engine. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC over a wide range of operating points. The on-board model is a piece-wise linear model derived from CMAPSS40k and updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. Investigations are presented in which the MBEC provides a stall-margin limit for the controller protection logic, which could offer benefits over the simple acceleration schedule currently used in traditional engine control architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarvainen, O., E-mail: olli.tarvainen@jyu.fi; Orpana, J.; Kronholm, R.
2016-09-15
The efficiency of the microwave-plasma coupling plays a significant role in the production of highly charged ion beams with electron cyclotron resonance ion sources (ECRISs). The coupling properties are affected by the mechanical design of the ion source plasma chamber and microwave launching system, as well as damping of the microwave electric field by the plasma. Several experiments attempting to optimize the microwave-plasma coupling characteristics by fine-tuning the frequency of the injected microwaves have been conducted with varying degrees of success. The inherent difficulty in interpretation of the frequency tuning results is that the effects of the microwave coupling system and the cavity behavior of the plasma chamber cannot be separated. A preferable approach to study the effect of the cavity properties of the plasma chamber on extracted beam currents is to adjust the cavity dimensions. The results of such cavity tuning experiments conducted with the JYFL 14 GHz ECRIS are reported here. The cavity properties were adjusted by inserting a conducting tuner rod axially into the plasma chamber. The extracted beam currents of oxygen charge states O³⁺–O⁷⁺ were recorded at various tuner positions and frequencies in the range of 14.00–14.15 GHz. It was observed that the tuner position affects the beam currents of high charge state ions by up to several tens of percent. In particular, it was found that at some tuner-position/frequency combinations the plasma exhibited "mode-hopping" between two operating regimes. The results improve the understanding of the role of plasma chamber cavity properties in ECRIS performance.
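Why a small axial tuner movement matters can be seen from the ideal cylindrical-cavity mode formula: at 14 GHz a chamber supports high-order axial modes, and shortening the effective length by even 1 mm shifts such a mode by tens of MHz, comparable to the whole 14.00–14.15 GHz scan. The dimensions below are hypothetical, not the JYFL chamber geometry.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def te_freq(xp_mn, p, radius_m, length_m):
    """Resonant frequency of a TE_mnp mode of an ideal cylindrical cavity;
    xp_mn is the relevant zero of the Bessel-function derivative J'_m."""
    return (C / (2 * math.pi)) * math.sqrt(
        (xp_mn / radius_m) ** 2 + (p * math.pi / length_m) ** 2)

xp_te11 = 1.841                 # first zero of J'_1 (TE11-family modes)

# Hypothetical chamber: 4 cm radius, ~25 cm effective length, a high axial
# mode number near 14 GHz. Pushing the tuner rod in by 1 mm shortens the
# effective cavity and shifts the mode.
f_out = te_freq(xp_te11, 23, 0.04, 0.250)
f_in = te_freq(xp_te11, 23, 0.04, 0.249)
print(round((f_in - f_out) / 1e6, 1), "MHz shift per mm of tuner travel")
```

With mode shifts of this size, neighboring modes sweep through the operating frequency as the rod moves, which is consistent with the "mode-hopping" between operating regimes reported in the abstract.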
Tuner design and RF test of a four-rod RFQ
NASA Astrophysics Data System (ADS)
Zhou, QuanFeng; Zhu, Kun; Guo, ZhiYu; Kang, MingLei; Gao, ShuLi; Lu, YuanRong; Chen, JiaEr
2011-12-01
A mini-vane four-rod radio frequency quadruple (RFQ) accelerator has been built for neutron imaging. The RFQ will operate at 201.5 MHz, and its length is 2.7 m. The original electric field distribution along the electrodes is not flat. The resonant frequency needs to be tuned to the operating value. And the frequency needs to be compensated for temperature change during high power RF test and beam test. As tuning such a RFQ is difficult, plate tuners and stick tuners are designed. This paper will present the tuners design, the tuning procedure, and the RF properties of the RFQ.
Navigating the auditory scene: an expert role for the hippocampus.
Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D
2012-08-29
Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.
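The two-note beats that tuners listen to arise when near-coincident harmonics of the two notes differ slightly in frequency. A standard worked example (general music-acoustics knowledge, not data from this study) is the equal-tempered fifth, whose 3rd-versus-2nd-harmonic beat is slower than one per second:

```python
def et_freq(ref_hz, semitones):
    """Equal-tempered frequency a given number of semitones above a reference."""
    return ref_hz * 2 ** (semitones / 12)

def beat_rate(f_low, f_high, n_low, n_high):
    """Beat rate between the n_low-th harmonic of the lower note and the
    n_high-th harmonic of the upper note (the cue a tuner listens for)."""
    return abs(n_low * f_low - n_high * f_high)

a3 = 220.0
e4 = et_freq(a3, 7)   # equal-tempered fifth above A3
print(round(beat_rate(a3, e4, 3, 2), 2), "beats per second")
```

A just fifth (frequency ratio exactly 3:2) would beat at zero; the slow residual beat of the tempered interval is precisely the kind of amplitude-modulation cue, within a narrow frequency window, that the study's psychophysical task simulates.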
Model-Based Control of an Aircraft Engine using an Optimal Tuner Approach
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Chicatelli, Amy; Garg, Sanjay
2012-01-01
This paper covers the development of a model-based engine control (MBEC) methodology applied to an aircraft turbofan engine. Here, a linear model extracted from the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40k) at a cruise operating point serves as the engine and the on-board model. The on-board model is updated using an optimal tuner Kalman Filter (OTKF) estimation routine, which enables the on-board model to self-tune to account for engine performance variations. The focus here is on developing a methodology for MBEC with direct control of estimated parameters of interest such as thrust and stall margins. MBEC provides the ability for a tighter control bound of thrust over the entire life cycle of the engine that is not achievable using traditional control feedback, which uses engine pressure ratio or fan speed. CMAPSS40k is capable of modeling realistic engine performance, allowing for a verification of the MBEC tighter thrust control. In addition, investigations of using the MBEC to provide a surge limit for the controller limit logic are presented that could provide benefits over a simple acceleration schedule that is currently used in engine control architectures.
Tuner control system of Spoke012 SRF cavity for C-ADS injector I
NASA Astrophysics Data System (ADS)
Liu, Na; Sun, Yi; Wang, Guang-Wei; Mi, Zheng-Hui; Lin, Hai-Ying; Wang, Qun-Yao; Liu, Rong; Ma, Xin-Peng
2016-09-01
A new tuner control system for spoke superconducting radio frequency (SRF) cavities has been developed and applied to cryomodule I of the C-ADS injector I at the Institute of High Energy Physics, Chinese Academy of Sciences. We have successfully implemented the tuner controller based on Programmable Logic Controller (PLC) for the first time and achieved a cavity tuning phase error of ±0.7° (about ±4 Hz peak to peak) in the presence of electromechanical coupled resonance. This paper presents preliminary experimental results based on the PLC tuner controller under proton beam commissioning. Supported by Proton linac accelerator I of China Accelerator Driven sub-critical System (Y12C32W129)
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
NASA Astrophysics Data System (ADS)
Kim, Hye-Jin; Choi, B. H.; Han, Jaeeun; Hyun, Myung Ook; Park, Bum-Sik; Choi, Ohryoung; Lee, Doyoon; Son, Kitaek
2018-03-01
In the medium-energy beam transport (MEBT) line of the RAON, which consists of several quadrupole magnets, three normal-conducting re-bunchers, and several diagnostic devices, a quarter-wave-resonator-type re-buncher was chosen to minimize longitudinal emittance growth and to fit the longitudinal phase ellipse into the longitudinal acceptance of the low-energy linac. The re-buncher has a resonant frequency of 81.25 MHz, a geometrical beta (βg) of 0.049, and a physical length of 24 cm. Based on the results of numerical calculations of the electromagnetic field using CST-MWS and mechanical analysis of the heat distribution and deformation, the internal structure of the re-buncher was optimized to increase the effective voltage and to reduce wall power losses. The multipacting criterion was estimated and confirmed experimentally. The positions and specifications of the cooling channels are designed to remove a heat load of up to 15 kW with a reasonable margin of 25%. The coaxial, loop-type RF power coupler is positioned in the high-magnetic-field region, and two slug tuners are installed perpendicular to the beam axis. The frequency sensitivity as a function of tuner depth and cooling-water temperature was measured, and the frequency shift is in all cases within the provided tuner range. A high-power test at 10 kW in continuous wave was performed, and the reflected power was observed to be less than 450 W.
Inductive tuners for microwave driven discharge lamps
Simpson, James E.
1999-01-01
An RF powered electrodeless lamp utilizing an inductive tuner in the waveguide which couples the RF power to the lamp cavity, for reducing reflected RF power and causing the lamp to operate efficiently.
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approach, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported in. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost, high-performance microprocessors. Recently, we have developed a new hybrid state-space self-tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
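The self-tuning idea summarized above, estimate the plant parameters online, then compute the state-feedback law from the current estimates, can be sketched in its simplest scalar form. The plant and numbers below are hypothetical; this is recursive least squares plus pole placement, not the authors' full multivariable Kalman-embedded scheme.

```python
def rls_step(theta, p, phi, y, lam=1.0):
    """Scalar recursive-least-squares update for the model y ≈ theta * phi."""
    k = p * phi / (lam + phi * p * phi)
    theta += k * (y - theta * phi)
    p = (p - k * phi * p) / lam
    return theta, p

# Hypothetical scalar plant x[k+1] = a*x[k] + b*u[k], with a unknown and b
# known. The self-tuner estimates a online, then applies the pole-placement
# law u = (a_d - a_hat) * x / b to move the closed-loop pole to a_d.
a_true, b, a_d = 0.95, 0.5, 0.2
a_hat, p = 0.0, 100.0          # poor initial estimate, large uncertainty
x = 1.0
for _ in range(50):
    u = (a_d - a_hat) * x / b          # certainty-equivalence control
    x_next = a_true * x + b * u
    a_hat, p = rls_step(a_hat, p, x, x_next - b * u)   # identify a
    x = x_next
print(round(a_hat, 3), round(abs(x), 6))
```

After a few steps the estimate converges near the true parameter and the regulated state decays at the designed rate, illustrating why embedding state/parameter estimation inside the control loop handles plants whose parameters drift in operation.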
Human Computer Interface Design Criteria. Volume 1. User Interface Requirements
2010-03-19
Television tuners, including tuner cards for use in computers, shall be equipped with secondary audio program playback circuitry.
Results of Accelerated Life Testing of LCLS-II Cavity Tuner Motor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huque, Naeem; Daly, Edward; Pischalnikov, Yuriy
An Accelerated Life Test (ALT) of the Phytron stepper motor used in the LCLS-II cavity tuner has been conducted at JLab. Since the motor will reside inside the cryomodule, any failure would lead to a very costly and arduous repair. As such, the motor was tested for the equivalent of 30 lifetimes before being approved for use in the production cryomodules. The 9-cell LCLS-II cavity is simulated by disc springs with an equivalent spring constant. Plots of the motor position vs. tuner position, measured via an installed linear variable differential transformer (LVDT), are used to measure motor motion. The titanium spindle was inspected for loss of lubrication. The motor passed the ALT and is set to be installed in the LCLS-II cryomodules.
Garnet Ring Measurements for the Fermilab Booster 2nd Harmonic Cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuharik, J.; Dey, J.; Duel, K.
A perpendicularly biased tunable 2nd harmonic cavity is being constructed for use in the Fermilab Booster. The cavity's tuner uses National Magnetics AL800 garnet as the tuning medium. For quality control, the magnetic properties of the material and their uniformity within the tuner must be assessed. We describe two tests which are performed on the rings and on their corresponding witness samples.
Feedback control impedance matching system using liquid stub tuner for ion cyclotron heating
NASA Astrophysics Data System (ADS)
Nomura, G.; Yokota, M.; Kumazawa, R.; Takahashi, C.; Torii, Y.; Saito, K.; Yamamoto, T.; Takeuchi, N.; Shimpo, F.; Kato, A.; Seki, T.; Mutoh, T.; Watari, T.; Zhao, Y.
2001-10-01
A long-pulse discharge of more than 2 minutes was achieved using Ion Cyclotron Range of Frequencies (ICRF) heating alone on the Large Helical Device (LHD). The final goal is steady-state operation (30 minutes) at the MW level. A liquid stub tuner was newly invented to cope with the long-pulse discharge: the liquid surface level can be shifted under high RF voltage operation without breakdown. In the long-pulse discharge the reflected power was observed to gradually increase, so shifting the liquid surface is expected to be indispensable for even longer discharges. An ICRF heating system incorporating a liquid stub tuner was fabricated to demonstrate feedback-controlled impedance matching. The required shift of the liquid surface was predicted from the forward and reflected RF powers as well as the phase difference between them. The liquid stub tuner was controlled by a multiprocessing computer system using CINOS (CHS Integration No Operating System) methods, whose prime objective was to improve the performance of data processing and signal-response control; employing this method remarkably reduced the number of program steps. Real-time feedback control was demonstrated in the system using a temporally varying electrical resistance.
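Predicting the required matching adjustment from the forward power, reflected power, and their phase difference rests on standard transmission-line relations, which can be sketched as follows. The 50 Ω line impedance and the power levels are assumptions for illustration, not values from the LHD system.

```python
import cmath, math

def reflection_coefficient(p_fwd_w, p_ref_w, phase_rad):
    """Complex reflection coefficient from forward power, reflected power,
    and the measured phase between them (generic transmission-line
    relation, not the LHD control code)."""
    return math.sqrt(p_ref_w / p_fwd_w) * cmath.exp(1j * phase_rad)

def load_impedance(gamma, z0=50.0):
    """Impedance seen at the measurement plane for line impedance z0."""
    return z0 * (1 + gamma) / (1 - gamma)

# 100 kW forward, 4 kW reflected, 60 degrees between them (made-up values)
gamma = reflection_coefficient(100e3, 4e3, math.radians(60))
vswr = (1 + abs(gamma)) / (1 - abs(gamma))
z = load_impedance(gamma)
print(round(abs(gamma), 3), round(vswr, 3))  # → 0.2 1.5
```

From the complex mismatch Γ, a matching network controller can compute the stub setting that cancels it; the mapping from Γ to a liquid-column height is specific to the hardware and is not reproduced here.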
Qualification and cryogenic performance of cryomodule components at CEBAF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckman, J.; Macha, K.; Fischer, J.
1996-12-31
At CEBAF an electron beam is accelerated by superconducting niobium resonant cavities operated submerged in superfluid helium. The accelerator has 42 1/4 cryomodules, each containing eight cavities. The qualification and design of components for the cryomodules underwent stringent testing and evaluation for acceptance. Indium wire seals are used at the cavity/helium-vessel interface to make a superfluid-helium leak-tight seal. Each cavity is equipped with a mechanical tuner assembly designed to stretch and compress the cavities, and two rotary feedthroughs are used to operate each tuner assembly. Ceramic feedthroughs not designed for superfluid service were qualified for tuner and cryogenic instrumentation. To ensure long-term integrity of the machine, special attention is required for material specifications and machining processes. The following shares the qualification methods, design and performance of the cryogenic cryomodule components.
A hydrogen maser with cavity auto-tuner for timekeeping
NASA Technical Reports Server (NTRS)
Lin, C. F.; He, J. W.; Zhai, Z. C.
1992-01-01
A hydrogen maser frequency standard for timekeeping was developed at the Shanghai Observatory. The maser employs a fast cavity auto-tuner, which can detect and compensate for the frequency drift of the high-Q resonant cavity with a short time constant by means of a signal-injection method, so that the long-term frequency stability of the maser standard is greatly improved. The cavity auto-tuning system and some maser data obtained from atomic time comparisons are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.
High-performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, the memory hierarchy and the on-chip interconnect of such systems, leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions: memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
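The three-dimensional design-space exploration described above can be sketched generically. The `measure` function and its cost numbers below are fabricated stand-ins for what OpenTuner actually does (run the real binary and read hardware energy counters); only the structure of the search is illustrated.

```python
import itertools

# Illustrative stand-in for a real measurement: an auto-tuner would run
# the compiled binary and read energy counters. These costs are made up.
def measure(layout, flags, schedule):
    cost = {"aos": 1.0, "soa": 0.8}[layout]          # memory layout scheme
    cost *= {"-O2": 1.0, "-O3": 0.9}[flags]          # compiler flag choice
    cost *= {"static": 1.0, "dynamic": 0.95,
             "guided": 0.97}[schedule]               # OpenMP loop schedule
    return cost  # pretend units: joules per run

# Exhaustive sweep of the tiny 3-dimensional space (a real auto-tuner
# searches a much larger space adaptively rather than exhaustively).
space = itertools.product(["aos", "soa"], ["-O2", "-O3"],
                          ["static", "dynamic", "guided"])
best = min(space, key=lambda cfg: measure(*cfg))
print(best)  # → ('soa', '-O3', 'dynamic')
```

The real search space (layouts × flag sets × schedules × their parameters) is far too large for exhaustion, which is why a framework like OpenTuner applies adaptive search strategies instead.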
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High-performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing, so performance- and energy-efficient implementation of these kernels on modern, energy-efficient multicore and many-core platforms is an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
Tuner of a Second Harmonic Cavity of the Fermilab Booster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terechkine, I.; Duel, K.; Madrak, R.
2015-05-17
Introducing a second harmonic cavity in the accelerating system of the Fermilab Booster promises significant reduction of the particle beam loss during the injection, transition, and extraction stages. To follow the changing energy of the beam during acceleration cycles, the cavity is equipped with a tuner that employs perpendicularly biased AL800 garnet material as the frequency tuning medium. The required tuning range of the cavity is from 75.73 MHz at injection to 105.64 MHz at extraction. This large range necessitates the use of a relatively low bias magnetic field at injection, which could lead to high RF loss power density in the garnet, or a strong bias magnetic field at extraction, which could result in high power consumption in the tuner's bias magnet. The required 15 Hz repetition rate of the device and the high sensitivity of the local RF power loss to the level of the magnetic field add to the challenges of the bias system design. In this report, the main features of a proposed prototype of the second harmonic cavity tuner are presented.
Beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz radio frequency quadrupole accelerator
NASA Astrophysics Data System (ADS)
Gaur, Rahul; Kumar, Vinit
2018-05-01
We present the beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz H- radio frequency quadrupole (RFQ) accelerator for the proposed Indian Spallation Neutron Source project. We have followed a design approach in which the emittance growth and the losses are minimized by keeping the tune depression ratio larger than 0.5. The transverse cross-section of the RFQ is designed at a frequency lower than the operating frequency, so that the tuners have their nominal position inside the RFQ cavity. This has improved the tuning range and the efficiency of the tuners in correcting field errors in the RFQ. The vane-tip modulations have been modelled in the CST-MWS code, and their effect on the field flatness and the resonant frequency has been studied. The deterioration in the field flatness due to vane-tip modulations is reduced to an acceptable level with the help of tuners. Details of the error study and the higher-order-mode study, along with the mode stabilization technique, are also described in the paper.
NASA Astrophysics Data System (ADS)
Ma, Wei; Lu, Liang; Xu, Xianbo; Sun, Liepeng; Zhang, Zhouli; Dou, Weiping; Li, Chenxing; Shi, Longbo; He, Yuan; Zhao, Hongwei
2017-03-01
An 81.25 MHz continuous wave (CW) radio frequency quadrupole (RFQ) accelerator has been designed for the Low Energy Accelerator Facility (LEAF) at the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences (CAS). For the CW operating mode, the proposed RFQ design adopted the conventional four-vane structure. The main design goals are providing high shunt impedance with low power losses. In the electromagnetic (EM) design, the π-mode stabilizing loops (PISLs) were optimized to produce good mode separation. The tuners were also designed and optimized to tune the frequency and field flatness of the operating mode, and the vane undercuts were optimized to provide a flat field along the RFQ cavity. Additionally, a full-length model with modulations was set up for the final EM simulations. Following the EM design, a thermal analysis of the structure was carried out. In this paper, the detailed EM design and thermal simulations of the LEAF-RFQ are presented and discussed, together with a structure error analysis.
NASA Technical Reports Server (NTRS)
Chern, Shy-Shiun (Inventor)
1981-01-01
A coaxial stub tuner assembly is comprised of a short circuit branch diametrically opposite an open circuit branch. The stub of the short circuit branch is tubular, and the stub of the open circuit branch is a rod which extends through the tubular stub into the open circuit branch. The rod is threaded at least at its outer end, and the tubular stub is internally threaded to receive the threads of the rod. The open circuit branch can be easily tuned by turning the threaded rod in the tubular stub to adjust the length of the rod extending into the open circuit branch.
Development of Advanced Materials for Electro-Ceramic Application Final Report CRADA No. TC-1331-96
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caplan, M.; Olstad, R.; McMillan, L.
The goal of this project was to further develop and characterize the electrochemical methods originating in Russia for producing ultra-high-purity organometallic compounds utilized as precursors in the production of high-quality electro-ceramic materials. Symetrix planned to use electro-ceramic materials with high dielectric constant for microelectronic memory circuit applications. General Atomics planned to use barium titanate type ceramics with low loss tangent for producing a high-power ferroelectric tuner used to match radio frequency power into their DIII-D fusion machine. Phase I of the project was scheduled to have a large number of organometallic (alkoxide) chemical samples produced using various methods. These would be analyzed by LLNL, Soliton and Symetrix independently to determine the level of chemical impurities, thus verifying each other's analysis. The goal was to demonstrate a cost-effective production method which could be implemented in a large commercial facility to produce high-purity organometallic compounds. In addition, various compositions of barium-strontium-titanate ceramics were to be produced and analyzed in order to develop an electro-ceramic capacitor material having the desired characteristics with respect to dielectric constant, loss tangent, temperature characteristics and nonlinear behavior under applied voltage. Upon optimizing the barium titanate material, 50 capacitor preforms would be produced from this material, demonstrating the ability to produce, in quantity, the pills ultimately required for the ferroelectric tuner (approximately 2000-3000 ceramic pills).
NASA Astrophysics Data System (ADS)
Atkinson, J. E.; Barker, G. G.; Feltham, S. J.; Gabrielson, S.; Lane, P. C.; Matthews, V. J.; Perring, D.; Randall, J. P.; Saunders, J. W.; Tuck, R. A.
1982-05-01
An electrical model klystron amplifier was designed. Its features include a gridded gun, a single-stage depressed collector, a rare-earth permanent-magnet focusing system, an input loop, six rugged tuners and a coaxial-line output section incorporating a coaxial-to-waveguide transducer and a pillbox window. At each stage of the design, the thermal and mechanical aspects were investigated and optimized within the framework of the RF specification. Extensive use was made of data from the preliminary design study and from RF measurements on the breadboard model. In an additional study, a comprehensive draft tube specification was produced. Great emphasis was laid, in a second additional study, on space-qualified materials and processes.
Digitally Controlled Slot Coupled Patch Array
NASA Technical Reports Server (NTRS)
D'Arista, Thomas; Pauly, Jerry
2010-01-01
A four-element array conformed to a singly curved conducting surface has been demonstrated to provide a 2 dB axial-ratio bandwidth of 14 percent, while maintaining a VSWR (voltage standing wave ratio) of 2:1 and a gain of 13 dBiC. The array is digitally controlled and can be scanned with the LMS adaptive algorithm using the power spectrum as the objective, as well as the direction of arrival (DoA) of the beam to set the amplitude of the power spectrum. The total height of the array above the conducting surface is 1.5 inches (3.8 cm). A uniquely configured microstrip-coupled aperture over a conducting surface produced supergain characteristics, achieving 12.5 dBiC across the 2-to-2.13 GHz and 2.2-to-2.3 GHz frequency bands. This design is optimized to retain VSWR and axial ratio across the band as well. The four elements are uniquely configured with respect to one another for performance enhancement, and the appropriate phase excitation of each element for scan can be found either by analytical beam synthesis using the genetic algorithm with the measured or simulated far-field radiation pattern, or by an adaptive algorithm implemented with the digitized signal. The commercially available tuners and field-programmable gate array (FPGA) boards utilized required precise phase-coherent configuration control and, with custom code developed by Nokomis, Inc., were shown to be fully functional in a two-channel configuration controlled by FPGA boards. A four-channel tuner configuration and an oscilloscope configuration were also demonstrated, although algorithm post-processing was required.
A parametric model and estimation techniques for the inharmonicity and tuning of the piano.
Rigaud, François; David, Bertrand; Daudet, Laurent
2013-05-01
Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. This paper proposes to jointly model the inharmonicity and tuning of pianos across the whole compass. While using a small number of parameters, these models are able to reflect both the specifics of instrument design and the tuner's practice. An estimation algorithm is derived that can run either on a set of isolated note recordings or on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting some of the tuner's choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers.
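The stiff-string model underlying such work is the standard relation f_n = n·f0·√(1 + B·n²), where B is the inharmonicity coefficient. A minimal sketch (the B value is a typical order of magnitude for the mid-range, not a parameter estimated in the paper) shows how inharmonicity produces octave stretching:

```python
import math

def partial_freq(n, f0, B):
    """n-th partial of a stiff string: f_n = n*f0*sqrt(1 + B*n^2).
    Standard piano inharmonicity model; the B used below is assumed."""
    return n * f0 * math.sqrt(1 + B * n * n)

f0, B = 220.0, 4e-4
f2 = partial_freq(2, f0, B)                       # slightly sharp of 440 Hz
# Tuning the upper octave to the 2nd partial rather than to 2*f0
# stretches the octave by this many cents:
stretch_cents = 1200 * math.log2(f2 / (2 * f0))
```

A tuner aligning the upper note's fundamental with the lower note's second partial (to avoid beats) therefore stretches each octave by an amount governed by B, which is the physical origin of the Railsback-style tuning curves the paper models.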
Status of the Perpendicular Biased 2nd Harmonic Cavity for the Fermilab Booster
Tan, C. Y.; Dey, J. E.; Duel, K. L.; ...
2017-05-01
This is a status report on the 2nd harmonic cavity for the Fermilab Booster, part of the Proton Improvement Plan (PIP) for increasing beam transmission efficiency and thus reducing losses. A set of tuner rings has been procured and is undergoing quality control tests. The Y567 tube for driving the cavity has been successfully tested at both injection and extraction frequencies. A cooling scheme for the tuner and cavity has been developed after a thorough thermal analysis of the system. RF windows have been procured and substantial progress has been made on the mechanical designs of the cavity and the bias solenoid. The goal is to have a prototype cavity ready for testing by the end of 2017.
rf measurements and tuning of the 750 MHz radio frequency quadrupole
NASA Astrophysics Data System (ADS)
Koubek, Benjamin; Grudiev, Alexej; Timmins, Marc
2017-08-01
In the framework of the program on medical applications, a compact 750 MHz RFQ has been designed and built to be used as an injector for a hadron therapy linac. This RFQ was designed to accelerate protons to an energy of 5 MeV within only 2 m of length. It is divided into four segments and equipped with 32 tuners in total. The length of the RFQ corresponds to 5λ, which is considered close to the limit for field adjustment using only piston tuners. Moreover, the high frequency, about double that of existing RFQs, results in a sensitive structure and requires careful tuning. In this paper we present the tuning algorithm, the tuning procedure and rf measurements of the RFQ.
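Field tuning with many piston tuners is commonly posed as a linear least-squares problem over a measured tuner response matrix. The sketch below is generic: the matrix sizes, the stroke limit, and the helper name `tuner_corrections` are hypothetical, not the CERN algorithm.

```python
import numpy as np

def tuner_corrections(response, field_error, max_stroke_mm):
    """Least-squares tuner moves that flatten a measured field-error
    profile, given a (field-sample x tuner) response matrix; the result
    is clipped to the mechanically available stroke."""
    moves, *_ = np.linalg.lstsq(response, -field_error, rcond=None)
    return np.clip(moves, -max_stroke_mm, max_stroke_mm)

rng = np.random.default_rng(2)
A = rng.standard_normal((48, 8))        # 48 field samples, 8 tuners (assumed)
true_moves = rng.uniform(-1, 1, 8)
err = A @ true_moves                    # field error caused by mis-set tuners
fix = tuner_corrections(A, err, max_stroke_mm=5.0)
# the recovered corrections cancel the error: fix ≈ -true_moves
```

In practice the response matrix is measured by perturbing each tuner and bead-pulling the field, and the correction is iterated because the response is only locally linear.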
Zheng, Xuezhe; Chang, Eric; Amberg, Philip; Shubin, Ivan; Lexau, Jon; Liu, Frankie; Thacker, Hiren; Djordjevic, Stevan S; Lin, Shiyun; Luo, Ying; Yao, Jin; Lee, Jin-Hyoung; Raj, Kannan; Ho, Ron; Cunningham, John E; Krishnamoorthy, Ashok V
2014-05-19
We report the first complete 10G silicon photonic ring modulator with an integrated ultra-efficient CMOS driver and closed-loop wavelength control. A selective substrate removal technique was used to improve the ring tuning efficiency. Limited by the thermal tuner driver output power, a maximum open-loop tuning range of about 4.5 nm was measured with about 14 mW of total tuning power, including the heater driver circuit power consumption. Stable wavelength locking was achieved with a low-power mixed-signal closed-loop wavelength controller. An active wavelength tracking range of > 500 GHz was demonstrated with a controller energy cost of only 20 fJ/bit.
ERIC Educational Resources Information Center
Alfaro, Daniel
1977-01-01
Careers in the music field for Hispanos are available for the industrious, competitive, and talented person. Among the careers are: composer, orchestrator-arranger, church musician, conductor, teacher, music librarian, tuner-technician, copyist, and instrument repairer. (NQ)
A digital low-level radio-frequency system R&D for a 1.3 GHz nine-cell cavity
NASA Astrophysics Data System (ADS)
Qiu, Feng; Gao, Jie; Lin, Hai-Ying; Liu, Rong; Ma, Xin-Peng; Sha, Peng; Sun, Yi; Wang, Guang-Wei; Wang, Qun-Yao; Xu, Bo
2012-03-01
To test and verify the performance of the digital low-level radio-frequency (LLRF) and tuner system designed by the IHEP RF group, an experimental platform with a retired KEK 1.3 GHz nine-cell cavity was set up. A radio-frequency (RF) field was established successfully in the cavity, and the frequency of the cavity was locked by the tuner to within ±0.5° (about ±1.2 kHz) at room temperature. The digital LLRF system performed well in a five-hour experiment, and the results show that the system achieves field stability of amplitude <0.1% (peak to peak) and phase <0.1° (peak to peak). This performance satisfies the requirements of the International Linear Collider (ILC); this paper describes the closed-loop experiment of the LLRF system.
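Peak-to-peak amplitude and phase stability figures of this kind are computed from demodulated I/Q samples of the cavity field. A generic post-processing check, on simulated data rather than the IHEP system's:

```python
import numpy as np

def field_stability(iq):
    """Peak-to-peak fractional amplitude and phase (in degrees) of a
    cavity field from complex I/Q samples. Generic check, not the
    IHEP LLRF firmware."""
    amp = np.abs(iq)
    phase = np.unwrap(np.angle(iq))
    amp_pp = (amp.max() - amp.min()) / amp.mean()     # fractional, pk-pk
    phase_pp = np.degrees(phase.max() - phase.min())  # degrees, pk-pk
    return amp_pp, phase_pp

# Simulated well-regulated field: 0.02% amplitude jitter,
# about 0.01 degree of phase jitter (assumed values)
rng = np.random.default_rng(1)
n = 5000
iq = (1 + 2e-4 * rng.standard_normal(n)) * np.exp(1j * 2e-4 * rng.standard_normal(n))
a_pp, p_pp = field_stability(iq)
```

Note that a peak-to-peak measure grows with record length for Gaussian jitter, so specifications of this kind are only meaningful together with the observation window (five hours in the experiment above).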
Tunable biasing magnetic field design of ferrite tuner for ICRF heating system in EAST
NASA Astrophysics Data System (ADS)
Manman, XU; Yuntao, SONG; Gen, CHEN; Yanping, ZHAO; Yuzhou, MAO; Guang, LIU; Zhen, PENG
2017-11-01
Ion cyclotron range of frequency (ICRF) heating has been used in tokamaks as one of the most successful auxiliary heating tools and has been adopted in EAST. However, the antenna load fluctuates with changing plasma parameters during ICRF heating. To ensure steady operation of the ICRF heating system in EAST, a fast ferrite tuner (FFT) has been developed to achieve real-time impedance matching. To meet the requirements of the FFT impedance matching system, the magnet system of the ferrite tuner (FT) was designed through numerical simulation and experimental analysis, the biasing magnetic circuit and the alternating magnetic circuit being the key parts of the ferrite magnet. The overall design goal of the FT magnetic circuit is a DC bias magnetic field of 2000 G and an alternating magnetic field of ±400 G. An E-type magnetic circuit was adopted for the FFT. The permanent-magnet material is NdFeB with a thickness of 30 mm, set by the NdFeB working point, and the ampere-turns of the excitation coil are 25, obtained through theoretical calculation and simulation analysis. The inductance of the coil generating the alternating magnetic field is about 7 mH. The eddy-current effect has been analyzed, and the magnetic field distribution has been measured by a Hall probe in the median plane of the biasing magnet. The test results show the good performance of the biasing magnet, satisfying the design and operating requirements of the FFT.
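For a magnetic circuit whose reluctance is dominated by its air gap, the coil contribution to the gap field follows from B = μ0·N·I/g. A sketch with an assumed 5 mm gap and drive current (the geometry is illustrative only, not the EAST tuner's):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def coil_field_in_gap(n_turns, current_a, gap_m):
    """Coil contribution to the air-gap field of a magnetic circuit whose
    reluctance is dominated by the gap: B = mu0*N*I/g. Gap and current
    below are assumptions, not the EAST tuner's values."""
    return MU0 * n_turns * current_a / gap_m

# Target the ±400 G (±0.04 T) alternating component with a 25-turn coil
# across a hypothetical 5 mm gap:
b_ac = coil_field_in_gap(n_turns=25, current_a=6.4, gap_m=5e-3)
print(round(b_ac, 3))  # → 0.04 (tesla, i.e. about 400 G)
```

The DC bias (2000 G) is supplied by the NdFeB permanent magnet in the same circuit, which is why only the smaller alternating component needs to be driven by the coil.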
Characterization of CNRS Fizeau wedge laser tuner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom-fabricated circuit board which contains a high-speed fringe detection and locating circuit. This board includes a dc level-discriminator-type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single-board computer, which governs the data-collection process and interprets the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuharik, J.; Madrak, R.; Makarov, A.
A second harmonic tunable RF cavity is being developed for the Fermilab Booster. This device, which promises reduction of the particle beam loss at the injection, transition, and extraction stages, employs perpendicularly biased garnet material for frequency tuning. The required range of the tuning is significantly wider than in previously built and tested tunable RF devices. As a result, the magnetic field in the garnet comes fairly close to the gyromagnetic resonance line at the lower end of the frequency range. The chosen design concept of a tuner for the cavity cannot ensure uniform magnetic field in the garnet material; thus, it is important to know the static magnetic properties of the material to avoid significant increase in the local RF loss power density. This report summarizes studies performed at Fermilab to understand variations in the magnetic properties of the AL800 garnet material used to build the tuner of the cavity.
A fully integrated direct-conversion digital satellite tuner in 0.18 μm CMOS
NASA Astrophysics Data System (ADS)
Si, Chen; Zengwang, Yang; Mingliang, Gu
2011-04-01
A fully integrated direct-conversion digital satellite tuner for DVB-S/S2 and ABS-S applications is presented. A broadband noise-canceling balun-LNA and passive quadrature mixers provide a high-linearity, low-noise RF front-end, while the synthesizer integrates the loop filter to reduce solution cost and system debug time. Fabricated in 0.18 μm CMOS, the chip achieves a noise figure of less than 7.6 dB over the 900-2150 MHz L-band, while the measured sensitivity for the 4.42 MS/s QPSK-3/4 mode is -91 dBm at the PCB connector. The fully integrated integer-N synthesizer, operating from 2150 to 4350 MHz, achieves less than 1° of integrated phase error. The chip draws about 145 mA from a 3.3 V supply with internal integrated LDOs.
Characterization of CNRS Fizeau wedge laser tuner
NASA Technical Reports Server (NTRS)
1984-01-01
A fringe detection and measurement system was constructed for use with the CNRS Fizeau wedge laser tuner, consisting of three circuit boards. The first board is a standard Reticon RC-100 B motherboard which is used to provide the timing, video processing, and housekeeping functions required by the Reticon RL-512 G photodiode array used in the system. The sampled and held video signal from the motherboard is processed by a second, custom fabricated circuit board which contains a high speed fringe detection and locating circuit. This board includes a dc level discriminator type fringe detector, a counter circuit to determine fringe center, a pulsed laser triggering circuit, and a control circuit to operate the shutter for the He-Ne reference laser beam. The fringe center information is supplied to the third board, a commercial single board computer, which governs the data collection process and interprets the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, G. M.; Fitzgerald, E.; Johnson, D. K.
2014-02-12
Active stub tuning with a fast ferrite tuner (FFT) allows the system to respond dynamically to changes in the plasma impedance, such as during the L-H transition or edge localized modes (ELMs), and has greatly increased the effectiveness of fusion ion cyclotron range of frequency systems. A high-power waveguide double-stub tuner is under development for use with the Alcator C-Mod lower hybrid current drive (LHCD) system. Exact impedance matching with a double-stub is possible for a single radiating element under most load conditions, with the reflection coefficient reduced from Γ to Γ² in the “forbidden region.” The relative phase shift between adjacent columns of an LHCD antenna is critical for control of the launched n∥ spectrum. Adding a double-stub tuning network will perturb the phase of the forward wave, particularly if the unmatched reflection coefficient is high. This effect can be compensated by adjusting the phase of the low-power microwave drive for each klystron amplifier. Cross-coupling of the reflected power between columns of the launcher must also be considered. The problem is simulated by cascading a scattering matrix for the plasma, provided by a linear coupling model, with the measured launcher scattering matrix and that of the FFTs. The solution is advanced in an iterative manner similar to the time-dependent behavior of the real system. System performance is presented under a range of edge density conditions from under-dense to over-dense and a range of launched n∥.
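Cascading scattering matrices, as in the simulation described above, is usually done by converting each 2-port S-matrix to a transfer (T) matrix, multiplying, and converting back — a standard microwave identity, sketched here on two identical matched delay lines rather than the C-Mod model:

```python
import numpy as np

def s_to_t(S):
    """2-port scattering matrix -> transfer (T) matrix, so that cascaded
    networks compose by matrix multiplication (standard identity)."""
    s11, s12, s21, s22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[1 / s21, -s22 / s21],
                     [s11 / s21, s12 - s11 * s22 / s21]])

def t_to_s(T):
    """Inverse conversion: transfer matrix back to scattering matrix."""
    t11, t12, t21, t22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return np.array([[t21 / t11, t22 - t21 * t12 / t11],
                     [1 / t11, -t12 / t11]])

# Two identical lossless matched lines, phase delay theta each:
# S11 = S22 = 0, S21 = S12 = e^{-j*theta}
theta = np.pi / 6
line = np.array([[0, np.exp(-1j * theta)],
                 [np.exp(-1j * theta), 0]])
cascade = t_to_s(s_to_t(line) @ s_to_t(line))
# total delay doubles: S21 of the cascade is e^{-j*2*theta}, S11 stays 0
```

In the simulation the abstract describes, the plasma coupling model, the measured launcher matrix, and the FFT matrices are chained in exactly this way, and the reflected-wave solution is then iterated to mimic the time-dependent system.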
78 FR 14717 - Energy Conservation Standards for Set-Top Boxes: Availability of Initial Analysis
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-07
.... Despite the participants' best efforts to negotiate a non- regulatory agreement, these talks ultimately... consumption of baseline products in on and sleep modes of operation by system level components (e.g., tuners...
NASA Astrophysics Data System (ADS)
Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun
2018-05-01
In this paper, an efficient method for the dynamics modeling and vibration control design of a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved by using low-order transfer equations. Then, by constructing a new body dynamics equation, augmented operator, and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state space model, expressed in each independent modal space, is obtained easily. Based on this dynamics model, a robust independent modal space fuzzy controller is designed for vibration control of a general MS, and the genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are performed, whose results show that this method is computationally efficient and achieves good control performance.
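The modal-decoupling step that underlies independent modal space control can be sketched with a plain eigen-decomposition. The two-mass system and damping ratio below are invented for illustration and are unrelated to the paper's transfer-matrix formulation:

```python
import numpy as np

# Independent modal-space control sketch: decouple a small MDOF system
# via its eigenvectors, then assign damping to each mode separately.
M = np.diag([1.0, 2.0])                    # assumed masses
K = np.array([[3.0, -1.0], [-1.0, 1.0]])   # assumed stiffness matrix

# generalized eigenproblem K v = w^2 M v, solved as eig(M^-1 K)
w2, V = np.linalg.eig(np.linalg.inv(M) @ K)
w2 = w2.real                    # symmetric-definite problem: real eigenvalues
idx = np.argsort(w2)
w = np.sqrt(w2[idx])            # natural frequencies (rad/s)
V = V[:, idx]                   # mode shapes, one column per mode

# modal control: choose a damping ratio per independent mode
zeta = 0.05
modal_damping_gains = 2.0 * zeta * w       # gain on each modal velocity
```

In a full design, the per-mode gains (here fixed) would be the quantities handed to the fuzzy tuners and genetic optimizer described in the abstract.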
CEBAF Upgrade Cryomodule Component Testing in the Horizontal Test Bed (HTB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
I.E. Campisi; B. Carpenter; G.K. Davis
2001-06-01
The planned upgrade of the CEBAF electron accelerator includes the development of an improved cryomodule. Several components differ substantially from the original CEBAF cryomodule; these include the new 7-cell, 1.5 GHz cavities with integral helium vessel, a new, backlash-free cavity tuner, the waveguide coupler with its room-temperature ceramic window, and the HOM damping filters. In order to test the design features and performance of the new components, a horizontal cryostat (Horizontal Test Bed) has been constructed which allows testing with a turnaround time of less than three weeks. This cryostat provides the environment for testing one or two cavities, with associated auxiliary components, in a condition similar to that of a real cryomodule. A series of tests has been performed on a prototype 7-cell cavity and the above-mentioned systems. In this paper the results of the tests on the cryostat, on the cavity performance, on its coupler, on the tuner characteristics, and on the microphonics behavior are reported.
Lorentz Force Detuning Analysis of the SNS Accelerating Cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Mitchell; K. Matsumoto; G. Ciovati
2001-09-01
The Spallation Neutron Source (SNS) project incorporates a superconducting radio-frequency (SRF) accelerator for the final section of the pulsed mode linac. Cavities with geometrical β values of β = 0.61 and β = 0.81 are utilized in the SRF section, and are constructed out of thin-walled niobium with stiffener rings welded between the cells near the iris. The welded titanium helium vessel and tuner assembly restrains the cavity beam tubes. Cavities with β values less than one have relatively steep and flat side-walls, making the cavities susceptible to Lorentz force detuning. In addition, the pulsed RF induces cyclic Lorentz pressures that mechanically excite the cavities, producing a dynamic Lorentz force detuning different from a continuous RF system. The amplitude of the dynamic detuning for a given cavity design is a function of the mechanical damping, stiffness of the tuner/helium vessel assembly, RF pulse profile, and the RF pulse rate. This paper presents analysis and testing results to date, and indicates areas where more investigation is required.
Telemetry Modernization with Open Architecture Software-Defined Radio Technology
2016-01-01
…digital (A/D) convertors and separated into narrowband channels through digital down-conversion (DDC) techniques implemented in field-programmable gate arrays (FPGAs)…
(Block-diagram residue: a wideband tuner feeds per-channel FPGA DDC filter chains, with recording at an operations center.)
Investigating Social Competence in Students with High Intelligence
ERIC Educational Resources Information Center
Schirvar, Wendi Margaret
2013-01-01
Social competence is vital for healthy development (Canto-Sperber & Dupuy, 2001; Spence, Barrett & Tuner, 2003). Beginning in childhood and heavily influenced by culture, social competence develops as we combine personal and environmental resources for positive social outcomes and includes the absence of negative behaviors alongside the…
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2016-01-01
This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
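The estimation idea behind the OTKF, recovering unmeasured states from fewer sensors than unknowns, can be sketched with a small linear Kalman filter. The system matrices, noise levels, and two-state/one-sensor setup below are invented for illustration and are not CMAPSS40k values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.98, 0.0], [0.0, 0.95]])   # assumed "health" dynamics
C = np.array([[1.0, 0.5]])                 # one sensor observing two states
Q = 1e-4 * np.eye(2)                       # process noise covariance
R = np.array([[1e-2]])                     # measurement noise covariance

x_true = np.array([1.0, -0.5])
x_hat = np.zeros(2)
P = np.eye(2)
for _ in range(200):
    x_true = A @ x_true
    y = C @ x_true + rng.normal(0.0, 0.1, 1)
    # predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # update
    S = C @ P @ C.T + R
    Kg = P @ C.T @ np.linalg.inv(S)        # Kalman gain
    x_hat = x_hat + (Kg @ (y - C @ x_hat)).ravel()
    P = (np.eye(2) - Kg @ C) @ P
```

In the under-determined case only certain combinations of the states are observable; the tuner-selection step in the paper amounts to choosing which reduced set of parameters the filter estimates so that the error in the quantities of interest is minimized.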
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2015-01-01
This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40,000) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
ERIC Educational Resources Information Center
Maryland State Dept. of Education, Baltimore.
Standards established by Maryland public schools for information and communication distribution systems in new construction and renovation projects are detailed. The function of the communications distribution room (CDR) is to house the distribution equipment of the school's communications systems and may contain gateways, tuners, video cassette…
Lorentz force detuning analysis of the Spallation Neutron Source (SNS) accelerating cavities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, R.R.; Matsumoto, K. Y.; Ciovati, G.
2001-01-01
The Spallation Neutron Source (SNS) project incorporates a superconducting radio-frequency (SRF) accelerator for the final section of the pulsed mode linac. Cavities with geometrical β values of β = 0.61 and β = 0.81 are utilized in the SRF section, and are constructed out of thin-walled niobium with stiffener rings welded between the cells near the iris. The welded titanium helium vessel and tuner assembly restrains the cavity beam tubes. Cavities with β values less than one have relatively steep and flat side-walls, making the cavities susceptible to Lorentz force detuning. In addition, the pulsed RF induces cyclic Lorentz pressures that mechanically excite the cavities, producing a dynamic Lorentz force detuning different from a continuous RF system. The amplitude of the dynamic detuning for a given cavity design is a function of the mechanical damping, stiffness of the tuner/helium vessel assembly, RF pulse profile, and the RF pulse rate. This paper presents analysis and testing results to date, and indicates areas where more investigation is required.
Induction Inserts at the Los Alamos PSR
NASA Astrophysics Data System (ADS)
Ng, K. Y.
2002-12-01
Ferrite-loaded induction tuners installed in the Los Alamos Proton Storage Ring have been successful in compensating space-charge effects. However, the resistive part of the ferrite introduces unacceptable microwave instability and severe bunch lengthening. An effective cure was found by heating the ferrite cores up to ∼130 °C. An understanding of the instability and cure is presented.
Steel Band Repertoire: The Case for Original Music
ERIC Educational Resources Information Center
Tanner, Chris
2010-01-01
In the past few decades, the steel band art form has experienced consistent growth and development in several key respects. For example, in the United States, the sheer number of steel band programs has steadily increased, and it appears that this trend will continue in the future. Additionally, pan builders and tuners have made great strides in…
Effects of Stimulus Octave and Timbre on the Tuning Accuracy of Advanced College Instrumentalists
ERIC Educational Resources Information Center
Byo, James L.; Schlegel, Amanda L.
2016-01-01
The purpose of this study was to test the effects of octave and timbre on advanced college musicians' (N = 63) ability to tune their instruments. We asked: "Are there differences in tuning accuracy due to octave (B-flat 2, B-flat 4) and stimulus timbre (oboe, clarinet, electronic tuner, tuba)?" and "To what extent do participants'…
1976-06-30
NASA SUPPORT GROUP (QSRA PROJECT TEAM). L-R: John Cochrane, Robert Price, Howard Tuner, Mike Shovlin, Dennis Riddle, Al Boissevain, Dennis Brown, Patty Beck, John Weyers, Bob McCracken, Peter Patterakis, Jack Ratcliff, Al Kass, Bob Innis, Tom Twiggs (Boeing). Note: Used in publication in Flight Research at Ames; 57 Years of Development and Validation of Aeronautical Technology NASA SP-1998-3300 fig. 111
Advanced Control Considerations for Turbofan Engine Design
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Csank, Jeffrey T.; Chicatelli, Amy
2016-01-01
This paper covers the application of a model-based engine control (MBEC) methodology featuring a self-tuning on-board model for an aircraft turbofan engine simulation. The nonlinear engine model is capable of modeling realistic engine performance, allowing for a verification of the advanced control methodology over a wide range of operating points and life cycle conditions. The on-board model is a piece-wise linear model derived from the nonlinear engine model and updated using an optimal tuner Kalman Filter estimation routine, which enables the on-board model to self-tune to account for engine performance variations. MBEC is used here to show how advanced control architectures can improve efficiency during the design phase of a turbofan engine by reducing conservative operability margins. The operability margins that can be reduced, such as stall margin, can expand the engine design space and offer potential for efficiency improvements. Application of MBEC architecture to a nonlinear engine simulation is shown to reduce the thrust specific fuel consumption by approximately 1% over the baseline design, while maintaining safe operation of the engine across the flight envelope.
The 5K70SK automatically tuned, high power, S-band klystron
NASA Technical Reports Server (NTRS)
Goldfinger, A.
1977-01-01
Primary objectives include delivery of 44 5K70SK klystron amplifier tubes and 26 remote tuner assemblies with spare parts kits. Results of a reliability demonstration on a klystron test cavity are discussed, along with reliability tests performed on a remote tuning unit. Production problems and one design modification are reported and discussed. Results of PAT and DVT are included.
NASA Astrophysics Data System (ADS)
Scarborough, David E.
Manufacturers of commercial, power-generating, gas turbine engines continue to develop combustors that produce lower emissions of nitrogen oxides (NO x) in order to meet the environmental standards of governments around the world. Lean, premixed combustion technology is one technique used to reduce NOx emissions in many current power and energy generating systems. However, lean, premixed combustors are susceptible to thermo-acoustic oscillations, which are pressure and heat-release fluctuations that occur because of a coupling between the combustion process and the natural acoustic modes of the system. These pressure oscillations lead to premature failure of system components, resulting in very costly maintenance and downtime. Therefore, a great deal of work has gone into developing methods to prevent or eliminate these combustion instabilities. This dissertation presents the results of a theoretical and experimental investigation of a novel Fuel System Tuner (FST) used to damp detrimental combustion oscillations in a gas turbine combustor by changing the fuel supply system impedance, which controls the amplitude and phase of the fuel flowrate. When the FST is properly tuned, the heat release oscillations resulting from the fuel-air ratio oscillations damp, rather than drive, the combustor acoustic pressure oscillations. A feasibility study was conducted to prove the validity of the basic idea and to develop some basic guidelines for designing the FST. Acoustic models for the subcomponents of the FST were developed, and these models were experimentally verified using a two-microphone impedance tube. Models useful for designing, analyzing, and predicting the performance of the FST were developed and used to demonstrate the effectiveness of the FST. Experimental tests showed that the FST reduced the acoustic pressure amplitude of an unstable, model, gas-turbine combustor over a wide range of operating conditions and combustor configurations. 
Finally, combustor acoustic pressure amplitude measurements made using the model combustor were used in conjunction with model-predicted fuel system impedances to verify the developed design rules. The FST concept and design methodology presented in this dissertation can be used to design fuel system tuners for new and existing gas turbine combustors to reduce, or eliminate altogether, thermo-acoustic oscillations.
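The two-microphone impedance-tube verification mentioned above reduces to a standard transfer-function formula. A minimal sketch, assuming the textbook plane-wave form with illustrative geometry (not the dissertation's actual rig dimensions):

```python
import numpy as np

def reflection_coefficient(H12, k, s, x1):
    # Two-microphone (transfer-function) estimate of the complex
    # normal-incidence reflection coefficient.
    #   H12 : measured transfer function p2/p1 between the microphones
    #   k   : acoustic wavenumber (rad/m)
    #   s   : microphone spacing (m)
    #   x1  : distance from the sample to the microphone farther from it (m)
    return ((H12 - np.exp(-1j * k * s))
            / (np.exp(1j * k * s) - H12)
            * np.exp(2j * k * x1))
```

With the reflection coefficient in hand, the sample impedance follows from Z/Z0 = (1 + R)/(1 − R), which is how subcomponent acoustic models like those of the FST are checked against measurement.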
Mahnke, Peter
2018-01-01
A commercial software defined radio based on a Rafael Micro R820T2 tuner is characterized for use as a high-frequency lock-in amplifier for frequency modulation spectroscopy. The sensitivity limit of the receiver is 1.6 nV/√Hz. Frequency modulation spectroscopy is demonstrated on the 6406.69 cm-1 absorption line of carbon monoxide.
NASA Astrophysics Data System (ADS)
Mahnke, Peter
2018-01-01
A commercial software defined radio based on a Rafael Micro R820T2 tuner is characterized for use as a high-frequency lock-in amplifier for frequency modulation spectroscopy. The sensitivity limit of the receiver is 1.6 nV/√Hz. Frequency modulation spectroscopy is demonstrated on the 6406.69 cm-1 absorption line of carbon monoxide.
Elliptical superconducting RF cavities for FRIB energy upgrade
NASA Astrophysics Data System (ADS)
Ostroumov, P. N.; Contreras, C.; Plastun, A. S.; Rathke, J.; Schultheiss, T.; Taylor, A.; Wei, J.; Xu, M.; Xu, T.; Zhao, Q.; Gonin, I. V.; Khabiboulline, T.; Pischalnikov, Y.; Yakovlev, V. P.
2018-04-01
The multi-physics design of a five cell, βG = 0.61, 644 MHz superconducting elliptical cavity being developed for an energy upgrade in the Facility for Rare Isotope Beams (FRIB) is presented. The FRIB energy upgrade from 200 MeV/u to 400 MeV/u for the heaviest uranium ions will increase the intensities of rare isotope beams by nearly an order of magnitude. After studying three different frequencies, 1288 MHz, 805 MHz, and 644 MHz, the 644 MHz cavity was shown to provide the highest energy gain per cavity for both uranium and protons. The FRIB upgrade will include 11 cryomodules containing 5 cavities each, installed in the 80 m of available space in the tunnel. The cavity development included extensive multi-physics optimization and mechanical and engineering analysis. The development of a niobium cavity is complete and two cavities are being fabricated in industry. The detailed design of the cavity sub-systems, such as the fundamental power coupler and dynamic tuner, is currently being pursued. In the overall design of the cavity and its sub-systems we extensively applied experience gained during the development of 650 MHz low-beta cavities at Fermi National Accelerator Laboratory (FNAL) for the Proton Improvement Plan (PIP) II.
Development of a 32 Inch Diameter Levitated Ducted Fan Conceptual Design
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.; Gallo, Christopher A.; Solano, Paul A.; Thompson, William K.; Vrnak, Daniel R.
2006-01-01
The NASA John H. Glenn Research Center has developed a revolutionary 32 in. diameter Levitated Ducted Fan (LDF) conceptual design. The objective of this work is to develop a viable non-contact propulsion system utilizing Halbach arrays for all-electric flight, and many other applications. This concept will help to reduce harmful emissions, reduce the Nation's dependence on fossil fuels, and mitigate many of the concerns and limitations encountered in conventional aircraft propulsors. The physical layout consists of a ducted fan drum rotor with blades attached at the outer diameter and supported by a stress tuner ring at the inner diameter. The rotor is contained within a stator. This concept exploits the unique physical dimensions and large available surface area to optimize a custom, integrated, electromagnetic system that provides both the levitation and propulsion functions. The rotor is driven by modulated electromagnetic fields between the rotor and the stator. When set in motion, the time varying magnetic fields interact with passive coils in the stator assembly to produce repulsive forces between the stator and the rotor, providing magnetic suspension. LDF can provide significant improvements in aviation efficiency, reliability, and safety, and has potential application in ultra-efficient motors, computers, and space power systems.
Design of low loss helix circuits for interference fitted and brazed circuits
NASA Technical Reports Server (NTRS)
Jacquez, A.
1983-01-01
The RF loss properties and thermal capability of brazed helix circuits and interference fitted circuits were evaluated. The objective was to produce design circuits with minimum RF loss and maximum heat transfer. These circuits were to be designed to operate at 10 kV and at 20 GHz using a γa ≈ 1.0. This represents a circuit diameter of only 0.75 millimeters. The fabrication of this size circuit and the 0.48 millimeter high support rods required considerable refinements in the assembly techniques and fixtures used on lower frequency circuits. The transition from the helices to the waveguide was designed and the circuits were matched from 20 to 40 GHz, since the helix design is a broad band circuit and at a γa of 1.0 will operate over this band. The loss measurement was a transmission measurement and therefore had two such transitions. This resulting double-ended match required tuning elements to achieve the broad band match and external E-H tuners at each end to optimize the match for each frequency where the loss measurement was made. The test method used was a substitution method where the test fixture was replaced by a calibrated attenuator.
The Use of Variable Q1 Isolation Windows Improves Selectivity in LC-SWATH-MS Acquisition.
Zhang, Ying; Bilbao, Aivett; Bruderer, Tobias; Luban, Jeremy; Strambio-De-Castillia, Caterina; Lisacek, Frédérique; Hopfgartner, Gérard; Varesio, Emmanuel
2015-10-02
As tryptic peptides and metabolites are not equally distributed along the mass range, the probability of cross fragment ion interference is higher in certain windows when fixed Q1 SWATH windows are applied. We evaluated the benefits of utilizing variable Q1 SWATH windows with regards to selectivity improvement. Variable windows based on equalizing the distribution of either the precursor ion population (PIP) or the total ion current (TIC) within each window were generated by an in-house software, swathTUNER. These two variable Q1 SWATH window strategies outperformed, with respect to quantification and identification, the basic approach using a fixed window width (FIX) for proteomic profiling of human monocyte-derived dendritic cells (MDDCs). Thus, 13.8 and 8.4% additional peptide precursors, which resulted in 13.1 and 10.0% more proteins, were confidently identified by SWATH using the strategy PIP and TIC, respectively, in the MDDC proteomic sample. On the basis of the spectral library purity score, some improvement warranted by variable Q1 windows was also observed, albeit to a lesser extent, in the metabolomic profiling of human urine. We show that the novel concept of "scheduled SWATH" proposed here, which incorporates (i) variable isolation windows and (ii) precursor retention time segmentation further improves both peptide and metabolite identifications.
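The TIC-equalizing window strategy can be sketched as a cumulative-sum quantile split over the precursor m/z range. The function below is an illustrative stand-in, not the swathTUNER implementation:

```python
import numpy as np

def variable_windows(mz, intensity, n_windows):
    # Q1 window boundaries that equalize the total ion current (TIC)
    # falling in each window, so that dense m/z regions get narrower
    # (more selective) isolation windows.
    order = np.argsort(mz)
    mz = np.asarray(mz, dtype=float)[order]
    intensity = np.asarray(intensity, dtype=float)[order]
    cum = np.cumsum(intensity)
    targets = cum[-1] * np.arange(1, n_windows) / n_windows
    idx = np.searchsorted(cum, targets)      # first point reaching each quantile
    return np.concatenate(([mz[0]], mz[idx], [mz[-1]]))
```

The precursor-ion-population (PIP) variant is the same computation with unit weights per detected precursor instead of raw intensities.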
Design and multiphysics analysis of a 176 MHz continuous-wave radio-frequency quadrupole
NASA Astrophysics Data System (ADS)
Kutsaev, S. V.; Mustapha, B.; Ostroumov, P. N.; Barcikowski, A.; Schrage, D.; Rodnizki, J.; Berkovits, D.
2014-07-01
We have developed a new design for a 176 MHz cw radio-frequency quadrupole (RFQ) for the SARAF upgrade project. At this frequency, the proposed design is a conventional four-vane structure. The main design goals are to provide the highest possible shunt impedance while limiting the required rf power to about 120 kW for reliable cw operation, and the length to about 4 meters. If built as designed, the proposed RFQ will be the first four-vane cw RFQ built as a single cavity (no resonant coupling required) that does not require π-mode stabilizing loops or dipole rods. For this, we rely on very detailed 3D simulations of all aspects of the structure and the level of machining precision achieved on the recently developed ATLAS upgrade RFQ. A full 3D model of the structure including vane modulation was developed. The design was optimized using electromagnetic and multiphysics simulations. Following the choice of the vane type and geometry, the vane undercuts were optimized to produce a flat field along the structure. The final design has good mode separation and should not need dipole rods if built as designed, but their effect was studied in the case of manufacturing errors. The tuners were also designed and optimized to tune the main mode without affecting the field flatness. Following the electromagnetic (EM) design optimization, a multiphysics engineering analysis of the structure was performed. The multiphysics analysis is a coupled electromagnetic, thermal and mechanical analysis. The cooling channels, including their paths and sizes, were optimized based on the limiting temperature and deformation requirements. The frequency sensitivity to the RFQ body and vane cooling water temperatures was carefully studied in order to use it for frequency fine-tuning. Finally, an inductive rf power coupler design based on the ATLAS RFQ coupler was developed and simulated. 
The EM design optimization was performed using CST Microwave Studio and the results were verified using both HFSS and ANSYS. The engineering analysis was performed using HFSS and ANSYS, and most of the results were verified using the newly developed CST Multiphysics package.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. In addition, an effective state-space self-tuner with fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimate obtained by the Kalman filter estimation algorithm, is utilized to achieve the parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via the fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
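The covariance-resetting idea behind the fault-tolerance scheme can be sketched with a scalar recursive least-squares estimator: when the innovation exceeds a threshold (the quantitative criterion), the covariance is reset so the estimator re-converges after an abrupt fault. The system, threshold, and reset value below are illustrative assumptions, not the paper's multivariable formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_hat, P = 0.0, 100.0                  # parameter estimate and covariance
for t in range(200):
    theta_true = 2.0 if t < 100 else -1.0  # abrupt parameter fault at t = 100
    u = rng.normal()                       # regressor (input)
    y = theta_true * u + 0.01 * rng.normal()
    innovation = y - theta_hat * u
    if abs(innovation) > 0.5:              # quantitative fault criterion
        P = 100.0                          # covariance reset: trust data again
    gain = P * u / (1.0 + P * u * u)       # recursive least-squares update
    theta_hat += gain * innovation
    P = (1.0 - gain * u) * P
```

Without the reset, P shrinks toward zero during normal operation and the estimator becomes too sluggish to track the post-fault parameter; the reset restores a large gain exactly when the innovation test flags a fault.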
A spectrally tunable calibration source using Ebert-Fastie configuration
NASA Astrophysics Data System (ADS)
Wang, Xiaoxu; Li, Zhigang
2018-03-01
A novel spectrally tunable calibration source based on a digital micromirror device (DMD) and Ebert-Fastie optical configuration with two working modes (narrow-band mode and broad-band mode) was designed. The DMD is set on the image plane of the first spectral tuner, and controls the wavelength and intensity of the light reflected into the second spectral tuner by switching the micromirror array’s condition, which in turn controls the working mode of the spectrally tunable source. When working in narrow-band mode, the spectrally tunable source can be calibrated by a Gershun tube radiant power radiometer and a spectroradiometer. In broad-band mode, it can be used to calibrate optical instruments as a standard spectral radiance source. When using a xenon lamp as a light source, the stability of the spectrally tunable source is better than 0.5%, the minimum spectral bandwidth is 7 nm, and the uncertainty of the spectral radiance of the spectrally tunable source is estimated as 14.68% at 450 nm, 1.54% at 550 nm, and 1.48% at 654.6 nm. The uncertainty of the spectral radiance of the spectrally tunable source calibrated by the Gershun tube radiometer and spectroradiometer can be kept low during the radiometric calibration procedure so that it can meet the application requirement of optical quantitative remote sensing calibration.
Robustness of reduced-order multivariable state-space self-tuning controller
NASA Technical Reports Server (NTRS)
Yuan, Zhuzhi; Chen, Zengqiang
1994-01-01
In this paper, we present a quantitative analysis of the robustness of a reduced-order pole-assignment state-space self-tuning controller for a multivariable adaptive control system whose order of the real process is higher than that of the model used in the controller design. The result of stability analysis shows that, under a specific bounded modelling error, the adaptively controlled closed-loop real system via the reduced-order state-space self-tuner is BIBO stable in the presence of unmodelled dynamics.
NASA Astrophysics Data System (ADS)
Grafen, M.; Nalpantidis, K.; Ihrig, D.; Heise, H. M.; Ostendorf, A.
2016-03-01
Mid-infrared (MIR) spectroscopy is a valuable analytical method for patient monitoring within point-of-care diagnostics. For implementation, quantum cascade lasers (QCL) appear to be most suited regarding miniaturization, complexity and eventually also costs. External cavity (EC) - QCLs offer broad tuning ranges and recently, ultra-broadly tunable systems covering spectral ranges around the mid-infrared fingerprint region became commercially available. Using such a system, transmission spectra from the wavenumber interval of 780 to 1920 cm-1, using a thermoelectrically cooled MCT-detector, were recorded while switching the aqueous glucose concentrations between 0, 50 and 100 mg/dL. In order to optimize the system performance, a multi-parameter study was carried out, varying laser pulse width, duty cycle, sweep speed and the optical sample pathlength for scoring the absorbance noise. Exploratory factor analysis with pattern recognition tools (PCA, LDA) was used for the raw data, providing more than 10 significantly contributing factors. With the glucose signal causing 20 % of the total variance, further factors include short-term drift possibly related to thermal effects, long-term drift due to varying atmospheric water vapour in the lab, as well as wavenumber shifts and drifts of the single tuners. For performance testing, the noise equivalent concentration was estimated based on cross-validated Partial-Least Squares (PLS) predictions and the a-posteriori obtained scores of the factor analysis. Based on the optimized parameters, a noise equivalent glucose concentration of 1.5 mg/dL was achieved.
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation; Airborne Radar Interferometric Repeat-Pass Processing; Plug-and-Play Environmental Monitoring Spacecraft Subsystem; Power-Combined GaN Amplifier with 2.28-W Output Power at 87 GHz; Wallops Ship Surveillance System; Source Lines Counter (SLiC) Version 4.0; Guidance, Navigation, and Control Program; Single-Frame Terrain Mapping Software for Robotic Vehicles; Auto Draw from Excel Input Files; Observation Scheduling System; CFDP for Interplanetary Overlay Network; X-Windows Widget for Image Display; Binary-Signal Recovery; Volumetric 3D Display System with Static Screen; MMIC Replacement for Gunn Diode Oscillators; Feature Acquisition with Imbalanced Training Data; Mount Protects Thin-Walled Glass or Ceramic Tubes from Large Thermal and Vibration Loads; Carbon Nanotube-Based Structural Health Monitoring Sensors; Wireless Inductive Power Device Suppresses Blade Vibrations; Safe, Advanced, Adaptable Isolation System Eliminates the Need for Critical Lifts; Anti-Rotation Device Releasable by Insertion of a Tool; A Magnetically Coupled Cryogenic Pump; Single Piezo-Actuator Rotary-Hammering Drill; Fire-Retardant Polymeric Additives; Catalytic Generation of Lift Gases for Balloons; Ionic Liquids to Replace Hydrazine; Variable Emittance Electrochromics Using Ionic Electrolytes and Low Solar Absorptance Coatings; Spacecraft Radiator Freeze Protection Using a Regenerative Heat Exchanger; Multi-Mission Power Analysis Tool; Correction for Self-Heating When Using Thermometers as Heaters in Precision Control Applications; Gravitational Wave Detection with Single-Laser Atom Interferometers; Titanium Alloy Strong Back for IXO Mirror Segments; Improved Ambient Pressure Pyroelectric Ion Source; Multi-Modal Image Registration and Matching for Localization of a Balloon on Titan; Entanglement in Quantum-Classical Hybrid; Algorithm for Autonomous Landing; Quantum-Classical Hybrid for Information Processing; Small-Scale Dissipation in Binary-Species Transitional Mixing Layers; Superpixel-Augmented Endmember Detection for Hyperspectral Images; Coding for Parallel Links to Maximize the Expected Value of Decodable Messages; and Microwave Tissue Soldering for Immediate Wound Closure.
Multi-Physics Analysis of the Fermilab Booster RF Cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awida, M.; Reid, J.; Yakovlev, V.
After about 40 years of operation, the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increasing the power dissipation in the RF cavities, their ferrite-loaded tuners, and HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.
A low-noise 410-495 GHz heterodyne two-tuner mixer using submicron Nb/Al2O3/Nb tunnel junctions
NASA Technical Reports Server (NTRS)
Delange, G.; Honingh, C. E.; Dierichs, M. M. T. M.; Panhuyzen, R. A.; Schaeffer, H. H. A.; Klapwijk, T. M.; Vandestadt, H.; Degraauw, M. W. M.
1992-01-01
A 410-495 GHz heterodyne receiver with an array of two Nb/Al2O3/Nb tunnel junctions as the mixing element is described. The noise temperature of this receiver is below 230 K (DSB) over the whole frequency range, with lowest values of 160 K in the 435-460 GHz range. The calculated DSB mixer gain over the whole frequency range varies from -11.9 ± 0.6 dB to -12.6 ± 0.6 dB, and the mixer noise is 90 ± 30 K.
Progress on the Design of a Perpendicularly Biased 2nd Harmonic Cavity for the Fermilab Booster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madrak, R. L.; Dey, J. E.; Duel, K. L.
2016-10-01
A perpendicularly biased 2nd harmonic cavity is being designed and built for the Fermilab Booster. Its purpose is to flatten the bucket at injection and thus change the longitudinal beam distribution to decrease space charge effects. It can also help at extraction. The cavity frequency range is 76-106 MHz. The power amplifier will be built using the Y567B tetrode, which is also used for the fundamental mode cavities in the Fermilab Booster. We discuss recent progress on the cavity, the biasing solenoid design, and plans for testing the tuner's garnet material.
Influence of Constraint in Parameter Space on Quantum Games
NASA Astrophysics Data System (ADS)
Zhao, Hai-Jun; Fang, Xi-Ming
2004-04-01
We study the influence of the constraint in the parameter space on quantum games. Decomposing SU(2) operator into product of three rotation operators and controlling one kind of them, we impose a constraint on the parameter space of the players' operator. We find that the constraint can provide a tuner to make the bilateral payoffs equal, so that the mismatch of the players' action at multi-equilibrium could be avoided. We also find that the game exhibits an intriguing structure as a function of the parameter of the controlled operators, which is useful for making game models.
Regulatory RNA in Mycobacterium tuberculosis, back to basics.
Schwenk, Stefan; Arnvig, Kristine B
2018-06-01
Since the turn of the millennium, RNA-based control of gene expression has added an extra dimension to the central dogma of molecular biology. Still, the roles of Mycobacterium tuberculosis regulatory RNAs and the proteins that facilitate their functions remain elusive, although there can be no doubt that RNA biology plays a central role in the bacterium's adaptation to its many host environments. In this review, we present examples from model organisms and from M. tuberculosis to showcase the abundance and versatility of regulatory RNA, in order to emphasise the importance of these 'fine-tuners' of gene expression.
NASA Astrophysics Data System (ADS)
Przygoda, K.; Piotrowski, A.; Jablonski, G.; Makowski, D.; Pozniak, T.; Napieralski, A.
2009-08-01
Pulsed operation of high gradient superconducting radio frequency (SCRF) cavities results in dynamic Lorentz force detuning (LFD) approaching or exceeding the bandwidth of the cavity, on the order of a few hundred Hz. The resulting modulation of the resonance frequency of the cavity leads to a perturbation of the amplitude and phase of the accelerating field, which can be controlled only at the expense of RF power. Presently, at various labs, a piezoelectric fast tuner based on an active compensation scheme for the resonance frequency control of the cavity is under study. Tests already performed at the Free Electron Laser in Hamburg (FLASH) proved the possibility of Lorentz force detuning compensation by means of a piezo element excited with a single period of a sine wave prior to the RF pulse. The X-Ray Free Electron Laser (X-FEL) accelerator, now under development at Deutsches Elektronen-Synchrotron (DESY), will consist of around 800 cavities, each with a fast tuner fixture including the actuator/sensor configuration. It is therefore necessary to design a distributed control system able to supervise around 25 RF stations, each one comprising 32 cavities. The Advanced Telecommunications Computing Architecture (ATCA) was chosen to design, develop, and build a Low Level Radio Frequency (LLRF) controller for X-FEL. A prototype control system for Lorentz force detuning compensation was designed and developed. The control applications applied in the system were fitted to the main framework of interfaces and communication protocols proposed for the ATCA-based LLRF control system. The paper presents a general view of the designed control system and shows the first experimental results from tests carried out at the FLASH facility. Moreover, the possibilities for integrating the piezo control system into the ATCA standards are discussed.
Electron wind in strong wave guide fields
NASA Astrophysics Data System (ADS)
Krienen, F.
1985-03-01
The X-ray activity observed near highly powered waveguide structures is usually caused by local electric discharges originating from discontinuities such as couplers, tuners or bends. In traveling waves, electrons move in the direction of the power flow. Seed electrons can multipactor in a traveling wave; the moving charge pattern differs from multipactor in a resonant structure and is self-extinguishing. The charge density in the waveguide will modify the impedance and propagation constant of the waveguide. The radiation level inside the output waveguide of the SLAC 50 MW S-band klystron is estimated. Possible contributions of radiation to window failure are discussed.
Injector Cavities Fabrication, Vertical Test Performance and Primary Cryomodule Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Haipeng; Cheng, Guangfeng; Clemens, William
2015-09-01
After the electromagnetic and mechanical design of a β=0.6, 2-cell elliptical SRF cavity, the cavity has been fabricated. Both 2-cell and 7-cell cavities have then been bench tuned to the target values of frequency, external coupling Q, and field flatness. After buffered chemical polishing (BCP) and high pressure rinsing (HPR), vertical 2 K cavity test results have satisfied the specifications, and the cavities are ready for string assembly. We report the cavity performance, including Lorentz Force Detuning (LFD) and Higher Order Mode (HOM) damping data. The integration of the cavities with their tuners into the cryomodule design is also reported.
Design study of a re-bunching RFQ for the SPES project
NASA Astrophysics Data System (ADS)
Shin, Seung Wook; Palmieri, A.; Comunian, M.; Grespan, F.; Chai, Jong Seo
2014-05-01
An upgrade to the 2nd generation of the selective production of exotic species (SPES) facility to produce a radioactive ion beam (RIB) has been studied at the Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Legnaro (INFN-LNL). Due to the long distance between the isotope separator online (ISOL) facility and the superconducting quarter wave resonator (QWR) accelerator Acceleratore Lineare Per Ioni (ALPI), a new re-buncher cavity must be introduced to maintain high beam quality during beam transport. A particular radio frequency quadrupole (RFQ) structure has been suggested to meet the requirements of this project. A window-type RFQ, which has high mode separation, lower power dissipation, and a compact size compared to the conventional 4-vane RFQ, has been introduced. The RF design has been studied considering the requirements of a re-bunching machine with high figures of merit, such as a proper operating frequency, a high shunt impedance, a high quality factor, and low power dissipation. A sensitivity analysis of the fabrication and misalignment errors has been conducted. A micro-movement slug tuner has been introduced to compensate for frequency variations that may occur due to beam loading, thermal instability, microphonic effects, etc.
RF structure design of the China Material Irradiation Facility RFQ
NASA Astrophysics Data System (ADS)
Li, Chenxing; He, Yuan; Xu, Xianbo; Zhang, Zhouli; Wang, Fengfeng; Dou, Weiping; Wang, Zhijun; Wang, Tieshan
2017-10-01
The radio frequency structure design of the radio frequency quadrupole (RFQ) for the front end of the China Material Irradiation Facility (CMIF), an accelerator-based neutron irradiation facility for fusion reactor material qualification, has been completed. The RFQ is specified to accelerate 10 mA continuous deuteron beams from 20 keV/u to 1.5 MeV/u within a vane length of 5250 mm. The working frequency of the RFQ is 162.5 MHz and the inter-vane voltage is set to 65 kV. A four-vane cavity type is selected, and the cavity structure is designed drawing on the experience of the China Initiative Accelerator Driven System (CIADS) Injector II RFQ. In order to reduce the azimuthal asymmetry of the field caused by errors in fabrication and assembly, a frequency separation of 17.66 MHz between the working mode and its nearest dipole mode is achieved by utilizing 20 pairs of π-mode stabilizing loops (PISLs) distributed at equal intervals along the longitudinal direction. For tuning purposes, 100 slug tuners were introduced to compensate for the errors caused by machining and assembly. In order to obtain a homogeneous electric field distribution along the cavity, vane cutbacks are introduced and the output endplate is modified. A multi-physics study of the cavity with radio frequency power and water cooling is performed to obtain the water temperature tuning coefficients. Comparison with CW RFQs worldwide indicates that the power density of the designed structure is moderate for operation in continuous wave (CW) mode.
Design and development of a new SRF cavity cryomodule for the ATLAS intensity upgrade
NASA Astrophysics Data System (ADS)
Kedzie, Mark; Conway, Zachary; Fuerst, Joel; Gerbick, Scott; Kelly, Michael; Morgan, James; Ostroumov, Peter; O'Toole, Michael; Shepard, Kenneth
2012-06-01
The ATLAS heavy ion linac at Argonne National Laboratory is undergoing an intensity upgrade that includes the development and implementation of a new cryomodule containing four superconducting solenoids and seven quarter-wave drift-tube-loaded superconducting rf cavities. The rf cavities extend the state of the art for this class of structure and feature ASME code stamped stainless steel liquid helium containment vessels. The cryomodule design is a further evolution of techniques recently implemented in a previous upgrade [1]. We provide a status report on the construction effort and describe the vacuum vessel, thermal shield, cold mass support and alignment, and other subsystems including couplers and tuners. Cavity mechanical design is also reviewed.
Solar power satellite 50 kW VKS-7773 cw klystron evaluation
NASA Technical Reports Server (NTRS)
Larue, A. D.
1977-01-01
A test program for evaluating the electrical characteristics of a cw, 50 kW power output klystron at 2.45 GHz is described. The tube tested was an 8-cavity klystron, the VKS-7773 which had been in storage for seven years. Tests included preliminary testing of the tube, cold tests of microwave components, tests of the electromagnet, and first and second hot tests of the tube. During the second hot test, the tuner in the fifth cavity went down to air, preventing any further testing. Cause of failure is not known, and recommendations are to repair and modify the tube, then proceed with testing as before to meet program objectives.
Adapting TESLA technology for future cw light sources using HoBiCaT
NASA Astrophysics Data System (ADS)
Kugeler, O.; Neumann, A.; Anders, W.; Knobloch, J.
2010-07-01
The HoBiCaT facility has been set up and operated at the Helmholtz-Zentrum Berlin and BESSY since 2005. Its purpose is testing superconducting cavities in cw mode of operation, and it was successfully demonstrated that TESLA pulsed technology can be used for cw operation with only minor changes. Issues addressed include the elevated dynamic thermal losses in the cavity walls, the necessary modifications in the cryogenics and the cavity processing, the optimum choice of operational parameters such as cavity temperature or bandwidth, the characterization of higher order modes in the cavity, and the usability of existing tuners and couplers for cw operation.
Zhang, Shu; Taft, Cyrus W; Bentsman, Joseph; Hussey, Aaron; Petrus, Bryan
2012-09-01
Tuning a complex multi-loop PID-based control system requires considerable experience. In today's power industry the number of available qualified tuners is dwindling, and there is a great need for better tuning tools to maintain and improve the performance of complex multivariable processes. Multi-loop PID tuning is the procedure for the online tuning of a cluster of PID controllers operating in a closed loop with a multivariable process. This paper presents the first application of the simultaneous tuning technique to a multi-input-multi-output (MIMO) PID-based nonlinear controller in the power plant control context, with the closed-loop system consisting of a MIMO nonlinear boiler/turbine model and a nonlinear cluster of six PID-type controllers. Although simplified, the dynamics and cross-coupling of the process and the PID cluster are similar to those used in a real power plant. The particular technique selected, iterative feedback tuning (IFT), utilizes the linearized version of the PID cluster for signal conditioning, but the data collection and tuning are carried out on the full nonlinear closed-loop system. Based on the figure of merit for control system performance, IFT is shown to deliver performance favorably comparable to that attained through empirical tuning carried out by an experienced control engineer.
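The gradient-based tuning idea above can be sketched on a toy loop. This is a minimal illustration, assuming a made-up first-order discrete plant and a single PI controller (the paper's MIMO boiler/turbine model and six-controller cluster are far richer), and a finite-difference gradient as a stand-in for IFT's experiment-based gradient estimate:

```python
# Sketch of closed-loop controller tuning in the spirit of IFT.
# Real IFT estimates the cost gradient from closed-loop experiments without
# a plant model; the simulated plant and finite differences below are
# illustrative assumptions only.

def closed_loop_cost(kp, ki, n=200, setpoint=1.0):
    """Sum of squared tracking errors under PI control of the assumed
    first-order plant y[k+1] = 0.9*y[k] + 0.1*u[k]."""
    y = integ = cost = 0.0
    for _ in range(n):
        e = setpoint - y
        integ += e
        u = kp * e + ki * integ          # PI control law
        y = 0.9 * y + 0.1 * u            # assumed plant dynamics
        cost += e * e
    return cost

def tune(kp, ki, iters=60, step=1e-4, eps=1e-4):
    """Finite-difference gradient descent with accept/reject step control,
    so the cost decreases monotonically."""
    cost = closed_loop_cost(kp, ki)
    for _ in range(iters):
        gkp = (closed_loop_cost(kp + eps, ki) - closed_loop_cost(kp - eps, ki)) / (2 * eps)
        gki = (closed_loop_cost(kp, ki + eps) - closed_loop_cost(kp, ki - eps)) / (2 * eps)
        trial_kp, trial_ki = kp - step * gkp, ki - step * gki
        trial_cost = closed_loop_cost(trial_kp, trial_ki)
        if trial_cost < cost:            # accept improving steps only
            kp, ki, cost = trial_kp, trial_ki, trial_cost
            step *= 1.5
        else:                            # reject and shrink the step
            step *= 0.5
    return kp, ki

kp0, ki0 = 0.5, 0.01                     # deliberately sluggish start
kp1, ki1 = tune(kp0, ki0)
assert closed_loop_cost(kp1, ki1) < closed_loop_cost(kp0, ki0)
```

The accept/reject step control mirrors the practical concern in the paper: each tuning iteration must leave the closed loop no worse than before, since the "experiments" run on the operating plant.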
33 Years of Continuous Solar Radio Flux Observations
NASA Astrophysics Data System (ADS)
Monstein, Christian
2015-10-01
In 1982, after development and testing of several analog receiver concepts, I started continuous solar radio flux observations at 230 MHz. My instruments for the observations were based on cheap commercial components out of consumer TV electronics. The main components included a TV-tuner (at that time analog), intermediate frequency (IF) amplifier and video-detector taken from used TV sets. The 5.5 MHz wide video signal was fed into an integrating circuit, in fact a low pass filter, followed by dc-offset circuit and dc-amplifier built with four ua741 and CA3140 operational amplifier integrated circuits. At that time the signal was recorded with a Heathkit stripchart recorder and ink pen; an example is shown in figure 1.
Huang, Yulu; Wang, Haipeng; Wang, Shaoheng; ...
2016-12-09
Quarter wavelength resonator (QWR) based deflecting cavities with the capability of supporting multiple odd-harmonic modes have been developed for an ultrafast periodic kicker system in the proposed Jefferson Lab Electron Ion Collider (JLEIC, formerly MEIC). Previous work on the kicking pulse synthesis and the transverse beam dynamics tracking simulations shows that a flat-top kicking pulse can be generated with minimal emittance growth during injection and circulation of the cooling electron bunches. This flat-top kicking pulse can be obtained when a DC component and 10 harmonic modes with appropriate amplitudes and phases are combined. To support 10 such harmonic modes, four QWR cavities are used with 5, 3, 1, and 1 modes, respectively. In the multiple-mode cavities, several slightly tapered segments of the inner conductor are introduced to tune the higher order deflecting modes to be harmonic, and stub tuners are used to fine tune each frequency to compensate for potential errors. In this paper, we summarize the electromagnetic design of the five-mode cavity, including the geometry optimization to obtain high transverse shunt impedance, the frequency tuning and sensitivity analysis, and the single loop coupler design for coupling to all of the harmonic modes. In particular we report on the design and fabrication of a half-scale copper prototype of this proof-of-principle five-odd-mode cavity, as well as the rf bench measurements. Lastly, we demonstrate mode superposition in this cavity experimentally, which illustrates the kicking pulse generation concept.
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Wang, J
2016-06-15
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
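The selection step can be illustrated with a toy Pareto computation. This sketch assumes simple dominance on (sensitivity, specificity) pairs and substitutes a plain weighted-sum utility for SMOLER's evidential-reasoning utility, which is considerably more elaborate:

```python
def pareto_front(solutions):
    """Non-dominated (sensitivity, specificity) pairs: a solution is kept
    unless some other solution is at least as good in both objectives."""
    front = []
    for s in solutions:
        dominated = any(
            o[0] >= s[0] and o[1] >= s[1] and o != s for o in solutions
        )
        if not dominated:
            front.append(s)
    return front

def select_optimal(front, w_sens=0.5, w_spec=0.5):
    """Stand-in utility: weighted sum of the two objectives
    (SMOLER instead computes a rule-based evidential-reasoning utility)."""
    return max(front, key=lambda s: w_sens * s[0] + w_spec * s[1])

# toy (sensitivity, specificity) solutions from a hypothetical optimization
solutions = [(0.90, 0.60), (0.80, 0.80), (0.60, 0.90), (0.70, 0.70), (0.50, 0.50)]
front = pareto_front(solutions)     # drops the two dominated points
best = select_optimal(front)        # balanced weights pick (0.80, 0.80)
```

Changing the weights shifts the chosen trade-off along the front, which is exactly the role the pre-set selection rules play in the paper.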
2016-09-01
Public Sector Research & Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization, by Jason A. Schwartz. The report describes how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost
High power klystrons for efficient reliable high power amplifiers
NASA Astrophysics Data System (ADS)
Levin, M.
1980-11-01
This report covers the design of reliable high efficiency, high power klystrons which may be used in both existing and proposed troposcatter radio systems. High Power (10 kW) klystron designs were generated in C-band (4.4 GHz to 5.0 GHz), S-band (2.5 GHz to 2.7 GHz), and L-band or UHF frequencies (755 MHz to 985 MHz). The tubes were designed for power supply compatibility and use with a vapor/liquid phase heat exchanger. Four (4) S-band tubes were developed in the course of this program along with two (2) matching focusing solenoids and two (2) heat exchangers. These tubes use five (5) tuners with counters which are attached to the focusing solenoids. A reliability mathematical model of the tube and heat exchanger system was also generated.
NASA Astrophysics Data System (ADS)
Dvorak, Steven L.; Sternberg, Ben K.; Feng, Wanjie
2017-03-01
In this paper we discuss the design and verification of wide-band, multi-frequency, tuning circuits for large-moment Transmitter (TX) loops. Since these multi-frequency, tuned-TX loops allow for the simultaneous transmission of multiple frequencies at high-current levels, they are ideally suited for frequency-domain geophysical systems that collect data while moving, such as helicopter mounted systems. Furthermore, since multi-frequency tuners use the same TX loop for all frequencies, instead of using separate tuned-TX loops for each frequency, they allow for the use of larger moment TX loops. In this paper we discuss the design and simulation of one- and three-frequency tuned TX loops and then present measurement results for a three-frequency, tuned-TX loop.
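Tuning such a TX loop reduces to series-resonating the loop inductance at each transmit frequency. A back-of-the-envelope sketch with illustrative values (the loop inductance and frequency set below are assumptions, not the paper's design parameters):

```python
import math

def tuning_capacitance(loop_inductance_h, freq_hz):
    """Capacitance that resonates a TX loop of the given inductance at
    freq_hz, from f = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / ((2*pi*f)^2 * L)."""
    return 1.0 / ((2 * math.pi * freq_hz) ** 2 * loop_inductance_h)

L = 1e-3                                   # 1 mH loop (assumed, illustrative)
for f in (1e3, 3e3, 10e3):                 # three transmit frequencies (assumed)
    c = tuning_capacitance(L, f)
    # sanity check: plugging C back in recovers the target frequency
    f_check = 1.0 / (2 * math.pi * math.sqrt(L * c))
    assert abs(f_check - f) / f < 1e-9
```

A multi-frequency tuner must present such a resonant branch for each frequency simultaneously on one loop, which is what lets a single large-moment loop replace several separately tuned ones.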
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madrak, R. L.; Pellico, W. A.; Romanov, G.
2016-01-01
A perpendicularly biased 2nd harmonic cavity is being designed and built for the Fermilab Booster, to help with injection and extraction. Tunable accelerating cavities were previously designed and prototyped at LANL, TRIUMF, and SSCL for use at 45-60 MHz (LANL at 50-84 MHz). The required frequency range for FNAL is 76-106 MHz. The garnet material chosen for the tuner is AL-800. To reliably model the cavity, its static permeability and loss tangent must be well known. As this information is not supplied by the vendor or in publications of previous studies, a first order evaluation of these properties was made using material samples. This paper summarizes the results of the corresponding measurements.
Faraday anomalous dispersion optical tuners
NASA Technical Reports Server (NTRS)
Wanninger, P.; Valdez, E. C.; Shay, T. M.
1992-01-01
Common methods for frequency stabilizing diode laser systems employ gratings, etalons, optical electric double feedback, atomic resonance, and a Faraday cell with a low magnetic field. Our method, the Faraday Anomalous Dispersion Optical Transmitter (FADOT) laser locking, is much simpler than other schemes. The FADOT uses commercial laser diodes with no antireflection coatings, an atomic Faraday cell with a single polarizer, and an output coupler to form a compound cavity. This method is vibration insensitive, thermal expansion effects are minimal, and the system has a frequency pull-in range of 443.2 GHz (9 Å). Our technique is based on the Faraday anomalous dispersion optical filter. This method has potential applications in optical communication, remote sensing, and pumping laser excited optical filters. We present the first theoretical model for the FADOT and compare the calculations to our experimental results.
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
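One generic flavor of such subset selection is maximin ("coverage") selection: greedily pick simulations that stay as spread out as possible in climate-variable space. A sketch with made-up (ΔT, ΔP) change signals; the paper's two selection methods are not specified here and may well differ from this one:

```python
import math

def maximin_subset(points, k):
    """Greedy maximin selection (k >= 2): seed with the two most distant
    simulations, then repeatedly add the point whose minimum distance to
    the current selection is largest."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    seed = max(
        ((a, b) for a in points for b in points if a != b),
        key=lambda pair: dist(*pair),
    )
    chosen = list(seed)
    while len(chosen) < k:
        nxt = max(
            (p for p in points if p not in chosen),
            key=lambda p: min(dist(p, c) for c in chosen),
        )
        chosen.append(nxt)
    return chosen

# toy 8-member ensemble: (delta_T in K, delta_P in %) change signals (assumed)
ensemble = [(1.0, -2.0), (1.5, 0.0), (2.0, 3.0), (2.5, -1.0),
            (3.0, 5.0), (3.5, 1.0), (4.0, -3.0), (4.5, 4.0)]
subset = maximin_subset(ensemble, 4)
assert len(subset) == 4
assert all(p in ensemble for p in subset)
```

The study's caveat applies directly to this sketch: a subset chosen for spread in (ΔT, ΔP) space need not preserve the spread of a nonlinear hydrological impact computed from those simulations.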
Roychowdhury, P; Mishra, L; Kewlani, H; Patil, D S; Mittal, K C
2014-03-01
A high current electron cyclotron resonance proton ion source has been designed and developed for the low energy high intensity proton accelerator at Bhabha Atomic Research Centre. The plasma discharge in the ion source is stabilized by minimizing the reflected microwave power using a four-stub auto tuner and the magnetic field. The optimization of the extraction geometry is performed using the PBGUNS code by varying the aperture, shape, accelerating gap, and the potential on the electrodes. While operating the source, it was found that the two-layered microwave window (6 mm quartz plate and 2 mm boron nitride plate) was damaged (a fine hole was drilled) by back-streaming electrons after continuous operation of the source for 3 h at a beam current of 20-40 mA. The microwave window was then shifted out of the line of sight of the back-streaming electrons and located after the water-cooled H-plane bend. In this configuration, stable operation of the high current ion source for several hours is achieved. The ion beam is extracted from the source by biasing the plasma electrode, puller electrode, and ground electrode to +10 to +50 kV, -2 to -4 kV, and 0 kV, respectively. A total ion beam current of 30-40 mA is recorded on a Faraday cup at 40 keV beam energy, 600-1000 W of microwave power, 800-1000 G axial magnetic field, and (1.2-3.9) × 10^-3 mbar of neutral hydrogen gas pressure in the plasma chamber. The dependence of beam current on extraction voltage, microwave power, and gas pressure is investigated in the range of operation of the ion source.
Royter, Marina; Schmidt, M; Elend, C; Höbenreich, H; Schäfer, T; Bornscheuer, U T; Antranikian, G
2009-09-01
Two novel genes encoding heat- and solvent-stable lipases from the strictly anaerobic extreme thermophilic bacteria Thermoanaerobacter thermohydrosulfuricus (LipTth) and Caldanaerobacter subterraneus subsp. tengcongensis (LipCst) were successfully cloned and expressed in E. coli. Recombinant proteins were purified to homogeneity by heat precipitation, hydrophobic interaction, and gel filtration chromatography. Unlike the enzymes from mesophilic counterparts, enzymatic activity was measured over a broad temperature and pH range, between 40 and 90 degrees C and between pH 6.5 and 10; the half-life of the enzymes at 75 degrees C and pH 8.0 was 48 h. Inhibition was observed with 4-(2-aminoethyl)-benzenesulfonyl fluoride hydrochloride and phenylmethylsulfonyl fluoride, indicating that serine and thiol groups play a role in the active site of the enzymes. Gene sequence comparisons indicated very low identity to already described lipases from mesophilic and psychrophilic microorganisms. By optimal cultivation of E. coli Tuner (DE3) cells in 2-l bioreactors, a massive production of the recombinant lipases was achieved (53-2200 U/l). Unlike known lipases, the purified robust proteins are resistant to a large number of organic solvents (up to 99%) and detergents, and show activity toward a broad range of substrates, including triacylglycerols, monoacylglycerols, esters of secondary alcohols, and p-nitrophenyl esters. Furthermore, the enzyme from T. thermohydrosulfuricus is suitable for the production of optically pure compounds since it is highly S-stereoselective toward esters of secondary alcohols. The observed E values for but-3-yn-2-ol butyrate and but-3-yn-2-ol acetate of 21 and 16, respectively, make these enzymes ideal candidates for the kinetic resolution of synthetically useful compounds.
Selective robust optimization: A new intensity-modulated proton therapy optimization strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yupeng; Niemela, Perttu; Siljamaki, Sami
2015-08-15
Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved control over the isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
A 100 MV cryomodule for CW operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles Reece
2005-07-10
A cryomodule designed for high-gradient CW operation has been built at Jefferson Lab. The Renascence cryomodule is the final prototype of a design for use in the 12 GeV CEBAF upgrade. The module uses eight 7-cell 1497 MHz cavities to be individually powered by 13 kW klystrons. Specifications call for providing >109 MV CW with <250 W of dynamic heat at 2.07 K. The module incorporates a new generation of tuners and higher power input waveguides. A mixture of the new HG and LL cavity shapes is used. A new high thermal conductivity RF feedthrough has been developed and used on the 32 HOM coupler probes of Renascence. The cryomodule assembly is complete. Testing is to begin in late June. Design features and initial test data will be presented.
2013-01-01
The genetic trends in fitness (inbreeding, fertility and survival) of a closed nucleus flock of Menz sheep under selection during ten years for increased body weight were investigated to evaluate the consequences of selection for body weight on fitness. A mate selection tool was used to optimize in retrospect the actual selection and matings conducted over the project period to assess if the observed genetic gains in body weight could have been achieved with a reduced level of inbreeding. In the actual selection, the genetic trends for yearling weight, fertility of ewes and survival of lambs were 0.81 kg, –0.00026% and 0.016% per generation. The average inbreeding coefficient remained zero for the first few generations and then tended to increase over generations. The genetic gains achieved with the optimized retrospective selection and matings were highly comparable with the observed values, the correlation between the average breeding values of lambs born from the actual and optimized matings over the years being 0.99. However, the level of inbreeding with the optimized mate selections remained zero until late in the years of selection. Our results suggest that an optimal selection strategy that considers both genetic merits and coancestry of mates should be adopted to sustain the Menz sheep breeding program. PMID:23783076
Chen, Kai; Lynen, Frédéric; De Beer, Maarten; Hitzel, Laure; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat
2010-11-12
Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation by using a combination of different stationary phases. Previous work has shown that SOSLC offers excellent possibilities for method development, especially after the recent modification towards linear gradient SOSLC. The present work is aimed at developing and extending the SOSLC approach towards selectivity optimization and method development for green chromatography. Contrary to current LC practices, a green mobile phase (water/ethanol/formic acid) is hereby preselected and the composition of the stationary phase is optimized under a given gradient profile to obtain baseline resolution of all target solutes in the shortest possible analysis time. With the algorithm adapted to the high viscosity property of ethanol, the principle is illustrated with a fast, full baseline resolution for a randomly selected mixture composed of sulphonamides, xanthine alkaloids and steroids. Copyright © 2010 Elsevier B.V. All rights reserved.
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
NASA Astrophysics Data System (ADS)
Khehra, Baljit Singh; Pharwaha, Amar Partap Singh
2017-04-01
Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden that many features impose on it. Selecting an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem: for n features, the total number of possible subsets is 2^n. Thus, the selection of an optimal subset of features belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO), and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from the benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
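The wrapper-style search described above can be sketched with a toy genetic algorithm. This is a minimal illustration under stated assumptions, not the paper's setup: the fitness function here is a synthetic stand-in for the SVM classification rate on MCC samples, and the feature count, operators, and parameters are all invented for brevity.

```python
import random

random.seed(0)

N_FEATURES = 10
# Hypothetical "informative" features; in the paper the fitness is the
# correct classification rate of an SVM -- here a synthetic stand-in
# keeps the sketch self-contained.
INFORMATIVE = {1, 3, 4, 7}

def fitness(subset):
    """Reward subsets covering the informative features with few extras."""
    if not subset:
        return 0.0
    hits = len(subset & INFORMATIVE)
    return hits / len(INFORMATIVE) - 0.02 * len(subset)

def random_subset():
    return {i for i in range(N_FEATURES) if random.random() < 0.5}

def crossover(a, b):
    # Uniform crossover: inherit each feature bit from either parent.
    return {i for i in range(N_FEATURES)
            if (i in a if random.random() < 0.5 else i in b)}

def mutate(s, rate=0.1):
    # Flip each feature bit independently with probability `rate`.
    return {i for i in range(N_FEATURES) if (i in s) ^ (random.random() < rate)}

def ga(pop_size=30, generations=40):
    pop = [random_subset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]  # truncation selection keeps the top half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(sorted(best))
```

Because the elite are carried over unchanged, the best fitness in the population never decreases across generations; PSO or BBO would replace only the search loop, not the wrapper structure.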
Vibration Method for Tracking the Resonant Mode and Impedance of a Microwave Cavity
NASA Technical Reports Server (NTRS)
Barmatz, M.; Iny, O.; Yiin, T.; Khan, I.
1995-01-01
A vibration technique has been developed to continuously maintain mode resonance and impedance match between a constant-frequency magnetron source and a resonant cavity. This method uses a vibrating metal rod to modulate the volume of the cavity in a manner equivalent to modulating an adjustable plunger. A similar vibrating metal rod attached to a stub tuner modulates the waveguide volume between the source and cavity. A phase-sensitive detection scheme determines the optimum position of the adjustable plunger and stub tuner during processing. The improved power transfer during the heating of a 99.8% pure alumina rod was demonstrated using this new technique. Temperature-time and reflected power-time heating curves are presented for the cases of no tracking, impedance tracking only, mode tracking only, and simultaneous impedance and mode tracking. Controlled internal melting of an alumina rod near 2000 °C using both tracking units was also demonstrated.
Improved repetition rate mixed isotope CO2 TEA laser
NASA Astrophysics Data System (ADS)
Cohn, D. B.
2014-09-01
A compact CO2 TEA laser has been developed for remote chemical detection that operates at a repetition rate of 250 Hz. It emits 700 mJ/pulse at 10.6 μm in a multimode beam with the 12C16O2 isotope. With mixed 12C16O2 plus 13C16O2 isotopes it emits multiple lines in both isotope manifolds to improve detection of a broad range of chemicals. In particular, output pulse energies are 110 mJ/pulse at 9.77 μm, 250 mJ/pulse at 10 μm, and 550 mJ/pulse at 11.15 μm, useful for detection of the chemical agents Sarin, Tabun, and VX. Related work shows capability for long term sealed operation with a catalyst and an agile tuner at a wavelength shift rate of 200 Hz.
Vitamin D-Regulated MicroRNAs: Are They Protective Factors against Dengue Virus Infection?
Arboleda, John F.; Urcuqui-Inchima, Silvio
2016-01-01
Over the last few years, an increasing body of evidence has highlighted the critical participation of vitamin D in the regulation of proinflammatory responses and protection against many infectious pathogens, including viruses. The activity of vitamin D is associated with microRNAs, which are fine tuners of immune activation pathways and provide novel mechanisms to avoid the damage that arises from excessive inflammatory responses. Severe symptoms of an ongoing dengue virus infection and disease are strongly related to highly altered production of proinflammatory mediators, suggesting impairment in homeostatic mechanisms that control the host's immune response. Here, we discuss the possible implications of emerging studies anticipating the biological effects of vitamin D and microRNAs during the inflammatory response, and we attempt to extrapolate these findings to dengue virus infection and to their potential use for disease management strategies. PMID:27293435
Tunable electromagnetically induced transparency in integrated silicon photonics circuit.
Li, Ang; Bogaerts, Wim
2017-12-11
We comprehensively simulate and experimentally demonstrate a novel approach to generate tunable electromagnetically induced transparency (EIT) in a fully integrated silicon photonics circuit. It can also generate tunable fast and slow light. The circuit is a single ring resonator with two integrated tunable reflectors inside, which form an embedded Fabry-Perot (FP) cavity inside the ring cavity. The mode of the FP cavity can be controlled by tuning the reflections using integrated thermo-optic tuners. Under correct tuning conditions, the interaction of the FP mode and the ring resonance mode will generate a Fano resonance and an EIT response. The extinction ratio and bandwidth of the EIT can be tuned by controlling the reflectors. Measured group delay proves that both fast light and slow light can be generated under different tuning conditions. A maximum group delay of 1100 ps is observed because of EIT. Pulse advance around 1200 ps is also demonstrated.
RNA- and protein-mediated control of Listeria monocytogenes virulence gene expression
Lebreton, Alice; Cossart, Pascale
2017-01-01
ABSTRACT The model opportunistic pathogen Listeria monocytogenes has been the object of extensive research, aiming at understanding its ability to colonize diverse environmental niches and animal hosts. Bacterial transcriptomes in various conditions reflect this efficient adaptability. We review here our current knowledge of the mechanisms allowing L. monocytogenes to respond to environmental changes and trigger pathogenicity, with a special focus on RNA-mediated control of gene expression. We highlight how these studies have brought novel concepts in prokaryotic gene regulation, such as the ‘excludon’ where the 5′-UTR of a messenger also acts as an antisense regulator of an operon transcribed in opposite orientation, or the notion that riboswitches can regulate non-coding RNAs to integrate complex metabolic stimuli into regulatory networks. Overall, the Listeria model exemplifies that fine RNA tuners act together with master regulatory proteins to orchestrate appropriate transcriptional programmes. PMID:27217337
Electrode structure of a compact microwave driven capacitively coupled atomic beam source
NASA Astrophysics Data System (ADS)
Shimabukuro, Yuji; Takahashi, Hidenori; Wada, Motoi
2018-01-01
A compact magnetic-field-free atomic beam source was designed, assembled, and tested for its performance in producing hydrogen and nitrogen atoms. A forced-air-cooled solid-state microwave power supply at 2.45 GHz drives the source up to 100 W through a coaxial transmission cable coupled to a triple-stub tuner, realizing a proper matching condition to the discharge load. The discharge structure of the source affected the range of operating pressure, and the pressure was reduced by four orders of magnitude by improving the electrode geometry to enhance the local electric field intensity. Optical emission spectra of the produced plasmas indicate production of hydrogen and nitrogen atoms, while the flux intensity of excited nitrogen atoms monitored by a surface-ionization-type detector showed a signal level close to that of a source developed for molecular beam epitaxy applications with 500 W of RF power.
2001-05-01
This photograph shows Wes Brown, Marshall Space Flight Center's (MSFC's) lead diamond turner, an expert in the science of using diamond-tipped tools to cut metal, inspecting the mold's physical characteristics to ensure the uniformity of its more than 6,000 grooves. This king-size copper disk, manufactured at the Space Optics Manufacturing and Technology Center (SOMTC) at MSFC, is a special mold for making high-resolution monitor screens. This master mold will be used to make several other molds, each capable of forming hundreds of screens that have a type of lens called a Fresnel lens. Weighing much less than conventional optics, Fresnel lenses have multiple concentric grooves, each formed to a precise angle, that together create the curvature needed to focus and project images. MSFC leads NASA's space optics manufacturing technology development as a technology leader for diamond turning. The machine used to manufacture this mold is among many one-of-a-kind pieces of equipment at MSFC's SOMTC.
Park, Han-Saem; Ko, Seo-Jin; Park, Jeong-Seok; Kim, Jin Young; Song, Hyun-Kon
2013-01-01
Electric conductivity of conducting polymers has been steadily enhanced towards a level worthy of being called its alias, “synthetic metal”. PEDOT:PSS (poly(3,4-ethylenedioxythiophene) doped with poly(styrene sulfonate)), a representative conducting polymer, recently reached around 3,000 S cm−1, a value that opens the possibility of replacing transparent conductive oxides. The leading strategy driving the conductivity increase is solvent annealing, in which an aqueous solution of PEDOT:PSS is treated with an assistant solvent such as DMSO (dimethyl sulfoxide). In addition to the conductivity enhancement, we found that the potential range in which PEDOT:PSS is conductive is widened toward negative potentials by the DMSO annealing. Also, an increase in the redox-active fraction of charge carriers is proposed to be responsible for the enhancement of conductivity in the solvent annealing process. PMID:23949091
Profiling charge complementarity and selectivity for binding at the protein surface.
Sulea, Traian; Purisima, Enrico O
2003-05-01
A novel analysis and representation of the protein surface in terms of electrostatic binding complementarity and selectivity is presented. The charge optimization methodology is applied in a probe-based approach that simulates the binding process to the target protein. The molecular surface is color coded according to calculated optimal charge or according to charge selectivity, i.e., the binding cost of deviating from the optimal charge. The optimal charge profile depends on both the protein shape and charge distribution whereas the charge selectivity profile depends only on protein shape. High selectivity is concentrated in well-shaped concave pockets, whereas solvent-exposed convex regions are not charge selective. This suggests the synergy of charge and shape selectivity hot spots toward molecular selection and recognition, as well as the asymmetry of charge selectivity at the binding interface of biomolecular systems. The charge complementarity and selectivity profiles map relevant electrostatic properties in a readily interpretable way and encode information that is quite different from that visualized in the standard electrostatic potential map of unbound proteins.
Self-extinction through optimizing selection.
Parvinen, Kalle; Dieckmann, Ulf
2013-09-21
Evolutionary suicide is a process in which selection drives a viable population to extinction. So far, such selection-driven self-extinction has been demonstrated in models with frequency-dependent selection. This is not surprising, since frequency-dependent selection can disconnect individual-level and population-level interests through environmental feedback. Hence it can lead to situations akin to the tragedy of the commons, with adaptations that serve the selfish interests of individuals ultimately ruining a population. For frequency-dependent selection to play such a role, it must not be optimizing. Together, all published studies of evolutionary suicide have created the impression that evolutionary suicide is not possible with optimizing selection. Here we disprove this misconception by presenting and analyzing an example in which optimizing selection causes self-extinction. We then take this line of argument one step further by showing, in a further example, that selection-driven self-extinction can occur even under frequency-independent selection. Copyright © 2013 Elsevier Ltd. All rights reserved.
Optimal Item Selection with Credentialing Examinations.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, including the largest gap method, and exhaustive brute force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of PEM fuel cell can be predicted with good quality.
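The exhaustive brute-force search mentioned above can be illustrated on a toy sensitivity matrix. All the numbers below are assumptions for illustration: the paper's matrix comes from a PEM fuel cell model, and its selection also weighs noise resistance, which this sketch omits. Scoring each sensor pair by the determinant magnitude of its sensitivity submatrix is one simple criterion that favors sensors observing the health parameters in independent directions.

```python
from itertools import combinations

# Hypothetical 4-sensor, 2-parameter sensitivity matrix S
# (rows: sensors, columns: health parameters). Illustrative values only.
S = [
    [0.9, 0.1],    # sensor 0: mostly sees parameter 1
    [0.8, 0.2],    # sensor 1
    [0.1, 0.9],    # sensor 2: mostly sees parameter 2
    [0.85, 0.15],  # sensor 3
]

def det2(rows):
    """Determinant of a 2x2 matrix given as two row lists."""
    (a, b), (c, d) = rows
    return a * d - b * c

def best_pair(S):
    """Exhaustively score every 2-sensor subset; a larger |det| means the
    two sensors respond to the parameters in more independent directions."""
    return max(combinations(range(len(S)), 2),
               key=lambda idx: abs(det2([S[i] for i in idx])))

print(best_pair(S))  # → (0, 2)
```

As expected, the search pairs the most parameter-1-sensitive sensor with the most parameter-2-sensitive one; sensors 0, 1, and 3 are nearly collinear and any pair among them scores poorly.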
To Eat or Not to Eat: An Easy Simulation of Optimal Diet Selection in the Classroom
ERIC Educational Resources Information Center
Ray, Darrell L.
2010-01-01
Optimal diet selection, a component of optimal foraging theory, suggests that animals should select a diet that either maximizes energy or nutrient consumption per unit time or minimizes the foraging time needed to attain required energy or nutrients. In this exercise, students simulate the behavior of foragers that either show no foraging…
Method of generating features optimal to a dataset and classifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.
A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.
Optimization of Helium Vessel Design for ILC Cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fratangelo, Enrico
2009-01-01
The ILC (International Linear Collider) is a proposed new major particle accelerator. It consists of two 20 km long linear accelerators colliding electrons and positrons at an energy exceeding 500 GeV. Achieving this collision energy while keeping reasonable accelerator dimensions requires the use of high-electric-field superconducting cavities as the main acceleration element. These cavities are operated at 1.3 GHz inside an appropriate container (He vessel) at temperatures as low as 1.4 K using superfluid helium as the refrigerating medium. The purpose of this thesis, in the context of the ILC R&D activities currently in progress at Fermilab (Fermi National Accelerator Laboratory), is the mechanical study of an ILC superconducting cavity and helium vessel prototype. The main goals of these studies are the determination of the limiting working conditions of the whole He vessel assembly, the simulation of the manufacturing process of the cavity end-caps, and the assessment of the helium vessel's efficiency. In addition, this thesis studies the requirements to certify the compliance of the whole cavity/vessel assembly with the ASME Code. Several finite element analyses were performed by the candidate in order to carry out the studies listed above, described in detail in Chapters 4 through 8. In particular, the candidate has developed an improved procedure to obtain more accurate results with lower computational times. These procedures are described in the following chapters. After an introduction that briefly describes Fermilab and in particular the Technical Division (where all the activities concerned with this thesis were developed), the first part of this thesis (Chapters 2 and 3) explains some of the main aspects of modern particle accelerators. Moreover, it describes the most important particle accelerators operating at the moment and the basic features of the ILC project.
Chapter 4 describes all the activities undertaken to certify the compliance of the helium vessel and the cavity with the ASME Code standard. After briefly recalling the main contents of the ASME Code (Sections II and VIII, Division 2), the procedure used for finding all relevant stresses and comparing the obtained results with the maximum allowed values is explained. This part also includes the buckling verification of the cavity. In Chapter 5, the manufacturing process of the cavity end-caps, whose function is to link the helium vessel with the cavity, is studied. The present configuration of the dies is described and the manufacturing process is simulated in order to explain the origin of some defects found on real parts. Finally, a new design of the dies is proposed and the resulting deformed piece is compared with the design requirements. Chapter 6 describes a finite element analysis to assess the efficiency and the stiffness of the helium vessel. Furthermore, the results of the optimization of the helium vessel (in order to increase its efficiency) are reported. The same stiffness analysis is used in Chapter 7 for the blade-tuner study. After a description of this tuner and of its function, the preliminary analyses done to confirm the results provided by the vendor are described, and then its limiting load conditions are found. Chapter 8 presents a study of the resistance of all the welds between the cavity and the end-caps and between the end-caps and the He vessel for a smaller superconducting cavity operating at 3.9 GHz. Finally, Chapter 9 briefly describes some R&D activities in progress at INFN (Section of Pisa) and Fermilab that could produce significant cost reductions in the helium vessel design. All the finite element analyses contained and described in this thesis made possible the certification of the whole superconducting cavity-helium vessel assembly at Fermilab.
Furthermore, they gave several useful indications to the Fermilab staff to improve the performance of the helium vessel by modifying some design parameters or refining the manufacturing processes.
Baethge, Anja; Müller, Andreas; Rigotti, Thomas
2016-03-01
The aim of this study was to investigate whether selective optimization with compensation constitutes an individualized action strategy for nurses wanting to maintain job performance under high workload. High workload is a major threat to healthcare quality and performance. Selective optimization with compensation is considered to enhance the efficient use of intra-individual resources and, therefore, is expected to act as a buffer against the negative effects of high workload. The study applied a diary design. Over five consecutive workday shifts, self-report data on workload was collected at three randomized occasions during each shift. Self-reported job performance was assessed in the evening. Self-reported selective optimization with compensation was assessed prior to the diary reporting. Data were collected in 2010. Overall, 136 nurses from 10 German hospitals participated. Selective optimization with compensation was assessed with a nine-item scale that was specifically developed for nursing. The NASA-TLX scale indicating the pace of task accomplishment was used to measure workload. Job performance was assessed with one item each concerning performance quality and forgetting of intentions. There was a weaker negative association between workload and both indicators of job performance in nurses with a high level of selective optimization with compensation, compared with nurses with a low level. Considering the separate strategies, selection and compensation turned out to be effective. The use of selective optimization with compensation is conducive to nurses' job performance under high workload levels. This finding is in line with calls to empower nurses' individual decision-making. © 2015 John Wiley & Sons Ltd.
Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin
2015-10-21
For sensorimotor rhythm based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work investigates optimal channel selection in MI task paradigms with real-time feedback (two-class control and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI task experiment and from two-class control and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen was constructed based on the Relief algorithm but was enhanced in two respects: a change in the target sample selection strategy and the adoption of iterative computation, and thus performed more robustly in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered the optimal channel set. One-way ANOVA was employed to test the significance of the performance improvement among using the optimal channels, all the channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2 % for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection.
In addition, the results have shown that the numbers of optimal channels in the three different motor imagery BCI paradigms are distinct. From a MI task paradigm, to a two-class control paradigm, and to a four-class control paradigm, the number of required channels for optimizing the classification accuracy increased. These findings may provide useful information to optimize EEG based BCI systems, and further improve the performance of noninvasive BCI.
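Since IterRelCen builds on the Relief algorithm, a compact sketch of the base Relief weighting may clarify the idea. The two-feature toy data and iteration count below are invented for illustration, and the paper's enhancements (iterative computation and the changed target sample selection strategy) are deliberately not reproduced.

```python
import random

random.seed(1)

def relief(X, y, n_iter=100):
    """Classic Relief: weight each feature by how well it separates a sample
    from its nearest miss (other class) versus its nearest hit (same class)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(n_iter):
        i = random.randrange(n)
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))    # nearest hit
        m = min(misses, key=lambda j: dist(X[i], X[j]))  # nearest miss
        for f in range(d):
            w[f] += (abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])) / n_iter
    return w

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, random.random()] for _ in range(20)] + \
    [[1.0, random.random()] for _ in range(20)]
y = [0] * 20 + [1] * 20
w = relief(X, y)
print(w[0] > w[1])  # the discriminative "channel" gets the larger weight
```

Ranking channels by these weights and then growing the channel set until accuracy stops improving is the general filter-then-evaluate pattern the abstract describes.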
Optimel: Software for selecting the optimal method
NASA Astrophysics Data System (ADS)
Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina
Optimel automates the process of selecting a solution method from the domain of optimization methods. Its practical novelty is that it saves time and money during exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. Its theoretical novelty lies in the new method of knowledge structuring used to build the domain. The Optimel domain covers an extended set of methods and their properties, which makes it possible to identify the level of scientific study, raise the user's expertise, broaden the user's prospects, and open up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.
Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander
2016-01-01
Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.
NASA Astrophysics Data System (ADS)
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
2016-06-01
With the increased trend toward automation in the modern manufacturing industry, human intervention in routine, repetitive, and data-specific manufacturing activities is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of an optimal cutting tool and process parameters for metal cutting applications, using artificial intelligence techniques. Generally, the selection of an appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert, based on their knowledge or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic, and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks Matlab version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for selecting an appropriate cutting tool and optimizing process parameters based on multi-objective optimization criteria, considering material removal rate, tool life, and tool cost.
Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon
2014-01-01
We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as a binary integer program, which provides an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces overall network-wide energy consumption for data collection. In a network where stationary sensor nodes report to a stationary sink, the optimized data collection shows 47% energy savings compared to the state-of-the-art Collection Tree Protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
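The two coupled decisions described above (which nodes compress, and with which algorithm) form a small binary program. The sketch below solves a toy instance exhaustively, treating "no compression" as one of the options; the attribute names (`bytes`, `tx_cost`, `ratio`, `cpu_energy`, `cpu_time`) and the additive energy/latency model are hypothetical simplifications, not the paper's formulation.

```python
from itertools import product

def optimize_compression(nodes, algorithms, latency_budget):
    """Exhaustive solution of the small binary program: for each node,
    pick one option (including 'no compression') so that total energy
    is minimized subject to the latency constraint."""
    best = None
    best_energy = float("inf")
    choices = list(algorithms.items())
    for assignment in product(choices, repeat=len(nodes)):
        energy = latency = 0.0
        for node, (name, alg) in zip(nodes, assignment):
            sent = node["bytes"] * alg["ratio"]        # bytes actually sent
            energy += alg["cpu_energy"] + sent * node["tx_cost"]
            latency += alg["cpu_time"]                 # compression delay
        if latency <= latency_budget and energy < best_energy:
            best_energy = energy
            best = [name for name, _ in assignment]
    return best, best_energy
```

A real instance would hand the same 0/1 variables to an integer-programming solver rather than enumerate; the enumeration just makes the optimal trade-off visible on two nodes.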
APT: what it has enabled us to do
NASA Astrophysics Data System (ADS)
Blacker, Brett S.; Golombek, Daniel
2004-09-01
With the development and operational deployment of the Astronomer's Proposal Tool (APT), Hubble Space Telescope (HST) proposers have been provided with an integrated toolset for Phase I and Phase II. This toolset consists of editors for filling out proposal information, an Orbit Planner for determining observation feasibility, a Visit Planner for determining schedulability, diagnostic and reporting tools, and an integrated Visual Target Tuner (VTT) for viewing exposure specifications. The VTT can also overlay HST's field of view on user-selected Flexible Image Transport System (FITS) images, perform bright object checks, and query the HST archive. In addition to these direct benefits for the HST user, STScI's internal Phase I process has been able to take advantage of the APT products. APT has enabled a substantial streamlining of the process and software processing tools, which compressed the Phase I to Phase II schedule by three months, allowing observations to be scheduled earlier and thus further benefiting HST observers. Some of the improvements to our process include: creating a compact disk (CD) of Phase I products; printing all proposals on the day of the deadline; linking the proposals in Portable Document Format (PDF) with a database; and running all Phase I software on a single platform. In this paper we discuss the operational results of using APT for HST's Cycle 12 and 13 Phase I processes and show the improvements, for the users and the overall process, that are allowing STScI to obtain scientific results with HST three months earlier than in previous years. We also show how APT can be and is being used for multiple missions.
What is the Optimal Strategy for Adaptive Servo-Ventilation Therapy?
Imamura, Teruhiko; Kinugawa, Koichiro
2018-05-23
Clinical advantages of adaptive servo-ventilation (ASV) therapy have been reported in selected heart failure patients with and without sleep-disordered breathing, whereas multicenter randomized controlled trials have not demonstrated such advantages. Considering this discrepancy, optimal patient selection and device settings may be key to successful ASV therapy. Hemodynamic and echocardiographic parameters indicating pulmonary congestion, such as elevated pulmonary capillary wedge pressure, have been reported as predictors of a good response to ASV therapy. Recently, parameters indicating right ventricular dysfunction have also been reported as good predictors. An optimal device setting, with an appropriate pressure setting applied over an appropriate period, may also be key. A large-scale prospective trial with optimal patient selection and optimal device settings is warranted.
Combinatorial Optimization in Project Selection Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Dewi, Sari; Sawaluddin
2018-01-01
This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, together with limits on resource availability and available time, so that resources must be allocated across the projects. These resources are human resources, machine resources, and raw material resources, and the allocation must not exceed the predetermined budget. The problem can therefore be formulated mathematically as a multi-objective function with constraints to be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one such combinatorial optimization method, simplifying the project selection process in a large search space.
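A minimal sketch of such a genetic algorithm, under two simplifying assumptions: the profit and cost objectives are folded into a single weighted score, and the several resource limits are collapsed into one budget constraint. The chromosome encoding, penalty, and genetic operators are illustrative choices, not the paper's exact formulation.

```python
import random

def ga_select(projects, budget, pop=30, gens=60, w=0.7, seed=1):
    """Weighted-sum GA sketch for project selection: maximize
    w*profit - (1-w)*cost subject to a total-cost budget.
    A chromosome is a 0/1 list marking the selected projects."""
    rng = random.Random(seed)
    n = len(projects)

    def fitness(bits):
        profit = sum(p["profit"] for p, b in zip(projects, bits) if b)
        cost = sum(p["cost"] for p, b in zip(projects, bits) if b)
        if cost > budget:                     # infeasible: heavy penalty
            return -1e9
        return w * profit - (1 - w) * cost

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]      # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1      # single-bit mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

A true multi-objective version would keep a Pareto front (e.g. NSGA-II-style ranking) instead of the weighted sum used here.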
Tang, Liyang
2013-04-04
The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system that could efficiently collect, from various experts, the data needed to determine the optimal solution to this problem. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively turn the various experts' views into optimal solutions to this problem with the support of this pilot system. After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009-2010 national expert survey (n = 3,914), organized for the first time in China through a pilot health care provider research system, and the analytic network process (ANP) implementation methodology was adopted to analyze the survey dataset. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was the optimal solution from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was the optimal solution from the points of view of nurses, officials in medical insurance agencies, and health care researchers.
The data collected through the pilot health care provider research system in the 2009-2010 national expert survey can thus help the decision maker effectively turn the various experts' views into optimal solutions to China's institutional problem of health care provider selection.
Asset Allocation and Optimal Contract for Delegated Portfolio Management
NASA Astrophysics Data System (ADS)
Liu, Jingjun; Liang, Jianfeng
This article studies the portfolio selection and contracting problems between an individual investor and a professional portfolio manager in a discrete-time principal-agent framework. The portfolio selection and optimal contract are obtained in closed form. The optimal contract is composed of a fixed fee, the cost, and a fraction of the excess expected return. The optimal portfolio is similar to the classical two-fund separation theorem.
ERIC Educational Resources Information Center
Penningroth, Suzanna L.; Scott, Walter D.
2012-01-01
Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…
Relay selection in energy harvesting cooperative networks with rateless codes
NASA Astrophysics Data System (ADS)
Zhu, Kaiyan; Wang, Fei
2018-04-01
This paper investigates relay selection in energy harvesting cooperative networks, where relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information after collecting a number of coded bits that marginally surpasses the entropy of the original information. In order to improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected. The optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that our proposed relay selection scheme outperforms other strategies.
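The selection rule can be sketched under simplifying assumptions: each relay's transmit power equals the energy harvested from the source's signal (conversion efficiency `eta`), and the end-to-end rate of a two-hop link is taken as the minimum of the per-hop Shannon rates. The channel-gain values, noise power, and rate expression below are illustrative, not the paper's model.

```python
import math

def relay_rate(p_source, h_sr, h_rd, eta=0.6, noise=1e-3):
    """Two-hop achievable rate for one relay: the relay transmits with
    power harvested from the source's RF signal, and the end-to-end
    rate is limited by the weaker hop (bits/s/Hz)."""
    p_relay = eta * p_source * h_sr            # harvested transmit power
    snr_sr = p_source * h_sr / noise           # source -> relay SNR
    snr_rd = p_relay * h_rd / noise            # relay -> destination SNR
    return min(math.log2(1 + snr_sr), math.log2(1 + snr_rd))

def select_relay(p_source, channels):
    """Pick the relay index maximizing the achievable rate.
    `channels` is a list of (h_sr, h_rd) gain pairs, one per relay."""
    rates = [relay_rate(p_source, h_sr, h_rd) for h_sr, h_rd in channels]
    best = max(range(len(rates)), key=rates.__getitem__)
    return best, rates[best]
```

Note the trade-off the max captures: a relay with a strong source link harvests more energy, but a weak relay-destination link can still bottleneck it.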
Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing
2018-06-01
Feature selection plays an important role in EEG-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are involved: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and to automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive model and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods.
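As a hedged sketch of KL-divergence-based feature scoring (only one stage of the pipeline above, and not the paper's exact statistical model), the code below fits a univariate Gaussian to each feature per class, scores each feature by the symmetrized KL divergence between the two class-conditional fits, and keeps the top-k.

```python
import math

def gaussian_kl(mu1, var1, mu2, var2):
    """Closed-form KL divergence KL(N(mu1,var1) || N(mu2,var2))."""
    return (0.5 * math.log(var2 / var1)
            + (var1 + (mu1 - mu2) ** 2) / (2 * var2) - 0.5)

def rank_features(class_a, class_b, k):
    """Score each feature column by a symmetrized KL divergence between
    its class-conditional Gaussian fits; return the top-k indices."""
    def stats(rows, j):
        vals = [r[j] for r in rows]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals) or 1e-12
        return mu, var

    n_feat = len(class_a[0])
    scores = []
    for j in range(n_feat):
        mu_a, va = stats(class_a, j)
        mu_b, vb = stats(class_b, j)
        d = gaussian_kl(mu_a, va, mu_b, vb) + gaussian_kl(mu_b, vb, mu_a, va)
        scores.append((d, j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

The same scoring loop, run over candidate time windows instead of feature columns, gives the flavor of the abstract's automatic time-segment selection.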
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
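A rough sequential-allocation sketch in the spirit of the abstract, not the paper's derived OCBA rule: after an initial batch of runs per design, each extra batch goes to the design whose sample mean sits closest to the top-m boundary relative to its standard error, since those designs are the easiest to misrank. The simulator interface and the urgency measure are illustrative assumptions.

```python
import random
import statistics

def select_top_m(simulate, k, m, n0=10, budget=400, delta=10, seed=0):
    """Heuristic sequential budget allocation for selecting the top-m
    of k designs (larger mean is better). `simulate(i, rng)` returns
    one noisy observation of design i."""
    rng = random.Random(seed)
    samples = [[simulate(i, rng) for _ in range(n0)] for i in range(k)]
    spent = n0 * k
    while spent < budget:
        means = [statistics.fmean(s) for s in samples]
        order = sorted(range(k), key=means.__getitem__, reverse=True)
        # boundary between the current m-th and (m+1)-th best designs
        boundary = (means[order[m - 1]] + means[order[m]]) / 2

        def urgency(i):
            se = statistics.stdev(samples[i]) / len(samples[i]) ** 0.5
            return abs(means[i] - boundary) / (se + 1e-12)

        target = min(range(k), key=urgency)   # most easily misranked
        samples[target].extend(simulate(target, rng) for _ in range(delta))
        spent += delta
    means = [statistics.fmean(s) for s in samples]
    return sorted(sorted(range(k), key=means.__getitem__, reverse=True)[:m])
```

Equal allocation would spread the same budget uniformly; concentrating it near the boundary is what buys the efficiency the abstract reports.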
Embedding impedance approximations in the analysis of SIS mixers
NASA Technical Reports Server (NTRS)
Kerr, A. R.; Pan, S.-K.; Withington, S.
1992-01-01
Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation, which assumes a sinusoidal LO voltage at the junction and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation, which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation, in which five small-signal sidebands are allowed but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of ωR_NC for the SIS junctions used. For large ωR_NC, all three approximations approach the eight-harmonic solution. For ωR_NC values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.
A Comparative Study of Optimization Algorithms for Engineering Synthesis.
1983-03-01
The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first…
Optimal design of compact spur gear reductions
NASA Technical Reports Server (NTRS)
Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.
1992-01-01
The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.
Summary Report for the C50 Cryomodule Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, Michael; Davis, G; Fischer, John
2011-03-01
The Thomas Jefferson National Accelerator Facility has recently completed the C50 cryomodule refurbishment project. The goal of this project was to enable robust 6 GeV, 5-pass operation of the Continuous Electron Beam Accelerator Facility (CEBAF). The scope of the project included removal, refurbishment, and reinstallation of ten CEBAF cryomodules at a rate of three per year. The refurbishment process included reprocessing of SRF cavities to eliminate field emission and to increase the nominal gradient from the original 5 MV/m to 12.5 MV/m. New 'dogleg' couplers were installed between the cavity and helium vessel flanges to intercept secondary electrons that produce arcing in the Fundamental Power Coupler (FPC). Other changes included new ceramic RF windows for the air-to-vacuum interface of the FPC and improvements to the mechanical tuner. Damaged or worn components were replaced as well. All ten of the refurbished cryomodules are now installed in CEBAF and are currently operational. This paper summarizes the performance of the cryomodules.
CARE activities on superconducting RF cavities at INFN Milano
NASA Astrophysics Data System (ADS)
Bosotti, A.; Pierini, P.; Michelato, P.; Pagani, C.; Paparella, R.; Panzeri, N.; Monaco, L.; Paulon, R.; Novati, M.
2005-09-01
The SC RF group at INFN Milano-LASA is involved both in the TESLA/TTF collaboration and in research and design activity on superconducting cavities for proton accelerators. Some of these activities are supported by the European Community within the CARE project. In the framework of the JRA-SRF collaboration we are developing a coaxial blade tuner for ILC (International Linear Collider) cavities, integrated with piezoelectric actuators for the compensation of Lorentz force detuning and microphonic perturbations. Another activity, regarding improved component design in SC technology, based on retrieving information about the state of the art of ancillaries and the experience of the various laboratories involved in SC RF, has started in our laboratory. Finally, in the framework of the HIPPI collaboration, we are testing two low-beta superconducting cavities, built for the Italian TRASCO project, to verify the possibility of using them for pulsed operation. All these activities are described here, together with the main results and future perspectives.
Intricacies of Using Kevlar and Thermal Knives in a Deployable Release System: Issues and Solutions
NASA Technical Reports Server (NTRS)
Stewart, Alphonso C.; Hair, Jason H.; Broduer, Steve (Technical Monitor)
2002-01-01
The utilization of Kevlar cord and thermal knives in a deployable release system produces a number of issues that must be addressed in the design of the system. This paper proposes design considerations that minimize the major issues: thermal knife failure, Kevlar cord relaxation, and the measurement of the cord tension. Design practices can minimize the potential for thermal knife laminate and element damage that results in failure of the knife. A process for in-situ inspection of the knife, using resistance checks rather than continuity checks and 10x zoom optical imaging, can detect damaged knives. Tests allow the characterization of the behavior of the particular Kevlar cord in use and the development of specific pre-stretching techniques and initial tension values needed to meet requirements. A new method can accurately measure the tension of the Kevlar cord using a guitar tuner, because more conventional methods do not apply to aramid cords such as Kevlar.
A Power-Efficient Wireless Capacitor Charging System Through an Inductive Link
Lee, Hyung-Min; Ghovanloo, Maysam
2014-01-01
A power-efficient wireless capacitor charging system for inductively powered applications has been presented. A bank of capacitors can be directly charged from an ac source by generating a current through a series charge injection capacitor and a capacitor charger circuit. The fixed charging current reduces energy loss in switches, while maximizing the charging efficiency. An adaptive capacitor tuner compensates for the resonant capacitance variations during charging to keep the amplitude of the ac input voltage at its peak. We have fabricated the capacitor charging system prototype in a 0.35-μm 4-metal 2-poly standard CMOS process in 2.1 mm2 of chip area. It can charge four pairs of capacitors sequentially. While receiving 2.7-V peak ac input through a 2-MHz inductive link, the capacitor charging system can charge each pair of 1 μF capacitors up to ±2 V in 420 μs, achieving a high measured charging efficiency of 82%. PMID:24678284
MicroRNAs in genetic disease: rethinking the dosage.
Henrion-Caude, Alexandra; Girard, Muriel; Amiel, Jeanne
2012-08-01
To date, the general assumption has been that most disease-causing mutations affect protein-coding genes only; only a few reports have described mutations occurring in non-protein-coding genes such as microRNAs (miRNAs). We therefore report progress in delineating their contribution as phenotypic modulators, genetic switches, and fine-tuners of gene expression. We reasoned that surveying their contribution to genetic disease may provide a framework for understanding the requirements for devising miRNA-based therapy strategies, in particular the restoration of an appropriate dosage. Gain and loss of miRNA function enforce the need to antagonize or supply the miRNA, respectively. We further categorize human diseases according to the different ways in which the miRNA alteration arises: either de novo, or inherited as a mendelian or an epistatic trait, uncovering its role in epigenetics. We discuss how improving our knowledge of the contribution of miRNAs to genetic disease may be beneficial in devising appropriate gene therapy strategies.
Quantifying and tuning entanglement for quantum systems
NASA Astrophysics Data System (ADS)
Xu, Qing
A 2D Ising model with transverse field on a triangular lattice is studied using exact diagonalization. The quantum entanglement of the system is quantified by the entanglement of formation. The ground state property of the system is studied, and the quantified entanglement is shown to be closely related to the ground state wavefunction, while the singularity in the entanglement as a function of the transverse field is a reasonable indicator of the quantum phase transition. To tune the entanglement, one can either include an impurity of tunable strength in the otherwise homogeneous system, or vary the external transverse field as a tuner. The latter kind of tuning involves complicated dynamical properties of the system. From a study of the dynamics on a comparatively small system, we provide ways to tune the entanglement without triggering any decoherence. The finite-temperature effect is also discussed. Beyond these physical results, the realization of the trace-minimization method in our system is provided, and the scalability of the method to larger systems is argued.
Improved repetition rate mixed isotope CO{sub 2} TEA laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohn, D. B., E-mail: dbctechnology@earthlink.net
2014-09-15
A compact CO₂ TEA laser has been developed for remote chemical detection that operates at a repetition rate of 250 Hz. It emits 700 mJ/pulse at 10.6 μm in a multimode beam with the ¹²C¹⁶O₂ isotope. With mixed ¹²C¹⁶O₂ plus ¹³C¹⁶O₂ isotopes it emits multiple lines in both isotope manifolds to improve detection of a broad range of chemicals. In particular, output pulse energies are 110 mJ/pulse at 9.77 μm, 250 mJ/pulse at 10 μm, and 550 mJ/pulse at 11.15 μm, useful for detection of the chemical agents Sarin, Tabun, and VX. Related work shows capability for long-term sealed operation with a catalyst and an agile tuner at a wavelength shift rate of 200 Hz.
Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao
2016-08-06
Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product's performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner's ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters.
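A minimal sketch of the Wiener-process ingredient only (linear drift, no copula-linked second parameter and no nonlinear/accelerated form): simulate degradation paths X(t) = mu*t + sigma*B(t) and estimate the mean time to cross a failure threshold, which for this linear case should approach threshold/mu. The step size and parameter values are illustrative assumptions.

```python
import random

def simulate_path(mu, sigma, threshold, dt=0.01, rng=None, t_max=1000.0):
    """Simulate one Wiener degradation path X(t) = mu*t + sigma*B(t)
    by Euler steps and return the first time it crosses the threshold."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while x < threshold and t < t_max:
        x += mu * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
    return t

def mean_lifetime(mu, sigma, threshold, n=2000, seed=0):
    """Monte Carlo estimate of the mean first-passage (failure) time."""
    rng = random.Random(seed)
    return sum(simulate_path(mu, sigma, threshold, rng=rng) for _ in range(n)) / n
```

Modeling a product with several s-dependent degradation measures, as in the abstract, would tie one such marginal process per parameter together through a copula on their increments.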
Copper Disk Manufactured at the Space Optics Manufacturing and Technology Center
NASA Technical Reports Server (NTRS)
2001-01-01
This photograph shows Wes Brown, Marshall Space Flight Center's (MSFC's) lead diamond turner, an expert in the science of using diamond-tipped tools to cut metal, inspecting the mold's physical characteristics to ensure the uniformity of its more than 6,000 grooves. This king-size copper disk, manufactured at the Space Optics Manufacturing and Technology Center (SOMTC) at MSFC, is a special mold for making high-resolution monitor screens. This master mold will be used to make several other molds, each capable of forming hundreds of screens that have a type of lens called a Fresnel lens. Weighing much less than conventional optics, Fresnel lenses have multiple concentric grooves, each formed to a precise angle, that together create the curvature needed to focus and project images. MSFC leads NASA's space optics manufacturing technology development and is a technology leader for diamond turning. The machine used to manufacture this mold is among the many one-of-a-kind pieces of equipment at MSFC's SOMTC.
A low noise 665 GHz SIS quasi-particle waveguide receiver
NASA Technical Reports Server (NTRS)
Kooi, J. W.; Walker, C. K.; Leduc, H. G.; Hunter, T. R.; Benford, D. J.; Phillips, T. G.
1993-01-01
Recent results are reported on a 565-690 GHz SIS heterodyne receiver employing a 0.36 μm² Nb/AlOx/Nb SIS tunnel junction with a high-quality circular non-contacting backshort and E-plane tuners in a full-height waveguide mount. No resonant tuning structures were incorporated in the junction design at this time, even though such structures are expected to help the performance of the receiver. The receiver operates to at least the gap frequency of niobium, approximately 680 GHz. Typical receiver noise temperatures from 565-690 GHz range from 160 K to 230 K, with a best value of 185 K DSB at 648 GHz. With the mixer cooled from 4.3 K to 2 K, the measured receiver noise temperatures decreased by approximately 15 percent, giving roughly 180 K DSB from 660 to 680 GHz. The receiver has a full 1 GHz IF pass band and was successfully installed at the Caltech Submillimeter Observatory in Hawaii.
The Application of Optimal Defaults to Improve Elementary School Lunch Selections: Proof of Concept
ERIC Educational Resources Information Center
Loeb, Katharine L.; Radnitz, Cynthia; Keller, Kathleen L.; Schwartz, Marlene B.; Zucker, Nancy; Marcus, Sue; Pierson, Richard N.; Shannon, Michael; DeLaurentis, Danielle
2018-01-01
Background: In this study, we applied behavioral economics to optimize elementary school lunch choices via parent-driven decisions. Specifically, this experiment tested an optimal defaults paradigm, examining whether strategically manipulating the health value of a default menu could be co-opted to improve school-based lunch selections. Methods:…
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
NASA Astrophysics Data System (ADS)
Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice
2015-01-01
Piezoelectric transducers are commonly used in structural health monitoring systems to generate and measure ultrasonic guided waves (GWs) by applying interfacial shear and normal stresses to the host structure. In most cases, in order to perform damage detection, advanced signal processing techniques are required, since a minimum of two dispersive modes are propagating in the host structure. In this paper, a systematic approach for mode selection is proposed by optimizing the interfacial shear stress profile applied to the host structure, representing the first step of a global optimization of selective mode actuator design. This approach has the potential of reducing the complexity of signal processing tools as the number of propagating modes could be reduced. Using the superposition principle, an analytical method is first developed for GWs excitation by a finite number of uniform segments, each contributing with a given elementary shear stress profile. Based on this, cost functions are defined in order to minimize the undesired modes and amplify the selected mode, and the optimization problem is solved with a parallel genetic algorithm optimization framework. Advantages of this method over more conventional transducer tuning approaches are that (1) the shear stress can be explicitly optimized to both excite one mode and suppress other undesired modes, (2) the size of the excitation area is not constrained and mode-selective excitation is still possible even if excitation width is smaller than all excited wavelengths, and (3) the selectivity is increased and the bandwidth extended. The complexity of the optimal shear stress profile obtained is shown considering two cost functions with various optimal excitation widths and number of segments. Results illustrate that the desired mode (A0 or S0) can be excited dominantly over other modes up to a wave power ratio of 10^10 using an optimal shear stress profile.
Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin
2017-06-14
The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and selection of appropriate illuminators. The present study investigates issues concerning the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling the convex optimization with the greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
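The partition set covering subproblem mentioned above belongs to the classic set-cover family. As a rough illustration (not the authors' hybrid convex/greedy algorithm), the standard greedy heuristic picks, at each step, the candidate receiver site covering the most still-uncovered cells; the site sets and cells below are invented for the example:

```python
def greedy_set_cover(universe, subsets):
    """Greedy set-cover heuristic: repeatedly pick the subset that
    covers the most still-uncovered elements (ln(n)-approximation)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= best
    return chosen

# Invented example: each candidate receiver site covers a set of cells.
sites = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
cover = greedy_set_cover({1, 2, 3, 4, 5}, sites)
```

Two sites suffice here; the real PSCP additionally carries detection-performance constraints that this sketch omits.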
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, D; Nguyen, D; Voronenko, Y
Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected.
NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
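The group sparsity idea in this abstract can be illustrated with a minimal FISTA sketch. This is not the authors' treatment-planning code: it solves a toy problem with an identity dose matrix, where each "beam" is a group of fluence variables and the group-lasso proximal step zeroes out whole beams:

```python
import math

def group_prox(x, groups, thresh):
    """Group soft-thresholding: shrinks each group's l2 norm toward 0,
    zeroing whole groups -- this is what switches candidate beams off."""
    out = x[:]
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - thresh / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out

def fista_group_lasso(b, groups, lam, steps=200):
    """Minimize 0.5*||x - b||^2 + lam * sum_g ||x_g||_2 via FISTA.
    (Gradient of the smooth part is x - b; Lipschitz constant is 1.)"""
    n = len(b)
    x = [0.0] * n
    y, t = x[:], 1.0
    for _ in range(steps):
        grad = [y[i] - b[i] for i in range(n)]
        x_new = group_prox([y[i] - grad[i] for i in range(n)], groups, lam)
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = [x_new[i] + (t - 1.0) / t_new * (x_new[i] - x[i]) for i in range(n)]
        x, t = x_new, t_new
    return x

# Two "beams": group 0 carries a strong signal and stays active,
# group 1 is weak and is zeroed out entirely by the group penalty.
x = fista_group_lasso([3.0, 4.0, 0.1, -0.1], [[0, 1], [2, 3]], lam=1.0)
```

The real formulation replaces the identity map with the dose-influence matrix and adds dosimetric objectives, but the beam on/off mechanism is the same group-wise shrinkage.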
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-body missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.
Orbit design and optimization based on global telecommunication performance metrics
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.
2006-01-01
The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
Lin, Kuan-Cheng; Hsieh, Yi-Hsiu
2015-10-01
The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a problem of feature subset selection, such as combination optimization problems. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to problems of optimization in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.
Practical automated glass selection and the design of apochromats with large field of view.
Siew, Ronian
2016-11-10
This paper presents an automated approach to the selection of optical glasses for the design of an apochromatic lens with large field of view, based on a design originally provided by Yang et al. [Appl. Opt. 55, 5977 (2016), doi:10.1364/AO.55.005977]. Following from this reference's preliminary optimized structure, it is shown that the effort of glass selection is significantly reduced by using the global optimization feature in the Zemax optical design program. The glass selection process is very fast, completing within minutes, and the key lies in automating the substitution of glasses found from the global search without the need to simultaneously optimize any other lens parameter during the glass search. The result is an alternate optimized version of the lens from the above reference possessing zero axial secondary color within the visible spectrum and a large field of view. Supplementary material is provided in the form of Zemax and text files, before and after final optimization.
2013-01-01
Background The main aim of China’s Health Care System Reform was to help the decision maker find the optimal solution to China’s institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China’s health care system that could efficiently collect data on this problem from various experts. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively translate the views of various experts into optimal solutions to this problem, with the support of this pilot system. Methods After the general framework of China’s institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted through the pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the dataset from this survey. Results The market-oriented health care provider approach was the optimal solution from the doctors’ point of view; the traditional government’s regulation-oriented health care provider approach was the optimal solution from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; the public private partnership (PPP) approach was the optimal solution from the points of view of nurses, officials in medical insurance agencies, and health care researchers.
Conclusions The data collected through the pilot health care provider research system in the 2009 to 2010 national expert survey could help the decision maker effectively translate the views of various experts into optimal solutions to China’s institutional problem of health care provider selection. PMID:23557082
NASA Astrophysics Data System (ADS)
Sheykhizadeh, Saheleh; Naseri, Abdolhossein
2018-04-01
Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose, from a large pool of available predictors, a set of variables relevant to estimating analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced, among which those based on swarm intelligence optimization have received particular attention over the last few decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking weeds' ecological behavior in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO to variable selection is reported using different experimental datasets, including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.
Robert G. Haight; J. Douglas Brodie; Darius M. Adams
1985-01-01
The determination of an optimal sequence of diameter distributions and selection harvests for uneven-aged stand management is formulated as a discrete-time optimal-control problem with bounded control variables and free-terminal point. An efficient programming technique utilizing gradients provides solutions that are stable and interpretable on the basis of economic...
Application’s Method of Quadratic Programming for Optimization of Portfolio Selection
NASA Astrophysics Data System (ADS)
Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro
Investors and fund managers face the problem of optimal portfolio selection: determining the kind and quantity of investments to hold among several assets. We have developed a method that obtains an optimal stock portfolio two to three times faster than the conventional method of efficient universal optimization. The method is characterized by the quadratic matrix of the utility function and constraint matrices divided into several sub-matrices, exploiting the structure of these matrices.
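As a toy illustration of quadratic-utility portfolio optimization (not the authors' sub-matrix decomposition method), the unconstrained two-asset case maximizing mu'x - (gamma/2) x'Sigma x has the closed-form solution x = Sigma^{-1} mu / gamma; the expected returns and covariance below are invented:

```python
def optimal_weights(mu, cov, gamma):
    """Maximize mu'x - (gamma/2) x'Sigma x for two assets by solving
    the 2x2 linear system Sigma x = mu / gamma in closed form."""
    a, b = cov[0][0], cov[0][1]
    c, d = cov[1][0], cov[1][1]
    det = a * d - b * c
    rhs = [m / gamma for m in mu]
    x0 = (d * rhs[0] - b * rhs[1]) / det
    x1 = (a * rhs[1] - c * rhs[0]) / det
    return [x0, x1]

# Invented inputs: expected returns 10% and 5%, uncorrelated assets,
# risk-aversion coefficient gamma = 2.
w = optimal_weights([0.10, 0.05], [[0.04, 0.0], [0.0, 0.01]], gamma=2.0)
```

Realistic portfolio problems add budget and no-short constraints, which turn this into a constrained quadratic program rather than a linear solve.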
Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford
2018-04-01
Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions.
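A common submodular objective for representative subset selection is facility location, F(S) = sum_i max_{j in S} sim(i, j), whose greedy maximizer is within (1 - 1/e) of optimal. A minimal sketch with an invented similarity matrix (the paper's actual objective and mixture function may differ):

```python
def facility_location_greedy(sim, k):
    """Greedy maximization of the submodular facility-location
    objective F(S) = sum_i max_{j in S} sim[i][j]."""
    n = len(sim)
    chosen, best_cover = [], [0.0] * n
    for _ in range(k):
        def gain(j):
            # Marginal gain of adding candidate j to the chosen set.
            return sum(max(sim[i][j] - best_cover[i], 0.0) for i in range(n))
        j_star = max((j for j in range(n) if j not in chosen), key=gain)
        chosen.append(j_star)
        best_cover = [max(best_cover[i], sim[i][j_star]) for i in range(n)]
    return chosen

# Invented similarity matrix over 4 sequences (1.0 = identical):
# items 0/1 are near-duplicates, as are items 2/3.
S = [[1.0, 0.9, 0.1, 0.0],
     [0.9, 1.0, 0.0, 0.1],
     [0.1, 0.0, 1.0, 0.8],
     [0.0, 0.1, 0.8, 1.0]]
reps = facility_location_greedy(S, k=2)
```

The greedy pick selects one representative from each near-duplicate pair, which is exactly the non-redundancy behavior described above.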
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper addresses the minimization of the total cost of greenhouse gas (GHG) efficiency in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost, and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive-selection-based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.
FSMRank: feature selection algorithm for learning to rank.
Lai, Han-Jiang; Pan, Yan; Tang, Yong; Yu, Rong
2013-06-01
In recent years, there has been growing interest in learning to rank. The introduction of feature selection into different learning problems has been proven effective. These facts motivate us to investigate the problem of feature selection for learning to rank. We propose a joint convex optimization formulation which minimizes ranking errors while simultaneously conducting feature selection. This optimization formulation provides a flexible framework in which we can easily incorporate various importance measures and similarity measures of the features. To solve this optimization problem, we use Nesterov's approach to derive an accelerated gradient algorithm with a fast convergence rate O(1/T^2). We further develop a generalization bound for the proposed optimization problem using Rademacher complexities. Extensive experimental evaluations are conducted on the public LETOR benchmark datasets. The results demonstrate that the proposed method shows: 1) significant ranking performance gains compared to several feature selection baselines for ranking, and 2) very competitive performance compared to several state-of-the-art learning-to-rank algorithms.
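The accelerated gradient scheme with O(1/T^2) convergence referred to above is, in its generic form, Nesterov's method. A minimal sketch on a toy ill-conditioned quadratic (not the paper's ranking objective):

```python
import math

def nesterov(grad, x0, L, steps):
    """Nesterov's accelerated gradient: O(1/T^2) convergence in
    function value on L-smooth convex objectives."""
    x, y, t = x0[:], x0[:], 1.0
    for _ in range(steps):
        g = grad(y)
        x_new = [y[i] - g[i] / L for i in range(len(y))]   # gradient step
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Momentum extrapolation between successive iterates.
        y = [x_new[i] + (t - 1.0) / t_new * (x_new[i] - x[i])
             for i in range(len(x0))]
        x, t = x_new, t_new
    return x

# Toy objective f(x) = 0.5*(x0^2 + 10*x1^2), so grad = (x0, 10*x1), L = 10.
sol = nesterov(lambda v: [v[0], 10.0 * v[1]], [3.0, 1.0], L=10.0, steps=2000)
```

The paper pairs this acceleration with a proximal step for its sparsity-inducing regularizer; the plain smooth version above shows only the momentum mechanism.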
NASA Astrophysics Data System (ADS)
Bascetin, A.
2007-04-01
The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located at Seyitomer region in Turkey. The use of the proposed model indicates that it can be applied to improve the group decision making in selecting a reclamation method that satisfies optimal specifications. Also, it is found that the decision process is systematic and using the proposed model can reduce the time taken to select an optimal method.
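In AHP, the priority weights of the alternatives are commonly taken as the principal eigenvector of the pairwise-comparison matrix. A minimal power-iteration sketch with an invented, perfectly consistent 3x3 matrix (real AHP matrices from expert judgments are usually inconsistent and require a consistency-ratio check, omitted here):

```python
def ahp_weights(M, iters=100):
    """Approximate the AHP priority vector as the principal eigenvector
    of the pairwise-comparison matrix via power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]   # normalize so weights sum to 1
    return w

# Invented consistent judgments: method A is 2x method B, B is 2x C.
M = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_weights(M)
```

For this consistent matrix the weights come out proportional to 4:2:1.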
De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat
2010-03-01
Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.
Mode perturbation method for optimal guided wave mode and frequency selection.
Philtron, J H; Rose, J L
2014-09-01
With a thorough understanding of guided wave mechanics, researchers can predict which guided wave modes will have a high probability of success in a particular nondestructive evaluation application. However, work continues to find optimal mode and frequency selection for a given application. This "optimal" mode could give the highest sensitivity to defects or the greatest penetration power, increasing inspection efficiency. Since material properties used for modeling work may be estimates, in many cases guided wave mode and frequency selection can be adjusted for increased inspection efficiency in the field. In this paper, a novel mode and frequency perturbation method is described and used to identify optimal mode points based on quantifiable wave characteristics. The technique uses an ultrasonic phased array comb transducer to sweep in phase velocity and frequency space. It is demonstrated using guided interface waves for bond evaluation. After searching nearby mode points, an optimal mode and frequency can be selected which has the highest sensitivity to a defect, or gives the greatest penetration power. The optimal mode choice for a given application depends on the requirements of the inspection.
NASA Astrophysics Data System (ADS)
Kostrzewa, Daniel; Josiński, Henryk
2016-06-01
The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for selecting individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.
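A minimal IWO-style sketch on the 2-D Rastrigin benchmark mentioned above (parameter choices such as the seed counts and the cubic shrinkage of the scatter are illustrative assumptions, not the exIWO settings):

```python
import math
import random

def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def iwo(f, dim, bounds, pop=10, max_pop=30, iters=100, seed=1):
    """Minimal invasive weed optimization sketch: fitter weeds spread
    more seeds, seed scatter (sigma) shrinks over the iterations, and
    the colony is truncated to the fittest max_pop individuals."""
    rng = random.Random(seed)
    lo, hi = bounds
    colony = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for it in range(iters):
        fits = [f(w) for w in colony]
        worst, best = max(fits), min(fits)
        sigma = (hi - lo) * (1 - it / iters) ** 3 * 0.5  # shrinking scatter
        offspring = []
        for w, fit in zip(colony, fits):
            # Between 1 and 5 seeds, more for fitter (lower-f) weeds.
            ratio = (worst - fit) / (worst - best) if worst > best else 1.0
            for _ in range(1 + int(4 * ratio)):
                offspring.append([min(hi, max(lo, xi + rng.gauss(0, sigma)))
                                  for xi in w])
        colony = sorted(colony + offspring, key=f)[:max_pop]
    return colony[0], f(colony[0])

best, best_f = iwo(rastrigin, dim=2, bounds=(-5.12, 5.12))
```

The global minimum of Rastrigin is 0 at the origin; the shrinking scatter gives the colony broad exploration early and local refinement late.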
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic systems (OES) involves selecting technical solutions that are optimal, according to certain criteria, under the given requirements and conditions. The defining characteristic of an OES for any purpose is its detection threshold, on which the required functional quality of the device or system depends. The design criteria and optimization methods must therefore be subordinated to achieving the best detectability, which generally reduces to the problem of optimally selecting the expected (predetermined) signals under the predetermined observation conditions. Thus, the main purpose of optimizing the system for detectability is the choice of circuits and components that provide the most effective selection of a target.
Solving TSP problem with improved genetic algorithm
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying
2018-05-01
The TSP is a classic NP-hard problem, and problems such as vehicle routing (VRP) and city pipeline optimization can be reduced to it, so solving the TSP efficiently is important. The genetic algorithm (GA) is one of the most suitable methods for solving it, but the standard GA has limitations. We improve the selection operator and introduce an elite retention strategy to preserve high-quality individuals; in the mutation operation, adaptive operator selection improves the quality and diversity of the search; and after chromosome evolution a one-way reverse-evolution operation is added, which gives offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to find the optimal solution.
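A minimal GA-for-TSP sketch with elite retention, order crossover, and swap mutation (the adaptive mutation selection and reverse-evolution operation described above are not reproduced here; the four-city instance is invented):

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ga_tsp(dist, pop_size=40, gens=150, elite=2, seed=7):
    """Toy GA for the TSP: tournament selection, order crossover (OX),
    swap mutation, and elite retention (best tours copied unchanged)."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: tour_length(t, dist))
        new_pop = pop[:elite]                      # elite retention
        while len(new_pop) < pop_size:
            p1, p2 = (min(rng.sample(pop, 3), key=lambda t: tour_length(t, dist))
                      for _ in range(2))           # tournament selection
            a, b = sorted(rng.sample(range(n), 2))
            child = [None] * n
            child[a:b] = p1[a:b]                   # order crossover
            fill = [c for c in p2 if c not in child]
            for i in range(n):
                if child[i] is None:
                    child[i] = fill.pop(0)
            if rng.random() < 0.2:                 # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=lambda t: tour_length(t, dist))

# Four cities on a unit square: the optimal (perimeter) tour has length 4.0.
D = [[0, 1, 1.414, 1], [1, 0, 1, 1.414],
     [1.414, 1, 0, 1], [1, 1.414, 1, 0]]
best = ga_tsp(D)
```

Elitism guarantees the best tour found is never lost between generations, which is the "elite retention" property the abstract emphasizes.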
Nonparametric variational optimization of reaction coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk
State-of-the-art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form, thus avoiding this source of error.
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model had changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases were being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) at which to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
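The Expected Improvement criterion at the heart of EGO has a closed form when the surrogate returns a Gaussian prediction; a minimal sketch for minimization follows. The surrogate's predicted mean and standard deviation at the candidate point are assumed given; the function name is illustrative.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected Improvement for minimization at a point where a Gaussian
    surrogate predicts mean mu and standard deviation sigma, given the
    current best observed objective value f_best:
        EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma
    where phi/Phi are the standard normal pdf/cdf."""
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty, no expected improvement
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # normal cdf
    return (f_best - mu) * Phi + sigma * phi
```

EGO then runs additional model evaluations at the maximizers of this quantity, which balances exploiting low predicted means against exploring regions of high surrogate uncertainty; the paper's multi-start modification collects several such maximizers per iteration.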
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem, where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
Reserve selection with land market feedbacks.
Butsic, Van; Lewis, David J; Radeloff, Volker C
2013-01-15
How best to site reserves is a leading question for conservation biologists. Recently, reserve selection has emphasized efficient conservation: maximizing conservation goals given the reality of limited conservation budgets. Recent work indicates that land markets can potentially undermine the conservation benefits of reserves by increasing property values and development probabilities near reserves. Here we propose a reserve selection methodology that optimizes conservation given both a budget constraint and land market feedbacks, using a combination of econometric models along with stochastic dynamic programming. We show that amenity-based feedbacks can be accounted for in optimal reserve selection by choosing property price and land development models which exogenously estimate the effects of reserve establishment. In our empirical example, we use previously estimated models of land development and property prices to select parcels to maximize coarse woody debris along 16 lakes in Vilas County, WI, USA. Using each lake as an independent experiment, we find that including land market feedbacks in the reserve selection algorithm has only small effects on conservation efficacy. Likewise, we find that in our setting heuristic (minloss and maxgain) algorithms perform nearly as well as the optimal selection strategy. We emphasize that land market feedbacks can be included in optimal reserve selection; the extent to which this improves reserve placement will likely vary across landscapes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System
NASA Astrophysics Data System (ADS)
Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju
2018-03-01
A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme spanning the physical layer and the data-link layer is proposed for a cooperative relaying system in this paper. Our goal is to find the relay selection and power allocation scheme that maximizes the system achievable rate while satisfying a total transmit power constraint in the physical layer and a statistical delay quality-of-service (QoS) requirement in the data-link layer. Using the concept of effective capacity (EC), the goal can be formulated as a joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to the total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach, and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves a larger EC than other schemes while satisfying the delay QoS requirement. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between source and destination, and the EC becomes smaller as the QoS exponent becomes larger.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
Optimisation of strain selection in evolutionary continuous culture
NASA Astrophysics Data System (ADS)
Bayen, T.; Mairet, F.
2017-12-01
In this work, we study a minimal time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's Principle, we derive optimality conditions on optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near optimal strategy.
Kim, Hee Seok; Lee, Dong Soo
2017-11-01
SimpleBox is an important multimedia model used to estimate the predicted environmental concentration for screening-level exposure assessment. The main objectives were (i) to quantitatively assess how the magnitude and nature of prediction bias of SimpleBox vary with the selection of observed concentration data set for optimization and (ii) to present the prediction performance of the optimized SimpleBox. The optimization was conducted using a total of 9604 observed multimedia data for 42 chemicals of four groups (i.e., polychlorinated dibenzo-p-dioxins/furans (PCDDs/Fs), polybrominated diphenyl ethers (PBDEs), phthalates, and polycyclic aromatic hydrocarbons (PAHs)). The model performance was assessed based on the magnitude and skewness of prediction bias. Monitoring data selection in terms of number of data and kind of chemicals plays a significant role in optimization of the model. The coverage of the physicochemical properties was found to be very important to reduce the prediction bias. This suggests that selection of observed data should be made such that the physicochemical property (such as vapor pressure, octanol-water partition coefficient, octanol-air partition coefficient, and Henry's law constant) range of the selected chemical groups be as wide as possible. With optimization, about 55%, 90%, and 98% of the total number of the observed concentration ratios were predicted within factors of three, 10, and 30, respectively, with negligible skewness. Copyright © 2017 Elsevier Ltd. All rights reserved.
Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.
Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao
2015-04-01
Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for the different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Second, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation abilities. Discrete biogeography based optimization, which we call DBBO, is then formed by integrating the discrete migration and mutation models. Finally, DBBO is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four benchmark breast cancer datasets. Compared with a genetic algorithm, particle swarm optimization, a differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature in terms of the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Postoptimality Analysis in the Selection of Technology Portfolios
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Shelton, Kacie; Elfes, Alberto; Weisbin, Charles R.
2006-01-01
This slide presentation reviews a process for postoptimality analysis in the selection of technology portfolios. The rationale for the analysis stems from the need for consistent, transparent and auditable decision-making processes and tools. The methodology is used to assure that project investments are selected through an optimization of net mission value. The main intent of the analysis is to gauge the degree of confidence in the optimal solution and to provide the decision maker with an array of viable selection alternatives that take into account input uncertainties and possibly satisfy non-technical constraints. A few examples of the analysis are reviewed. The goal of the postoptimality study is to enhance the decision-making process by providing additional qualifications of, and alternatives to, the optimal solution.
2011-03-09
task stability, technology application certainty, risk, and transaction-specific investments impact the selection of the optimal mode of governance. Our model views...U.S. Defense Industry. The 1990s were a perfect storm of technological change, consolidation, budget downturns, environmental uncertainty, and the
Optimization of biological sulfide removal in a CSTR bioreactor.
Roosta, Aliakbar; Jahanmiri, Abdolhossein; Mowla, Dariush; Niazi, Ali; Sotoodeh, Hamidreza
2012-08-01
In this study, biological sulfide removal from natural gas in a continuous bioreactor is investigated to estimate the optimal operational parameters. According to the reactions involved, sulfide can be converted to elemental sulfur, sulfate, thiosulfate, and polysulfide, of which elemental sulfur is the desired product. A mathematical model was developed and used to investigate the effect of various parameters on elemental sulfur selectivity. The simulation results show that elemental sulfur selectivity is a function of dissolved oxygen, sulfide load, pH, and bacterial concentration. Optimal parameter values are calculated for maximum elemental sulfur selectivity by using a genetic algorithm as an adaptive heuristic search. Under the optimal conditions, 87.76% of the sulfide loaded into the bioreactor is converted to elemental sulfur.
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems (OES) involves selecting technical solutions that are optimal, under the given initial requirements and conditions, according to certain criteria. The defining characteristic of an OES for any purpose is its threshold detection ability, on which the required functional quality of the device or system depends. The initial criteria and optimization methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimal selection of the expected (predetermined) signals under the predetermined observation conditions. Thus the main purpose of optimizing the system for detectability is the choice of circuits and components that provide the most effective selection of a target.
Optimization of motion control laws for tether crawler or elevator systems
NASA Technical Reports Server (NTRS)
Swenson, Frank R.; Von Tiesenhausen, Georg
1988-01-01
Based on the proposal of a motion control law by Lorenzini (1987), a method is developed for optimizing motion control laws for tether crawler or elevator systems in terms of the performance measures of travel time, the smoothness of acceleration and deceleration, and the maximum values of velocity and acceleration. The Lorenzini motion control law, based on powers of the hyperbolic tangent function, is modified by the addition of a constant-velocity section, and this modified function is then optimized by parameter selections to minimize the peak acceleration value for a selected travel time or to minimize travel time for the selected peak values of velocity and acceleration. It is shown that the addition of a constant-velocity segment permits further optimization of the motion control law performance.
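A velocity profile of this general shape, smooth hyperbolic-tangent ramps joined by a constant-velocity segment, can be sketched as follows. This is an illustration of the idea only, not Lorenzini's actual control law; the ramp function, the normalization, and all parameters are assumptions.

```python
import math

def velocity_profile(t, t_ramp, t_cruise, v_max, k=3.0):
    """Illustrative crawler velocity profile: a tanh ramp-up over t_ramp
    seconds, a constant-velocity cruise over t_cruise seconds, then a
    mirrored tanh ramp-down. k controls ramp sharpness; the ramp is
    normalized by tanh(k) so it reaches exactly v_max at the cruise."""
    if t < 0.0:
        return 0.0
    if t < t_ramp:                        # acceleration phase
        return v_max * math.tanh(k * t / t_ramp) / math.tanh(k)
    if t < t_ramp + t_cruise:             # constant-velocity segment
        return v_max
    t2 = t - (t_ramp + t_cruise)
    if t2 < t_ramp:                       # deceleration phase (mirrored ramp)
        return v_max * math.tanh(k * (t_ramp - t2) / t_ramp) / math.tanh(k)
    return 0.0
```

Stretching the cruise segment while shortening the ramps trades peak acceleration against total travel time, which is exactly the parameter trade-off the optimization in the paper exploits.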
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breedveld, Sebastiaan; Storchi, Pascal R. M.; Voet, Peter W. J.
2012-02-15
Purpose: To introduce iCycle, a novel algorithm for integrated, multicriterial optimization of beam angles and intensity-modulated radiotherapy (IMRT) profiles. Methods: A multicriterial plan optimization with iCycle is based on a prescription called a wish-list, containing hard constraints and objectives with ascribed priorities. Priorities are ordinal parameters used for relative importance ranking of the objectives. The higher an objective's priority, the higher the probability that the corresponding objective will be met. Beam directions are selected from an input set of candidate directions. Input sets can be restricted, e.g., to allow only generation of coplanar plans, or to avoid collisions between patient/couch and the gantry in a noncoplanar setup. Obtaining clinically feasible calculation times was an important design criterion in the development of iCycle. This was realized by sequentially adding beams to the treatment plan in an iterative procedure. Each iteration loop starts with selection of the optimal direction to be added. Then, a Pareto-optimal IMRT plan is generated for the (fixed) beam setup that includes all directions selected so far, using a previously published algorithm for multicriterial optimization of fluence profiles for a fixed beam arrangement [Breedveld et al., Phys. Med. Biol. 54, 7199-7209 (2009)]. To select the next direction, each not-yet-selected candidate direction is temporarily added to the plan, and an optimization problem, derived from the Lagrangian obtained from the just-performed optimization for establishing the Pareto-optimal plan, is solved. For each patient, a single one-beam, two-beam, three-beam, etc. Pareto-optimal plan is generated until the addition of beams no longer results in significant plan quality improvement. Plan generation with iCycle is fully automated.
Results: Performance and characteristics of iCycle are demonstrated by generating plans for a maxillary sinus case, a cervical cancer patient, and a liver patient treated with SBRT. Plans generated with beam angle optimization met the clinical goals better than equiangular or manually selected configurations. For the maxillary sinus and liver cases, significant improvements for noncoplanar setups were seen. The cervix case showed that beam angle optimization with iCycle may also improve plan quality in IMRT with coplanar setups. Computation times were around 1-2 h for coplanar plans and 4-7 h for noncoplanar plans, depending on the number of beams and the complexity of the site. Conclusions: Integrated beam angle and profile optimization with iCycle may result in significant improvements in treatment plan quality. Due to automation, the plan generation workload is minimal. Clinical application has started.
Dynamic Portfolio Strategy Using Clustering Approach
Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian
2017-01-01
The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333
Guthier, Christian V; Damato, Antonio L; Hesser, Juergen W; Viswanathan, Akila N; Cormack, Robert A
2017-12-01
Interstitial high-dose-rate (HDR) brachytherapy is an important therapeutic strategy for the treatment of locally advanced gynecologic (GYN) cancers. The outcome of this therapy is determined by the quality of the dose distribution achieved. This paper focuses on a novel yet simple heuristic for catheter selection in GYN HDR brachytherapy and its comparison against state-of-the-art optimization strategies. The proposed technique is intended to act as a decision-support tool for selecting a favorable needle configuration. The presented heuristic for catheter optimization is based on a shrinkage-type algorithm (SACO). It is compared against state-of-the-art planning in a retrospective study of 20 patients who previously received image-guided interstitial HDR brachytherapy using a Syed-Neblett template. From those plans, template orientation and position are estimated via a rigid registration of the template with the actual catheter trajectories. All potential straight trajectories intersecting the contoured clinical target volume (CTV) are considered for catheter optimization. Retrospectively generated plans and clinical plans are compared with respect to dosimetric performance and optimization time. All plans were generated with a single run of the optimizer lasting 0.6-97.4 s. Compared to manual optimization, SACO yields statistically significantly (P ≤ 0.05) improved target coverage while at the same time fulfilling all dosimetric constraints for organs at risk (OARs). Comparing inverse planning strategies, dosimetric evaluation shows no statistically significant difference (P > 0.05) between SACO and "hybrid inverse planning and optimization" (HIPO), taken as the gold standard. However, SACO provides the potential to reduce the number of catheters used without compromising plan quality. The proposed heuristic for needle selection provides fast catheter selection, with optimization times suited for intraoperative treatment planning.
Compared to manual optimization, the proposed methodology results in fewer catheters without a clinically significant loss in plan quality. The proposed approach can be used as a decision support tool that guides the user to find the ideal number and configuration of catheters. © 2017 American Association of Physicists in Medicine.
Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J
2016-01-01
A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions, and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help reduce monitoring costs by avoiding redundancy in data acquisition.
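Step (4), choosing well positions that minimize the estimate error variance, can be illustrated with a greedy sketch under a Gaussian-process (kriging-like) field model standing in for the paper's Kalman filter plus heuristic optimization. The kernel, length scale, objective, and function name are all assumptions for illustration.

```python
import numpy as np

def greedy_monitoring_design(coords, n_select, length_scale=1.0, noise=1e-6):
    """Greedily pick monitoring sites that minimize the summed posterior
    variance of a Gaussian-process field over all candidate locations.
    At each step, the site whose addition reduces total estimate error
    variance the most is selected."""
    X = np.asarray(coords, float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / length_scale ** 2)      # squared-exponential prior
    chosen = []
    for _ in range(n_select):
        best, best_score = None, np.inf
        for j in range(len(X)):
            if j in chosen:
                continue
            S = chosen + [j]
            Kss = K[np.ix_(S, S)] + noise * np.eye(len(S))
            Ks = K[:, S]
            # posterior variance at every candidate after observing sites S
            post_var = np.diag(K) - np.einsum(
                'is,st,it->i', Ks, np.linalg.inv(Kss), Ks)
            score = post_var.sum()                  # total estimate error variance
            if score < best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen
```

Weighting priority zones, as in the paper, would amount to multiplying `post_var` by a per-location weight vector before summing.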
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used method in motor-imagery-based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency-segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature, and the proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
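The conventional CSP step the paper builds on, selecting eigenvector pairs at both extremes of the eigenvalue spectrum, can be sketched via whitening of the composite covariance. This shows standard CSP only, not the proposed STFSCSP method; the function name and synthetic usage are illustrative.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Standard CSP: compute spatial filters from two sets of EEG trials
    (each trial is a channels x samples array). Returns 2*n_pairs filters
    (rows), taken from both ends of the eigenvalue spectrum of the
    whitened class-A covariance."""
    def avg_cov(trials):
        # trace-normalized average spatial covariance
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    d, U = np.linalg.eigh(Ca + Cb)
    P = U @ np.diag(d ** -0.5) @ U.T            # whitening transform
    d2, V = np.linalg.eigh(P @ Ca @ P)          # eigenvalues ascending in [0, 1]
    W = V.T @ P                                  # spatial filters as rows
    n = len(d2)
    idx = list(range(n_pairs)) + list(range(n - n_pairs, n))  # extreme pairs
    return W[idx]
```

Filters at the top of the spectrum maximize variance for class A relative to class B, and those at the bottom do the reverse; the log-variances of the filtered signals are the usual CSP features.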
Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing
2017-04-15
The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was adapted to managed Moso bamboo forests by incorporating age structure, bamboo-specific ecological processes, and management measures. A field selective cutting experiment was conducted in nine plots with three cutting intensities (high, moderate and low) during 2010-2013, and the biomass of these plots was measured for model validation. Four selective cutting scenarios were then simulated with the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages and intensities. The improved model matched the observed aboveground carbon density and yield of the different plots, with relative errors ranging from 9.83% to 15.74%. The results of the different selective cutting scenarios suggested that the optimal selective cutting measure is to cut 30% of culms of age 6, 80% of culms of age 7, and all older culms (above age 8), in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests. Copyright © 2017 Elsevier Ltd. All rights reserved.
A linear model fails to predict orientation selectivity of cells in the cat visual cortex.
Volgushev, M; Vidyasagar, T R; Pei, X
1996-01-01
1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
NASA Technical Reports Server (NTRS)
1976-01-01
In the conceptual design task, several feasible wind generator system (WGS) configurations were evaluated, and the concept offering the lowest energy cost potential and minimum technical risk for utility applications was selected. In the optimization task, the selected concept was optimized using a parametric computer program prepared for this purpose. In the preliminary design task, the optimized selected concept was designed and analyzed in detail. The utility requirements evaluation task examined the economic, operational, and institutional factors affecting the WGS in a utility environment, and provided additional guidance for the preliminary design effort. Results of the conceptual design task indicated that a rotor operating at constant speed, driving an AC generator through a gear transmission, is the most cost-effective WGS configuration. The optimization task results led to the selection of a 500 kW rating for the low-power WGS and a 1500 kW rating for the high-power WGS.
Selecting registration schemes in case of interstitial lung disease follow-up in CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros
Purpose: The primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes, composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes, is evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine; and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, the evaluation methodology is based on the distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data.
Registration accuracy in terms of average distance error in clinical follow-up data was in the range of 1.985–2.156 mm and 1.966–2.234 mm, for NLP and ILD affected regions, respectively, excluding schemes with statistically significant lower performance (Wilcoxon signed-ranks test, p < 0.05), resulting in 13 finally selected registration schemes. Conclusions: Selected registration schemes in case of ILD CT follow-up analysis indicate the significance of the adaptive stochastic gradient descent optimizer, as well as the importance of combined rigid and nonrigid schemes providing high accuracy and time efficiency. The selected optimal deformable registration schemes are equivalent in terms of their accuracy and thus compatible in terms of their clinical outcome.
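The 128 evaluated schemes are simply the Cartesian product of the four design choices listed in the abstract; the enumeration can be sketched directly (labels are informal shorthand, not the software's identifiers):

```python
from itertools import product

transforms = ["rigid", "similarity", "affine", "bspline3"]
costs = ["SSD", "NCC", "MI", "NMI"]
optimizers = ["standard-GD", "regular-step-GD", "adaptive-stochastic-GD", "finite-difference-GD"]
pyramids = ["recursive", "gaussian-smoothing"]

# Every combination of transform, cost function, optimizer, and pyramid type.
schemes = list(product(transforms, costs, optimizers, pyramids))
print(len(schemes))  # 4 * 4 * 4 * 2 = 128 candidate registration schemes
```

Each tuple is one candidate scheme to be screened, first for deformation-field singularities and then by landmark distance.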
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we decide the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level stays as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.
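A one-period slice of such a model can be sketched as follows: for each candidate supplier and order quantity, compute the expected purchase cost plus a penalty for the post-demand inventory level deviating from the reference point, and take the minimizer. All prices, demand scenarios, and penalty weights below are invented for illustration; the paper's full model is a multi-period stochastic dynamic program:

```python
# Hypothetical data: two suppliers with different unit prices, a two-point
# random demand, and a reference inventory level (all numbers assumed).
suppliers = {"S1": 10.0, "S2": 12.0}       # unit purchase prices
demand_scenarios = [(8, 0.5), (12, 0.5)]   # (demand, probability)
reference = 20                              # target inventory level
holding_penalty = 25.0                      # cost per unit of deviation

def expected_cost(inventory, supplier, qty):
    """Purchase cost plus expected penalty for missing the reference level."""
    cost = suppliers[supplier] * qty
    for demand, p in demand_scenarios:
        next_inv = inventory + qty - demand
        cost += p * holding_penalty * abs(next_inv - reference)
    return cost

def best_decision(inventory, max_qty=30):
    """Enumerate (supplier, quantity) pairs and keep the cheapest in expectation."""
    return min(
        ((s, q) for s in suppliers for q in range(max_qty + 1)),
        key=lambda d: expected_cost(inventory, *d),
    )

print(best_decision(15))
```

In the full model this one-step minimization sits inside a Bellman recursion, so the choice at each period also accounts for future inventory states.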
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Heru Tjahjana, R.
2017-01-01
In this paper, we propose a mathematical model in the form of dynamic/multi-stage optimization to solve an integrated supplier selection problem and tracking control problem for a single-product inventory system with product discount. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve this optimization, determining the optimal supplier and the optimal product volume to purchase from that supplier for each time period, so that the inventory level tracks a reference trajectory given by the decision maker with minimal total cost. We give a numerical experiment to evaluate the proposed model. From the results, the optimal supplier was determined for each time period, and the inventory level followed the given reference well.
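The piecewise-linear discount can be sketched as an all-units price schedule: the unit price drops at each quantity breakpoint, making total purchase cost piecewise linear in the order quantity. The breakpoints and prices below are assumed, not taken from the paper:

```python
def purchase_cost(qty, breaks=((0, 10.0), (50, 9.0), (100, 8.0))):
    """All-units discount: apply the unit price of the highest breakpoint reached."""
    price = breaks[0][1]
    for threshold, p in breaks:
        if qty >= threshold:
            price = p
    return price * qty

costs = {q: purchase_cost(q) for q in (40, 60, 120)}
print(costs)  # {40: 400.0, 60: 540.0, 120: 960.0}
```

Inside the dynamic program, this cost function replaces the constant unit price, so a slightly larger order can sometimes be cheaper in total than a smaller one.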
Selective waste collection optimization in Romania and its impact to urban climate
NASA Astrophysics Data System (ADS)
Mihai, Šercǎianu; Iacoboaea, Cristina; Petrescu, Florian; Aldea, Mihaela; Luca, Oana; Gaman, Florian; Parlow, Eberhard
2016-08-01
According to European Directives, transposed into national legislation, the Member States should have organized separate collection systems at least for paper, metal, plastic, and glass by 2015. In Romania, since 2011 only 12% of collected municipal waste has been recovered, the rest being stored in landfills, although storage is considered the last option in the waste hierarchy. At the same time, only 4% of municipal waste was collected selectively. Surveys have shown that Romanian people do not have selective collection bins close to their residences. The article aims to analyze the current situation in Romania in the field of waste collection and management and to propose a layout of selective collection containers, using geographic information systems tools, for a case study in Romania. Route optimization is performed based on remote sensing technologies and network analyst protocols. By optimizing the selective collection system, greenhouse gas, particle, and dust emissions can be reduced.
Payment mechanism and GP self-selection: capitation versus fee for service.
Allard, Marie; Jelovac, Izabela; Léger, Pierre-Thomas
2014-06-01
This paper analyzes the consequences of allowing gatekeeping general practitioners (GPs) to select their payment mechanism. We model GPs' behavior under the most common payment schemes (capitation and fee for service) and when GPs can select one among them. Our analysis considers GP heterogeneity in terms of both ability and concern for their patients' health. We show that when the costs of wasteful referrals to costly specialized care are relatively high, fee for service payments are optimal to maximize the expected patients' health net of treatment costs. Conversely, when the losses associated with failed referrals of severely ill patients are relatively high, we show that either GPs' self-selection of a payment form or capitation is optimal. Last, we extend our analysis to endogenous effort and to competition among GPs. In both cases, we show that self-selection is never optimal.
Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi
2011-12-01
Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for the urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks, aiming to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, the air pollution factor was the most important, and, compared with a single objective factor, the weighted analysis of multiple objective factors could provide an optimized spatial location selection of new urban green spaces. The combination of GIS technology with the LA model would be a new approach for the spatial optimization of urban green spaces.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
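Weighted binary matrix sampling and the shrinking variable space can be sketched as follows. Each row of the sampled binary matrix is a sub-model drawn variable-by-variable with the current inclusion probabilities; the probabilities are then re-estimated from the best sub-models, so the searched space shrinks. The sub-model error below is invented, standing in for a cross-validated calibration error:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_models = 10, 200
weights = np.full(n_vars, 0.5)      # inclusion probability of each variable

# Toy sub-model error: only variables 0-2 are informative, the rest add noise
# (an assumed stand-in for a cross-validated prediction error).
def model_error(mask):
    return 3 - mask[:3].sum() + 0.3 * mask[3:].sum()

for _ in range(20):
    # Weighted binary matrix sampling: draw n_models sub-models at once.
    M = rng.random((n_models, n_vars)) < weights
    errors = np.array([model_error(m) for m in M])
    elite = M[np.argsort(errors)[: n_models // 10]]  # keep the best 10%
    weights = elite.mean(axis=0)                     # update: the space shrinks

selected = np.where(weights > 0.5)[0]
print(selected)   # variables whose inclusion probability remains above 0.5
```

Because the weights are refit only from elite sub-models, each new variable space is at least as good as the previous one, which is the second rule the method enforces.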
Optimizing Experimental Design for Comparing Models of Brain Function
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-01-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks.
Zhang, Yu; Zhang, Bing; Zhang, Shi
2017-06-02
Network lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constraints specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs by formulating and solving an optimization problem in which the relay selection of each node acts as the optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes not only the energy consumption rate but also the energy difference among sensor nodes into account to improve network lifetime performance. Since the problem is Non-deterministic Polynomial-hard (NP-hard) and intractable, a heuristic solution is then designed to rapidly address the optimization. The simulation results indicate that the proposed relay selection scheme has better network lifetime performance than existing algorithms and that the heuristic solution has low time complexity with only a negligible performance gap from the optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion.
Zheng, Xiaoming
2017-12-01
The purpose of this work was to examine the effects of the relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure (milliampere-seconds, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of the image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both the logistic and logarithmic functions resulted in the same governing equation for the optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used to guide clinical X-ray imaging through the optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.
A Miniaturized Spectrometer for Optimized Selection of Subsurface Samples for Future MSR Missions
NASA Astrophysics Data System (ADS)
De Sanctis, M. C.; Altieri, F.; De Angelis, S.; Ferrari, M.; Frigeri, A.; Biondi, D.; Novi, S.; Antonacci, F.; Gabrieli, R.; Paolinetti, R.; Villa, F.; Ammannito, A.; Mugnuolo, R.; Pirrotta, S.
2018-04-01
We present the concept of a miniaturized spectrometer based on the ExoMars2020/Ma_MISS experiment. Coupled with a drill tool, it will allow an assessment of subsurface composition and optimize the selection of martian samples with a high astrobiological potential.
Use of a quality trait index to increase the reliability of phenotypic evaluations in broccoli
USDA-ARS?s Scientific Manuscript database
Selection of superior broccoli hybrids involves multiple considerations, including optimization of head quality traits. Quality assessment of broccoli heads is often confounded by relatively subjective human preferences for optimal appearance of heads. To assist the selection process, we assessed fi...
Gao, JianZhao; Tao, Xue-Wen; Zhao, Jia; Feng, Yuan-Ming; Cai, Yu-Dong; Zhang, Ning
2017-01-01
Lysine acetylation, one type of post-translational modification (PTM), plays key roles in cellular regulation and can be involved in a variety of human diseases. However, it is often costly and time-consuming to identify lysine acetylation sites with traditional experimental approaches. Therefore, effective computational methods should be developed to predict the acetylation sites. In this study, we developed a position-specific method for epsilon lysine acetylation site prediction. Sequences of acetylated proteins were retrieved from the UniProt database. Various kinds of features, such as the position-specific scoring matrix (PSSM), amino acid factors (AAF), and disorder, were incorporated. A feature selection method based on mRMR (Maximum Relevance Minimum Redundancy) and IFS (Incremental Feature Selection) was employed. Finally, 319 optimal features were selected from a total of 541 features. Using the 319 optimal features to encode peptides, a predictor was constructed based on dagging. As a result, an accuracy of 69.56% with an MCC of 0.2792 was achieved. We analyzed the optimal features, which suggested some important factors determining the lysine acetylation sites. Analysis of the optimal features provided insights into the mechanism of lysine acetylation sites and guidance for experimental validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
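The mRMR step of such a pipeline can be sketched as a greedy ranking: at each round, pick the feature with the highest relevance to the target minus its mean redundancy with already-selected features. Absolute correlation stands in for mutual information here, and the data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x0 = rng.normal(size=n)
x1 = x0 + 0.5 * rng.normal(size=n)       # largely redundant with x0
x2 = rng.normal(size=n)
y = x0 + x2 + 0.1 * rng.normal(size=n)   # both x0 and x2 are informative
X = np.column_stack([x0, x1, x2])

def corr(a, b):
    return abs(float(np.corrcoef(a, b)[0, 1]))

# mRMR-style greedy ranking: maximise relevance minus mean redundancy with
# the already-selected set (correlation as a stand-in for mutual information).
selected, remaining = [], [0, 1, 2]
while remaining:
    def score(j):
        relevance = corr(X[:, j], y)
        redundancy = (np.mean([corr(X[:, j], X[:, k]) for k in selected])
                      if selected else 0.0)
        return relevance - redundancy
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)

print(selected)   # the redundant copy x1 is ranked last
```

IFS then walks down this ranking, adding one feature at a time and keeping the prefix that maximizes cross-validated accuracy, which is how a set like the 319 optimal features is obtained.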
Role of non-coding RNAs in non-aging-related neurological disorders.
Vieira, A S; Dogini, D B; Lopes-Cendes, I
2018-06-11
Protein coding sequences represent only 2% of the human genome. Recent advances have demonstrated that a significant portion of the genome is actively transcribed as non-coding RNA molecules. These non-coding RNAs are emerging as key players in the regulation of biological processes, and act as "fine-tuners" of gene expression. Neurological disorders are caused by a wide range of genetic mutations, epigenetic and environmental factors, and the exact pathophysiology of many of these conditions is still unknown. It is currently recognized that dysregulations in the expression of non-coding RNAs are present in many neurological disorders and may be relevant in the mechanisms leading to disease. In addition, circulating non-coding RNAs are emerging as potential biomarkers with great potential impact in clinical practice. In this review, we discuss mainly the role of microRNAs and long non-coding RNAs in several neurological disorders, such as epilepsy, Huntington disease, fragile X-associated ataxia, spinocerebellar ataxias, amyotrophic lateral sclerosis (ALS), and pain. In addition, we give information about the conditions where microRNAs have demonstrated to be potential biomarkers such as in epilepsy, pain, and ALS.
805 MHz Beta = 0.47 Elliptical Accelerating Structure R & D
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. Bricker; C. Compton; W. Hartung
2008-09-22
A 6-cell 805 MHz superconducting cavity for acceleration in the velocity range of about 0.4 to 0.53 times the speed of light was designed. After single-cell prototyping, three 6-cell niobium cavities were fabricated. In vertical RF tests of the 6-cell cavities, the measured quality factors (Q_0) were between 7 × 10^9 and 1.4 × 10^10 at the design field (accelerating gradient of 8 to 10 MV/m). A rectangular cryomodule was designed to house 4 cavities per cryomodule. The 4-cavity cryomodule could be used for acceleration of ions in a linear accelerator, with focusing elements between the cryomodules. A prototype cryomodule was fabricated to test 2 cavities under realistic operating conditions. Two of the 6-cell cavities were equipped with helium tanks, tuners, and input coupler and installed into the cryomodule. The prototype cryomodule was used to verify alignment, electromagnetic performance, frequency tuning, cryogenic performance, low-level RF control, and control of microphonics.
Application of extremum seeking for time-varying systems to resonance control of RF cavities
Scheinker, Alexander
2016-09-13
A recently developed form of extremum seeking for time-varying systems is implemented in hardware for the resonance control of radio-frequency cavities without phase measurements. Normal conducting RF cavity resonance control is performed via a slug tuner, while superconducting TESLA-type cavity resonance control is performed via piezo actuators. The controller maintains resonance by minimizing reflected power utilizing model-independent adaptive feedback. Unlike standard phase-measurement-based resonance control, the presented approach is not sensitive to arbitrary phase shifts of the RF signals due to temperature-dependent cable length or phase-measurement hardware changes. The phase independence of this method removes the common slowly varying drifts and the periodic recalibration required by phase-based methods. A general overview of the adaptive controller is presented along with proof-of-principle experimental results at room temperature. Lastly, this method allows us both to maintain a cavity at a desired resonance frequency and to dynamically modify its resonance frequency to track the unknown time-varying frequency of an RF source, thereby maintaining maximal cavity field strength, based only on power-level measurements.
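The idea of holding resonance from power-level measurements alone can be sketched with classic perturb-and-demodulate extremum seeking, used here as a simplified stand-in for the time-varying scheme of the paper (which instead modulates the dither phase with the measured cost). The reflected-power curve and all constants below are assumed:

```python
import math

# Toy reflected-power curve: minimum at the resonant tuner setting theta_star,
# which the controller never observes directly (all numbers assumed).
theta_star = 2.0
def reflected_power(theta):
    return (theta - theta_star) ** 2

# Classic perturb-and-demodulate extremum seeking: dither the tuner setting,
# demodulate the measured power, and descend the estimated gradient.
a, omega, k, dt = 0.1, 100.0, 5.0, 1e-4
theta_hat, t = 0.0, 0.0
for _ in range(200_000):
    theta = theta_hat + a * math.sin(omega * t)       # dithered tuner setting
    cost = reflected_power(theta)                     # power-level measurement only
    theta_hat -= dt * k * cost * math.sin(omega * t)  # demodulated gradient step
    t += dt

print(round(theta_hat, 2))   # settles near theta_star = 2.0
```

Averaging the update gives d(theta_hat)/dt ≈ -k·a·(theta_hat - theta_star), so the tuner setting converges to the resonance without any phase information; if theta_star drifts slowly, the same loop tracks it.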
Perpendicular Biased Ferrite Tuned Cavities for the Fermilab Booster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, Gennady; Awida, Mohamed; Khabiboulline, Timergali
2014-07-01
The aging Fermilab Booster RF system needs an upgrade to support the future experimental program. An important feature of the upgrade is a substantial enhancement of the requirements for the accelerating cavities. The new requirements include enlargement of the cavity beam pipe aperture, an increase of the cavity voltage, and an increase in the repetition rate. The modification of the present traditional parallel-biased ferrite cavities is rather challenging. An alternative to rebuilding the present Fermilab Booster RF cavities is to design and construct new perpendicular-biased RF cavities, which potentially offer a number of advantages. An evaluation and a preliminary design of the perpendicular-biased ferrite tuned cavities for the Fermilab Booster upgrade are described in the paper. It is also desirable for better Booster performance to improve the capture of beam in the Booster during injection and at the start of the ramp. One possible way to do that is to flatten the bucket by introducing second harmonic cavities into the Booster. This paper also looks into the option of using perpendicularly biased ferrite tuners for the second harmonic cavities.
UCHL3 Regulates Topoisomerase-Induced Chromosomal Break Repair by Controlling TDP1 Proteostasis.
Liao, Chunyan; Beveridge, Ryan; Hudson, Jessica J R; Parker, Jacob D; Chiang, Shih-Chieh; Ray, Swagat; Ashour, Mohamed E; Sudbery, Ian; Dickman, Mark J; El-Khamisy, Sherif F
2018-06-12
Genomic damage can feature DNA-protein crosslinks whereby their acute accumulation is utilized to treat cancer and progressive accumulation causes neurodegeneration. This is typified by tyrosyl DNA phosphodiesterase 1 (TDP1), which repairs topoisomerase-mediated chromosomal breaks. Although TDP1 levels vary in multiple clinical settings, the mechanism underpinning this variation is unknown. We reveal that TDP1 is controlled by ubiquitylation and identify UCHL3 as the deubiquitylase that controls TDP1 proteostasis. Depletion of UCHL3 increases TDP1 ubiquitylation and turnover rate and sensitizes cells to TOP1 poisons. Overexpression of UCHL3, but not a catalytically inactive mutant, suppresses TDP1 ubiquitylation and turnover rate. TDP1 overexpression in the topoisomerase therapy-resistant rhabdomyosarcoma is driven by UCHL3 overexpression. In contrast, UCHL3 is downregulated in spinocerebellar ataxia with axonal neuropathy (SCAN1), causing elevated levels of TDP1 ubiquitylation and faster turnover rate. These data establish UCHL3 as a regulator of TDP1 proteostasis and, consequently, a fine-tuner of protein-linked DNA break repair. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Kim, Jung-Hyun; Baddoo, Melody C.; Park, Eun Young; Stone, Joshua K.; Park, Hyeonsoo; Butler, Thomas W.; Huang, Gang; Yan, Xiaomei; Pauli-Behn, Florencia; Myers, Richard M.; Tan, Ming; Flemington, Erik K.; Lim, Ssang-Taek; Erin Ahn, Eun-Young
2016-01-01
Dysregulation of MLL complex-mediated histone methylation plays a pivotal role in gene expression associated with diseases, but little is known about cellular factors modulating MLL complex activity. Here, we report that SON, previously known as an RNA splicing factor, controls MLL complex-mediated transcriptional initiation. SON binds to DNA near transcription start sites, interacts with menin, and inhibits MLL complex assembly, resulting in decreased H3K4me3 and transcriptional repression. Importantly, alternatively spliced short isoforms of SON are markedly upregulated in acute myeloid leukemia. The short isoforms compete with full-length SON for chromatin occupancy, but lack the menin-binding ability, thereby antagonizing full-length SON function in transcriptional repression while not impairing full-length SON-mediated RNA splicing. Furthermore, overexpression of a short isoform of SON enhances replating potential of hematopoietic progenitors. Our findings define SON as a fine-tuner of the MLL-menin interaction and reveal short SON overexpression as a marker indicating aberrant transcriptional initiation in leukemia. PMID:26990989
Optimized Periocular Template Selection for Human Recognition
Sa, Pankaj K.; Majhi, Banshidhar
2013-01-01
A novel approach is proposed for optimally selecting a rectangular template around the periocular region for human recognition. A periocular-image template larger than the optimal one can be slightly more potent for recognition, but the larger template heavily slows down the biometric system by making feature extraction computationally intensive and increasing the database size. A smaller template, on the contrary, cannot yield desirable recognition, though it performs faster due to the lower computation for feature extraction. These two contradictory objectives (namely, (a) minimizing the size of the periocular template and (b) maximizing recognition through the template) are optimized through the proposed research. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases, and satisfactory results have been achieved. The template thus obtained can be used for recognition of individuals in an organization and can be generalized to recognize every citizen of a nation. PMID:23984370
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
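Averaging over models rather than committing to one can be sketched as follows: weight every subset of candidate factors by an approximate marginal likelihood and compute posterior inclusion probabilities. BIC weights stand in here for the paper's mixture-based Bayesian treatment, and the data are invented:

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(5)
n = 60
X = rng.normal(size=(n, 3))                  # three candidate storage factors (toy data)
y = 2.0 * X[:, 0] + rng.normal(0.0, 1.0, n)  # only factor 0 matters, by construction

def rss_of(cols):
    """Residual sum of squares of the least-squares fit on the given columns."""
    if not cols:
        return float(np.sum(y**2))
    A = X[:, cols]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ beta) ** 2))

# Weight every subset of factors by an approximate marginal likelihood (BIC).
models = list(chain.from_iterable(combinations(range(3), k) for k in range(4)))
bics = np.array([n * np.log(rss_of(list(m)) / n) + len(m) * np.log(n) for m in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Posterior inclusion probability of each factor, averaged over all models.
incl = [float(sum(wi for wi, m in zip(w, models) if j in m)) for j in range(3)]
print([round(p, 2) for p in incl])
```

Predictions can be averaged the same way, as a weighted sum of each sub-model's prediction, so decisions about candidate storage conditions account for model uncertainty instead of resting on a single selected model.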
Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E
2006-03-01
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain ΔG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for ΔG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on values of the optimization criteria.
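The Monte Carlo evaluation of competing allocations can be sketched for one-stage selection: fix a field-plot budget, split it between lines and test locations, and estimate the realised genetic gain of selecting the top lines on their phenotypic means. The budget and variance components below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
budget, k = 1000, 10            # total field-plot budget and lines finally selected
var_g, var_e = 1.0, 4.0         # assumed genetic and plot-error variance components

def expected_gain(n_lines, n_locs, reps=300):
    """Monte Carlo estimate of the selection gain for one resource allocation."""
    gains = []
    for _ in range(reps):
        g = rng.normal(0.0, np.sqrt(var_g), n_lines)                 # true values
        obs = g + rng.normal(0.0, np.sqrt(var_e / n_locs), n_lines)  # line means
        top = np.argsort(obs)[-k:]                                   # phenotypic selection
        gains.append(g[top].mean())                                  # realised gain
    return float(np.mean(gains))

# Allocations exhausting the same budget: many lines at few locations,
# or few lines at many locations.
allocations = [(budget // L, L) for L in (2, 4, 5, 10, 20)]
results = {(nl, L): expected_gain(nl, L) for nl, L in allocations}
best = max(results, key=results.get)
print(best)
```

The trade-off the study optimizes is visible even in this sketch: extra locations raise heritability of the line means, but they cost lines and hence selection intensity, so intermediate allocations beat both extremes.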
Adaptive feature selection using v-shaped binary particle swarm optimization.
Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong
2017-01-01
Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
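The V-shaped transfer function and its bit-flipping update can be sketched as follows: the velocity magnitude is mapped through |tanh(v)| to a flip probability, so large velocities flip the current bit rather than set it to a fixed value. The fitness below is an invented separable function standing in for a classifier's cross-validated score:

```python
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_features, n_iter = 20, 12, 60

# Toy fitness: reward selecting the first four features, penalise the rest
# (an assumed stand-in for correlation information entropy or CV accuracy).
def fitness(mask):
    return float(mask[:4].sum() - 0.5 * mask[4:].sum())

def v_transfer(v):
    return np.abs(np.tanh(v))          # V-shaped transfer function

X = rng.integers(0, 2, (n_particles, n_features)).astype(bool)
V = rng.normal(0.0, 0.1, (n_particles, n_features))
pbest = X.copy()
pbest_fit = np.array([fitness(x) for x in X])
gbest = pbest[np.argmax(pbest_fit)].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(n_iter):
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    V = (w * V
         + c1 * r1 * (pbest.astype(int) - X.astype(int))
         + c2 * r2 * (gbest.astype(int) - X.astype(int)))
    flip = rng.random(V.shape) < v_transfer(V)   # V-shape: large |v| -> flip the bit
    X = np.where(flip, ~X, X)
    fits = np.array([fitness(x) for x in X])
    better = fits > pbest_fit
    pbest[better] = X[better]
    pbest_fit[better] = fits[better]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print(gbest.astype(int), fitness(gbest))
```

Because a V-shaped transfer flips relative to the current position instead of saturating toward 0 or 1, it preserves exploration late in the run, which is the search-ability improvement the abstract refers to.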
Training set optimization under population structure in genomic selection.
Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E
2015-01-01
Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, capturing the most phenotypic variation in the TRS with a sampling method is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
Structure-Based Design of Highly Selective Inhibitors of the CREB Binding Protein Bromodomain.
Denny, R Aldrin; Flick, Andrew C; Coe, Jotham; Langille, Jonathan; Basak, Arindrajit; Liu, Shenping; Stock, Ingrid; Sahasrabudhe, Parag; Bonin, Paul; Hay, Duncan A; Brennan, Paul E; Pletcher, Mathew; Jones, Lyn H; Chekler, Eugene L Piatnitski
2017-07-13
Chemical probes are required for preclinical target validation to interrogate novel biological targets and pathways. Selective inhibitors of the CREB binding protein (CREBBP)/EP300 bromodomains are required to facilitate the elucidation of biology associated with these important epigenetic targets. Medicinal chemistry optimization that paid particular attention to physicochemical properties delivered chemical probes with desirable potency, selectivity, and permeability attributes. An important feature of the optimization process was the successful application of rational structure-based drug design to address bromodomain selectivity issues (particularly against the structurally related BRD4 protein).
Trotter, B Wesley; Nanda, Kausik K; Burgey, Christopher S; Potteiger, Craig M; Deng, James Z; Green, Ahren I; Hartnett, John C; Kett, Nathan R; Wu, Zhicai; Henze, Darrell A; Della Penna, Kimberly; Desai, Reshma; Leitl, Michael D; Lemaire, Wei; White, Rebecca B; Yeh, Suzie; Urban, Mark O; Kane, Stefanie A; Hartman, George D; Bilodeau, Mark T
2011-04-15
A new series of imidazopyridine CB2 agonists is described. Structural optimization improved CB2/CB1 selectivity in this series and conferred physical properties that facilitated high in vivo exposure, both centrally and peripherally. Administration of a highly selective CB2 agonist in a rat model of analgesia was ineffective despite substantial CNS exposure, while administration of a moderately selective CB2/CB1 agonist exhibited significant analgesic effects. Copyright © 2011 Elsevier Ltd. All rights reserved.
Gradient stationary phase optimized selectivity liquid chromatography with conventional columns.
Chen, Kai; Lynen, Frédéric; Szucs, Roman; Hanna-Brown, Melissa; Sandra, Pat
2013-05-21
Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation. By combining different stationary phases, SOSLC offers excellent possibilities for method development under both isocratic and gradient conditions. The commercially available SOSLC protocol thus far utilizes dedicated column cartridges and corresponding cartridge holders to build up the combined column of different stationary phases. The present work aims at developing and extending the gradient SOSLC approach towards coupling conventional columns. Generic tubing was used to connect short commercially available LC columns. Fast, baseline separation of a mixture of 12 compounds containing phenones, benzoic acids and hydroxybenzoates under both isocratic and linear gradient conditions was selected to demonstrate the potential of SOSLC. The influence of the connecting tubing on the deviation of predictions is also discussed.
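The retention prediction underlying SOSLC can be sketched for the isocratic case: the retention factor of a serially coupled column is commonly modeled as the length-weighted sum of the per-phase retention factors. The retention factors and length fractions below are hypothetical, not values from the paper:

```python
def predicted_retention(t0, fractions, k_factors):
    """Isocratic SOSLC-style prediction: the combined retention factor of a
    serially coupled column is the length-weighted sum of the per-phase
    retention factors, so t_R = t0 * (1 + sum(f_i * k_i))."""
    assert abs(sum(fractions) - 1.0) < 1e-9  # fractions must cover the column
    k_combined = sum(f * k for f, k in zip(fractions, k_factors))
    return t0 * (1.0 + k_combined)

# hypothetical retention factors of one analyte on three stationary phases
k = [2.0, 5.0, 1.0]
t_r = predicted_retention(t0=1.0, fractions=[0.5, 0.25, 0.25], k_factors=k)
# 1.0 * (1 + 0.5*2.0 + 0.25*5.0 + 0.25*1.0) = 3.5
```

Method development then searches over the fractions (i.e., column segment lengths) to maximize the separation of all analyte pairs; the paper's contribution is doing this with conventional columns joined by generic tubing.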
Optimizing event selection with the random grid search
NASA Astrophysics Data System (ADS)
Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip
2018-07-01
The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
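The core RGS idea, drawing candidate cut points from the events themselves rather than from a fixed grid, can be sketched as follows. The two-variable rectangular cuts, the s/sqrt(s+b) figure of merit, and the Gaussian toy samples are illustrative assumptions, not the published configuration:

```python
import random

def random_grid_search(signal, background, n_cuts=200, seed=7):
    """Random grid search: candidate cut points are drawn from the signal
    events themselves, and each candidate cut (x > xc, y > yc) is scored
    by an approximate significance s / sqrt(s + b)."""
    rng = random.Random(seed)
    best_cut, best_score = None, -1.0
    for _ in range(n_cuts):
        xc, yc = rng.choice(signal)          # a signal event defines the cut
        s = sum(1 for x, y in signal if x > xc and y > yc)
        b = sum(1 for x, y in background if x > xc and y > yc)
        if s + b > 0:
            z = s / (s + b) ** 0.5
            if z > best_score:
                best_cut, best_score = (xc, yc), z
    return best_cut, best_score

# toy samples: signal shifted away from background in both variables
rng = random.Random(0)
sig = [(rng.gauss(2, 1), rng.gauss(2, 1)) for _ in range(500)]
bkg = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)]
cut, score = random_grid_search(sig, bkg)
```

Sampling cuts from the signal distribution concentrates the search where signal actually lives, which is what makes RGS efficient compared with an exhaustive grid.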
Optimal Target Stars in the Search for Life
NASA Astrophysics Data System (ADS)
Lingam, Manasvi; Loeb, Abraham
2018-04-01
The selection of optimal targets in the search for life represents a highly important strategic issue. In this Letter, we evaluate the benefits of searching for life around a potentially habitable planet orbiting a star of arbitrary mass relative to a similar planet around a Sun-like star. If recent physical arguments implying that the habitability of planets orbiting low-mass stars is selectively suppressed are correct, we find that planets around solar-type stars may represent the optimal targets.
Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon
2015-06-01
Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to GoF-optimal was the strategy using a 5-year prediction base. The GoF-optimal approach can be used as a selection criterion to find an adequate base of prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
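A minimal sketch of the base-selection idea, choosing the historical window whose fitted model best describes the recent data, is shown below. The simple linear trend model and the mean-squared-residual criterion are simplifications standing in for the paper's GoF-optimal method; the series is synthetic:

```python
def linear_fit(years, counts):
    """Ordinary least squares fit of counts = a + b * year."""
    n = len(years)
    mx, my = sum(years) / n, sum(counts) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(years, counts))
         / sum((x - mx) ** 2 for x in years))
    return my - b * mx, b

def gof_optimal_base(years, counts, bases=(5, 10, 15, 20)):
    """Pick the prediction base whose linear fit to the most recent `base`
    years has the smallest mean squared residual (a goodness-of-fit proxy)."""
    best_base, best_mse = None, float("inf")
    for base in bases:
        ys, cs = years[-base:], counts[-base:]
        a, b = linear_fit(ys, cs)
        mse = sum((c - (a + b * y)) ** 2 for y, c in zip(ys, cs)) / base
        if mse < best_mse:
            best_base, best_mse = base, mse
    return best_base

# synthetic series: a flat old regime followed by a clean recent linear trend,
# so long bases mix two regimes and fit the recent data poorly
years = list(range(1988, 2008))
counts = [100.0] * 10 + [100.0 + 5.0 * i for i in range(10)]
base = gof_optimal_base(years, counts)
```

The real method additionally validates the choice ex post via prediction-interval coverage and discrepancy ratios, as described above.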
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
An opinion formation based binary optimization approach for feature selection
NASA Astrophysics Data System (ADS)
Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo
2018-02-01
This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.
ERIC Educational Resources Information Center
Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.
2016-01-01
The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…
NASA Astrophysics Data System (ADS)
Cao, Yang; Liu, Chun; Huang, Yuehui; Wang, Tieqiang; Sun, Chenjun; Yuan, Yue; Zhang, Xinsong; Wu, Shuyun
2017-02-01
With the development of roof photovoltaic power (PV) generation technology and the increasingly urgent need to improve supply reliability levels in remote areas, islanded microgrids with photovoltaic and energy storage systems (IMPE) are developing rapidly. The high costs of photovoltaic panel material and energy storage battery material have become the primary factors that hinder the development of IMPE. The advantages and disadvantages of different types of photovoltaic panel materials and energy storage battery materials are analyzed in this paper, and guidance is provided on material selection for IMPE planners. The time-sequential simulation method is applied to optimize material demands of the IMPE. The model is solved by parallel algorithms provided by a commercial solver named CPLEX. Finally, to verify the model, an actual IMPE is selected as a case system. Simulation results on the case system indicate that the optimization model and corresponding algorithm are feasible. Guidance on material selection and quantity demand for IMPEs in remote areas is provided by this method.
Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.
Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei
2015-06-25
Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation Systems (INS), both based on the inertial reference frame, are discussed in this paper. Because both rely on gravity vector integration, their performance is determined by the integration time. In previous works, the integration time was selected by experience. To give a criterion for the selection process and make the selection of the integration time more accurate, an optimal parameter design of these algorithms for FOG INS is performed in this paper. The design is based on an analysis of the error characteristics of the two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that different operational conditions call for different coarse alignment algorithms in order to achieve better performance. Lastly, the experimental results validate the effectiveness of the proposed algorithm.
Research on intrusion detection based on Kohonen network and support vector machine
NASA Astrophysics Data System (ADS)
Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi
2018-05-01
Support vector machines (SVMs) applied directly to network intrusion detection suffer from low detection accuracy and long detection times. Optimizing the SVM parameters can greatly improve detection accuracy, but the long optimization time makes direct application to high-speed networks impractical. A method based on Kohonen neural network feature selection is therefore proposed to reduce the time needed to optimize the SVM parameters. Firstly, the weights of the KDD99 network intrusion data are calculated by a Kohonen network and features are selected by weight. Then, after feature selection is completed, a genetic algorithm (GA) and the grid search method are used to find appropriate parameters, and the data are classified by support vector machines. Comparative experiments show that feature selection reduces the parameter optimization time with little influence on classification accuracy. The experiments suggest that support vector machines can be used in network intrusion detection systems while reducing the missing rate.
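The two-stage pipeline above, weight the features first and then search classifier parameters on the reduced feature set, can be sketched end to end. To keep the sketch dependency-free, a class-mean-difference score stands in for the Kohonen-derived weights and a k-NN classifier stands in for the SVM; the data are synthetic, so this illustrates the pipeline shape only, not the paper's components:

```python
import random

rng = random.Random(3)

def make_data(n):
    """Synthetic two-class data: feature 0 is informative, feature 1 is noise."""
    X, y = [], []
    for _ in range(n):
        c = rng.random() < 0.5
        X.append((rng.gauss(2.0 if c else -2.0, 0.5), rng.gauss(0.0, 1.0)))
        y.append(1 if c else 0)
    return X, y

def feature_weights(X, y):
    """Stand-in for the Kohonen-derived weights: score each feature by the
    absolute difference of its class means."""
    w = []
    for j in range(len(X[0])):
        m0 = [x[j] for x, c in zip(X, y) if c == 0]
        m1 = [x[j] for x, c in zip(X, y) if c == 1]
        w.append(abs(sum(m1) / len(m1) - sum(m0) / len(m0)))
    return w

def knn_predict(X_tr, y_tr, x, feats, k):
    """k-NN vote restricted to the selected features (stand-in classifier)."""
    dists = sorted((sum((xt[j] - x[j]) ** 2 for j in feats), c)
                   for xt, c in zip(X_tr, y_tr))
    votes = [c for _, c in dists[:k]]
    return max(set(votes), key=votes.count)

X_tr, y_tr = make_data(100)
X_te, y_te = make_data(50)
ranked = sorted(range(2), key=lambda j: feature_weights(X_tr, y_tr)[j], reverse=True)

best_acc, best_cfg = 0.0, None
for m in (1, 2):            # number of top-weighted features kept
    for k in (1, 3, 5):     # the grid-searched classifier parameter
        feats = ranked[:m]
        acc = sum(knn_predict(X_tr, y_tr, x, feats, k) == c
                  for x, c in zip(X_te, y_te)) / len(y_te)
        if acc > best_acc:
            best_acc, best_cfg = acc, (m, k)
```

Shrinking the feature set before the parameter grid search is what cuts the optimization time in the paper's method.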
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandra, S.; Habicht, P.; Chexal, B.
1995-12-01
A large amount of piping in a typical nuclear power plant is susceptible to Flow-Accelerated Corrosion (FAC) wall thinning to varying degrees. A typical FAC monitoring program includes wall thickness measurement of a select number of components in order to judge the structural integrity of entire systems. In order to appropriately allocate resources and maintain an adequate FAC program, it is necessary to optimize the selection of components for inspection by focusing on those components which provide the best indication of system susceptibility to FAC. A better understanding of system FAC predictability and the types of FAC damage encountered can provide some of the insight needed to better focus and optimize the inspection plan for an upcoming refueling outage. Laboratory examination of FAC-damaged components removed from service at Northeast Utilities' (NU) nuclear power plants provides a better understanding of the damage mechanisms involved and contributing causes. Selected results of this ongoing study are presented with specific conclusions which will help NU to better focus inspections and thus optimize the ongoing FAC inspection program.
NASA Astrophysics Data System (ADS)
Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming
2006-10-01
The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix
McConnel, M B; Galligan, D T
2004-10-01
Optimization programs are currently used to aid in the selection of bulls to be used in herd breeding programs. While these programs offer a systematic approach to the problem of semen selection, they ignore the impact of volume discounts. Volume discounts are discounts that vary depending on the number of straws purchased. The dynamic nature of volume discounts means that, in order to be adequately accounted for, they must be considered in the optimization routine. Failing to do this creates a missed economic opportunity because the potential benefits of optimally selecting and combining breeding company discount opportunities are not captured. To address these issues, an integer program was created which used binary decision variables to incorporate the effects of quantity discounts into the optimization program. A consistent set of trait criteria was used to select a group of bulls from 3 sample breeding companies. Three different selection programs were used to select the bulls, 2 traditional methods and the integer method. After the discounts were applied using each method, the integer program resulted in the lowest cost portfolio of bulls. A sensitivity analysis showed that the integer program also resulted in a low cost portfolio when the genetic trait goals were changed to be more or less stringent. In the sample application, a net benefit of the new approach over the traditional approaches was a 12.3 to 20.0% savings in semen cost.
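The effect of volume discounts on semen selection can be illustrated with a tiny brute-force stand-in for the paper's integer program: each company's whole order is priced at the deepest discount tier its quantity reaches, so the cheapest mix cannot be found from per-straw prices alone. The tier prices, step size, and straw requirement below are invented:

```python
from itertools import product

# hypothetical discount tiers per company: (minimum straws, price per straw)
tiers = {
    "A": [(0, 10.0), (50, 8.0)],
    "B": [(0, 9.0), (100, 6.5)],
    "C": [(0, 11.0), (30, 9.0)],
}

def order_cost(company, n):
    """Price the whole order at the deepest tier the quantity reaches --
    the discontinuity that per-straw reasoning misses."""
    if n == 0:
        return 0.0
    price = max((t for t in tiers[company] if n >= t[0]), key=lambda t: t[0])[1]
    return n * price

def cheapest_allocation(total_needed, step=10, cap=150):
    """Brute-force stand-in for the paper's integer program: enumerate straw
    counts per company (multiples of `step`) and keep the cheapest mix that
    meets the requirement. A real solver would use binary tier variables."""
    grid = range(0, cap + 1, step)
    best_cost, best_alloc = float("inf"), None
    for qs in product(grid, repeat=len(tiers)):
        if sum(qs) >= total_needed:
            c = sum(order_cost(co, q) for co, q in zip(tiers, qs))
            if c < best_cost:
                best_cost, best_alloc = c, dict(zip(tiers, qs))
    return best_cost, best_alloc

best_cost, alloc = cheapest_allocation(150)
```

With these invented tiers, concentrating the full order at one company's deep tier beats any split, which is exactly the kind of opportunity the integer program captures and traditional per-bull selection misses.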
Hegade, Ravindra Suryakant; De Beer, Maarten; Lynen, Frederic
2017-09-15
Chiral Stationary-Phase Optimized Selectivity Liquid Chromatography (SOSLC) is proposed as a tool to optimally separate mixtures of enantiomers on a set of commercially available coupled chiral columns. This approach allows for the prediction of the separation profiles on any possible combination of the chiral stationary phases based on a limited number of preliminary analyses, followed by automated selection of the optimal column combination. Both the isocratic and gradient SOSLC approaches were implemented to predict the retention times of a mixture of 4 chiral pairs on all possible combinations of the 5 commercial chiral columns. Predictions in isocratic and gradient mode were performed with a commercially available algorithm and with an in-house developed Microsoft Visual Basic algorithm, respectively. Optimal predictions in the isocratic mode required the coupling of 4 columns, whereby relative deviations between the predicted and experimental retention times ranged between 2 and 7%. Gradient predictions led to the coupling of 3 chiral columns allowing baseline separation of all solutes, whereby differences between predictions and experiments ranged between 0 and 12%. The methodology is a novel tool for optimizing the separation of mixtures of optical isomers. Copyright © 2017 Elsevier B.V. All rights reserved.
Inbreeding parents should invest more resources in fewer offspring.
Duthie, A Bradley; Lee, Aline M; Reid, Jane M
2016-11-30
Inbreeding increases parent-offspring relatedness and commonly reduces offspring viability, shaping selection on reproductive interactions involving relatives and associated parental investment (PI). Nevertheless, theories predicting selection for inbreeding versus inbreeding avoidance and selection for optimal PI have only been considered separately, precluding prediction of optimal PI and associated reproductive strategy given inbreeding. We unify inbreeding and PI theory, demonstrating that optimal PI increases when a female's inbreeding decreases the viability of her offspring. Inbreeding females should therefore produce fewer offspring due to the fundamental trade-off between offspring number and PI. Accordingly, selection for inbreeding versus inbreeding avoidance changes when females can adjust PI with the degree that they inbreed. By contrast, optimal PI does not depend on whether a focal female is herself inbred. However, inbreeding causes optimal PI to increase given strict monogamy and associated biparental investment compared with female-only investment. Our model implies that understanding evolutionary dynamics of inbreeding strategy, inbreeding depression, and PI requires joint consideration of the expression of each in relation to the other. Overall, we demonstrate that existing PI and inbreeding theories represent special cases of a more general theory, implying that intrinsic links between inbreeding and PI affect evolution of behaviour and intrafamilial conflict. © 2016 The Authors.
Opposing selection and environmental variation modify optimal timing of breeding.
Tarwater, Corey E; Beissinger, Steven R
2013-09-17
Studies of evolution in wild populations often find that the heritable phenotypic traits of individuals producing the most offspring do not increase proportionally in the population. This paradox may arise when phenotypic traits influence both fecundity and viability and when there is a tradeoff between these fitness components, leading to opposing selection. Such tradeoffs are the foundation of life history theory, but they are rarely investigated in selection studies. Timing of breeding is a classic example of a heritable trait under directional selection that does not result in an evolutionary response. Using a 22-y study of a tropical parrot, we show that opposing viability and fecundity selection on the timing of breeding is common and affects optimal breeding date, defined by maximization of fitness. After accounting for sampling error, the directions of viability (positive) and fecundity (negative) selection were consistent, but the magnitude of selection fluctuated among years. Environmental conditions (rainfall and breeding density) primarily and breeding experience secondarily modified selection, shifting optimal timing among individuals and years. In contrast to other studies, viability selection was as strong as fecundity selection, late-born juveniles had greater survival than early-born juveniles, and breeding later in the year increased fitness under opposing selection. Our findings provide support for life history tradeoffs influencing selection on phenotypic traits, highlight the need to unify selection and life history theory, and illustrate the importance of monitoring survival as well as reproduction for understanding phenological responses to climate change.
Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications
USDA-ARS?s Scientific Manuscript database
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...
NASA Astrophysics Data System (ADS)
Sutrisno, Widowati, Tjahjana, R. Heru
2017-12-01
Future costs in many industrial problems are inherently uncertain, so a mathematical analysis of problems with uncertain costs is needed. In this article, we deal with fuzzy expected value analysis for an integrated supplier selection problem with uncertain costs, where the cost uncertainty is modeled by fuzzy variables. We formulate the problem as a fuzzy expected value based quadratic optimization with a total cost objective function and solve it using expected value based fuzzy programming. In the numerical examples performed by the authors, the supplier selection problem was solved: the optimal supplier was selected for each time period, the optimal volume of each product to purchase from each supplier in each time period was determined, and the product stock level was controlled to follow the given reference level.
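One common way to turn a triangular fuzzy cost into a crisp value, assumed here purely for illustration of the expected-value idea, is the credibility-based expected value E = (a + 2b + c)/4. A toy single-period supplier choice using that reduction might look like this; the suppliers and fuzzy unit costs are hypothetical:

```python
def fuzzy_expected_value(a, b, c):
    """Credibility-based expected value of a triangular fuzzy number (a, b, c):
    E = (a + 2b + c) / 4."""
    return (a + 2.0 * b + c) / 4.0

# hypothetical fuzzy unit costs (pessimistic, most likely, optimistic reversed:
# here (low, mode, high)) for two suppliers, and a fixed demand
suppliers = {"S1": (9.0, 10.0, 12.0), "S2": (8.0, 11.0, 13.0)}
demand = 100

# reduce each fuzzy cost to its expected value, then pick the cheapest supplier
crisp = {s: fuzzy_expected_value(*abc) for s, abc in suppliers.items()}
chosen = min(crisp, key=crisp.get)
total_cost = demand * crisp[chosen]
```

The paper's model is richer (quadratic objective, multiple periods and products, stock-level tracking), but this is the basic defuzzification step that makes the uncertain-cost objective optimizable.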
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data is proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive inputs from users during the optimization process of feature selection. This study investigates this question by fixing a few user-input features in the finally selected feature subset and formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs comparably to or much better than the existing feature selection algorithms, even with the constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
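The hard-constraint idea, user-chosen features fixed inside the optimization, can be sketched with a greedy forward search in which the fixed features appear in every candidate subset. The class-mean-separation fitness below is a stand-in for fsCoP's actual objective, and the data are synthetic:

```python
import random

rng = random.Random(5)

def make_data(n=200, d=6):
    """Synthetic two-class data: features 0 and 1 are informative, rest noise."""
    X, y = [], []
    for _ in range(n):
        c = rng.random() < 0.5
        row = [rng.gauss(0, 1) for _ in range(d)]
        if c:
            row[0] += 2.0
            row[1] += 1.5
        X.append(row)
        y.append(int(c))
    return X, y

def fitness(X, y, feats):
    """Toy objective: class-mean separation summed over selected features,
    minus a penalty per feature (stand-in for the paper's objective)."""
    s = 0.0
    for j in feats:
        m0 = [x[j] for x, c in zip(X, y) if c == 0]
        m1 = [x[j] for x, c in zip(X, y) if c == 1]
        s += abs(sum(m1) / len(m1) - sum(m0) / len(m0))
    return s - 0.5 * len(feats)

def constrained_forward_select(X, y, fixed, d):
    """Greedy search in which the user-fixed features are hard constraints:
    they sit in every candidate subset, mimicking fsCoP's user inputs."""
    selected = set(fixed)
    improved = True
    while improved:
        improved = False
        base = fitness(X, y, selected)
        for j in set(range(d)) - selected:
            if fitness(X, y, selected | {j}) > base:
                selected.add(j)
                improved = True
                break
    return selected

X, y = make_data()
subset = constrained_forward_select(X, y, fixed={3}, d=6)
```

Feature 3 stays selected no matter what the objective says about it, which is precisely the user-constraint behaviour; the informative features are still discovered by the search.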
Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong
2017-03-01
Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
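The best-performing configuration reported above, constriction-coefficient PSO with a ring topology, can be sketched on a toy objective. The Clerc-Kennedy constriction factor is computed from c1 + c2 = 4.1; the sphere function stands in for a real sensor-localization error surface, and all swarm sizes and iteration counts are illustrative:

```python
import random

rng = random.Random(11)

def sphere(p):
    """Toy objective standing in for localization error: distance^2 to origin."""
    return sum(x * x for x in p)

# Clerc-Kennedy constriction coefficient for c1 + c2 = phi = 4.1
c1 = c2 = 2.05
phi = c1 + c2
chi = 2.0 / abs(2.0 - phi - (phi * phi - 4.0 * phi) ** 0.5)

n, dim, iters = 12, 2, 100
pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]

for _ in range(iters):
    for i in range(n):
        # ring topology: each particle only sees itself and its two neighbours
        ring = [(i - 1) % n, i, (i + 1) % n]
        lbest = min((pbest[j] for j in ring), key=sphere)
        for d in range(dim):
            vel[i][d] = chi * (vel[i][d]
                               + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                               + c2 * rng.random() * (lbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]

best = min(pbest, key=sphere)
```

The ring neighbourhood slows information spread compared with a fully connected swarm, which trades raw convergence speed for robustness against premature convergence, consistent with the survey's finding that this variant performs best.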
Gu, Wenbo; O'Connor, Daniel; Nguyen, Dan; Yu, Victoria Y; Ruan, Dan; Dong, Lei; Sheng, Ke
2018-04-01
Intensity-Modulated Proton Therapy (IMPT) is the state-of-the-art method of delivering proton radiotherapy. Previous research has mainly focused on optimization of scanning spots with manually selected beam angles. Due to the computational complexity, the potential benefit of simultaneously optimizing beam orientations and spot pattern could not be realized. In this study, we developed a novel integrated beam orientation optimization (BOO) and scanning-spot optimization algorithm for intensity-modulated proton therapy (IMPT). A brain chordoma and three unilateral head-and-neck patients with a maximal target size of 112.49 cm³ were included in this study. A total of 1162 noncoplanar candidate beams evenly distributed across 4π steradians were included in the optimization. For each candidate beam, the pencil-beam doses of all scanning spots covering the PTV and a margin were calculated. The beam angle selection and spot intensity optimization problem was formulated to include three terms: a dose fidelity term to penalize the deviation of PTV and OAR doses from the ideal dose distribution; an L1-norm sparsity term to reduce the number of active spots and improve delivery efficiency; and a group sparsity term to control the number of active beams between 2 and 4. For the group sparsity term, the convex L2,1-norm and the nonconvex L2,1/2-norm were tested. For the dose fidelity term, both a quadratic function and a linearized equivalent uniform dose (LEUD) cost function were implemented. The optimization problem was solved using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The IMPT BOO method was tested on three head-and-neck patients and one skull base chordoma patient. The results were compared with IMPT plans created using column generation selected beams or manually selected beams. The L2,1-norm plan selected spatially aggregated beams, indicating potential degeneracy using this norm. The L2,1/2-norm was able to select spatially separated beams and achieve smaller deviation from the ideal dose. In the L2,1/2-norm plans, the [mean dose, maximum dose] of OARs were reduced by an average of [2.38%, 4.24%] and [2.32%, 3.76%] of the prescription dose for the quadratic and LEUD cost functions, respectively, compared with the IMPT plan using manual beam selection, while maintaining the same PTV coverage. The L2,1/2 group sparsity plans were dosimetrically superior to the column generation plans as well. Besides beam orientation selection, spot sparsification was observed. Generally, with the quadratic cost function, 30%~60% of spots in the selected beams remained active. With the LEUD cost function, the percentage of active spots was in the range of 35%~85%. The BOO-IMPT run time was approximately 20 min. This work shows the first IMPT approach integrating noncoplanar BOO and scanning-spot optimization in a single mathematical framework. This method is computationally efficient and dosimetrically superior, and produces delivery-friendly IMPT plans. © 2018 American Association of Physicists in Medicine.
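The beam-level group sparsity described above can be illustrated with a minimal sketch (a generic group soft-thresholding step, our own simplification, not the authors' code): inside FISTA, the convex L2,1 penalty is handled by a proximal operator that shrinks each beam's spot-weight vector and zeroes whole beams whose norm falls below the threshold, which is what drives the plan toward a few active beams.

```python
import math

def prox_group_l21(groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (group soft-thresholding).

    Each group's Euclidean norm is shrunk by lam; groups whose norm falls
    below lam are zeroed entirely, i.e. the corresponding beam is switched
    off in a group-sparse plan.
    """
    out = []
    for g in groups:
        norm = math.sqrt(sum(x * x for x in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * x for x in g])
    return out
```

With the nonconvex L2,1/2 penalty the same proximal structure applies, but the shrinkage rule changes and convergence guarantees weaken.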
Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm
Svečko, Rajko
2014-01-01
This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. The motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in a form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper introduces novel objectives for assessing the robustness and performance of such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. A regional pole placement method is presented with the aims of simplifying the controllers' structures and allowing their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with an additional admissible region for the optimized pole locations. The polynomial deviation between the selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of the closed loops. The design of the controllers and the multiobjective optimization procedure involve a set of objectives that are optimized simultaneously with a genetic algorithm (differential evolution). PMID:24987749
[Academic burnout and selection-optimization-compensation strategy in medical students].
Chun, Kyung Hee; Park, Young Soon; Lee, Young Hwan; Kim, Seong Yong
2014-12-01
This study was conducted to examine the relationship between academic demand, academic burnout, and the selection-optimization-compensation (SOC) strategy in medical students. A total of 317 students at Yeungnam University, comprising 90 premedical course students, 114 medical course students, and 113 graduate course students, completed a survey that addressed the factors of academic burnout and the selection-optimization-compensation strategy. We analyzed variances of burnout and SOC strategy use by group, and stepwise multiple regression analysis was conducted. There were significant differences in emotional exhaustion and cynicism between groups and year in school. In the SOC strategy, there were no significant differences between groups except for elective selection. The second-year medical and graduate students experienced significantly greater exhaustion (p<0.001), and first-year premedical students experienced significantly higher cynicism (p<0.001). By multiple regression analysis, subfactors of academic burnout and emotional exhaustion were significantly affected by academic demand (p<0.001), and 46% of the variance was explained. Cynicism was significantly affected by elective selection (p<0.05), and inefficacy was significantly influenced by optimization (p<0.001). To improve adaptation, prescriptive strategies and preventive support should be implemented with regard to academic burnout in medical school. Longitudinal and qualitative studies on burnout must be conducted.
Discovery and Optimization of a Novel Series of Highly Selective JAK1 Kinase Inhibitors.
Grimster, Neil P; Anderson, Erica; Alimzhanov, Marat; Bebernitz, Geraldine; Bell, Kirsten; Chuaqui, Claudio; Deegan, Tracy; Ferguson, Andrew D; Gero, Thomas; Harsch, Andreas; Huszar, Dennis; Kawatkar, Aarti; Kettle, Jason Grant; Lyne, Paul D; Read, Jon A; Rivard Costa, Caroline; Ruston, Linette; Schroeder, Patricia; Shi, Jie; Su, Qibin; Throner, Scott; Toader, Dorin; Vasbinder, Melissa Marie; Woessner, Richard; Wang, Haixia; Wu, Allan; Ye, Minwei; Zheng, Weijia; Zinda, Michael
2018-06-01
Herein, we report the discovery and characterization of a novel series of pyrimidine based JAK1 inhibitors. Optimization of these ATP competitive compounds was guided by X-ray crystallography and a structure-based drug design approach, focusing on selectivity, potency, and pharmaceutical properties. The best compound, 24, displayed remarkable JAK1 selectivity (~1000-fold vs JAK2,3 and TYK2), as well as a good kinase selectivity profile. Moreover, a dose-dependent reduction in pSTAT3, a downstream marker of JAK1 inhibition, was observed when 24 was examined in vivo.
Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS
NASA Astrophysics Data System (ADS)
Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.
2013-02-01
Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with dynamic nuclear polarization hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by providing anatomically shaped region-of-interest (ROI) single-metabolite signals available for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of quickly and iteratively providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.
Tandonnet, Christophe; Davranche, Karen; Meynier, Chloé; Burle, Borís; Vidal, Franck; Hasbroucq, Thierry
2012-02-01
We investigated the influence of temporal preparation on information processing. Single-pulse transcranial magnetic stimulation (TMS) of the primary motor cortex was delivered during a between-hand choice task. The time interval between the warning and the imperative stimulus, which varied across blocks of trials, was either optimal (500 ms) or nonoptimal (2500 ms) for participants' performance. Silent period duration was shorter prior to the first evidence of response selection for the optimal condition. The amplitude of the motor evoked potential specific to the responding hand increased earlier for the optimal condition. These results revealed an early release of cortical inhibition and a faster integration of the response selection-related inputs to the corticospinal pathway when temporal preparation is better. Temporal preparation may induce cortical activation prior to response selection that speeds up the implementation of the selected response. Copyright © 2011 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Hull, Anthony B.; Westerhoff, Thomas
2015-01-01
Management of cost and risk has become the key enabling element for compelling science to be done within Explorer or M-Class missions. We trace how primary mirror selection may be co-optimized with orbit selection, and then trace the cost and risk implications of selecting a low-diffusivity, low-thermal-expansion material for low and medium Earth orbits versus high-diffusivity, high-thermal-expansion materials for the same orbits. We discuss ZERODUR®, a material that has been in space for over 30 years and is now available as highly lightweighted open-back mirrors, and the attributes of these mirrors in spaceborne optical telescope assemblies. Lightweight ZERODUR® solutions are practical for mirrors from <0.3 m to >4 m in diameter. An example of a 1.2 m lightweight ZERODUR® mirror will be discussed.
Optimizing event selection with the random grid search
Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...
2018-02-27
The random grid search (RGS) is a simple but efficient stochastic algorithm for finding optimal cuts; it was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
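The core RGS idea is compact enough to sketch (a toy version under our own assumptions: rectangular one-sided cuts, and the naive significance s/sqrt(s+b) as the figure of merit, which is not necessarily the score used in the paper): each trial takes one signal event's feature values as the cut point, so the "grid" is sampled where signal actually lives.

```python
import math
import random

def random_grid_search(signal, background, n_trials=200, seed=1):
    """Minimal RGS sketch: each trial uses one signal event's feature
    values as a rectangular cut (keep events with every feature at or
    above the cut) and scores it by the naive significance s/sqrt(s+b).
    Events are tuples of feature values.
    """
    rng = random.Random(seed)
    passes = lambda ev, cut: all(x >= c for x, c in zip(ev, cut))
    best_cut, best_score = None, -1.0
    for _ in range(n_trials):
        cut = rng.choice(signal)  # sample the cut grid from signal itself
        s = sum(passes(ev, cut) for ev in signal)
        b = sum(passes(ev, cut) for ev in background)
        score = s / math.sqrt(s + b) if s + b > 0 else 0.0
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut, best_score
```

Sampling cut points from the signal events themselves is what makes the search efficient: no exhaustive grid over the feature space is needed.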
How unrealistic optimism is maintained in the face of reality.
Sharot, Tali; Korn, Christoph W; Dolan, Raymond J
2011-10-09
Unrealistic optimism is a pervasive human trait that influences domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. We examined this question and found a marked asymmetry in belief updating. Participants updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when those errors called for a positive update, both in individuals who scored high and in those who scored low on trait optimism. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for a negative update in the right inferior prefrontal gyrus. These findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.
Accelerating IMRT optimization by voxel sampling
NASA Astrophysics Data System (ADS)
Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.
2007-12-01
This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
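The voxel-sampling idea lends itself to a short sketch (a toy randomized steepest descent on a quadratic dose objective; the dose matrix, prescription, and step size below are illustrative assumptions, not the paper's clinical setup): each step estimates the gradient from a random subset of voxels, rescaled so the estimate is unbiased.

```python
import random

def sampled_descent(A, prescribed, steps=500, lr=0.01, sample=2, seed=0):
    """Voxel-sampling sketch: each step estimates the gradient of
    sum_v (dose_v - prescribed_v)^2 from a random subset of voxels only.
    A[v][b] is the dose to voxel v per unit weight of beamlet b; beamlet
    weights are kept nonnegative.
    """
    rng = random.Random(seed)
    nb = len(A[0])
    nv = len(A)
    w = [0.0] * nb
    for _ in range(steps):
        voxels = rng.sample(range(nv), sample)
        grad = [0.0] * nb
        for v in voxels:
            dose = sum(A[v][b] * w[b] for b in range(nb))
            err = dose - prescribed[v]
            for b in range(nb):
                # scale by nv/sample so the subsampled gradient is unbiased
                grad[b] += 2.0 * err * A[v][b] * nv / sample
        w = [max(0.0, wi - lr * gi) for wi, gi in zip(w, grad)]
    return w
```

Because the gradient noise vanishes as the dose approaches the prescription, the iterates settle near the full-problem optimum despite never computing the dose to every voxel in one step.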
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
A Compensatory Approach to Optimal Selection with Mastery Scores. Research Report 94-2.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Vos, Hans J.
This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious example is the use of individualized instruction in…
Postoptimality analysis in the selection of technology portfolios
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Shelton, Kacie; Elfes, Alberto; Weisbin, Charles R.
2006-01-01
This paper describes an approach for qualifying optimal technology portfolios obtained with a multi-attribute decision support system. The goal is twofold: to gauge the degree of confidence in the optimal solution and to provide the decision-maker with an array of viable selection alternatives, which take into account input uncertainties and possibly satisfy non-technical constraints.
Xu, Lingwei; Zhang, Hao; Gulliver, T. Aaron
2016-01-01
The outage probability (OP) performance of multiple-relay incremental-selective decode-and-forward (ISDF) relaying mobile-to-mobile (M2M) sensor networks with transmit antenna selection (TAS) over N-Nakagami fading channels is investigated. Exact closed-form OP expressions for both optimal and suboptimal TAS schemes are derived. The power allocation problem is formulated to determine the optimal division of transmit power between the broadcast and relay phases. The OP performance under different conditions is evaluated via numerical simulation to verify the analysis. These results show that the optimal TAS scheme has better OP performance than the suboptimal scheme. Further, the power allocation parameter has a significant influence on the OP performance. PMID:26907282
Braggio, Simone; Montanari, Dino; Rossi, Tino; Ratti, Emiliangelo
2010-07-01
As a result of their wide acceptance and conceptual simplicity, drug-like concepts have had a major influence on the drug discovery process, particularly in the selection of the 'optimal' absorption, distribution, metabolism, excretion and toxicity and physicochemical parameter space. While they have undisputable value when assessing the potential of lead series or evaluating the inherent risk of a portfolio of drug candidates, they are much less useful for weighing up compounds when selecting the best potential clinical candidate. We introduce the concept of drug efficiency as a new tool both to guide drug discovery program teams during the lead optimization phase and to better assess the developability potential of a drug candidate.
The admissible portfolio selection problem with transaction costs and an improved PSO algorithm
NASA Astrophysics Data System (ADS)
Chen, Wei; Zhang, Wei-Guo
2010-05-01
In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors on expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm because traditional optimization algorithms fail to work efficiently for our proposed problem. Finally, we offer a numerical example to illustrate the proposed effective approaches and compare the admissible portfolio efficient frontiers under different constraints.
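As a hedged illustration of the optimization machinery (a plain global-best PSO on a toy objective; the paper's improved PSO and its admissible-efficient-portfolio model are not reproduced here), the basic particle update that any PSO variant builds on looks like this:

```python
import random

def pso_minimize(f, dim, n=20, iters=200, seed=3):
    """Plain global-best PSO sketch: particles move under inertia plus
    attraction to their personal best and the swarm's global best of the
    objective f on the box [0, 1]^dim. Portfolio weights could be
    normalized to sum to one after optimization.
    """
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

Improvements such as the paper's would modify the initialization and update rules to escape the premature convergence this basic form is prone to.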
Chen, Qiang; Chen, Yunhao; Jiang, Weiguo
2016-07-30
In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence, and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.
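Reading RMV literally as the mean divided by the (population) variance of a score across objects, which is our interpretation rather than the paper's exact definition, the fitness function is a one-liner:

```python
def rmv(values):
    """Ratio of Mean to Variance (RMV) sketch: mean of the values divided
    by their population variance. Interpreted here as a generic fitness;
    the paper's precise formulation may differ.
    """
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m / var
```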
Combined Optimal Control System for excavator electric drive
NASA Astrophysics Data System (ADS)
Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.
2018-03-01
The article presents a synthesis of combined optimal control algorithms for the AC drive of an excavator's rotation mechanism. The synthesis regulates the external coordinates based on optimal systems theory and corrects the internal drive coordinates using the "technical optimum" method. The research shows the advantage of combined optimal control systems for the electric rotary drive over classical subordinate-regulation systems. The paper presents a method for selecting the optimality-criterion coefficients so as to find the intersection of the admissible ranges of the control object's coordinates. Tuning the system by choosing these coefficients allows one to set the required drive characteristics: the dynamic moment (M) and the transient time (tpp). The combined optimal control system significantly reduces the maximum dynamic moment (M) while simultaneously reducing the transient time (tpp).
NASA Astrophysics Data System (ADS)
Maringanti, Chetan; Chaubey, Indrajeet; Popp, Jennie
2009-06-01
Best management practices (BMPs) are effective in reducing the transport of agricultural nonpoint source pollutants to receiving water bodies. However, selection of BMPs for placement in a watershed requires optimization of the available resources to obtain maximum possible pollution reduction. In this study, an optimization methodology is developed to select and place BMPs in a watershed to provide solutions that are both economically and ecologically effective. This novel approach develops and utilizes a BMP tool, a database that stores the pollution reduction and cost information of different BMPs under consideration. The BMP tool replaces the dynamic linkage of the distributed parameter watershed model during optimization and therefore reduces the computation time considerably. Total pollutant load from the watershed, and net cost increase from the baseline, were the two objective functions minimized during the optimization process. The optimization model, consisting of a multiobjective genetic algorithm (NSGA-II) in combination with a watershed simulation tool (Soil Water and Assessment Tool (SWAT)), was developed and tested for nonpoint source pollution control in the L'Anguille River watershed located in eastern Arkansas. The optimized solutions provided a trade-off between the two objective functions for sediment, phosphorus, and nitrogen reduction. The results indicated that buffer strips were very effective in controlling the nonpoint source pollutants from leaving the croplands. The optimized BMP plans resulted in potential reductions of 33%, 32%, and 13% in sediment, phosphorus, and nitrogen loads, respectively, from the watershed.
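The trade-off solutions NSGA-II produces are the non-dominated set of the two objectives (pollutant load, net cost). The filtering step at the heart of that algorithm can be sketched naively (an O(n²) version for illustration, not the authors' implementation):

```python
def pareto_front(points):
    """Extract the non-dominated set, minimizing every objective.

    A point p is dominated if some other point q is <= p in all
    objectives and differs in at least one (hence strictly better there).
    NSGA-II applies this filtering, via faster sorting, each generation.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

Each point on the returned front corresponds to one BMP placement plan: the decision maker then picks along the cost-versus-pollution curve.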
Vast Portfolio Selection with Gross-exposure Constraints*
Fan, Jianqing; Zhang, Jingjin; Yu, Ke
2012-01-01
We introduce large portfolio selection using gross-exposure constraints. We show that, with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvements are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
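The gross-exposure constraint has a closed form in the two-asset case, which makes the mechanism easy to see (a toy illustration of the constraint, not the paper's high-dimensional estimator): with weights t and 1-t, the constraint |t| + |1-t| <= c confines t to [(1-c)/2, (1+c)/2], where c = 1 recovers the no-short-sale portfolio.

```python
def two_asset_min_variance(s1, s2, rho, c):
    """Minimize Var(t*r1 + (1-t)*r2) subject to the gross-exposure bound
    |t| + |1-t| <= c. s1, s2 are standard deviations, rho the
    correlation, c >= 1 the exposure bound (c = 1 forbids short sales).
    """
    cov = rho * s1 * s2
    # unconstrained minimizer of the quadratic in t
    t = (s2 * s2 - cov) / (s1 * s1 + s2 * s2 - 2.0 * cov)
    # clamp to the interval implied by the gross-exposure constraint
    lo, hi = (1.0 - c) / 2.0, (1.0 + c) / 2.0
    t = min(hi, max(lo, t))
    var = t * t * s1 * s1 + (1 - t) ** 2 * s2 * s2 + 2 * t * (1 - t) * cov
    return t, var
```

Raising c above 1 when assets are highly correlated lets the clamp relax into a short position and lowers the achievable variance, the paper's point that no-short-sale portfolios can be improved by allowing some shorting.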
Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method
NASA Astrophysics Data System (ADS)
Fitrianingsih, E.; Armellin, R.
2018-04-01
One of the objectives of mission design is selecting an optimal orbital transfer, often interpreted as the transfer that requires minimum propellant consumption. In order to ensure the selected trajectory meets this requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives a minimum value, or by evaluating the trajectory according to certain optimality criteria. The second method is performed by analyzing the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by applying a correction maneuver to the reference trajectory. In addition to its capability to evaluate transfer optimality without the need to calculate the transfer ΔV, the primer vector method also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimal transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing the results to those from the numerical method. An investigation of the optimality of direct transfers is used as an example of the application of the method. The case under study is prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.
Berthaume, Michael A.; Dumont, Elizabeth R.; Godfrey, Laurie R.; Grosse, Ian R.
2014-01-01
Teeth are often assumed to be optimal for their function, which allows researchers to derive dietary signatures from tooth shape. Most tooth shape analyses normalize for tooth size, potentially masking the relationship between relative food item size and tooth shape. Here, we model how relative food item size may affect optimal tooth cusp radius of curvature (RoC) during the fracture of brittle food items using a parametric finite-element (FE) model of a four-cusped molar. Morphospaces were created for four different food item sizes by altering cusp RoCs to determine whether optimal tooth shape changed as food item size changed. The morphospaces were also used to investigate whether variation in efficiency metrics (i.e. stresses, energy and optimality) changed as food item size changed. We found that optimal tooth shape changed as food item size changed, but that all optimal morphologies were similar, with one dull cusp that promoted high stresses in the food item and three cusps that acted to stabilize the food item. There were also positive relationships between food item size and the coefficients of variation for stresses in food item and optimality, and negative relationships between food item size and the coefficients of variation for stresses in the enamel and strain energy absorbed by the food item. These results suggest that relative food item size may play a role in selecting for optimal tooth shape, and the magnitude of these selective forces may change depending on food item size and which efficiency metric is being selected. PMID:25320068
Economic evaluation of genomic selection in small ruminants: a sheep meat breeding program.
Shumbusho, F; Raoul, J; Astruc, J M; Palhiere, I; Lemarié, S; Fugeray-Scarbel, A; Elsen, J M
2016-06-01
Recent genomic evaluation studies using real data and predicting genetic gain by modeling breeding programs have reported moderate expected benefits from the replacement of classic selection schemes by genomic selection (GS) in small ruminants. The objectives of this study were to compare the cost, monetary genetic gain and economic efficiency of classic selection and GS schemes in the meat sheep industry. Deterministic methods were used to model selection based on multi-trait indices from a sheep meat breeding program. Decisional variables related to male selection candidates and progeny testing were optimized to maximize the annual monetary genetic gain (AMGG), that is, a weighted sum of meat and maternal traits annual genetic gains. For GS, a reference population of 2000 individuals was assumed and genomic information was available for evaluation of male candidates only. In the classic selection scheme, males' breeding values were estimated from own and offspring phenotypes. In GS, different scenarios were considered, differing by the information used to select males (genomic only, genomic+own performance, genomic+offspring phenotypes). The results showed that all GS scenarios were associated with higher total variable costs than classic selection (if the cost of genotyping was 123 euros/animal). In terms of AMGG and economic returns, GS scenarios were found to be superior to classic selection only if genomic information was combined with their own meat phenotypes (GS-Pheno) or with their progeny test information. The predicted economic efficiency, defined as returns (proportional to the number of expressions of AMGG in the nucleus and commercial flocks) minus total variable costs, showed that the best GS scenario (GS-Pheno) was up to 15% more efficient than classic selection. For all selection scenarios, optimization increased the overall AMGG, returns and economic efficiency. In conclusion, our study shows that some forms of GS strategies are more advantageous than classic selection, provided that GS is already initiated (i.e. the initial reference population is available). Optimizing decisional variables of the classic selection scheme could be of greater benefit than including genomic information in optimized designs.
NASA Astrophysics Data System (ADS)
Budilova, E. V.; Terekhin, A. T.; Chepurnov, S. A.
1994-09-01
A hypothetical neural scheme is proposed that ensures efficient decision making by an animal searching for food in a maze. Only the general structure of the network is fixed; its quantitative characteristics are found by numerical optimization that simulates the process of natural selection. Selection is aimed at maximization of the expected number of descendants, which is directly related to the energy stored during the reproductive cycle. The main parameters to be optimized are the increments of the interneuronal links and the working-memory constants.
Maessen, J G; Phelps, B; Dekker, A L A J; Dijkman, B
2004-05-01
To optimize resynchronization in biventricular pacing with epicardial leads, mapping to determine the best pacing site is a prerequisite. A port-access surgical mapping technique was developed that allowed multiple pacing site selection and reproducible lead evaluation and implantation. Pressure-volume loop analysis was used for real-time guidance in targeting epicardial lead placement. Even the smallest changes in lead position revealed significantly different functional results. Optimizing the pacing site with this technique allowed functional improvement of up to 40% versus random pacing site selection.
Optimizing Ligand Efficiency of Selective Androgen Receptor Modulators (SARMs)
2015-01-01
A series of selective androgen receptor modulators (SARMs) containing the 1-(trifluoromethyl)benzyl alcohol core have been optimized for androgen receptor (AR) potency and drug-like properties. We have taken advantage of the lipophilic ligand efficiency (LLE) parameter as a guide to interpret the effect of structural changes on AR activity. Over the course of optimization efforts the LLE increased over 3 log units leading to a SARM 43 with nanomolar potency, good aqueous kinetic solubility (>700 μM), and high oral bioavailability in rats (83%). PMID:26819671
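The LLE parameter used to steer this optimization has a standard definition, LLE = pIC50 - cLogP, so a 3 log-unit gain means, for example, a 1000-fold potency improvement at constant lipophilicity. A minimal helper (our own illustration, not from the paper):

```python
import math

def lle(ic50_nM, clogp):
    """Lipophilic ligand efficiency: LLE = pIC50 - cLogP, where
    pIC50 = -log10(IC50 in molar). Higher LLE means potency is earned
    through binding interactions rather than bulk lipophilicity.
    """
    pic50 = -math.log10(ic50_nM * 1e-9)
    return pic50 - clogp
```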
Volpe, Joseph M; Ward, Douglas J; Napolitano, Laura; Phung, Pham; Toma, Jonathan; Solberg, Owen; Petropoulos, Christos J; Walworth, Charles M
2015-01-01
Transmitted HIV-1 exhibiting reduced susceptibility to protease and reverse transcriptase inhibitors is well documented but limited for integrase inhibitors and enfuvirtide. We describe here a case of transmitted 5 drug class-resistance in an antiretroviral (ARV)-naïve patient who was successfully treated based on the optimized selection of an active ARV drug regimen. The value of baseline resistance testing to determine an optimal ARV treatment regimen is highlighted in this case report. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Sunarsih; Kartono
2018-01-01
In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for an integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected-value-based fuzzy programming. Numerical examples are performed to evaluate the model. From the results, the optimal amount of each product that has to be purchased from each supplier for each time period and the optimal amount of each product that has to be stored in the inventory for each time period were determined with minimum total cost, and the inventory level was sufficiently close to the reference level.
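The expected-value approach above can be sketched in a few lines: a triangular fuzzy demand is defuzzified to its expected value, and a small discrete search picks the cheapest order split across suppliers. The supplier costs, the (a, b, c) fuzzy demand, the quadratic holding-cost term, and the EV formula (a + 2b + c)/4 are illustrative assumptions, not the paper's exact model.

```python
def expected_value(tri):
    """Expected value of a triangular fuzzy number (a, b, c):
    EV = (a + 2b + c) / 4, a common defuzzification choice."""
    a, b, c = tri
    return (a + 2 * b + c) / 4.0

def optimal_orders(unit_costs, fuzzy_demand, holding_cost):
    """Brute-force integer order quantities from two suppliers so that the
    expected (defuzzified) demand is met at minimum purchase cost plus a
    quadratic holding cost on any surplus."""
    d = int(round(expected_value(fuzzy_demand)))
    best = None
    for q0 in range(d + 1):
        for q1 in range(d + 1):
            if q0 + q1 < d:          # demand must be covered
                continue
            cost = (unit_costs[0] * q0 + unit_costs[1] * q1
                    + holding_cost * (q0 + q1 - d) ** 2)
            if best is None or cost < best[0]:
                best = (cost, (q0, q1))
    return best

# hypothetical unit costs 3.0 and 2.5, fuzzy demand (90, 100, 110)
cost, (q0, q1) = optimal_orders([3.0, 2.5], (90, 100, 110), holding_cost=0.1)
```

With these toy numbers the cheaper supplier covers the whole defuzzified demand of 100 units, since any surplus only adds purchase and holding cost.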
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. Meanwhile, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection to solve real-world classification problems. This method, called the chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
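As a rough illustration of the chaotic-initialization idea, the sketch below seeds a bare-bones fruit fly optimization loop with logistic-map values instead of uniform noise. The benchmark function, population size, radius schedule, and the absence of the paper's mutation strategy are all simplifying assumptions; this is not the CIFOA-SVM procedure itself.

```python
import random

def logistic_map(x, n):
    """Generate n chaotic values in (0, 1) via the logistic map x -> 4x(1-x)."""
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

def foa_minimize(f, dim, iters=200, pop=20, span=2.0, seed=1):
    rng = random.Random(seed)
    # chaotic initialization of the swarm centre instead of uniform random
    chaos = logistic_map(rng.random() * 0.9 + 0.05, dim)
    centre = [span * (2 * c - 1) for c in chaos]
    best_x, best_f = centre[:], f(centre)
    for _ in range(iters):
        for _ in range(pop):
            # osphresis phase: each fly searches randomly around the centre
            cand = [c + rng.uniform(-span, span) for c in centre]
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        centre = best_x[:]   # vision phase: swarm moves to the best location
        span *= 0.95         # shrink the search radius
    return best_x, best_f

sphere = lambda v: sum(x * x for x in v)
x, fx = foa_minimize(sphere, dim=3)
```

The shrinking radius plays the role of the swarm gradually concentrating around the best food source found so far.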
The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot
NASA Astrophysics Data System (ADS)
Hwang, Donghyeok; Tahk, Min-Jea
2018-04-01
The performance characteristics of the autopilot must include a fast response to intercept a maneuvering target and reasonable robustness for system stability under the effects of un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is handled by the time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as those from the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression that relates the weight parameters appearing in the quadratic performance index to design parameters such as the open-loop crossover frequency, phase margin, damping factor, or time constant. Since not every selection of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to find all sets of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, in which the time constant is set as the design objective and the open-loop crossover frequency and phase margin as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
Economic optimization of natural hazard protection - conceptual study of existing approaches
NASA Astrophysics Data System (ADS)
Spackova, Olga; Straub, Daniel
2013-04-01
Risk-based planning of protection measures against natural hazards has become a common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e. expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst different countries and different hazard types (floods, earthquakes etc.). Additionally, several inconsistencies can be found in the applications of the indicators in practice. This is likely to lead to a suboptimal selection of the protection strategies. This study provides a general formulation for optimization of the natural hazard protection measures from a socio-economic perspective. It assumes that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made on the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in case of an unlimited budget, the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources amongst the regions arises. To ensure this, minimum values of BCR or MC can be required by the state, which must be achieved in each region. 
The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
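The two regimes discussed above can be made concrete with a toy region: with an unlimited budget, each region independently minimizes cost plus residual risk; with a limited budget, protection is raised only while the marginal benefit-cost ratio stays above a state-imposed minimum. The cost and risk figures below are invented for illustration.

```python
def optimal_level(levels):
    """levels: list of (cost, residual_risk) per protection level.
    Unlimited budget: pick the level minimizing cost + residual risk."""
    return min(range(len(levels)), key=lambda i: levels[i][0] + levels[i][1])

def bcr_constrained_level(levels, min_bcr):
    """Limited budget: raise protection step by step while each step's risk
    reduction per unit of extra cost (marginal BCR) stays >= min_bcr."""
    chosen = 0
    for i in range(1, len(levels)):
        dc = levels[i][0] - levels[i - 1][0]        # extra cost of the step
        dr = levels[i - 1][1] - levels[i][1]        # risk reduction of the step
        if dc <= 0 or dr / dc < min_bcr:
            break
        chosen = i
    return chosen

# hypothetical protection levels for one region: (cost, residual expected damage)
region = [(0, 100), (20, 60), (45, 40), (80, 32)]
```

A stricter minimum BCR stops the region at a lower protection level, which is exactly how a state-imposed threshold rations a limited budget across regions.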
The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials
Witman, Matthew; Ling, Sanliang; Jawahery, Sudi; ...
2017-03-30
For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how the flexibility impacts this type of separation, we develop a simple analytical model that predicts a material’s Henry regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework’s intrinsic flexibility whereby performance is either improved or reduced with increasing flexibility, depending on the material’s pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material that is not captured when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility even though other nonoptimal materials’ selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.
Cheng, Qiang; Zhou, Hongbo; Cheng, Jie
2011-06-01
Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward selecting the globally optimal subset of features efficiently, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results.
In pattern recognition and from a model selection viewpoint, our procedure says that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
Optimal control of raw timber production processes
Ivan Kolenka
1978-01-01
This paper demonstrates the possibility of optimal planning and control of timber harvesting activities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...
NASA Astrophysics Data System (ADS)
Zhu, C.; Zhang, S.; Xiao, F.; Li, J.; Yuan, L.; Zhang, Y.; Zhu, T.
2018-05-01
The NASA Operation IceBridge (OIB) mission, initiated in 2009, is currently the largest program in the Earth's polar remote sensing science observation project; it collects airborne remote sensing measurements to bridge the gap between NASA's ICESat and the upcoming ICESat-2 mission. This paper develops an improved method that optimizes the selection of Digital Mapping System (DMS) images and uses an optimal threshold, obtained by experiments in the Beaufort Sea, to calculate the local instantaneous sea surface height in this area. The optimal threshold was determined by comparing manual selection with the lowest Airborne Topographic Mapper (ATM) L1B elevation thresholds of 2 %, 1 %, 0.5 %, 0.2 %, 0.1 % and 0.05 % in sections A, B and C; the means of the mean differences are 0.166 m, 0.124 m, 0.083 m, 0.018 m, 0.002 m and -0.034 m. Our study shows that the lowest 0.1 % of L1B data is the optimal threshold. The optimal threshold and manual selections were also used to calculate the instantaneous sea surface height over images with leads, and we find that the improved method agrees more closely with the L1B manual selections. For images without leads, the local instantaneous sea surface height was estimated using the linear equations between distance and the sea surface height calculated over images with leads.
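A minimal sketch of the lowest-elevation-threshold estimate described above: the local instantaneous sea surface height is taken as the mean of the lowest 0.1 % of ATM L1B elevation samples, which fall on open leads. The elevation values here are synthetic, not OIB data.

```python
def sea_surface_height(elevations, fraction=0.001):
    """Mean of the lowest `fraction` of elevation samples (default: 0.1 %)."""
    n = max(1, int(len(elevations) * fraction))
    lowest = sorted(elevations)[:n]
    return sum(lowest) / n

# synthetic scene: mostly sea ice near 1.5 m, a few lead samples near 0.1 m
elev = [1.5] * 4990 + [0.1] * 10
ssh = sea_surface_height(elev)   # averages the 5 lowest samples, all leads
```

Freeboard would then follow by subtracting this local sea surface height from each ice-surface elevation.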
Multiobjective optimization of combinatorial libraries.
Agrafiotis, D K
2002-01-01
Combinatorial chemistry and high-throughput screening have caused a fundamental shift in the way chemists contemplate experiments. Designing a combinatorial library is a controversial art that involves a heterogeneous mix of chemistry, mathematics, economics, experience, and intuition. Although there seems to be little agreement as to what constitutes an ideal library, one thing is certain: only one property or measure seldom defines the quality of the design. In most real-world applications, a good experiment requires the simultaneous optimization of several, often conflicting, design objectives, some of which may be vague and uncertain. In this paper, we discuss a class of algorithms for subset selection rooted in the principles of multiobjective optimization. Our approach is to employ an objective function that encodes all of the desired selection criteria, and then use a simulated annealing or evolutionary approach to identify the optimal (or a nearly optimal) subset from among the vast number of possibilities. Many design criteria can be accommodated, including diversity, similarity to known actives, predicted activity and/or selectivity determined by quantitative structure-activity relationship (QSAR) models or receptor binding models, enforcement of certain property distributions, reagent cost and availability, and many others. The method is robust, convergent, and extensible, offers the user full control over the relative significance of the various objectives in the final design, and permits the simultaneous selection of compounds from multiple libraries in full- or sparse-array format.
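A minimal sketch of the simulated annealing subset selection described above, assuming a toy additive objective that weights predicted activity against reagent cost; the compound data, weights, and cooling schedule are illustrative, not from any real library design.

```python
import math, random

def anneal_select(items, k, objective, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over k-subsets: propose swapping one selected item
    for an unselected one, accept improvements always and worsenings with a
    temperature-dependent probability; track the best subset seen."""
    rng = random.Random(seed)
    current = rng.sample(range(len(items)), k)
    cur_val = objective([items[i] for i in current])
    best, best_val = current[:], cur_val
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling
        cand = current[:]
        out_pos = rng.randrange(k)
        pool = [i for i in range(len(items)) if i not in cand]
        cand[out_pos] = rng.choice(pool)
        val = objective([items[i] for i in cand])
        if val < cur_val or rng.random() < math.exp((cur_val - val) / t):
            current, cur_val = cand, val
            if val < best_val:
                best, best_val = cand[:], val
    return sorted(best), best_val

# toy compounds: (predicted_activity, cost); we want high activity, low cost
items = [(a / 10.0, c) for a, c in [(9, 5), (8, 1), (2, 1), (7, 2), (1, 9), (6, 1)]]
obj = lambda sel: sum(-act + 0.5 * cost for act, cost in sel)   # weighted sum
picked, score = anneal_select(items, 3, obj)
```

Additional criteria (diversity, property distributions, availability) would simply enter the same objective as further weighted terms.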
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few approaches consider the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling data points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from becoming stuck in local optima, as usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
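For intuition about variance-minimizing time point selection, the sketch below handles the simplest possible case: a one-parameter decay model y(t) = exp(-k t), where the variance of the estimate of k shrinks as the summed squared sensitivity dy/dk grows, so the most informative time points are those with the largest |dy/dk|. The paper's MLE formulation and quantum-inspired evolutionary algorithm are far more general; this greedy single-parameter version is only an assumption-laden illustration.

```python
import math

def sensitivity(t, k):
    """dy/dk for the decay model y(t) = exp(-k t)."""
    return -t * math.exp(-k * t)

def pick_times(candidates, k_guess, n):
    """Choose n sampling times maximizing the (one-parameter) Fisher
    information sum_i (dy/dk)(t_i)^2 at a nominal parameter guess."""
    ranked = sorted(candidates, key=lambda t: -sensitivity(t, k_guess) ** 2)
    return sorted(ranked[:n])

# candidate measurement times every 0.5 units; nominal decay rate k = 1
times = pick_times([0.5 * i for i in range(1, 21)], k_guess=1.0, n=3)
```

For k = 1 the sensitivity |t e^(-t)| peaks at t = 1, so the selected times cluster around that point rather than being spread uniformly, matching the intuition that evenly spaced sampling is rarely optimal.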
Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization
2012-01-01
Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
Bix, Laura; Seo, Do Chan; Ladoni, Moslem; Brunk, Eric; Becker, Mark W
2016-01-01
Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. We measured the effect of graphic elements (boxing information, grouping information, symbol use and color-coding) to optimize a label for comparison with those typical of commercial medical devices. Participants viewed 54 trials on a computer screen. Trials were comprised of two labels that were identical with regard to graphics, but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select the label along a given criterion (e.g., latex-containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences and via a targeted e-mail to AST members. Symbol presence, color coding and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit in the probability of correct choice (P < 0.0001; LSM: 97.3%; UCL: 98.4%; LCL: 95.5%), as compared to the two labels we created based on commercial designs (92.0%; UCL: 94.7%; LCL: 87.9% and 89.8%; UCL: 93.0%; LCL: 85.3%) and in time to selection. Our study provides data regarding design factors, namely color coding, symbol use and grouping of critical information, that can be used to significantly enhance the performance of medical device labels.
ERIC Educational Resources Information Center
Nagler, Matthew G.
2006-01-01
The paper examines the effect of a shock to university funding on tuition net of financial aid, admissions selectivity, and enrollment levels chosen by an optimizing university. Whereas a positive shock, such as a major donation, results in lower net tuition and greater selectivity with respect to all students, its effect on enrollment may not be…
USDA-ARS?s Scientific Manuscript database
Expressed sequence tag (EST) simple sequence repeats (SSRs) in Prunus were mined, and flanking primers designed and used for genome-wide characterization and selection of primers to optimize marker distribution and reliability. A total of 12,618 contigs were assembled from 84,727 ESTs, along with 34...
Stephanie A. Snyder; Keith D. Stockmann; Gaylord E. Morris
2012-01-01
The US Forest Service used contracted helicopter services as part of its wildfire suppression strategy. An optimization decision-modeling system was developed to assist in the contract selection process. Three contract award selection criteria were considered: cost per pound of delivered water, total contract cost, and quality ratings of the aircraft and vendors....
ERIC Educational Resources Information Center
Unson, Christine; Richardson, Margaret
2013-01-01
Purpose: The study examined the barriers faced, the goals selected, and the optimization and compensation strategies of older workers in relation to career change. Method: Thirty open-ended interviews, 12 in the United States and 18 in New Zealand, were conducted, recorded, transcribed verbatim, and analyzed for themes. Results: Barriers to…
A stereo remote sensing feature selection method based on artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi
2014-05-01
To improve the efficiency of using stereo information for remote sensing classification, this paper proposes a stereo remote sensing feature selection method based on the artificial bee colony algorithm. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and optical characteristics, respectively. Firstly, the three-dimensional structural characteristics can be analyzed by 3D-Zernike descriptors (3DZD). However, different parameters of the 3DZD describe different complexities of three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features contains a great deal of redundant information, and this redundancy may not improve the classification accuracy and can even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization frame for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve this optimization problem. Experimental results show that the proposed method can effectively improve the computational efficiency and the classification accuracy.
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Kenny S K; Lee, Louis K Y; Xing, L
2015-06-15
Purpose: To implement a fast optimization algorithm on CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. Totally, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20cm. The Matlab computation was performed in a heterogeneous programming environment with Intel i7 CPU and NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in a good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
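The analytical step described above can be sketched as an ordinary least-squares fit of the target dose vector by beamlet dose vectors, repeated with only the positively weighted beamlets retained, in the spirit of the multi-level scheme. The tiny three-voxel "dose" vectors below are toy numbers, not clinical beamlet data.

```python
def solve(M, v):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_fluence(beamlets, target, levels=7):
    """Least-squares beamlet weights via the normal equations; beamlets with a
    non-positive weight are dropped and the fit repeated at the next level."""
    active = list(range(len(beamlets)))
    w = []
    for _ in range(levels):
        A = [beamlets[i] for i in active]
        M = [[sum(x * y for x, y in zip(A[i], A[j])) for j in range(len(A))]
             for i in range(len(A))]
        v = [sum(x * t for x, t in zip(A[i], target)) for i in range(len(A))]
        w = solve(M, v)
        keep = [i for i, wi in zip(active, w) if wi > 0]
        if keep == active:
            break
        active = keep
    return dict(zip(active, w))

# three toy beamlets over a 3-voxel "target"
beamlets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
target = [2.0, 3.0, 1.0]
weights = fit_fluence(beamlets, target)
```

Dropping negative weights and refitting is a simple surrogate for the physical constraint that fluence cannot be negative.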
Scheibner, Gunnar Benjamin; Leathem, Janet
2012-01-01
Controlling for age, gender, education, and self-rated health, the present study used regression analyses to examine the relationships between memory control beliefs and self-reported forgetfulness in the context of the meta-theory of Selective Optimization with Compensation (SOC). Findings from this online survey (N = 409) indicate that, among adult New Zealanders, a higher sense of memory control accounts for a 22.7% reduction in self-reported forgetfulness. Similarly, optimization was found to account for a 5% reduction in forgetfulness while the strategies of selection and compensation were not related to self-reports of forgetfulness. Optimization partially mediated the beneficial effects that some memory beliefs (e.g., believing that memory decline is inevitable and believing in the potential for memory improvement) have on forgetfulness. It was concluded that memory control beliefs are important predictors of self-reported forgetfulness while the support for the SOC model in the context of memory controllability and everyday forgetfulness is limited.
Relay Selection for Cooperative Relaying in Wireless Energy Harvesting Networks
NASA Astrophysics Data System (ADS)
Zhu, Kaiyan; Wang, Fei; Li, Songsong; Jiang, Fengjiao; Cao, Lijie
2018-01-01
Energy harvesting from the surroundings is a promising solution to provide energy supply and extend the life of wireless sensor networks. Recently, energy harvesting has been shown as an attractive solution to prolong the operation of cooperative networks. In this paper, we propose a relay selection scheme to optimize the amplify-and-forward (AF) cooperative transmission in wireless energy harvesting cooperative networks. The harvesting energy and channel conditions are considered to select the optimal relay as cooperative relay to minimize the outage probability of the system. Simulation results show that our proposed relay selection scheme achieves better outage performance than other strategies.
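A toy sketch of the selection rule described above: among candidate energy-harvesting AF relays, pick the one whose harvested transmit power and channel gains give the best end-to-end SNR, a standard proxy for minimizing outage probability. The SNR expression and the channel/energy numbers are illustrative assumptions, not the paper's exact system model.

```python
def af_snr(p_s, g_sr, g_rd, p_r, noise=1.0):
    """Approximate two-hop amplify-and-forward SNR: harmonic-style combination
    of the source-relay and relay-destination hop SNRs."""
    s1 = p_s * g_sr / noise
    s2 = p_r * g_rd / noise
    return s1 * s2 / (s1 + s2 + 1.0)

def select_relay(p_s, relays):
    """relays: list of (g_sr, g_rd, harvested_power). Return the index of the
    relay maximizing the end-to-end SNR."""
    return max(range(len(relays)),
               key=lambda i: af_snr(p_s, relays[i][0], relays[i][1], relays[i][2]))

relays = [(0.9, 0.2, 1.0),   # good first hop, weak second hop
          (0.6, 0.7, 0.8),   # balanced hops, decent harvested energy
          (0.3, 0.9, 0.1)]   # strong second hop but energy-starved
best = select_relay(p_s=2.0, relays=relays)
```

The balanced relay wins here, reflecting the paper's point that both harvested energy and channel conditions must enter the selection, since either hop can bottleneck the end-to-end link.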
Identification of a selective small molecule inhibitor of breast cancer stem cells.
Germain, Andrew R; Carmody, Leigh C; Morgan, Barbara; Fernandez, Cristina; Forbeck, Erin; Lewis, Timothy A; Nag, Partha P; Ting, Amal; VerPlank, Lynn; Feng, Yuxiong; Perez, Jose R; Dandapani, Sivaraman; Palmer, Michelle; Lander, Eric S; Gupta, Piyush B; Schreiber, Stuart L; Munoz, Benito
2012-05-15
A high-throughput screen (HTS) with the National Institute of Health-Molecular Libraries Small Molecule Repository (NIH-MLSMR) compound collection identified a class of acyl hydrazones to be selectively lethal to breast cancer stem cell (CSC) enriched populations. Medicinal chemistry efforts were undertaken to optimize potency and selectivity of this class of compounds. The optimized compound was declared as a probe (ML239) with the NIH Molecular Libraries Program and displayed greater than 20-fold selective inhibition of the breast CSC-like cell line (HMLE_sh_Ecad) over the isogenic control line (HMLE_sh_GFP). Copyright © 2012 Elsevier Ltd. All rights reserved.
Urban Rain Gauge Siting Selection Based on Gis-Multicriteria Analysis
NASA Astrophysics Data System (ADS)
Fu, Yanli; Jing, Changfeng; Du, Mingyi
2016-06-01
With the increasingly rapid growth of urbanization and climate change, urban rainfall monitoring as well as urban waterlogging has been paid wide attention. Because conventional siting selection methods do not take into consideration the geographic surroundings and the spatial-temporal scale of urban rain gauge site selection, this paper primarily aims at finding appropriate siting selection rules and methods for rain gauges in urban areas. Additionally, to optimize gauge locations, a spatial decision support system (DSS) aided by a geographical information system (GIS) has been developed. In terms of a series of criteria, the rain gauge optimal site-search problem can be addressed by multicriteria decision analysis (MCDA). A series of spatial analytical techniques are required for MCDA to identify the prospective sites. On the GIS platform, spatial kernel density analysis is used to reflect the population density, and GIS buffer analysis is used to optimize the location according to the rain gauge signal transmission characteristics. Experimental results show that the rules and the proposed method are appropriate for rain gauge site selection in urban areas, which is significant for the siting selection of urban hydrological facilities and infrastructure, such as water gauges.
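The MCDA scoring step can be sketched as a weighted sum of normalized criteria with a hard buffer constraint: candidate sites beyond the signal transmission range are excluded, and the remainder are ranked by a population-density term and a signal-quality term. Site names, criterion weights, and the range limit below are invented for illustration.

```python
def select_site(sites, max_range, w_pop=0.6, w_sig=0.4):
    """sites: list of (name, population_density, distance_to_receiver_km).
    Excludes sites outside the transmission buffer, then ranks the rest by a
    weighted sum of normalized population density and signal quality."""
    feasible = [s for s in sites if s[2] <= max_range]   # buffer constraint
    if not feasible:
        return None
    top_pop = max(s[1] for s in feasible)
    def score(s):
        return w_pop * s[1] / top_pop + w_sig * (1 - s[2] / max_range)
    return max(feasible, key=score)[0]

# hypothetical candidate sites
sites = [("park", 2000, 4.0), ("plaza", 8000, 9.0),
         ("tower", 6000, 2.0), ("hill", 9000, 15.0)]
best = select_site(sites, max_range=10.0)
```

In a real GIS-DSS the population term would come from kernel density analysis and the distance term from buffer analysis, but the ranking logic is the same.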
Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.
Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander
2015-01-01
Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models.
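The core idea above, choosing the stimulus that best discriminates between candidate models, can be illustrated with a deliberately simplified heuristic: pick the gamble on which two models disagree most about the predicted choice. Real ADO maximizes expected information gain rather than raw disagreement, and the two toy models and stimuli below are invented for illustration.

```python
# Simplified stand-in for adaptive design optimization: choose the gamble
# on which two candidate models disagree most about choice probability.
# Real ADO maximizes mutual information between designs and model identity.

def choice_prob_eu(stimulus):
    """Expected-utility model: P(choose risky) from expected values."""
    p, win, sure = stimulus
    ev_risky = p * win
    return 1.0 if ev_risky > sure else 0.0 if ev_risky < sure else 0.5

def choice_prob_weighted(stimulus, gamma=0.6):
    """Probability-weighting model: P(choose risky) with distorted p."""
    p, win, sure = stimulus
    w = p ** gamma          # simple probability-weighting function
    return 1.0 if w * win > sure else 0.0 if w * win < sure else 0.5

def most_diagnostic(stimuli):
    """Pick the stimulus where the two models disagree most."""
    return max(stimuli,
               key=lambda s: abs(choice_prob_eu(s) - choice_prob_weighted(s)))

# Each stimulus: (probability of winning, win amount, sure amount).
stimuli = [(0.5, 10, 6), (0.1, 100, 9), (0.9, 10, 8)]
best = most_diagnostic(stimuli)
```

Only the first gamble separates the models here (expected utility rejects the risky option, the weighting model accepts it), so an adaptive procedure would present that one.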
Optimal path planning for video-guided smart munitions via multitarget tracking
NASA Astrophysics Data System (ADS)
Borkowski, Jeffrey M.; Vasquez, Juan R.
2006-05-01
An advance in the development of smart munitions entails autonomously modifying target selection during flight in order to maximize the value of the target destroyed. A unique guidance law can be constructed that exploits both attribute and kinematic data obtained from an onboard video sensor. An optimal path planning algorithm has been developed with the goals of avoiding obstacles and maximizing the value of the target impacted by the munition. Target identification and classification provide a basis for target value, which is used in conjunction with multi-target tracks to determine an optimal waypoint for the munition. A dynamically feasible trajectory is computed to provide constraints on waypoint selection. Results demonstrate the ability of the autonomous system to avoid moving obstacles and revise target selection in flight.
Firefly as a novel swarm intelligence variable selection method in spectroscopy.
Goodarzi, Mohammad; dos Santos Coelho, Leandro
2014-12-10
A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Many feature selection techniques have been developed to date. Among them, techniques based on swarm intelligence optimization are particularly interesting, since they simulate animal and insect behavior, e.g., finding the shortest path between a food source and the nest. Because the decision is made by a crowd, the resulting model is more robust and less prone to falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data that leads to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with the genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results compared to a PLS model built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a smaller number of selected wavelengths while the prediction performance of the resulting PLS model remains the same. Copyright © 2014. Published by Elsevier B.V.
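A binary firefly-style search for wavelength subsets can be sketched in a few lines. The fitness function below is a toy stand-in for PLS cross-validation performance (it rewards three "informative" wavelengths and penalizes subset size), and the attraction rule is a simplified version of the firefly update; none of it reproduces the authors' implementation.

```python
import random

# Minimal binary firefly-style search for wavelength selection.
# Toy fitness: reward informative wavelengths, penalize subset size.

random.seed(0)
N_WAVELENGTHS, INFORMATIVE = 12, {2, 5, 9}

def fitness(mask):
    selected = {i for i, bit in enumerate(mask) if bit}
    return len(selected & INFORMATIVE) - 0.1 * len(selected)

def move_towards(firefly, brightest_ff):
    """Copy each bit of the brighter firefly with high probability,
    otherwise flip randomly (exploration, like the alpha term)."""
    new = []
    for own_bit, bright_bit in zip(firefly, brightest_ff):
        if random.random() < 0.8:      # attraction step
            new.append(bright_bit)
        else:                          # random exploration
            new.append(random.randint(0, 1))
    return new

fireflies = [[random.randint(0, 1) for _ in range(N_WAVELENGTHS)]
             for _ in range(8)]
best = max(fireflies, key=fitness)
initial = fitness(best)
for _ in range(30):
    brightest = max(fireflies, key=fitness)
    fireflies = [move_towards(f, brightest) for f in fireflies]
    candidate = max(fireflies, key=fitness)
    if fitness(candidate) > fitness(best):
        best = candidate
```

Because the global best is retained across iterations, its fitness never decreases, which is the property the loop relies on.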
This report summarizes Phase II (site optimization) of the Nationwide Fund-lead Pump and Treat Optimization Project. This phase included conducting Remediation System Evaluations (RSEs) at each of the 20 sites selected in Phase I.
Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE.
Chen, Qi; Meng, Zhaopeng; Liu, Xinyi; Jin, Qianguo; Su, Ran
2018-06-15
Feature selection, which identifies a set of the most informative features from the original feature space, has been widely used to simplify predictors. Recursive feature elimination (RFE), one of the most popular feature selection approaches, is effective in data dimension reduction and efficiency increase. RFE produces a ranking of features as well as candidate subsets with corresponding accuracies. The subset with the highest accuracy (HA) or a preset number of features (PreNum) is often used as the final subset. However, this may lead to a large number of features being selected, or, if there is no prior knowledge about this preset number, the final subset selection is often ambiguous and subjective. A proper decision variant is in high demand to automatically determine the optimal subset. In this study, we conduct pioneering work to explore the decision variant applied after obtaining a list of candidate subsets from RFE. We provide a detailed analysis and comparison of several decision variants for automatically selecting the optimal feature subset. A random forest (RF)-recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two entirely different molecular biology datasets, one from a toxicogenomic study and the other from protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
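The RFE loop plus one decision variant can be sketched compactly. Here the per-feature "importances" and the accuracy proxy are mock values standing in for a trained random forest and a validation score; the decision variant shown (smallest subset within a tolerance of the best accuracy) is one of several the paper could compare.

```python
# Sketch of recursive feature elimination plus a decision variant that
# picks the smallest candidate subset within a tolerance of the best
# accuracy. Importances and the accuracy proxy are mock values.

TRUE_IMPORTANCE = {0: 0.9, 1: 0.7, 2: 0.5, 3: 0.05, 4: 0.02}

def accuracy(features):
    """Mock validation score: sum of the true importances selected."""
    return sum(TRUE_IMPORTANCE[f] for f in features)

def rfe(features):
    """Drop the least important feature each round, recording candidates."""
    candidates = []
    features = set(features)
    while features:
        candidates.append((frozenset(features), accuracy(features)))
        weakest = min(features, key=lambda f: TRUE_IMPORTANCE[f])
        features = features - {weakest}
    return candidates

def smallest_within_tolerance(candidates, tol=0.1):
    """Decision variant: smallest subset scoring within tol of the max."""
    best_score = max(score for _, score in candidates)
    good = [s for s, score in candidates if score >= best_score - tol]
    return min(good, key=len)

candidates = rfe(TRUE_IMPORTANCE)
chosen = smallest_within_tolerance(candidates)
```

With these mock importances the highest-accuracy subset is all five features, but the decision variant settles on the three strong features, illustrating why HA alone tends to over-select.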
Lou, Yan; Han, Xiaochun; Kuglstatter, Andreas; Kondru, Rama K; Sweeney, Zachary K; Soth, Michael; McIntosh, Joel; Litman, Renee; Suh, Judy; Kocer, Buelent; Davis, Dana; Park, Jaehyeon; Frauchiger, Sandra; Dewdney, Nolan; Zecic, Hasim; Taygerly, Joshua P; Sarma, Keshab; Hong, Junbae; Hill, Ronald J; Gabriel, Tobias; Goldstein, David M; Owens, Timothy D
2015-01-08
Structure-based drug design was used to guide the optimization of a series of selective BTK inhibitors as potential treatments for Rheumatoid arthritis. Highlights include the introduction of a benzyl alcohol group and a fluorine substitution, each of which resulted in over 10-fold increase in activity. Concurrent optimization of drug-like properties led to compound 1 (RN486) ( J. Pharmacol. Exp. Ther. 2012 , 341 , 90 ), which was selected for advanced preclinical characterization based on its favorable properties.
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve image edges as well as possible during segmentation. The shortcoming of Otsu's method, which is based on gray-level histograms, is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is computed by discretizing that integral. An optimal threshold selection method that maximizes the edge energy function is given. Several experimental results are also presented for comparison with Otsu's method.
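For reference, the baseline Otsu method the paper analyzes picks the threshold maximizing between-class variance of the gray-level histogram. A pure-Python sketch on a tiny bimodal "image" (the pixel values are invented):

```python
# Otsu's threshold from a gray-level histogram: choose t maximizing the
# between-class variance of the two classes it induces.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = sum(hist[: t + 1])            # class 0: gray levels <= t
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue                       # skip degenerate splits
        mu0 = sum(g * hist[g] for g in range(t + 1)) / w0
        mu1 = sum(g * hist[g] for g in range(t + 1, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# A bimodal toy image: dark cluster around 2, bright cluster around 8.
t = otsu_threshold([1, 2, 2, 3, 7, 8, 8, 9], levels=16)
```

The paper's edge-preserving method would replace the between-class variance objective with the discretized edge energy function.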
Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom
2015-12-23
The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases were taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, average similarity indices of the morphometrically selected atlas group were significantly higher than the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
In this study, the optimal number of selected atlases used was six, but for definitive conclusions about the optimal number of atlases and to improve the autosegmentation accuracy for clinical use, more atlases need to be included.
Dai, Shengfa; Wei, Qingguo
2017-01-01
Common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, use of a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of all the channels to save computational time and improve classification accuracy. In this paper, a novel method named backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial pattern with all channels.
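The objective function described above, error rate plus a penalty on the relative number of channels, is easy to sketch. For a toy six-channel problem the subset space is small enough to search exhaustively instead of with the backtracking search algorithm; the mock error model (two channels carry the discriminative signal) is hypothetical.

```python
from itertools import combinations

# Channel-selection objective combining classification error with the
# relative number of channels, searched exhaustively on a toy problem.

N_CHANNELS, LAMBDA = 6, 0.05
SIGNAL_CHANNELS = {1, 4}

def mock_error(channels):
    """Error falls as signal channels are included; noise channels add a bit."""
    missing = len(SIGNAL_CHANNELS - set(channels))
    noise = len(set(channels) - SIGNAL_CHANNELS)
    return 0.5 * missing / len(SIGNAL_CHANNELS) + 0.01 * noise

def objective(channels):
    return mock_error(channels) + LAMBDA * len(channels) / N_CHANNELS

best = min(
    (subset for r in range(1, N_CHANNELS + 1)
     for subset in combinations(range(N_CHANNELS), r)),
    key=objective,
)
```

In the paper the same objective is evaluated by actually training the common spatial pattern classifier on each candidate code, and the search is done by the backtracking search algorithm rather than enumeration.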
Ant-cuckoo colony optimization for feature selection in digital mammogram.
Jona, J B; Nagaveni, N
2014-01-15
Digital mammography is the only effective screening method to detect breast cancer. Gray Level Co-occurrence Matrix (GLCM) textural features are extracted from the mammogram. Not all of the features are essential for classifying the mammogram, so identifying the relevant features is the aim of this work. Feature selection improves the classification rate and accuracy of any classifier. In this study, a new hybrid metaheuristic named Ant-Cuckoo Colony Optimization, a hybrid of Ant Colony Optimization (ACO) and Cuckoo Search (CS), is proposed for feature selection in digital mammograms. ACO is a good metaheuristic optimization technique, but its drawback is that ants walk along paths where the pheromone density is high, which makes the whole process slow; hence CS is employed to carry out the local search of ACO. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is used with the ACO to classify normal mammograms from abnormal ones. Experiments are conducted on the miniMIAS database. The performance of the new hybrid algorithm is compared with the ACO and PSO algorithms. The results show that the hybrid Ant-Cuckoo Colony Optimization algorithm is more accurate than the other techniques.
NASA Astrophysics Data System (ADS)
Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng
2009-03-01
The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and positions of absorbers that minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for the multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees that the better-performing individual wins its competition, provides a slight selection pressure toward individuals and maintains diversity in the population. Moreover, because the evaluation of individuals in each generation is finished in one run, less computational effort is required. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains necessary information on the non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake excitation.
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which influence the performance of the ensemble system dramatically. Random vector functional link networks (RVFL) without direct input-to-output links is one of suitable base-classifiers for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. As for using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-square method and then further optimized by ARPSO. Finally, a few redundant RVFL is pruned, and thus the more compact ensemble of RVFL is obtained. Moreover, in this paper, theoretical analysis and justification on how to prune the base classifiers on classification problem is presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experiment results on function approximation and classification problems verify that the proposed method could improve its convergence accuracy as well as reduce the complexity of the ensemble system. PMID:27835638
Investigations of quantum heuristics for optimization
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui
We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numerical investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.
Hash Bit Selection for Nearest Neighbor Search.
Xianglong Liu; Junfeng He; Shih-Fu Chang
2017-11-01
To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt the bit reliability and their complementarity as the selection criteria that can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework can achieve superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant relative accuracy gains ranging from 10% to 50%.
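The two selection criteria above, bit reliability and complementarity, can be illustrated with a greedy stand-in: repeatedly add the candidate bit whose reliability minus its worst-case similarity to the already-chosen bits is largest. The paper solves the actual problem with dynamic and quadratic programming; the scores below are made up for illustration.

```python
# Greedy stand-in for hash bit selection balancing reliability against
# redundancy with already-chosen bits.

def select_bits(reliability, similarity, k):
    chosen = []
    while len(chosen) < k:
        def gain(b):
            redundancy = max((similarity[b][c] for c in chosen), default=0.0)
            return reliability[b] - redundancy
        remaining = [b for b in range(len(reliability)) if b not in chosen]
        chosen.append(max(remaining, key=gain))
    return chosen

reliability = [0.9, 0.85, 0.8, 0.4]
similarity = [
    [1.0, 0.9, 0.1, 0.1],   # bits 0 and 1 are nearly redundant
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.1],
    [0.1, 0.1, 0.1, 1.0],
]
picked = select_bits(reliability, similarity, k=3)
```

Note how bit 1, despite its high reliability, loses to the weaker but complementary bit 3 because it is nearly redundant with bit 0.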
ERIC Educational Resources Information Center
Gestsdottir, Steinunn; Geldhof, G. John; Paus, Tomáš; Freund, Alexandra M.; Adalbjarnardottir, Sigrun; Lerner, Jacqueline V.; Lerner, Richard M.
2015-01-01
We address how to conceptualize and measure intentional self-regulation (ISR) among adolescents from four cultures by assessing whether ISR (conceptualized by the SOC model of Selection, Optimization, and Compensation) is represented by three factors (as with adult samples) or as one "adolescence-specific" factor. A total of 4,057 14-…
Banta, Edward R.; Paschke, Suzanne S.
2012-01-01
Declining water levels caused by withdrawals of water from wells in the west-central part of the Denver Basin bedrock-aquifer system have raised concerns with respect to the ability of the aquifer system to sustain production. The Arapahoe aquifer in particular is heavily used in this area. Two optimization analyses were conducted to demonstrate approaches that could be used to evaluate possible future pumping scenarios intended to prolong the productivity of the aquifer and to delay excessive loss of saturated thickness. These analyses were designed as demonstrations only, and were not intended as a comprehensive optimization study. Optimization analyses were based on a groundwater-flow model of the Denver Basin developed as part of a recently published U.S. Geological Survey groundwater-availability study. For each analysis an optimization problem was set up to maximize total withdrawal rate, subject to withdrawal-rate and hydraulic-head constraints, for 119 selected municipal water-supply wells located in 96 model cells. The optimization analyses were based on 50- and 100-year simulations of groundwater withdrawals. The optimized total withdrawal rate for all selected wells for a 50-year simulation time was about 58.8 cubic feet per second. For an analysis in which the simulation time and head-constraint time were extended to 100 years, the optimized total withdrawal rate for all selected wells was about 53.0 cubic feet per second, demonstrating that a reduction in withdrawal rate of about 10 percent may extend the time before the hydraulic-head constraints are violated by 50 years, provided that pumping rates are optimally distributed. Analysis of simulation results showed that initially, the pumping produces water primarily by release of water from storage in the Arapahoe aquifer. 
However, because confining layers between the Denver and Arapahoe aquifers are thin, in less than 5 years, most of the water removed by managed-flows pumping likely would be supplied by depleting overlying hydrogeologic units, substantially increasing the rate of decline of hydraulic heads in parts of the overlying Denver aquifer.
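The optimization problem described above, maximizing total withdrawal subject to per-well bounds and head constraints, can be illustrated with a drastically simplified toy: a single aggregate drawdown budget couples the wells, so allocating to wells with the smallest drawdown coefficient first is optimal (the fractional-knapsack argument). All well capacities and coefficients are invented, not from the Denver Basin model.

```python
# Toy withdrawal-rate optimization: maximize total pumping subject to
# per-well capacity bounds and one aggregate drawdown budget
# (sum of drawdown_coeff * rate <= budget).

def optimize_withdrawals(wells, drawdown_budget):
    """wells: list of (name, max_rate_cfs, drawdown_coeff). Returns rates."""
    rates = {}
    remaining = drawdown_budget
    # Cheapest drawdown per unit of water first: optimal for one
    # linear coupling constraint.
    for name, max_rate, coeff in sorted(wells, key=lambda w: w[2]):
        rate = min(max_rate, remaining / coeff)
        rates[name] = rate
        remaining -= coeff * rate
    return rates

wells = [("W1", 5.0, 1.0), ("W2", 5.0, 2.0), ("W3", 4.0, 4.0)]
rates = optimize_withdrawals(wells, drawdown_budget=9.0)
total = sum(rates.values())
```

The study's analyses instead impose head constraints at many cells over 50- and 100-year transient simulations, which requires a groundwater-flow model in the loop rather than a closed-form rule.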
Hamdan, Sadeque; Cheaitou, Ali
2017-08-01
This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discount and varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly with varying problem sizes and then used by the optimization problem to obtain the corresponding optimal outputs as well as the corresponding computation time.
Pan, Xiaoyong; Hu, Xiaohua; Zhang, Yu Hang; Feng, Kaiyan; Wang, Shao Peng; Chen, Lei; Huang, Tao; Cai, Yu Dong
2018-04-12
Atrioventricular septal defect (AVSD) is a clinically significant subtype of congenital heart disease (CHD) that severely influences the health of babies during birth and is associated with Down syndrome (DS). Thus, exploring the differences in functional genes in DS samples with and without AVSD is a critical way to investigate the complex association between AVSD and DS. In this study, we present a computational method to distinguish DS patients with AVSD from those without AVSD using the newly proposed self-normalizing neural network (SNN). First, each patient was encoded by using the copy number of probes on chromosome 21. The encoded features were ranked by the reliable Monte Carlo feature selection (MCFS) method to obtain a ranked feature list. Based on this feature list, we used a two-stage incremental feature selection to construct two series of feature subsets and applied SNNs to build classifiers to identify optimal features. Results show that 2737 optimal features were obtained, and the corresponding optimal SNN classifier constructed on those features yielded a Matthews correlation coefficient (MCC) value of 0.748. For comparison, random forest was also used to build classifiers and uncover optimal features. This method received an optimal MCC value of 0.582 when the top 132 features were utilized. Finally, we analyzed some key features derived from the optimal features in SNNs and found literature support that further reveals their essential roles.
Efficient Breeding by Genomic Mating.
Akdemir, Deniz; Sánchez, Julio I
2016-01-01
Selection in breeding programs can be done by using phenotypes (phenotypic selection), pedigree relationships (breeding value selection) or molecular markers (marker assisted selection or genomic selection). All these methods are based on truncation selection, focusing on the best performance of parents before mating. In this article we propose an approach to breeding, named genomic mating, which focuses on mating instead of truncation selection. Genomic mating uses information in a similar fashion to genomic selection but includes information on the complementation of the parents to be mated. Following the efficiency frontier surface, genomic mating uses the concepts of estimated breeding values, risk (usefulness) and coefficient of ancestry to optimize mating between parents. We used a genetic algorithm to find solutions to this optimization problem, and the results of our simulations comparing genomic selection, phenotypic selection and the mating approach indicate that the mating approach is more favorable for breeding complex traits than phenotypic and genomic selection. Genomic mating is similar to genomic selection in terms of estimating marker effects, but in genomic mating the genetic information and the estimated marker effects are used to decide which genotypes should be crossed to obtain the next breeding population.
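The core trade-off in genomic mating, genetic gain versus inbreeding, can be sketched by scoring each possible cross by mid-parent estimated breeding value minus a penalty on the pair's coefficient of coancestry. The paper optimizes full mating plans with a genetic algorithm; the EBVs and kinship values below are invented for illustration.

```python
from itertools import combinations

# Score candidate crosses by mid-parent EBV minus a coancestry penalty,
# then pick the best pair. A toy stand-in for genomic mating.

def mating_score(pair, ebv, kinship, penalty=1.0):
    a, b = pair
    return (ebv[a] + ebv[b]) / 2 - penalty * kinship[frozenset(pair)]

ebv = {"A": 2.0, "B": 1.8, "C": 1.0}
kinship = {
    frozenset({"A", "B"}): 0.6,   # A and B are close relatives
    frozenset({"A", "C"}): 0.1,
    frozenset({"B", "C"}): 0.1,
}

best_pair = max(combinations(sorted(ebv), 2),
                key=lambda p: mating_score(p, ebv, kinship))
```

Truncation selection would cross the two top performers A and B; the coancestry penalty instead steers the plan toward the complementary cross A x C.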
Strategies for Fermentation Medium Optimization: An In-Depth Review
Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.
2017-01-01
Optimization of production medium is required to maximize the metabolite yield. This can be achieved by using a wide range of techniques from classical “one-factor-at-a-time” to modern statistical and mathematical techniques, viz. artificial neural network (ANN), genetic algorithm (GA) etc. Every technique comes with its own advantages and disadvantages, and despite drawbacks some techniques are applied to obtain best results. Use of various optimization techniques in combination also provides the desirable results. In this article an attempt has been made to review the currently used media optimization techniques applied during fermentation process of metabolite production. Comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques have been done and logical selection basis for the designing of fermentation medium has been given in the present review. Overall, this review will provide the rationale for the selection of suitable optimization technique for media designing employed during the fermentation process of metabolite production. PMID:28111566
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Dezhi; Zhan, Qingwen; Chen, Yuche
This study proposes an optimization model that simultaneously incorporates the selection of logistics infrastructure investments and subsidies for green transport modes to achieve specific CO2 emission targets in a regional logistics network. The proposed model is formulated as a bi-level formulation, in which the upper level determines the optimal selection of logistics infrastructure investments and subsidies for green transport modes such that the benefit-cost ratio of the entire logistics system is maximized. The lower level describes the selected service routes of logistics users. A genetic and Frank-Wolfe hybrid algorithm is introduced to solve the proposed model. The proposed model is applied to the regional logistics network of Changsha City, China. Findings show that using the joint scheme of the selection of logistics infrastructure investments and green subsidies is more effective than using them solely. In conclusion, carbon emission reduction targets can significantly affect logistics infrastructure investments and subsidy levels.
Optimal Sensor Selection for Health Monitoring Systems
NASA Technical Reports Server (NTRS)
Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.
2005-01-01
Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
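The S4 merit metric itself is specific to the cited work, but the core idea of growing a sensor suite that maximizes a diagnostic merit metric can be sketched with a simple greedy loop. The sensor names and the toy fault-coverage merit below are illustrative assumptions, not the actual S4 formulation:

```python
def greedy_sensor_suite(sensors, merit, k):
    """Greedily grow a sensor suite: at each step add the sensor that
    most improves the merit metric, until k sensors are selected."""
    selected, remaining = [], list(sensors)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda s: merit(selected + [s]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy merit metric: number of distinct fault modes the suite can observe
# (each sensor sees a set of fault modes; all names are hypothetical).
coverage = {
    "s1": {"f1", "f2"},
    "s2": {"f2", "f3"},
    "s3": {"f4"},
    "s4": {"f1", "f4"},
}

def merit(suite):
    return len(set().union(*(coverage[s] for s in suite))) if suite else 0

suite = greedy_sensor_suite(coverage, merit, 2)  # ['s1', 's2'], covering f1-f3
```

A real S4-style merit would fold in detection probability and risk-reduction weights rather than raw coverage counts.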
SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.
Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru
2014-01-01
Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
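The RFE loop behind SVM-RFE can be sketched generically: refit a model, drop the feature with the smallest absolute weight, repeat. The `weight_fn` below stands in for refitting a linear SVM and reading its coefficients; the fixed weights are a hypothetical stand-in, not the paper's trained model:

```python
def rfe_rank(features, weight_fn, keep=1):
    """SVM-RFE-style ranking: repeatedly 'refit' via weight_fn and drop
    the feature with the smallest absolute weight. Returns features
    ordered from least to most important (survivors come last)."""
    active, eliminated = list(features), []
    while len(active) > keep:
        weights = weight_fn(active)  # stand-in for refitting a linear SVM
        worst = min(active, key=lambda f: abs(weights[f]))
        active.remove(worst)
        eliminated.append(worst)
    return eliminated + active

# Hypothetical fixed weights standing in for a trained linear model.
fake_weights = {"a": 0.9, "b": -0.1, "c": 0.5, "d": 0.05}
ranking = rfe_rank(["a", "b", "c", "d"], lambda active: fake_weights, keep=2)
# 'd' and 'b' are eliminated first; 'a' and 'c' survive as most important.
```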
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed methods in a real-world data set.
Portfolio Optimization of Nanomaterial Use in Clean Energy Technologies.
Moore, Elizabeth A; Babbitt, Callie W; Gaustad, Gabrielle; Moore, Sean T
2018-04-03
While engineered nanomaterials (ENMs) are increasingly incorporated in diverse applications, risks of ENM adoption remain difficult to predict and mitigate proactively. Current decision-making tools do not adequately account for ENM uncertainties including varying functional forms, unique environmental behavior, economic costs, unknown supply and demand, and upstream emissions. The complexity of the ENM system necessitates a novel approach: in this study, the adaptation of an investment portfolio optimization model is demonstrated for optimization of ENM use in renewable energy technologies. Where a traditional investment portfolio optimization model maximizes return on investment through optimal selection of stock, ENM portfolio optimization maximizes the performance of energy technology systems by optimizing selective use of ENMs. Cumulative impacts of multiple ENM material portfolios are evaluated in two case studies: organic photovoltaic cells (OPVs) for renewable energy and lithium-ion batteries (LIBs) for electric vehicles. Results indicate ENM adoption is dependent on overall performance and variance of the material, resource use, environmental impact, and economic trade-offs. From a sustainability perspective, improved clean energy applications can help extend product lifespans, reduce fossil energy consumption, and substitute ENMs for scarce incumbent materials.
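The mean-variance objective borrowed from investment portfolio theory can be illustrated with a small sketch: score a material mix by expected performance minus a risk-aversion penalty on variance, then grid-search the mixing fraction. The two-material statistics below are invented for illustration and are not the paper's model or data:

```python
def portfolio_score(weights, mean, cov, risk_aversion=1.0):
    """Mean-variance objective: expected performance minus
    risk_aversion times the portfolio variance."""
    m = sum(w * mu for w, mu in zip(weights, mean))
    v = sum(wi * wj * cov[i][j]
            for i, wi in enumerate(weights)
            for j, wj in enumerate(weights))
    return m - risk_aversion * v

# Two hypothetical materials: higher-performing but variable vs.
# lower-performing but stable; grid search over the mixing fraction.
mean = [1.0, 0.8]
cov = [[0.20, 0.00],
       [0.00, 0.05]]
best_mix = max(((w, 1 - w) for w in (i / 10 for i in range(11))),
               key=lambda ws: portfolio_score(ws, mean, cov))
```

With these numbers the optimum lands at a 60/40 mix: the riskier material's extra performance is worth holding only up to the point where its variance penalty dominates.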
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and therefore appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation that chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
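The selection, crossover, and mutation operators described above can be sketched as a minimal generational GA. This is a generic illustration on the OneMax toy problem (fitness = number of 1-bits), not the fuel-management code from the article:

```python
import random

def evolve(pop, fitness, generations=60, p_mut=0.02, seed=0):
    """Minimal generational GA: binary tournament selection, one-point
    crossover, and independent per-bit mutation on bitstrings."""
    rng = random.Random(seed)
    n = len(pop[0])
    for _ in range(generations):
        def pick():  # tournament selection: fitter of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < len(pop):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)  # one-point crossover
            child = [bit ^ (rng.random() < p_mut)  # per-bit mutation
                     for bit in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax toy problem: the optimum is the all-ones bitstring.
rng = random.Random(1)
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]
best = evolve(pop, fitness=sum)
```

A fuel-management GA would replace the bitstring with a loading-pattern encoding and the fitness with a core-physics evaluation, but the operator loop is the same.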
Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Dinc, Ali
2016-09-01
In this study, an original code was developed for optimizing selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) using an elitist genetic algorithm. First, preliminary sizing of the UAV and its turboprop engine was performed by the code for a given mission profile. Second, single- and multi-objective optimizations were performed on selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In the first single-objective case, UAV loiter time was improved by 17.5% over the baseline within the given constraints on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% over the baseline. In the multi-objective case, where the two previous objectives were considered together, loiter time and specific power were increased by 14.2% and 9.7% over the baseline, respectively, under the same constraints.
Tree crickets optimize the acoustics of baffles to exaggerate their mate-attraction signal.
Mhatre, Natasha; Malkin, Robert; Deb, Rittik; Balakrishnan, Rohini; Robert, Daniel
2017-12-11
Object manufacture in insects is typically inherited, and believed to be highly stereotyped. Optimization, the ability to select the functionally best material and modify it appropriately for a specific function, implies flexibility and is usually thought to be incompatible with inherited behaviour. Here, we show that tree-crickets optimize acoustic baffles, objects that are used to increase the effective loudness of mate-attraction calls. We quantified the acoustic efficiency of all baffles within the naturally feasible design space using finite-element modelling and found that design affects efficiency significantly. We tested the baffle-making behaviour of tree crickets in a series of experimental contexts. We found that given the opportunity, tree crickets optimised baffle acoustics; they selected the best sized object and modified it appropriately to make a near optimal baffle. Surprisingly, optimization could be achieved in a single attempt, and is likely to be achieved through an inherited yet highly accurate behavioural heuristic.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
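The notion of Pareto optimality used above can be made concrete with a small non-dominated filter: a design survives unless some other design is at least as good in every objective and strictly better in one. The (lift, efficiency) design points below are invented for illustration:

```python
def pareto_front(points):
    """Keep only non-dominated points, maximizing every objective.
    A point is dominated if another is >= in all objectives and
    strictly > in at least one."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (lift, efficiency) evaluations of candidate shapes.
designs = [(1.0, 5.0), (2.0, 4.0), (3.0, 1.0), (1.5, 3.0), (0.5, 0.5)]
front = pareto_front(designs)  # the first three are non-dominated
```

(1.5, 3.0) is dominated by (2.0, 4.0), and (0.5, 0.5) by everything else; the survivors trade lift against efficiency, which is exactly the front a multi-objective GA approximates.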
Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.
1997-01-01
Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
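A tabu search over fixed-size actuator subsets, as used above, can be sketched as follows: each move swaps one selected actuator for an unselected one, recently added actuators are temporarily tabu, and the best subset seen is always retained. The additive per-actuator scoring model and all values are illustrative assumptions; the study's transfer-function-based noise-reduction predictions are far richer:

```python
import random
from collections import deque

def tabu_search(universe, k, score, iters=100, tabu_len=5, seed=0):
    """Tabu search over size-k subsets of `universe`: take the best
    non-tabu single swap each iteration, keep the best subset found."""
    rng = random.Random(seed)
    current = set(rng.sample(sorted(universe), k))
    best, best_score = set(current), score(current)
    tabu = deque(maxlen=tabu_len)  # recently swapped-in elements
    for _ in range(iters):
        moves = [(i, j) for i in current
                 for j in universe - current if j not in tabu]
        if not moves:
            break
        i, j = max(moves, key=lambda m: score(current - {m[0]} | {m[1]}))
        current = current - {i} | {j}
        tabu.append(j)
        if score(current) > best_score:
            best, best_score = set(current), score(current)
    return best, best_score

# Hypothetical per-actuator noise-reduction scores (additive toy model).
gain = {0: 3, 1: 9, 2: 1, 3: 7, 4: 5, 5: 2, 6: 8, 7: 4}
best, value = tabu_search(set(gain), 3, lambda s: sum(gain[i] for i in s))
```

The tabu list lets the search step through worsening swaps without immediately undoing them, which is what distinguishes it from plain hill climbing.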
WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization
NASA Astrophysics Data System (ADS)
Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and for the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the summed completeness for a mission of fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing the targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum summed completeness, the mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
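The greedy reward-per-cost selection can be sketched as follows; the target values are invented placeholders, and the real AYO-based scheduling in EXOSIMS is considerably more involved:

```python
def greedy_schedule(targets, budget):
    """Greedy scheduler: rank targets by completeness per unit cost
    (integration time plus overhead) and add them in that order until
    the mission time budget is exhausted."""
    order = sorted(targets, key=lambda t: t[1] / (t[2] + t[3]), reverse=True)
    plan, used, total = [], 0.0, 0.0
    for name, comp, t_int, t_ovh in order:
        cost = t_int + t_ovh
        if used + cost <= budget:
            plan.append(name)
            used += cost
            total += comp
    return plan, total

# Hypothetical targets: (name, completeness, integration time, overhead).
targets = [("A", 0.30, 10, 2), ("B", 0.20, 5, 1),
           ("C", 0.25, 20, 4), ("D", 0.12, 2, 1)]
plan, total = greedy_schedule(targets, budget=20)
# Cheap high-ratio targets D and B fit; A and C would overrun the budget.
```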
Impact of remanent magnetic field on the heat load of original CEBAF cryomodule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciovati, Gianluigi; Cheng, Guangfeng; Drury, Michael
2016-11-22
The heat load of the original cryomodules for the CEBAF accelerator is ~50% higher than the target value of 100 W at 2.07 K for refurbished cavities operating at an accelerating gradient of 12.5 MV/m. This issue is due to the quality factor of the cavities being ~50% lower in the cryomodule than when tested in a vertical cryostat, even at low RF field. Previous studies were not conclusive about the origin of the additional losses. We present the results of a systematic study of the additional losses in a five-cell cavity from a decommissioned cryomodule after attaching components which are part of the cryomodule, such as the cold tuner, the He tank and the cold magnetic shield, prior to cryogenic testing in a vertical cryostat. Flux-gate magnetometers and temperature sensors are used as diagnostic elements. Different cool-down procedures and tests in different residual magnetic fields were investigated during the study. Here, three flux-gate magnetometers attached to one of the cavities installed in the refurbished cryomodule C50-12 confirmed the hypothesis of high residual magnetic field as a major cause for the increased RF losses.
MiR-9 Promotes Apoptosis Via Suppressing SMC1A Expression in GBM Cell Lines.
Zu, Yong; Zhu, Zhichuan; Lin, Min; Xu, Dafeng; Liang, Yongjun; Wang, Yueqian; Qiao, Zhengdong; Cao, Ting; Yang, Dan; Gao, Lili; Jin, Pengpeng; Zhang, Peng; Fu, Jianjun; Zheng, Jing
2017-01-01
Glioblastoma multiforme (GBM) is the most malignant brain cancer, presenting vast genomic variation with a complicated pathologic mechanism. MicroRNAs are delicate post-transcriptional tuners of gene expression in organisms, targeting and regulating protein-coding genes. MiR-9 was reported as a significant biomarker for GBM patient prognosis and a key factor in the regulation of GBM cancer stem cells. To explore the effect of miR-9 on GBM cell growth, we overexpressed miR-9 in U87 and U251 cells. Cell viability decreased and apoptosis increased after miR-9 overexpression in these cells. To identify the target of miR-9, we scanned for miR-9 binding sites in the 3'UTR region of the SMC1A (structural maintenance of chromosomes 1A) gene and designed a fluorescent reporter assay to measure miR-9 binding to this region. Our results revealed that miR-9 binds to the 3'UTR region of SMC1A and down-regulates SMC1A expression. These results indicate that miR-9 is a potential therapeutic target for GBM through triggering apoptosis of cancer cells.
NASA Astrophysics Data System (ADS)
Dal Forno, Massimo; Craievich, Paolo; Baruzzo, Roberto; De Monte, Raffaele; Ferianis, Mario; Lamanna, Giuseppe; Vescovo, Roberto
2012-01-01
The Cavity Beam Position Monitor (BPM) is a beam diagnostic instrument which, in a seeded Free Electron Laser (FEL), allows the measurement of the electron beam position in a non-destructive way and with sub-micron resolution. It is composed of two resonant cavities, called the reference cavity and the position cavity, respectively. The measurement exploits the dipole mode that arises when the electron bunch passes off axis. In this paper we describe the Cavity BPM that has been designed and realized in the context of the FERMI@Elettra project [1]. New strategies have been adopted for the microwave design of both the reference and the position cavities. Both cavities have been simulated by means of Ansoft HFSS [2] and CST Particle Studio [3], and have been realized using a high-precision lathe and a wire-EDM (electro-discharge) machine, with a new technique that avoids the use of the sinker-EDM machine. Tuners have been used to accurately adjust the working frequencies of both cavities. The RF parameters have been estimated, and the modifications of the resonant frequencies produced by brazing and tuning have been evaluated. Finally, the Cavity BPM has been installed and tested in the presence of the electron beam.
The Unexpected Tuners: Are LncRNAs Regulating Host Translation during Infections?
Knap, Primoz; Tebaldi, Toma; Di Leva, Francesca; Biagioli, Marta; Dalla Serra, Mauro; Viero, Gabriella
2017-01-01
Pathogenic bacteria produce powerful virulence factors, such as pore-forming toxins, that promote their survival and cause serious damage to the host. Host cells respond to membrane stresses and ionic imbalance by modifying gene expression at the epigenetic, transcriptional and translational level, to recover from the toxin attack. The fact that the majority of the human transcriptome encodes non-coding RNAs (ncRNAs) raises the question: do host cells deploy non-coding transcripts to rapidly control the most energy-consuming process in cells—i.e., host translation—to counteract the infection? Here, we discuss the intriguing possibility that membrane-damaging toxins induce, in the host, the expression of toxin-specific long non-coding RNAs (lncRNAs), which act as sponges for other molecules, encode small peptides, or bind target mRNAs to depress their translation efficiency. Unravelling the function of host-produced lncRNAs upon bacterial infection or membrane damage requires an improved understanding of host lncRNA expression patterns, their association with polysomes and their function during this stress. This field of investigation holds a unique opportunity to reveal unpredicted scenarios and novel approaches to counteract antibiotic-resistant infections. PMID:29469820
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
Optimal timing in biological processes
Williams, B.K.; Nichols, J.D.
1984-01-01
A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean; nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
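The "selective early, less selective late" structure of the optimal policies can be illustrated with a declining-threshold stopping rule. The thresholds and random draws below are illustrative only, not derived from the paper's model:

```python
import random

def declining_threshold_policy(draws, thresholds):
    """Act the first time the observed return meets that period's
    threshold; if no draw ever qualifies, act at the final period."""
    for t, (x, thr) in enumerate(zip(draws, thresholds)):
        if x >= thr or t == len(draws) - 1:
            return t, x

# Thresholds decline over time: very selective early, accept anything late.
rng = random.Random(0)
draws = [rng.random() for _ in range(5)]
thresholds = [0.9, 0.7, 0.5, 0.3, 0.0]
t_act, value = declining_threshold_policy(draws, thresholds)
```

With observable returns, the optimal thresholds would be computed by backward induction from the time-specific return distributions; the declining shape is the qualitative feature the abstract describes.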
A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization
NASA Astrophysics Data System (ADS)
Quan, Ning; Kim, Harrison M.
2018-03-01
The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well-suited to QKPs in wind farm layout optimization due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions within 4% of the optimal value for small to medium sized problems considered in this article.
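A greedy heuristic of the kind assessed in the article can be sketched as follows: repeatedly add the item with the largest marginal gain, i.e. its node coefficient plus its edge coefficients to items already selected. The toy instance (node coefficients as turbine power, negative edge coefficients as wake losses) is invented for illustration:

```python
def greedy_qkp(node, edge, max_items):
    """Greedy heuristic for the 0-1 quadratic knapsack problem: add the
    item with the largest marginal gain while the gain stays positive
    and the cardinality limit is not exceeded."""
    selected = []
    while len(selected) < max_items:
        gains = {i: node[i] + sum(edge[i][j] for j in selected)
                 for i in range(len(node)) if i not in selected}
        i, g = max(gains.items(), key=lambda kv: kv[1])
        if g <= 0:
            break  # adding any remaining item would not help
        selected.append(i)
    return sorted(selected)

# Toy wind-farm instance: node = turbine power, negative edges = wake loss.
power = [5, 4, 4]
wake = [[ 0, -3, -1],
        [-3,  0, -2],
        [-1, -2,  0]]
layout = greedy_qkp(power, wake, max_items=2)  # picks sites 0 and 2
```

An upper bound of the kind the article applies would then certify how far this greedy layout's value (here 5 + 4 - 1 = 8) can be from the true optimum.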
Simultaneous multislice refocusing via time optimal control.
Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf
2018-02-09
Joint design of minimum-duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, in which the pulse length is minimized in the upper level and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near-global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent of number of slices (PINS) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum-duration RF pulses and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or the echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.
Continued research on selected parameters to minimize community annoyance from airplane noise
NASA Technical Reports Server (NTRS)
Frair, L.
1981-01-01
Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A multiobjective optimization algorithm review from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate size airport and a series of sensitivity analyses for a smaller example airport.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
Assessment of Trading Partners for China's Rare Earth Exports Using a Decision Analytic Approach
He, Chunyan; Lei, Yalin; Ge, Jianping
2014-01-01
China's current rare earth export policies are accelerating the depletion of its reserves. Adopting an optimal export trade selection strategy is therefore crucial to identifying the ideal trading partners. This paper introduces a multi-attribute decision-making methodology which is then used to select the optimal trading partner. In the method, an evaluation criteria system is established to assess the seven top trading partners along three dimensions: political relationships, economic benefits and industrial security. Specifically, a simple additive weighting model derived from an additive utility function is utilized to calculate, rank and select alternatives. Results show that Japan would be the optimal trading partner for Chinese rare earths. The criteria evaluation method of trading partners for China's rare earth exports provides the Chinese government with a tool to enhance rare earth industrial policies. PMID:25051534
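The simple additive weighting (SAW) model can be sketched as follows: normalize each criterion, then rank alternatives by the weighted sum of normalized scores. The partner scores and weights below are invented placeholders, not the paper's evaluation data:

```python
def saw_rank(alternatives, weights):
    """Simple additive weighting: min-max normalize each criterion
    (all treated as benefit criteria), then rank alternatives by the
    weighted sum of their normalized scores."""
    names = list(alternatives)
    n = len(weights)
    lo = [min(alternatives[a][c] for a in names) for c in range(n)]
    hi = [max(alternatives[a][c] for a in names) for c in range(n)]
    def norm(v, c):
        return (v - lo[c]) / (hi[c] - lo[c]) if hi[c] > lo[c] else 1.0
    score = {a: sum(w * norm(alternatives[a][c], c)
                    for c, w in enumerate(weights))
             for a in names}
    return sorted(names, key=score.get, reverse=True)

# Hypothetical partners scored on (political, economic, security) criteria.
partners = {"P1": (0.9, 0.4, 0.7),
            "P2": (0.5, 0.9, 0.6),
            "P3": (0.2, 0.3, 0.9)}
ranking = saw_rank(partners, weights=(0.40, 0.35, 0.25))
```

Cost-type criteria (where lower is better) would flip the normalization; the paper's additive-utility derivation plays the role of justifying the weighted sum.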
NASA Technical Reports Server (NTRS)
Alexander, S.; Hodgdon, R. B.
1977-01-01
The objective of NAS 3-20108 was the development and evaluation of improved anion-selective membranes useful as efficient separators in a redox power storage cell system under construction. The program was divided into three parts: (a) optimization of the selected candidate membrane systems, (b) investigation of alternative membrane/polymer systems, and (c) characterization of candidate membranes. The major synthesis effort was aimed at improving and optimizing each candidate system as far as possible with respect to three critical membrane properties essential for good redox cell performance. Substantial improvements were made in five candidate membrane systems. The critical synthesis variables of cross-link density, monomer ratio, and solvent composition were examined over a wide range. In addition, eight alternative polymer systems were investigated, two of which attained candidate status. Three other alternatives showed potential but required further research and development. Each candidate system was optimized for selectivity.
Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang
2016-01-01
Despite the progress made in selective plane illumination microscopy, high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to respond to the challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying left-right symmetry breaking behaviour of C. elegans embryos. PMID:27004937
Weavers, Paul T; Borisch, Eric A; Riederer, Stephen J
2015-06-01
To develop and validate a method for choosing the optimal two-dimensional CAIPIRINHA kernel for subtraction contrast-enhanced MR angiography (CE-MRA), and to estimate the degree of image quality improvement versus a reference acceleration parameter set at R ≥ 8. A metric based on patient-specific coil calibration information was defined for evaluating the optimality of CAIPIRINHA kernels as applied to subtraction CE-MRA. Retrospective studies using archived coil calibration data from abdomen, calf, foot, and hand CE-MRA exams were assessed by comparing geometry factor (g-factor) histograms. Prospective calf, foot, and hand CE-MRA studies were evaluated with vessel signal-to-noise ratio (SNR). Retrospective studies show that the g-factor improvement for the selected CAIPIRINHA kernels was significant in the feet, moderate in the abdomen, and modest in the calves and hands. Prospective CE-MRA studies using optimal CAIPIRINHA showed reduced noise amplification with identical acquisition time in studies of the feet, with minor improvements in the hands and calves. A method for selecting the optimal CAIPIRINHA kernel for highly accelerated (R ≥ 8) CE-MRA exams, given a specific patient and receiver array, was demonstrated. CAIPIRINHA optimization appears valuable in accelerated CE-MRA of the feet and, to a lesser extent, in the abdomen. © 2014 Wiley Periodicals, Inc.
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):4, 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher and produces fixation statistics similar to those of humans.
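The ELM heuristic described above is simple enough to sketch directly: filter the posterior over target locations with the squared detectability map and fixate at the maximum. A one-dimensional toy version with made-up posterior and detectability values (the actual model operates on a 2-D retinotopic map):

```python
def elm_next_fixation(posterior, detectability):
    """ELM heuristic for fixation selection (1-D toy version).

    posterior: target-location probabilities over candidate locations.
    detectability: d' as a function of distance from the fixation point.
    The near-optimal next fixation maximizes the posterior filtered by
    (correlated with) the squared detectability map.
    """
    n = len(posterior)
    best_loc, best_val = 0, float("-inf")
    for fix in range(n):                       # candidate fixation location
        val = sum(posterior[loc] * detectability[abs(loc - fix)] ** 2
                  for loc in range(n))
        if val > best_val:
            best_loc, best_val = fix, val
    return best_loc

posterior = [0.05, 0.1, 0.5, 0.25, 0.1]       # belief peaked near location 2
detect = [1.0, 0.6, 0.3, 0.1, 0.05]           # d' falls off with eccentricity
print(elm_next_fixation(posterior, detect))   # → 2
```

Because the filtering is a single correlation followed by an argmax, this is the kind of computation the authors argue a biological system could plausibly implement.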
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting kernel eigenvectors by their entropy contribution instead of by variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it strongly affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
Evaluation of an optimized shade guide made from porcelain powder mixtures.
Wang, Peng; Wei, Jiaqiang; Li, Qing; Wang, Yining
2014-12-01
Color errors associated with current shade guides, and problems with color selection and duplication, remain challenging for restorative dentists. The purpose of this study was to evaluate an optimized shade guide for visual shade duplication. Color distributions (L*, a*, and b*) of the maxillary left central incisors of 236 participants, aged 20 to 60, were measured with a spectrophotometer. Based on this color map, an optimized shade guide was designed with 14 shade tabs evenly distributed within the color range of the natural incisors. The shade tabs were fabricated with porcelain powder mixtures and conventional laboratory procedures. A comparison of shade duplication using the optimized and Vitapan Classical shade guides was then conducted. Thirty Chinese participants were involved, and the colors of their left maxillary incisors were selected by using the 2 shade guides. Metal ceramic crowns were fabricated according to the results of the shade selection. The colors of the shade tabs, natural teeth, and ceramic crowns were measured with a spectrophotometer. The color differences among the natural teeth, the shade tabs, and the corresponding metal ceramic crowns were calculated and analyzed (α=.017). Significant differences were found in both the shade determination and shade duplication phases (P<.017). The total color error with the optimized shade guide was 3.5, significantly less than that of Vitapan, 5.1 (P<.001). The optimized shade guide system improved performance not only in the color selection phase but also in the color of the fabricated crowns. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
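The color differences in this study are computed from spectrophotometer L*, a*, b* readings. A minimal sketch of the classic CIE76 color difference (ΔE*ab) commonly used for such tooth/shade-tab comparisons; the two triples below are hypothetical, and the study does not state which ΔE formula it used:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

tooth = (72.0, 1.5, 18.0)   # hypothetical incisor measurement
tab   = (70.0, 2.0, 16.5)   # hypothetical shade-tab measurement
de = round(delta_e_ab(tooth, tab), 2)
print(de)  # → 2.55
```

Values on the order of the 3.5 and 5.1 reported above correspond to color mismatches that are typically perceptible in clinical viewing conditions.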
Selection of sampling rate for digital control of aircraft
NASA Technical Reports Server (NTRS)
Katz, P.; Powell, J. D.
1974-01-01
The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time response and roughness of the response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.
Computational Optimization and Characterization of Molecularly Imprinted Polymers
NASA Astrophysics Data System (ADS)
Terracina, Jacob J.
Molecularly imprinted polymers (MIPs) are a class of materials containing sites capable of selectively binding to the imprinted target molecule. Computational chemistry techniques were used to study the effect of different fabrication parameters (the monomer-to-target ratios, pre-polymerization solvent, temperature, and pH) on the formation of the MIP binding sites. Imprinted binding sites were built in silico for the purposes of better characterizing the receptor-ligand interactions. Chiefly, the sites were characterized with respect to their selectivities and the heterogeneity between sites. First, a series of two-step molecular mechanics (MM) and quantum mechanics (QM) computational optimizations of monomer-target systems was used to determine optimal monomer-to-target ratios for the MIPs. Imidazole- and xanthine-derived target molecules were studied. The investigation included both small-scale models (one target) and larger-scale models (five targets). The optimal ratios differed between the small and larger scales. For the larger models containing multiple targets, binding-site surface area analysis was used to evaluate the heterogeneity of the sites. The more fully surrounded sites had greater binding energies. Molecular docking was then used to measure the selectivities of the QM-optimized binding sites by comparing the binding energies of the imprinted target to that of a structural analogue. Selectivity was also shown to improve as binding sites become more fully encased by the monomers. For internal sites, docking consistently showed selectivity favoring the molecules that had been imprinted via QM geometry optimizations. The computationally imprinted sites were shown to exhibit size-, shape-, and polarity-based selectivity. This represented a novel approach to investigate the selectivity and heterogeneity of imprinted polymer binding sites, by applying the rapid orientation screening of MM docking to the highly accurate QM-optimized geometries.
Next, we sought to computationally construct and investigate binding sites for their enantioselectivity. Again, a two-step MM-to-QM optimization scheme was used to "computationally imprint" chiral molecules. Using docking techniques, the imprinted binding sites were shown to exhibit an enantioselective preference for the imprinted molecule over its enantiomer. Docking of structurally similar chiral molecules showed that the sites computationally imprinted with R- or S-tBOC-tyrosine were able to differentiate between R- and S-forms of other tyrosine derivatives. The cross-enantioselectivity did not hold for chiral molecules that did not share the tyrosine H-bonding functional group orientations. Further analysis of the individual monomer-target interactions within the binding site led us to conclude that H-bonding functional groups that are located immediately next to the target's chiral center, and therefore spatially fixed relative to the chiral center, will have a stronger contribution to the enantioselectivity of the site than those groups separated from the chiral center by two or more rotatable bonds. These models were the first computationally imprinted binding sites to exhibit this enantioselective preference for the imprinted target molecules. Finally, molecular dynamics (MD) was used to quantify H-bonding interactions between target molecules, monomers, and solvents representative of the pre-polymerization matrix. It was found that both target dimerization and solvent interference decrease the number of monomer-target H-bonds present. Systems were optimized via simulated annealing to create binding sites that were then subjected to molecular docking analysis. Docking showed that the presence of solvent had a detrimental effect on the sensitivity and selectivity of the sites, and that solvents with more H-bonding capabilities were more disruptive to the binding properties of the site.
Dynamic simulations also showed that increasing the temperature of the solution can significantly decrease the number of H-bonds formed between the targets and monomers. It is believed that the monomer-target complexes formed within the pre-polymerization matrix are translated into the selective binding cavities formed during polymerization. Elucidating the nature of these interactions in silico improves our understanding of MIPs, ultimately allowing for more optimized sensing materials.
Expediting SRM assay development for large-scale targeted proteomics experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chaochao; Shi, Tujin; Brown, Joseph N.
2014-08-22
Due to their high sensitivity and specificity, targeted proteomics measurements, e.g. selected reaction monitoring (SRM), are becoming increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development, evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to CID in triple quadrupole (QQQ) instrumentation, and that by selecting the top six y fragment ions from HCD spectra, >86% of the top transitions optimized from direct infusion on a QQQ instrument are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for +3 precursors, and that a significant increase in transition intensity could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrates the feasibility of expediting the development of large numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of the optimal CE to improve both SRM throughput and measurement quality.
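CE prediction tools of the kind discussed here typically use a charge-state-specific linear equation, CE = a·(m/z) + b, and the paper constructs a new equation of this form from its own data. A sketch of fitting such an equation by ordinary least squares; the (m/z, optimal CE) pairs below are hypothetical, not the paper's measurements:

```python
def fit_linear_ce(mz_values, optimal_ce):
    """Least-squares fit of CE = slope * (m/z) + intercept for one charge state."""
    n = len(mz_values)
    mean_x = sum(mz_values) / n
    mean_y = sum(optimal_ce) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(mz_values, optimal_ce))
    sxx = sum((x - mean_x) ** 2 for x in mz_values)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical (m/z, empirically optimized CE) pairs for 2+ precursors.
mz = [400.0, 500.0, 600.0, 700.0]
ce = [16.0, 19.5, 23.0, 26.5]
slope, intercept = fit_linear_ce(mz, ce)
print(round(slope, 3), round(intercept, 2))  # → 0.035 2.0
```

In practice one such fit is made per precursor charge state, which is exactly where the paper found existing +3 equations to be the least accurate.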
Genetic evolutionary taboo search for optimal marker placement in infrared patient setup
NASA Astrophysics Data System (ADS)
Riboldi, M.; Baroni, G.; Spadea, M. F.; Tagaste, B.; Garibaldi, C.; Cambria, R.; Orecchia, R.; Pedotti, A.
2007-09-01
In infrared patient setup, adequate selection of the external fiducial configuration is required to compensate for inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients and compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration when TRE is minimized for organs at risk (OARs) were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.
Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks
Robinson, Y. Harold; Rajaram, M.
2015-01-01
A mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection to improve the network lifetime is a challenging task. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths to solve the link-disjoint path problem in a MANET. The CTRNN is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek better link-quality nodes in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths by using the PSO technique. PMID:26819966
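The PSO step ("iteratively trying to improve a candidate solution with regard to a measure of quality") can be illustrated with a minimal, generic PSO on a toy cost function. This is not the paper's EMPSO/CTRNN formulation, only the underlying velocity/position update rule:

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimization: each particle's velocity mixes
    inertia, a cognitive (personal-best) pull, and a social (global-best) pull."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy quality measure: the sphere function, minimized at the origin.
best, val = pso_minimize(lambda p: sum(x * x for x in p), dim=3)
print(val < 1e-2)
```

In EMPSO, the fitness would instead combine the reliability measures named above (transmission cost, energy factor, traffic ratio) rather than a simple sphere function.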
Product Mix Selection Using AN Evolutionary Technique
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Vasant, Pandian
2009-08-01
This paper proposes an evolutionary technique for the solution of a real-life industrial problem, in particular the product mix selection problem. The evolutionary technique is a combination of a genetic algorithm, which preserves the feasibility of the trial solutions with penalties, and a local optimization method. The goal of this paper has been achieved in finding the best near-optimal solution for the profit fitness function with respect to the vagueness factor and level of satisfaction. The findings on the profit values will be very useful for decision makers in the industrial engineering sector for implementation purposes. It is possible to improve the solutions obtained in this study by employing other meta-heuristic methods such as simulated annealing, tabu search, ant colony optimization, particle swarm optimization, and artificial immune systems.
Design Optimization of a Centrifugal Fan with Splitter Blades
NASA Astrophysics Data System (ADS)
Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong
2015-05-01
Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise, using three-dimensional Reynolds-averaged Navier-Stokes equations and a hybrid multi-objective evolutionary algorithm. Two design variables, defining the location of the splitter and the height ratio between the inlet and outlet of the impeller, were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of the design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and the pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
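Latin hypercube sampling, used above to pick training points for the surrogate models, stratifies each design variable into equal bins and samples each bin exactly once, so the design space is covered evenly even with few points. A generic minimal sketch (not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sampling on the unit hypercube: each dimension is
    split into n_samples equal strata, and each stratum is sampled once."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                 # random pairing of strata to points
        for i in range(n_samples):
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

pts = latin_hypercube(5, 2)  # e.g. 5 training points over 2 design variables
# Each dimension has exactly one point per stratum [k/5, (k+1)/5).
print(sorted(int(p[0] * 5) for p in pts))  # → [0, 1, 2, 3, 4]
```

The unit-cube samples would then be rescaled to the actual ranges of the two design variables before running the CFD evaluations used to train the response surfaces.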
Quantum teleportation scheme by selecting one of multiple output ports
NASA Astrophysics Data System (ADS)
Ishizaka, Satoshi; Hiroshima, Tohya
2009-04-01
The scheme of quantum teleportation, where Bob has multiple (N) output ports and obtains the teleported state by simply selecting one of the N ports, is thoroughly studied. We consider both the deterministic and probabilistic versions of the teleportation scheme, aiming to teleport an unknown state of a qubit. Moreover, we consider two cases for each version: (i) the state employed for the teleportation is fixed to a maximally entangled state, and (ii) the state is optimized as well as Alice's measurement. We analytically determine the optimal protocols for all four cases and show the corresponding optimal fidelity or optimal success probability. All these protocols can achieve perfect teleportation in the asymptotic limit of N→∞. The entanglement properties of the teleportation scheme are also discussed.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Because the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
A global optimization approach to multi-polarity sentiment analysis.
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis, with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a globally optimal combination of feature dimensions and parameters in the SVM. We evaluated the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity, and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement and the two-polarity analysis the smallest. We conclude that PSOGO-Senti achieves a higher improvement for more complicated sentiment analysis tasks. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search methods, and found that PSOGO-Senti is more suitable for improving a difficult multi-polarity sentiment analysis problem.
Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E
2013-01-01
Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way, and solutes in the training set were relatively homogeneous. More recently, statistical methods such as D-optimal design or space-filling design have been applied, but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].
Martínez-Gomez, Juan; Peña-Lamas, Javier; Martín, Mariano; Ponce-Ortega, José María
2017-12-01
The selection of the working fluid for Organic Rankine Cycles has traditionally been addressed with systematic heuristic methods, which perform a characterization and prior selection considering mainly one objective, thus avoiding a selection that simultaneously considers the objectives related to sustainability and safety. The objective of this work is to propose a methodology for the optimal selection of the working fluid for Organic Rankine Cycles. The model is presented as a multi-objective approach which simultaneously considers economic, environmental, and safety aspects. The economic objective function considers the profit obtained by selling the energy produced. Safety was evaluated in terms of the individual risk for each of the components of the Organic Rankine Cycle, formulated as a function of the operating conditions and hazardous properties of each working fluid. The environmental function is based on carbon dioxide emissions, considering carbon dioxide mitigation, emissions due to the use of cooling water, as well as emissions due to material release. The methodology was applied to the case of geothermal facilities to select the optimal working fluid, although it can be extended to waste heat recovery. The results show that the hydrocarbons represent better solutions; among a list of 24 working fluids, toluene is selected as the best fluid. Copyright © 2017 Elsevier Ltd. All rights reserved.
Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C
2008-01-10
Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature, many methods have been proposed to estimate missing values via the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions under which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS, and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity, while neighbour-based methods (KNN, OLS, LSA, LLS) performed better on data with higher complexity. We also found that the EBS and STS schemes serve as complementary and effective tools for selecting the optimal imputation algorithm.
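The entropy measure of expression-matrix "complexity" is not fully specified in this abstract. A common formulation, assumed here, is the normalized entropy of the squared singular values of the matrix: it is low when variance concentrates in a few components (the matrix maps easily to a low-dimensional subspace) and near 1 when variance is spread evenly:

```python
import math

def entropy_complexity(sing_vals):
    """Normalized entropy of the squared singular values of a matrix.

    Near 0: variance concentrated in few components (low complexity);
    near 1: variance spread evenly across components (high complexity).
    """
    p = [s * s for s in sing_vals]
    total = sum(p)
    p = [x / total for x in p]
    h = -sum(x * math.log(x) for x in p if x > 0)
    return h / math.log(len(sing_vals))   # normalize to [0, 1]

low = round(entropy_complexity([10.0, 0.1, 0.1]), 3)   # one dominant component
high = round(entropy_complexity([1.0, 1.0, 1.0]), 3)   # evenly spread spectrum
print(low, high)
```

Under the EBS scheme described above, a score like `low` would favor global imputation methods (PLS, SVD, BPCA), while a score like `high` would favor neighbour-based methods (KNN, OLS, LSA, LLS).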
Parallel medicinal chemistry approaches to selective HDAC1/HDAC2 inhibitor (SHI-1:2) optimization.
Kattar, Solomon D; Surdi, Laura M; Zabierek, Anna; Methot, Joey L; Middleton, Richard E; Hughes, Bethany; Szewczak, Alexander A; Dahlberg, William K; Kral, Astrid M; Ozerova, Nicole; Fleming, Judith C; Wang, Hongmei; Secrist, Paul; Harsch, Andreas; Hamill, Julie E; Cruz, Jonathan C; Kenific, Candia M; Chenard, Melissa; Miller, Thomas A; Berk, Scott C; Tempest, Paul
2009-02-15
The successful application of both solid and solution phase library synthesis, combined with tight integration into the medicinal chemistry effort, resulted in the efficient optimization of a novel structural series of selective HDAC1/HDAC2 inhibitors by the MRL-Boston Parallel Medicinal Chemistry group. An initial lead from a small parallel library was found to be potent and selective in biochemical assays. Advanced compounds were the culmination of iterative library design and possess excellent biochemical and cellular potency, as well as acceptable PK and efficacy in animal models.
NASA Astrophysics Data System (ADS)
Rodriguez, Hector German; Popp, Jennie; Maringanti, Chetan; Chaubey, Indrajeet
2011-01-01
An increased loss of agricultural nutrients is a growing concern for water quality in Arkansas. Several studies have shown that best management practices (BMPs) are effective in controlling water pollution. However, those affected by water quality issues need water management plans that take into consideration BMP selection, placement, and affordability. This study used a nondominated sorting genetic algorithm (NSGA-II). This multiobjective algorithm selects and locates BMPs that minimize nutrient pollution cost-effectively by providing trade-off curves (optimal fronts) between pollutant reduction and total net cost increase. The usefulness of this optimization framework was evaluated in the Lincoln Lake watershed. The final NSGA-II optimization model generated a number of near-optimal solutions by selecting from 35 BMPs (combinations of pasture management, buffer zones, and poultry litter application practices). Selection and placement of BMPs were analyzed under various cost solutions. The NSGA-II provides multiple solutions that could fit the water management plan for the watershed. For instance, by implementing all the BMP combinations recommended in the lowest-cost solution, total phosphorous (TP) could be reduced by at least 76% while increasing cost by less than 2% in the entire watershed. This value represents an increase in cost of $5.49 ha-1 when compared to the baseline. Implementing all the BMP combinations proposed with the medium- and the highest-cost solutions could decrease TP drastically but would increase cost by $24,282 (7%) and $82,306 (25%), respectively.
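The core of NSGA-II that produces such trade-off curves is non-dominated sorting of candidate BMP plans. A minimal sketch of that Pareto-front step follows; the (cost increase, phosphorus load) pairs are invented stand-ins for the study's actual watershed-model outputs:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective (both minimized)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated set: the trade-off curve between total net
    cost increase and remaining pollutant load."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (cost_increase, phosphorus_load) for candidate BMP plans:
plans = [(2, 24), (7, 10), (25, 3), (10, 12), (5, 30)]
front = pareto_front(plans)
# (10, 12) is dominated by (7, 10) and (5, 30) by (2, 24), so they drop out.
```

The surviving points are the "optimal front" from which low-, medium-, and high-cost solutions can be read off.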
Liu, Wang; Li, Yu-Long; Feng, Mu-Ting; Zhao, Yu-Wei; Ding, Xianting; He, Ben; Liu, Xuan
2018-01-01
Aim: Combined use of herbal medicines in patients undergoing dual antiplatelet therapy (DAPT) might cause bleeding or thrombosis because herbal medicines with anti-platelet activities may exhibit interactions with DAPT. In this study, we used a feedback system control (FSC) optimization technique to optimize dose strategy and clarify possible interactions in combined use of DAPT and herbal medicines. Methods: Herbal medicines with reported anti-platelet activities were selected by searching related references in Pubmed. Experimental anti-platelet activities of representative compounds originating from these herbal medicines were investigated using an in vitro assay, namely ADP-induced aggregation of rat platelet-rich plasma. The FSC scheme hybridized artificial intelligence calculation and bench experiments to iteratively optimize 4-drug and 2-drug combinations from these drug candidates. Results: In total, 68 herbal medicines were reported to have anti-platelet activities. In the present study, 7 representative compounds from these herbal medicines were selected to study combinatorial drug optimization together with DAPT, i.e., aspirin and ticagrelor. The FSC technique first down-selected the 9 drug candidates to the 5 most significant drugs. Then, FSC further secured 4 drugs in the optimal combination, including aspirin, ticagrelor, ferulic acid from DangGui, and forskolin from MaoHouQiaoRuiHua. Finally, FSC quantitatively estimated the possible interactions between aspirin:ticagrelor, aspirin:ferulic acid, ticagrelor:forskolin, and ferulic acid:forskolin. The estimation was further verified by experimentally determined Combination Index (CI) values. Conclusion: Results of the present study suggest that the FSC optimization technique can be used in the optimization of anti-platelet drug combinations and might be helpful in designing personalized anti-platelet therapy strategies.
Furthermore, FSC analysis could also identify interactions between different drugs, which might provide useful information for research on signaling cascades in platelets. PMID:29780330
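The Combination Index verification step mentioned above is conventionally computed in the Chou-Talalay form; a sketch with invented dose numbers (the study's actual doses are not given here):

```python
def combination_index(d1, d2, dx1, dx2):
    """Chou-Talalay CI: d1, d2 are the doses of each drug used in combination
    to reach a given effect level; dx1, dx2 are the single-drug doses needed
    for the same effect. CI < 1 synergy, CI = 1 additive, CI > 1 antagonism."""
    return d1 / dx1 + d2 / dx2

# Hypothetical: quarter- and eighth-doses in combination reach the effect
# that each drug alone reaches only at dose 8.0.
ci = combination_index(2.0, 1.0, 8.0, 8.0)   # 0.25 + 0.125 = 0.375
```

A CI well below 1, as in this toy example, is the kind of experimental signal that would confirm an FSC-estimated synergistic interaction.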
Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan
2013-01-01
A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589
Gonzalo-Lumbreras, Raquel; Sanz-Landaluze, Jon; Cámara, Carmen
2015-07-01
The behavior of 15 benzimidazoles, including their main metabolites, was evaluated using several C18 columns with standard or narrow-bore diameters and different particle sizes and types. These commercial columns were selected because their differences could affect the separation of benzimidazoles, and so they can be used as alternative columns. A simple screening method for the analysis of benzimidazole residues and their main metabolites was developed. First, the separation of benzimidazoles was optimized using a Kinetex C18 column; later, the analytical performances of the other columns were compared under the above optimized conditions and then individually re-optimized. Resolution of critical pairs, analysis run time, column type and characteristics, and selectivity were considered for the comparison of the chromatographic columns. Kinetex XB was selected because it provides the shortest analysis time and the best resolution of critical pairs. Using this column, the separation conditions were re-optimized using a factorial design. The separations obtained with the different columns tested can be applied to the analysis of specific benzimidazole residues or other applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.
2018-06-01
The ability to predict the evolution of crystallographic texture during hot work of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature, in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two-phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second, newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches to the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study is dedicated to the issue of pipeline transportation system maintenance. The article identifies two classes of technical-and-economic indices, which are used to select an optimal pipeline transportation system structure. Further, the article describes various system maintenance strategies and strategy selection criteria. These maintenance strategies turn out to be insufficiently effective due to non-optimal values of maintenance intervals. This problem could be solved by running an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, a computer model of equipment degradation. In conclusion, three model-building approaches for determining the optimal duration of verification inspections of technical systems are considered.
Kraschnewski, Jennifer L; Keyserling, Thomas C; Bangdiwala, Shrikant I; Gizlice, Ziya; Garcia, Beverly A; Johnston, Larry F; Gustafson, Alison; Petrovic, Lindsay; Glasgow, Russell E; Samuel-Hodge, Carmen D
2010-01-01
Studies of type 2 translation, the adaption of evidence-based interventions to real-world settings, should include representative study sites and staff to improve external validity. Sites for such studies are, however, often selected by convenience sampling, which limits generalizability. We used an optimized probability sampling protocol to select an unbiased, representative sample of study sites to prepare for a randomized trial of a weight loss intervention. We invited North Carolina health departments within 200 miles of the research center to participate (N = 81). Of the 43 health departments that were eligible, 30 were interested in participating. To select a representative and feasible sample of 6 health departments that met inclusion criteria, we generated all combinations of 6 from the 30 health departments that were eligible and interested. From the subset of combinations that met inclusion criteria, we selected 1 at random. Of 593,775 possible combinations of 6 counties, 15,177 (3%) met inclusion criteria. Sites in the selected subset were similar to all eligible sites in terms of health department characteristics and county demographics. Optimized probability sampling improved generalizability by ensuring an unbiased and representative sample of study sites.
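The optimized probability sampling protocol above reduces to enumerating all candidate site subsets, filtering by inclusion criteria, and drawing one eligible subset uniformly at random. A schematic sketch follows; the eligibility test is a placeholder for the study's actual criteria, and the function names are mine:

```python
import random
from itertools import combinations
from math import comb

def sample_sites(interested, k, meets_criteria, seed=None):
    """Enumerate all k-site combinations of interested sites, keep those
    meeting the inclusion criteria, and select one uniformly at random."""
    eligible = [c for c in combinations(interested, k) if meets_criteria(c)]
    return random.Random(seed).choice(eligible), len(eligible)

# 30 interested sites, choose 6 -> comb(30, 6) = 593,775 combinations,
# matching the count reported in the abstract.
assert comb(30, 6) == 593775
```

Because the draw is uniform over the eligible subsets, every feasible combination has the same selection probability, which is what makes the resulting sample unbiased.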
2013-01-01
Background Gene expression data can be of great help in the development of efficient cancer diagnosis and classification platforms. Recently, many researchers have analyzed gene expression data using diverse computational intelligence methods to select a small subset of informative genes from the data for cancer classification. Many computational methods face difficulties in selecting small subsets due to the small number of samples compared to the huge number of genes (high dimension), irrelevant genes, and noisy genes. Methods We propose an enhanced binary particle swarm optimization to perform the selection of small subsets of informative genes, which is significant for cancer classification. Particle speed, a new rule, and a modified sigmoid function are introduced in this proposed method to increase the probability of the bits in a particle's position being zero. The method was empirically applied to a suite of ten well-known benchmark gene expression data sets. Results The performance of the proposed method proved to be superior to other previous related works, including the conventional version of binary particle swarm optimization (BPSO), in terms of classification accuracy and the number of selected genes. The proposed method also requires lower computational time compared to BPSO. PMID:23617960
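The bit-update mechanism described, where a modified sigmoid raises the probability of zero bits (deselected genes), can be sketched as follows. The exact transfer function and scale factor in the paper may differ, so treat the 1/3 shrink as an illustrative assumption:

```python
import math
import random

def update_bit(velocity, rng=random):
    """Standard BPSO rule: the bit is 1 with probability sigmoid(velocity)."""
    return 1 if rng.random() < 1.0 / (1.0 + math.exp(-velocity)) else 0

def update_bit_modified(velocity, rng=random):
    """Modified rule biased toward 0 bits so that fewer genes are selected:
    the sigmoid is shrunk toward zero (the 1/3 factor is illustrative)."""
    p_one = (1.0 / (1.0 + math.exp(-velocity))) / 3.0
    return 1 if rng.random() < p_one else 0
```

At zero velocity the standard rule selects a gene half the time, while the modified rule selects it only about one time in six, which is the qualitative effect the abstract describes.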
NASA Astrophysics Data System (ADS)
Jahangoshai Rezaee, Mustafa; Yousefi, Samuel; Hayati, Jamileh
2017-06-01
Supplier selection and allocation of optimal order quantity are two of the most important processes in closed-loop supply chains (CLSC) and reverse logistics (RL). Providing high-quality raw material is a basic requirement for a manufacturer to produce popular products and achieve more market share. On the other hand, in a competitive environment, suppliers have to offer customers incentives such as discounts and enhance the quality of their products in competition with other manufacturers. Therefore, in this study, a model is presented for CLSC optimization, efficient supplier selection, and order allocation considering a quantity discount policy. It is modeled using multi-objective programming based on an integrated simultaneous data envelopment analysis-Nash bargaining game. Maximizing profit and efficiency and minimizing the defect rate and delivery delay rate are taken into account. Besides supplier selection, the suggested model selects refurbishing sites and determines the number of products and parts in each sector of the network. The suggested model is solved using the global criteria method. Furthermore, based on related studies, a numerical example is examined to validate it.
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
Granleese, Tom; Clark, Samuel A; Swan, Andrew A; van der Werf, Julius H J
2015-09-14
Female reproductive technologies such as multiple ovulation and embryo transfer (MOET) and juvenile in vitro embryo production and embryo transfer (JIVET) can boost rates of genetic gain but they can also increase rates of inbreeding. Inbreeding can be managed using the principles of optimal contribution selection (OCS), which maximizes genetic gain while placing a penalty on the rate of inbreeding. We evaluated the potential benefits and synergies that exist between genomic selection (GS) and reproductive technologies under OCS for sheep and cattle breeding programs. Various breeding program scenarios were simulated stochastically including: (1) a sheep breeding program for the selection of a single trait that could be measured either early or late in life; (2) a beef breeding program with an early or late trait; and (3) a dairy breeding program with a sex limited trait. OCS was applied using a range of penalties (severe to no penalty) on co-ancestry of selection candidates, with the possibility of using multiple ovulation and embryo transfer (MOET) and/or juvenile in vitro embryo production and embryo transfer (JIVET) for females. Each breeding program was simulated with and without genomic selection. All breeding programs could be penalized to result in an inbreeding rate of 1 % increase per generation. The addition of MOET to artificial insemination or natural breeding (AI/N), without the use of GS yielded an extra 25 to 60 % genetic gain. The further addition of JIVET did not yield an extra genetic gain. When GS was used, MOET and MOET + JIVET programs increased rates of genetic gain by 38 to 76 % and 51 to 81 % compared to AI/N, respectively. Large increases in genetic gain were found across species when female reproductive technologies combined with genomic selection were applied and inbreeding was managed, especially for breeding programs that focus on the selection of traits measured late in life or that are sex-limited. 
Optimal contribution selection was an effective tool to optimally allocate different combinations of reproductive technologies. Applying a range of penalties to co-ancestry of selection candidates allows a comprehensive exploration of the inbreeding vs. genetic gain space.
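The OCS principle used in these simulations can be written as maximizing genetic merit with a quadratic penalty on co-ancestry. A numerical sketch follows; the penalty weights, the tiny relationship matrix, and the grid search are all illustrative stand-ins for the paper's actual optimization:

```python
import numpy as np

def ocs_objective(c, ebv, A, penalty):
    """Optimal contribution selection criterion: genetic gain c'EBV minus a
    penalty on the mean co-ancestry c'Ac of the contribution proportions c."""
    return c @ ebv - penalty * (c @ A @ c)

# Candidate 2 has the best EBV but is half-sib-related to candidate 1.
ebv = np.array([1.0, 2.0, 1.5])
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Crude grid search over non-negative contribution vectors summing to 1:
grid = [np.array([i, j, 1.0 - i - j])
        for i in np.linspace(0, 1, 21) for j in np.linspace(0, 1, 21)
        if i + j <= 1.0 + 1e-9]
best_lenient = max(grid, key=lambda c: ocs_objective(c, ebv, A, penalty=0.1))
best_strict = max(grid, key=lambda c: ocs_objective(c, ebv, A, penalty=5.0))
```

With a weak penalty all contributions go to the top-EBV animal; with a severe penalty the contributions spread across less related candidates, which is exactly the gain-versus-inbreeding trade-off the study explores.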
Aeroelastic Tailoring Study of N+2 Low Boom Supersonic Commercial Transport Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
The Lockheed Martin N+2 Low-boom Supersonic Commercial Transport (LSCT) aircraft was optimized in this study through the use of a multidisciplinary design optimization tool developed at the National Aeronautics and Space Administration Armstrong Flight Research Center. A total of 111 design variables were used in the first optimization run. Total structural weight was the objective function in this optimization run. Design requirements for strength, buckling, and flutter were selected as constraint functions during the first optimization run. The MSC Nastran code was used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses were based on the ZAERO code, and landing and ground control loads were computed using an in-house code. The weight penalty to satisfy all the design requirements during the first optimization run was 31,367 lb, a 9.4% increase from the baseline configuration. The second optimization run was prepared and based on the big-bang big-crunch algorithm. Six composite ply angles for the second and fourth composite layers were selected as discrete design variables for the second optimization run. Composite ply angle changes could not improve the weight of the N+2 LSCT aircraft configuration. However, this second optimization run creates more tolerance for the active and near-active strength constraint values for future weight optimization runs.
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. 
Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
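The selection loop described above pairs a search algorithm with a merit function over candidate sensor suites. A toy sketch follows, using a greedy stand-in for the genetic search; the fault-signature map, sensor names, and the coverage-count merit are invented for illustration and are far simpler than the fidelity/discrimination measure the strategy actually uses:

```python
def merit(suite, signatures):
    """Crude fidelity-of-detection measure: count the targeted faults that
    at least one sensor in the suite can detect."""
    return sum(any(s in suite for s in detectors)
               for detectors in signatures.values())

def greedy_select(signatures, sensors, k):
    """Greedy stand-in for the genetic algorithm: grow the suite one sensor
    at a time, always adding the sensor that raises the merit the most."""
    suite = set()
    for _ in range(k):
        best = max(sensors - suite, key=lambda s: merit(suite | {s}, signatures))
        suite.add(best)
    return suite

# Hypothetical fault -> detecting-sensors map:
signatures = {"fault_A": {"p1", "t1"}, "fault_B": {"t2"}, "fault_C": {"p1"}}
suite = greedy_select(signatures, {"p1", "t1", "t2"}, k=2)
# p1 covers faults A and C; t2 covers B, so two sensors detect all three faults.
```

The real strategy replaces this greedy loop with a genetic algorithm and replaces the coverage count with a merit that also scores fault-source discrimination via the inverse engine model.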
Wang, Ruiping; Jiang, Yonggen; Guo, Xiaoqin; Wu, Yiling; Zhao, Genming
2017-01-01
Objective The Chinese Center for Disease Control and Prevention developed the China Infectious Disease Automated-alert and Response System (CIDARS) in 2008. The CIDARS can detect outbreak signals in a timely manner but generates many false-positive signals, especially for diseases with seasonality. We assessed the influence of seasonality on infectious disease outbreak detection performance. Methods Chickenpox surveillance data in Songjiang District, Shanghai were used. The optimized early alert thresholds for chickenpox were selected according to three algorithm evaluation indexes: sensitivity (Se), false alarm rate (FAR), and time to detection (TTD). The performance of the selected thresholds was assessed using data external to the study period. Results The optimized early alert threshold for chickenpox during the epidemic season was the percentile P65, which demonstrated an Se of 93.33%, a FAR of 0%, and a TTD of 0 days. The optimized early alert threshold in the nonepidemic season was P50, demonstrating an Se of 100%, a FAR of 18.94%, and a TTD of 2.5 days. The performance evaluation demonstrated that the use of an optimized threshold adjusted for seasonality could reduce the FAR and shorten the TTD. Conclusions Selection of optimized early alert thresholds based on local infectious disease seasonality could improve the performance of the CIDARS. PMID:28728470
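A percentile-threshold alert of the kind evaluated above can be sketched as follows. The P65/P50 season split comes from the abstract; the helper name, the toy history, and the exact windowing of historical counts are assumptions:

```python
import numpy as np

def alert(current_count, historical_counts, in_epidemic_season):
    """Raise an outbreak signal when the current count exceeds the optimized
    percentile of historical counts: P65 in season, P50 out of season."""
    pct = 65 if in_epidemic_season else 50
    return bool(current_count > np.percentile(historical_counts, pct))

# Hypothetical weekly chickenpox counts from comparable past periods:
history = [2, 3, 1, 4, 2, 5, 3, 2, 6, 3]
```

Raising the percentile during the epidemic season lifts the alert bar, which is how the seasonal adjustment suppresses false alarms when counts are routinely high.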
Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy
Rosewater, David; Ferreira, Summer; Schoenwald, David; ...
2018-01-25
Battery energy storage systems (BESS) are a critical technology for integrating high-penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.
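A minimal SoC forecasting model of the kind being parameterized can be sketched as a charge-integration recurrence. The single efficiency parameter, the grid-search fit, and all numbers are illustrative assumptions, not the paper's formulations:

```python
def forecast_soc(soc0, powers, dt_h, capacity_kwh, efficiency):
    """Integrate a power schedule (kW, + charge / - discharge) into a
    state-of-charge trajectory clipped to [0, 1]."""
    soc, traj = soc0, []
    for p in powers:
        # Losses are applied on charge; discharge draws the full energy.
        delta = (p * efficiency if p > 0 else p) * dt_h / capacity_kwh
        soc = min(1.0, max(0.0, soc + delta))
        traj.append(soc)
    return traj

def fit_efficiency(soc0, powers, dt_h, capacity_kwh, measured):
    """Parameter selection from operational data: pick the efficiency that
    minimizes squared forecast error, here by a simple grid search."""
    candidates = [e / 100 for e in range(80, 101)]
    sse = lambda e: sum((m - f) ** 2 for m, f in
                        zip(measured, forecast_soc(soc0, powers, dt_h,
                                                   capacity_kwh, e)))
    return min(candidates, key=sse)
```

Fitting the model against a measured SoC trace and then scoring its forecast error on held-out data mirrors the paper's split between parameter optimization and accuracy assessment.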
Selection of optimal welding condition for GTA pulse welding in root-pass of V-groove butt joint
NASA Astrophysics Data System (ADS)
Yun, Seok-Chul; Kim, Jae-Woong
2010-12-01
In the manufacture of high-quality welds or pipelines, a full-penetration weld has to be made along the weld joint. Therefore, root-pass welding is very important, and its conditions have to be selected carefully. In this study, an experimental method for the selection of optimal welding conditions is proposed for gas tungsten arc (GTA) pulse welding in the root pass, which is done along the V-grooved butt-weld joint. This method uses response surface analysis, in which the width and height of the back bead are chosen as quality variables of the weld. The overall desirability function, which is the combined desirability function for the two quality variables, is used as the objective function to obtain the optimal welding conditions. In our experiments, the target values of back bead width and height are 4 mm and zero, respectively, for a V-grooved butt-weld joint of a 7-mm-thick steel plate. The optimal welding conditions yielded a back bead profile (bead width and height) of 4.012 mm and 0.02 mm. A series of welding tests revealed that a uniform, full-penetration weld bead can be obtained by adopting the optimal welding conditions determined according to the proposed method.
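The overall desirability function above combines per-response desirabilities into one objective. A standard Derringer-Suich-style sketch follows; the linear target transform and the tolerance widths are assumptions, since the study's exact transforms are not given:

```python
def desirability_target(y, target, tol):
    """Target-is-best desirability: 1 at the target, falling linearly to 0
    at +/- tol away from it."""
    return max(0.0, 1.0 - abs(y - target) / tol)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities; any zero response
    drives the overall desirability to zero."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Back-bead width 4.012 mm (target 4 mm, assumed tol 1 mm) and
# height 0.02 mm (target 0, assumed tol 0.5 mm), as reported above:
d_width = desirability_target(4.012, 4.0, 1.0)
d_height = desirability_target(0.02, 0.0, 0.5)
D = overall_desirability([d_width, d_height])
```

Maximizing D over the welding-condition response surface is what selects the optimal pulse-welding conditions.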
Tam, James; Ahmad, Imad A Haidar; Blasko, Andrei
2018-06-05
A four parameter optimization of a stability indicating method for non-chromophoric degradation products of 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), 1-stearoyl-sn-glycero-3-phosphocholine and 2-stearoyl-sn-glycero-3-phosphocholine was achieved using a reverse phase liquid chromatography-charged aerosol detection (RPLC-CAD) technique. Using the hydrophobic subtraction model of selectivity, a core-shell, polar embedded RPLC column was selected followed by gradient-temperature optimization, resulting in ideal relative peak placements for a robust, stability indicating separation. The CAD instrument parameters, power function value (PFV) and evaporator temperature were optimized for lysophosphatidylcholines to give UV absorbance detector-like linearity performance within a defined concentration range. The two lysophosphatidylcholines gave the same response factor in the selected conditions. System specific power function values needed to be set for the two RPLC-CAD instruments used. A custom flow-divert profile, sending only a portion of the column effluent to the detector, was necessary to mitigate detector response drifting effects. The importance of the PFV optimization for each instrument of identical build and how to overcome recovery issues brought on by the matrix effects from the lipid-RP stationary phase interaction is reported. Copyright © 2018 Elsevier B.V. All rights reserved.
Selection of optimal complexity for ENSO-EMR model by minimum description length principle
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.
2012-12-01
One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for constructing a model of the evolution operator. Since we usually deal with high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selecting an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. In fact, finding an optimal projection is a significant part of model selection because, on the one hand, the transformation of the data to some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e. the representation of the evolution operator, so we should find an optimal structure of the model together with the vector of phase variables. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to the optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E, Vol. 80, p. 046207, 2009. Kravtsov, S., Kondrashov, D., Ghil, M., 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18 (21): 4404-4424. Kondrashov, D., Kravtsov, S., Robertson, A. W., and Ghil, M., 2005: A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.
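The minimum-description-length criterion this record relies on can be sketched in a few lines: fit models of increasing complexity and keep the one whose total description length (data misfit plus parameter cost) is smallest. The sketch below is a toy polynomial-order selection, not the EMR implementation; the two-part MDL formula and all data values are illustrative assumptions.

```python
import math
import random

def fit_poly(xs, ys, order):
    # Least-squares polynomial fit via normal equations with partial
    # pivoting (adequate for the toy sizes used here).
    m = order + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

def rss(xs, ys, coef):
    return sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

def mdl_score(residual_ss, n, k):
    # Two-part description length (up to additive constants): the cost of
    # encoding the residuals plus the cost of encoding k parameters.
    return 0.5 * n * math.log(residual_ss / n) + 0.5 * k * math.log(n)

random.seed(1)
xs = [i / 20 for i in range(40)]
ys = [1.0 - 2.0 * x + 3.0 * x ** 2 + random.gauss(0.0, 0.05) for x in xs]

scores = {k: mdl_score(rss(xs, ys, fit_poly(xs, ys, k)), len(xs), k + 1)
          for k in range(6)}
best = min(scores, key=scores.get)
print(best)
```

With noisy samples of a quadratic, the description length drops sharply up to order 2, after which the parameter penalty discourages higher orders.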
2015-07-06
5b. GRANT NUMBER: AFOSR FA9550-12-1-0154. 6. AUTHOR(S): Shabbir Ahmed and Santanu S. Dey. ...standard mixed-integer programming (MIP) formulations of selective optimization problems. While such formulations can be attacked by commercial...
Pharmacodynamics of nicotine: implications for rational treatment of nicotine addiction.
Benowitz, N L
1991-05-01
Rational treatment of the pharmacologic aspects of tobacco addiction includes nicotine substitution therapy. Understanding the pharmacodynamics of nicotine and its role in the addiction process provides a basis for rational therapeutic intervention. Pharmacodynamic considerations are discussed in relation to the elements of smoking cessation therapy: setting objectives, selecting appropriate medication and dosing form, selecting the optimal doses and dosage regimens, assessing therapeutic outcome, and adjusting therapy to optimize benefits and minimize risks.
Information Fusion for High Level Situation Assessment and Prediction
2007-03-01
procedure includes deciding a sensor set that achieves the optimal trade-off between its cost and benefit, activating the identified sensors, integrating...and effective decision can be made by dynamic inference based on selecting a subset of sensors with the optimal trade-off between their cost and...first step is achieved by designing a sensor selection criterion that represents the trade-off between the sensor benefit and sensor cost. This is then
Method for determining gene knockouts
Maranas, Costas D [Port Matilda, PA; Burgard, Anthony R [State College, PA; Pharkya, Priti [State College, PA
2011-09-27
A method for determining candidates for gene deletions and additions using a model of a metabolic network associated with an organism, where the model includes a plurality of metabolic reactions defining metabolite relationships. The method includes selecting a bioengineering objective for the organism, selecting at least one cellular objective, forming an optimization problem that couples the at least one cellular objective with the bioengineering objective, and solving the optimization problem to yield at least one candidate.
Method for determining gene knockouts
Maranas, Costa D; Burgard, Anthony R; Pharkya, Priti
2013-06-04
A method for determining candidates for gene deletions and additions using a model of a metabolic network associated with an organism, where the model includes a plurality of metabolic reactions defining metabolite relationships. The method includes selecting a bioengineering objective for the organism, selecting at least one cellular objective, forming an optimization problem that couples the at least one cellular objective with the bioengineering objective, and solving the optimization problem to yield at least one candidate.
NASA Astrophysics Data System (ADS)
Salehi, Hassan S.; Li, Hai; Kumavor, Patrick D.; Merkulov, Aleksey; Sanders, Melinda; Brewer, Molly; Zhu, Quing
2015-03-01
In this paper, wavelength selection for multispectral photoacoustic/ultrasound tomography was optimized to obtain accurate images of hemoglobin oxygen saturation (sO2) in vivo. Although wavelengths can be selected by theoretical methods, in practice the accuracy of reconstructed images is affected by wavelength-specific and system-specific factors such as laser source power and ultrasound transducer sensitivity. By performing photoacoustic spectroscopy of mouse tumor models at 14 different wavelengths between 710 and 840 nm, we used selection criteria to identify a wavelength set that most accurately reproduced the results obtained using all 14 wavelengths. In clinical studies, the optimal wavelength set was successfully used to image human ovaries in vivo and noninvasively. Although these results are specific to our co-registered photoacoustic/ultrasound imaging system, the approach we developed can be applied to other functional photoacoustic and optical imaging systems.
Varanasi, Jhansi L; Sinha, Pallavi; Das, Debabrata
2017-05-01
To selectively enrich an electrogenic mixed consortium capable of utilizing dark fermentative effluents as substrates in microbial fuel cells, and to further enhance the power outputs by optimizing influential anodic operational parameters. A maximum power density of 1.4 W/m3 was obtained by an enriched mixed electrogenic consortium in microbial fuel cells using acetate as substrate. This was further increased to 5.43 W/m3 by optimization of influential anodic parameters. Utilizing dark fermentative effluents as substrates, the maximum power densities ranged from 5.2 to 6.2 W/m3, with an average COD removal efficiency of 75% and a coulombic efficiency of 10.6%. A simple strategy is provided for the selective enrichment of electrogenic bacteria that can be used in microbial fuel cells for generating power from various dark fermentative effluents.
Zhang, Xin; Li, Weiping; Yao, Jiannian; Zhan, Chuanlang
2016-06-22
Carrier mobility is a vital factor determining the electrical performance of organic solar cells. In this paper we report a high-efficiency nonfullerene organic solar cell (NF-OSC) with a power conversion efficiency of 6.94 ± 0.27%, obtained by optimizing hole and electron transport through judicious selection of the polymer donor and engineering of the film morphology and cathode interlayer: (1) a combination of solvent annealing and solvent vapor annealing optimizes the film morphology and hence both hole and electron mobilities, leading to a trade-off between fill factor and short-circuit current density (Jsc); (2) judicious selection of the polymer donor affords higher hole and electron mobilities, giving a higher Jsc; and (3) engineering the cathode interlayer affords a higher electron mobility, which leads to a significant increase in electrical current generation and ultimately in the power conversion efficiency (PCE).
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1992-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Path planning during combustion mode switch
Jiang, Li; Ravi, Nikhil
2015-12-29
Systems and methods are provided for transitioning between a first combustion mode and a second combustion mode in an internal combustion engine. A current operating point of the engine is identified, and a target operating point for the internal combustion engine in the second combustion mode is also determined. A predefined optimized transition operating point is selected from memory. While operating in the first combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the selected optimized transition operating point. When the engine is operating at the selected optimized transition operating point, the combustion mode is switched from the first combustion mode to the second combustion mode. While operating in the second combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the target operating point.
Dadan, Magdalena; Rybak, Katarzyna; Wiktor, Artur; Nowacka, Malgorzata; Zubernik, Joanna; Witrowa-Rajchert, Dorota
2018-01-15
Parsley leaves contain a high amount of bioactive components (especially lutein), therefore it is crucial to select the most appropriate pre-treatment and drying conditions, in order to obtain high quality of dried leaves, which was the aim of this study. The optimization was done using response surface methodology (RSM) for the following factors: microwave power (100, 200, 300W), air temperature (20, 30, 40°C) and pre-treatment variant (ultrasound, steaming and dipping as a control). Total phenolic content (TPC), antioxidant activity, chlorophyll and lutein contents (using UPLC-PDA) were determined in dried leaves. The analysed responses were dependent on the applied drying parameters and the pre-treatment type. The possibility of ultrasound and steam treatment application was proven and the optimal processing conditions were selected. Copyright © 2017 Elsevier Ltd. All rights reserved.
Riedel, Natalie; Müller, Andreas; Ebener, Melanie
2015-05-01
To investigate whether aging employees' selection, optimization, and compensation (SOC) strategies were associated with work ability over and above job demand and control variables, as well as across professions. Multivariable linear regressions were conducted using a representative sample of German employees born in 1959 and 1965 (N = 6057). SOC was found to have an independent effect on work ability, although the associations of job demands and control variables with work ability were more prominent. SOC tended to enhance the positive association between decision authority and work ability. Individual strategies of selection, optimization, and compensation can be considered psychosocial resources that add up to better work ability and complement prevention programs. Workplace interventions should address job demands and control to maintain older employees' work ability in times of working-population shrinkage.
Lundgren, Johanna; Salomonsson, John; Gyllenhaal, Olle; Johansson, Erik
2007-06-22
Metoprolol and a number of related amino alcohols and similar analytes have been chromatographed on aminopropyl (APS) and ethylpyridine (EPS) silica columns. The mobile phase was carbon dioxide with methanol as modifier; no amine additive was present. Optimal isocratic conditions for the selectivity were evaluated based on experiments using design of experiments, with a central composite circumscribed model for each column. Factors were column temperature, back-pressure and % (v/v) of modifier; the responses were retention and selectivity versus metoprolol. The % of modifier mainly controlled the retention on both columns, but pressure and temperature could also be important for optimizing the selectivity between the amino alcohols. With respect to selectivity, the compounds could be divided into four and five groups on the two columns. Furthermore, on the aminopropyl silica the analytes were more spread out, whereas on the ethylpyridine silica, due to its aromaticity, retention and selectivity were closer. For optimal conditions the column temperature and back-pressure should be high and the modifier concentration low. A comparison of the selectivity using optimized conditions shows a few switches of retention order between the two columns. On aminopropyl silica an aldehyde failed to elute owing to Schiff-base formation. Peak symmetry and column efficiency were briefly studied for some structurally close analogues. This revealed some activity from the columns that affected analytes with less protected amino groups (a methyl group instead of isopropyl). The tailing was more marked on the ethylpyridine column, even with the more bulky alkyl substituents. The plate number N was a better measure than the asymmetry factor, since some analyte peaks broadened without serious deterioration of symmetry compared with their homologues.
Analysis and optimization of hybrid electric vehicle thermal management systems
NASA Astrophysics Data System (ADS)
Hamut, H. S.; Dincer, I.; Naterer, G. F.
2014-02-01
In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, 28 ¢/h and 77.3 mPts/h, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
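The decision step this record describes (pick one point from a Pareto frontier) can be illustrated with a common simplification of LINMAP: normalize the objectives over the nondominated set and choose the design closest to the ideal point. All candidate values below are invented, loosely echoing the baseline figures from the abstract.

```python
# Toy design candidates: (exergy efficiency [max], cost rate [min],
# environmental impact rate [min]).
designs = {
    "baseline":   (0.29, 28.0, 77.3),
    "econ-opt":   (0.33, 26.6, 88.1),
    "enviro-opt": (0.33, 35.6, 73.4),
    "balanced":   (0.32, 28.0, 76.0),
    "low-eff":    (0.30, 27.0, 75.0),
}

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in one.
    ge = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    gt = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return ge and gt

pareto = {n: v for n, v in designs.items()
          if not any(dominates(w, v) for w in designs.values())}

# Normalize each objective to [0, 1] over the Pareto set and pick the
# point nearest the ideal (best value of every objective simultaneously).
los = [min(v[i] for v in pareto.values()) for i in range(3)]
his = [max(v[i] for v in pareto.values()) for i in range(3)]

def dist_to_ideal(v):
    total = 0.0
    for i in range(3):
        x = (v[i] - los[i]) / (his[i] - los[i])
        total += ((1 - x) if i == 0 else x) ** 2   # objective 0 is maximized
    return total ** 0.5

choice = min(pareto, key=lambda n: dist_to_ideal(pareto[n]))
print(choice)
# -> balanced
```

The dominated baseline drops out of the Pareto set, and the compromise design nearest the ideal corner wins over the two single-objective extremes.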
A Decision Support System for Solving Multiple Criteria Optimization Problems
ERIC Educational Resources Information Center
Filatovas, Ernestas; Kurasova, Olga
2011-01-01
In this paper, multiple criteria optimization has been investigated. A new decision support system (DSS) has been developed for interactive solving of multiple criteria optimization problems (MOPs). The weighted-sum (WS) approach is implemented to solve the MOPs. The MOPs are solved by selecting different weight coefficient values for the criteria…
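The weighted-sum scalarization at the core of such a DSS is simple to state: multiply each (minimized) criterion by a weight chosen by the decision maker and rank candidates by the sum. A minimal sketch with made-up candidates:

```python
def weighted_sum(objectives, weights):
    # Scalarize a multi-objective point; all criteria are minimized here.
    return sum(w * f for w, f in zip(weights, objectives))

# Candidate solutions scored on two minimized criteria (e.g. cost, delay).
candidates = {
    "A": (1.0, 9.0),
    "B": (4.0, 4.0),
    "C": (9.0, 1.0),
}

# Different weight vectors, chosen interactively by the decision maker,
# steer the method toward different compromise solutions.
for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    pick = min(candidates, key=lambda n: weighted_sum(candidates[n], w))
    print(w, pick)
# prints A, then B, then C for the three weight vectors
```

Re-solving with new weights is cheap, which is what makes the approach suitable for interactive exploration of the trade-off surface.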
More Optimism About Future Events with Relative Left Hemisphere Activation.
ERIC Educational Resources Information Center
Drake, Roger A.
Unrealistic personal optimism is the perception that undesirable events are less likely and desirable events are more likely to happen to oneself than they are to happen to other similar people. Three experiments were performed to study the relationships among personal optimism, perceived control, and selective activation of the cerebral…
Spartalis, Michael; Tzatzaki, Eleni; Spartalis, Eleftherios; Damaskos, Christos; Athanasiou, Antonios; Livanis, Efthimios; Voudris, Vassilis
2017-01-01
Cardiac resynchronization therapy (CRT) has become a mainstay in the management of heart failure. Up to one-third of patients who received resynchronization devices do not experience the full benefits of CRT. The clinical factors influencing the likelihood to respond to the therapy are wide QRS complex, left bundle branch block, female gender, non-ischaemic cardiomyopathy (highest responders), male gender, ischaemic cardiomyopathy (moderate responders) and narrow QRS complex, non-left bundle branch block (lowest, non-responders). This review provides a conceptual description of the role of echocardiography in the optimization of CRT. A literature survey was performed using PubMed database search to gather information regarding CRT and echocardiography. A total of 70 studies met selection criteria for inclusion in the review. Echocardiography helps in the initial selection of the patients with dyssynchrony, which will benefit the most from optimal biventricular pacing and provides a guide to left ventricular (LV) lead placement during implantation. Different echocardiographic parameters have shown promise and can offer the possibility of patient selection, response prediction, lead placement optimization strategies and optimization of device configurations. LV ejection fraction along with specific electrocardiographic criteria remains the cornerstone of CRT patient selection. Echocardiography is a non-invasive, cost-effective, highly reproducible method with certain limitations and accuracy that is affected by measurement errors. Echocardiography can assist with the identification of the appropriate electromechanical substrate of CRT response and LV lead placement. The targeted approach can improve the haemodynamic response, as also the patient-specific parameters estimation.
Spartalis, Michael; Tzatzaki, Eleni; Spartalis, Eleftherios; Damaskos, Christos; Athanasiou, Antonios; Livanis, Efthimios; Voudris, Vassilis
2017-01-01
Background: Cardiac resynchronization therapy (CRT) has become a mainstay in the management of heart failure. Up to one-third of patients who received resynchronization devices do not experience the full benefits of CRT. The clinical factors influencing the likelihood to respond to the therapy are wide QRS complex, left bundle branch block, female gender, non-ischaemic cardiomyopathy (highest responders), male gender, ischaemic cardiomyopathy (moderate responders) and narrow QRS complex, non-left bundle branch block (lowest, non-responders). Objective: This review provides a conceptual description of the role of echocardiography in the optimization of CRT. Method: A literature survey was performed using PubMed database search to gather information regarding CRT and echocardiography. Results: A total of 70 studies met selection criteria for inclusion in the review. Echocardiography helps in the initial selection of the patients with dyssynchrony, which will benefit the most from optimal biventricular pacing and provides a guide to left ventricular (LV) lead placement during implantation. Different echocardiographic parameters have shown promise and can offer the possibility of patient selection, response prediction, lead placement optimization strategies and optimization of device configurations. Conclusion: LV ejection fraction along with specific electrocardiographic criteria remains the cornerstone of CRT patient selection. Echocardiography is a non-invasive, cost-effective, highly reproducible method with certain limitations and accuracy that is affected by measurement errors. Echocardiography can assist with the identification of the appropriate electromechanical substrate of CRT response and LV lead placement. The targeted approach can improve the haemodynamic response, as also the patient-specific parameters estimation. PMID:29387277
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
Surrogate-based Analysis and Optimization
NASA Technical Reports Server (NTRS)
Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin
2005-01-01
A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time-consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, and techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
Shape and Reinforcement Optimization of Underground Tunnels
NASA Astrophysics Data System (ADS)
Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang
Design of the support system and selection of an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, selecting the shape and designing the support are mainly based on the designer's judgment and experience. Both of these problems can be viewed as material distribution problems, in which one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved useful in solving these kinds of problems in structural design. Recently, the application of topology optimization techniques to reinforcement design around underground excavations has been studied by several researchers. In this paper a three-phase material model is introduced, switching between normal rock, reinforced rock, and void. Using such a material model, the problems of shape and reinforcement design can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique is extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. The validity and capability of the proposed approach are investigated through several examples.
Optimization of freeform surfaces using intelligent deformation techniques for LED applications
NASA Astrophysics Data System (ADS)
Isaac, Annie Shalom; Neumann, Cornelius
2018-04-01
For many years, optical designers have had great interest in designing efficient optimization algorithms that bring significant improvement to their initial designs. However, the optimization is limited by the large number of parameters present in Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation can be overcome by an indirect technique known as optimization using free-form deformation (FFD). In this approach, the optical surface is placed inside a cubical grid, and the vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no direct relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, a problem faced by any optimization technique that creates freeform surfaces. This research therefore addresses these two issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street-lighting lens and a stop lamp for automobiles.
Mixture optimization for mixed gas Joule-Thomson cycle
NASA Astrophysics Data System (ADS)
Detlor, J.; Pfotenhauer, J.; Nellis, G.
2017-12-01
An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters, and expands on prior research by exploring higher heat-rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature, for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the fraction of the heat exchanger that operates in the two-phase range, in order to begin the process of selecting a mixture for experimental investigation.
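The maximin criterion described above (choose the composition whose worst-case isothermal enthalpy change over the load-to-supply range is largest) can be sketched with a brute-force grid search. The component "property curves" below are invented Gaussian placeholders; a real tool would use equation-of-state property data, so only the optimization structure is meaningful here.

```python
import math

# Hypothetical isothermal enthalpy-change curves for three components over
# the load-to-supply temperature range (arbitrary units).
COMPONENTS = ("methane", "ethane", "propane")
PEAK = {"methane": 170.0, "ethane": 230.0, "propane": 290.0}
T_GRID = range(110, 310, 20)

def dh_pure(component, T):
    return math.exp(-((T - PEAK[component]) / 60.0) ** 2)

def min_dh(fracs):
    # Ideal-mixing assumption: the mixture curve is the mole-fraction-
    # weighted sum of the pure curves. The cycle is limited by its worst
    # temperature, so a mixture is scored by the minimum over the range.
    return min(sum(fracs[c] * dh_pure(c, T) for c in COMPONENTS)
               for T in T_GRID)

# Brute-force search over the composition simplex in 5% steps, keeping the
# composition that maximizes the minimum enthalpy change (a maximin search).
best, best_score = None, -1.0
n = 20
for i in range(n + 1):
    for j in range(n + 1 - i):
        mix = {"methane": i / n, "ethane": j / n, "propane": (n - i - j) / n}
        score = min_dh(mix)
        if score > best_score:
            best, best_score = mix, score
print(best, round(best_score, 3))
```

Any blend beats every pure fluid here because each pure curve collapses at one end of the temperature range, which is exactly the effect that motivates mixed-gas JT cycles.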
Multiobjective immune algorithm with nondominated neighbor-based selection.
Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng
2008-01-01
Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization, using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only the minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis, based on three performance metrics including the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective and that NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study of NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well.
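The two mechanisms named in the abstract, crowding-distance-based selection of isolated individuals and proportional cloning, can be sketched as follows. This is a generic NSGA-II-style crowding distance plus a simple cloning rule, not the exact NNIA operators; capping the infinite boundary distances before cloning is an assumption made to keep the shares finite.

```python
def crowding_distance(front):
    # front: list of objective vectors (all minimized) on a nondominated
    # front. Returns one distance per point; larger means more isolated,
    # and boundary points are assigned infinity.
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            dist[order[pos]] += (front[order[pos + 1]][k]
                                 - front[order[pos - 1]][k]) / (hi - lo)
    return dist

def proportional_clones(front, total):
    # Clone less-crowded (more isolated) individuals more often. Infinite
    # boundary distances are capped (an assumption) so shares stay finite.
    d = crowding_distance(front)
    finite = [x for x in d if x != float("inf")]
    cap = 2 * max(finite) if finite else 1.0
    d = [min(x, cap) for x in d]
    total_d = sum(d)
    return [max(1, round(total * x / total_d)) for x in d]

front = [(0, 10), (2, 6), (5, 5), (10, 0)]
print(crowding_distance(front))
print(proportional_clones(front, 16))
```

The isolated boundary points receive the largest clone counts, which is the "pay more attention to less-crowded regions" behavior the record describes.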
An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship
NASA Astrophysics Data System (ADS)
Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman
2018-01-01
The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered for reducing transport cost. The cost of transporting CPO from a production area to the CPO factory located at the port of destination may affect the level of CPO prices and the number of demands. In order to maintain the availability of CPO, a strategy is required that minimizes the cost of transport. In this study, the strategy is to select the kind of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the kind of CPO ship that minimizes transport cost. Because the selection of a ship is subject to randomness, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.
Optimization of single photon detection model based on GM-APD
NASA Astrophysics Data System (ADS)
Chen, Yu; Yang, Yi; Hao, Peiyu
2017-11-01
High-precision laser ranging over one hundred kilometers requires a detector with very strong detection ability for very weak light. At present, the Geiger mode of the avalanche photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving the photon detection efficiency, and design optimization requires a good model. In this paper, we examine the existing Poisson-distribution model and consider the important detector parameters of dark count rate, dead time, quantum efficiency, and so on. We improve the optimization of the detection model and select the appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results, and the rationality of the model is verified. The model has reference value in engineering applications.
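A minimal version of the Poisson detection model mentioned in the abstract: the GM-APD fires if at least one primary event (a detected signal photon or a dark count) occurs within the range gate, so the firing probability is 1 − exp(−μ). Dead time is ignored in this sketch, and all parameter values are illustrative.

```python
import math

def detection_prob(n_signal, quantum_eff, dark_rate_hz, gate_ns):
    # Poisson model of a Geiger-mode APD: the diode fires if at least one
    # primary event occurs in the range gate, so P(fire) = 1 - exp(-mu).
    n_dark = dark_rate_hz * gate_ns * 1e-9   # mean dark events per gate
    mu = quantum_eff * n_signal + n_dark     # mean primary events per gate
    return 1.0 - math.exp(-mu)

def false_alarm_prob(dark_rate_hz, gate_ns):
    # Probability the gate fires on dark counts alone (no signal present).
    return 1.0 - math.exp(-dark_rate_hz * gate_ns * 1e-9)

# Sweep quantum efficiency for a mean 1-photon echo in a 100 ns gate with
# a 10 kHz dark count rate (illustrative values only).
for eta in (0.1, 0.2, 0.4):
    print(eta, round(detection_prob(1.0, eta, 1e4, 100), 4))
```

Parameter selection then becomes a trade study: raising quantum efficiency (e.g. via excess bias) raises the detection probability, but in a fuller model it would also raise the dark count rate and hence the false-alarm probability.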
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
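Taken alone, the polling-switch-selection subproblem is a set-cover instance: every flow must traverse at least one polled switch. The sketch below is a generic greedy cover, not the paper's ILP model or its tree/heuristic algorithms, and the topology and flow sets are invented.

```python
def greedy_polling_switches(flows, switch_flows):
    # flows: set of flow ids that must be measured; switch_flows: maps each
    # candidate polling switch to the set of flows traversing it. Greedy
    # set cover: repeatedly poll the switch observing the most
    # still-uncovered flows until every flow is covered.
    uncovered, chosen = set(flows), []
    while uncovered:
        best = max(switch_flows, key=lambda s: len(switch_flows[s] & uncovered))
        gain = switch_flows[best] & uncovered
        if not gain:
            raise ValueError("some flows traverse no candidate switch")
        chosen.append(best)
        uncovered -= gain
    return chosen

switch_flows = {
    "s1": {"f1", "f2", "f3"},
    "s2": {"f3", "f4"},
    "s3": {"f4", "f5"},
    "s4": {"f5"},
}
print(greedy_polling_switches({"f1", "f2", "f3", "f4", "f5"}, switch_flows))
# -> ['s1', 's3']
```

The paper's joint formulation goes further by also re-routing flows so that fewer polled switches suffice; the greedy step above only chooses switches for fixed routes.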
Jiménez-Moreno, Ester; Montalvillo-Jiménez, Laura; Santana, Andrés G; Gómez, Ana M; Jiménez-Osés, Gonzalo; Corzana, Francisco; Bastida, Agatha; Jiménez-Barbero, Jesús; Cañada, Francisco Javier; Gómez-Pinto, Irene; González, Carlos; Asensio, Juan Luis
2016-05-25
Development of strong and selective binders from promiscuous lead compounds represents one of the most expensive and time-consuming tasks in drug discovery. We herein present a novel fragment-based combinatorial strategy for the optimization of multivalent polyamine scaffolds as DNA/RNA ligands. Our protocol provides a quick access to a large variety of regioisomer libraries that can be tested for selective recognition by combining microdialysis assays with simple isotope labeling and NMR experiments. To illustrate our approach, 20 small libraries comprising 100 novel kanamycin-B derivatives have been prepared and evaluated for selective binding to the ribosomal decoding A-Site sequence. Contrary to the common view of NMR as a low-throughput technique, we demonstrate that our NMR methodology represents a valuable alternative for the detection and quantification of complex mixtures, even integrated by highly similar or structurally related derivatives, a common situation in the context of a lead optimization process. Furthermore, this study provides valuable clues about the structural requirements for selective A-site recognition.
Hoffmann, Thomas J; Zhan, Yiping; Kvale, Mark N; Hesselson, Stephanie E; Gollub, Jeremy; Iribarren, Carlos; Lu, Yontao; Mei, Gangwu; Purdy, Matthew M; Quesenberry, Charles; Rowell, Sarah; Shapero, Michael H; Smethurst, David; Somkin, Carol P; Van den Eeden, Stephen K; Walter, Larry; Webster, Teresa; Whitmer, Rachel A; Finn, Andrea; Schaefer, Catherine; Kwok, Pui-Yan; Risch, Neil
2011-12-01
Four custom Axiom genotyping arrays were designed for a genome-wide association (GWA) study of 100,000 participants from the Kaiser Permanente Research Program on Genes, Environment and Health. The array optimized for individuals of European race/ethnicity was previously described. Here we detail the development of three additional microarrays optimized for individuals of East Asian, African American, and Latino race/ethnicity. For these arrays, we decreased redundancy of high-performing SNPs to increase SNP capacity. The East Asian array was designed using greedy pairwise SNP selection. However, removing SNPs from the target set based on imputation coverage is more efficient than pairwise tagging. Therefore, we developed a novel hybrid SNP selection method for the African American and Latino arrays utilizing rounds of greedy pairwise SNP selection, followed by removal from the target set of SNPs covered by imputation. The arrays provide excellent genome-wide coverage and are valuable additions for large-scale GWA studies. Copyright © 2011 Elsevier Inc. All rights reserved.
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. Over the past several years, AFSA has been successfully applied in many research and application areas. The behaviors of the fishes have a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select these behaviors is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. It makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
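The log-linear behavior selection idea above can be sketched as a softmax over feature-weighted scores: each candidate behavior gets a score linear in its features, and selection probability is the normalized exponential of that score. The behaviors, feature names, and weights below are invented for illustration; they are not the paper's model.

```python
import math
import random

# Illustrative log-linear behavior selection: score each behavior linearly
# in its features, exponentiate, normalize, then sample.

def select_behavior(behaviors, features, weights, rng=random.random):
    scores = {b: sum(weights[f] * features[b][f] for f in weights)
              for b in behaviors}
    m = max(scores.values())  # subtract max for numerical stability
    exps = {b: math.exp(scores[b] - m) for b in behaviors}
    z = sum(exps.values())
    probs = {b: e / z for b, e in exps.items()}
    # Sample one behavior according to the log-linear probabilities.
    r, acc = rng(), 0.0
    for b in behaviors:
        acc += probs[b]
        if r <= acc:
            return b, probs
    return behaviors[-1], probs

behaviors = ["prey", "swarm", "follow"]
features = {"prey": {"gain": 0.9, "crowding": 0.1},
            "swarm": {"gain": 0.4, "crowding": 0.8},
            "follow": {"gain": 0.6, "crowding": 0.3}}
weights = {"gain": 2.0, "crowding": -1.0}
choice, probs = select_behavior(behaviors, features, weights)
print(choice, {b: round(p, 3) for b, p in probs.items()})
```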
Selecting a proper design period for heliostat field layout optimization using Campo code
NASA Astrophysics Data System (ADS)
Saghafifar, Mohammad; Gadalla, Mohamed
2016-09-01
In this paper, different approaches are considered for calculating the cosine factor, which is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization; consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, so it is more reliable to select one of the recommended time-averaged methods to optimize the field layout. The optimum annual weighted efficiencies for the small, medium, and large heliostat fields containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam
2014-01-01
This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO) distinguished by the hybrid strategy of the search space exploration proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as one of the instances of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature.
Portfolio selection and asset pricing under a benchmark approach
NASA Astrophysics Data System (ADS)
Platen, Eckhard
2006-10-01
The paper presents classical and new results on portfolio optimization, as well as the fair pricing concept for derivative pricing under the benchmark approach. The growth optimal portfolio is shown to be a central object in a market model. It links asset pricing and portfolio optimization. The paper argues that the market portfolio is a proxy of the growth optimal portfolio. By choosing the drift of the discounted growth optimal portfolio as the parameter process, one obtains realistic theoretical market dynamics.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, turning the traditional shutter's frequency spectrum into a wider frequency band so as to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal-code search is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and that the restored image offers better subjective quality and superior objective evaluation values.
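A toy version of the genetic code search described above: fitness rewards binary shutter codes whose DFT magnitude has no deep nulls, which keeps deconvolution well-posed. Code length, population size, and operators are illustrative choices, not the paper's criterion.

```python
import random
import cmath

# Fitness: the minimum DFT magnitude of the binary shutter code. A plain
# box exposure has exact spectral nulls; good coded exposures avoid them.

def dft_min_mag(code):
    n = len(code)
    mags = []
    for k in range(n):
        s = sum(c * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, c in enumerate(code))
        mags.append(abs(s))
    return min(mags)

def genetic_search(n=16, ones=8, pop=30, gens=40, seed=1):
    rng = random.Random(seed)
    def individual():
        bits = [1] * ones + [0] * (n - ones)
        rng.shuffle(bits)
        return bits
    population = [individual() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=dft_min_mag, reverse=True)
        parents = population[:pop // 2]       # elitism: keep the best half
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]         # one-point crossover
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = parents + children
    return max(population, key=dft_min_mag)

best = genetic_search()
print(best, round(dft_min_mag(best), 3))
```

A half-open box code such as `[1]*8 + [0]*8` has exact DFT zeros, so any reasonable search result should beat it on this fitness.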
NASA Astrophysics Data System (ADS)
Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan
2011-11-01
The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search, for exploration and exploitation respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solution selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate in detail its single-objective optimization, optimization effectiveness (indexed by the S-metric and C-metric), and optimization efficiency (indexed by computational burden and CPU time). The BBA outperformed its competitors in almost all the quantitative indices. Hence the overall scheme, and particularly the search-history-adapted global search strategy, was validated.
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M
2018-04-12
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods only guide content-dependent filter selection, where the set of spectral reflectances to be recovered is known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array, yielding an efficient demosaicing order and accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits power-law behavior. Using this property, we propose power-law-based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
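The power-law property the pipeline exploits can be illustrated by fitting the slope of a power spectrum on a log-log scale. The synthetic random-walk signal below, whose spectrum falls off roughly as 1/f², is an assumed stand-in for measured reflectance data.

```python
import numpy as np

# Fit the power-law exponent of a signal's spectrum by least squares over
# the low-frequency bins of a log-log periodogram (DC bin excluded).

def power_law_slope(signal, n_bins=100):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    k = np.arange(1, n_bins + 1)          # low-frequency bins, skip DC
    slope, _ = np.polyfit(np.log(k), np.log(spectrum[k]), 1)
    return slope

rng = np.random.default_rng(0)
# A random walk (cumulative sum of white noise) has a roughly 1/f^2
# spectrum, i.e. a log-log slope near -2.
walk = np.cumsum(rng.standard_normal(4096))
print(round(power_law_slope(walk), 2))
```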
Beam-energy-spread minimization using cell-timing optimization
NASA Astrophysics Data System (ADS)
Rose, C. R.; Ekdahl, C.; Schulze, M.
2012-04-01
Beam energy spread, and the related beam motion, increase the difficulty of tuning for multipulse radiographic experiments at the dual-axis radiographic hydrodynamic test facility's axis-II linear induction accelerator (LIA). In this article, we describe an optimization method that reduces the energy spread by adjusting the timing of the cell voltages (both unloaded and loaded), either advancing or retarding them, such that the injector voltage and summed cell voltages in the LIA produce a flatter energy profile. We developed a nonlinear optimization routine that accepts as inputs the 74 cell-voltage waveforms, the injector voltage waveform, and the beam current waveform. It optimizes cell timing per user-selected group of cells and outputs one timing adjustment for each selected group. To verify the theory, we acquired and present data for both unloaded and loaded cell-timing optimizations. For the unloaded cells, the energy spread was reduced by 34% and 31% on two shots relative to the preoptimization baseline. For the loaded-cell case, the measured energy spread was reduced by 49% compared to baseline.
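A toy version of the cell-timing idea above: the total accelerating voltage is a sum of shifted cell waveforms plus the injector waveform, and a search over per-group timing offsets flattens the summed profile over the beam window. The waveform shapes, window, and two-group split are invented for illustration; the real routine optimizes 74 measured waveforms.

```python
import numpy as np

# Flatness metric: standard deviation of the summed voltage over the
# beam window. Smaller is flatter, i.e. less energy spread.

def flatness(total, window):
    return np.std(total[window])

t = np.arange(200)
injector = 1.0 / (1.0 + np.exp(-(t - 40) / 5.0))   # rising injector edge
cell = 1.0 / (1.0 + np.exp(-(t - 60) / 8.0))       # slower cell edge
window = slice(80, 160)

# Grid search over timing shifts for two hypothetical cell groups.
best = None
for shift_a in range(-10, 11):
    for shift_b in range(-10, 11):
        total = injector + np.roll(cell, shift_a) + np.roll(cell, shift_b)
        spread = flatness(total, window)
        if best is None or spread < best[0]:
            best = (spread, shift_a, shift_b)
print(best[1:], round(best[0], 4))
```

Because the zero-shift pair is inside the search grid, the best spread can never be worse than the unshifted baseline.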
Tree crickets optimize the acoustics of baffles to exaggerate their mate-attraction signal
Balakrishnan, Rohini; Robert, Daniel
2017-01-01
Object manufacture in insects is typically inherited and believed to be highly stereotyped. Optimization, the ability to select the functionally best material and modify it appropriately for a specific function, implies flexibility and is usually thought to be incompatible with inherited behaviour. Here, we show that tree crickets optimize acoustic baffles, objects that are used to increase the effective loudness of mate-attraction calls. We quantified the acoustic efficiency of all baffles within the naturally feasible design space using finite-element modelling and found that design affects efficiency significantly. We tested the baffle-making behaviour of tree crickets in a series of experimental contexts and found that, given the opportunity, tree crickets optimized baffle acoustics: they selected the best-sized object and modified it appropriately to make a near-optimal baffle. Surprisingly, optimization could be achieved in a single attempt, and is likely achieved through an inherited yet highly accurate behavioural heuristic. PMID:29227246
Optimal planning and design of a renewable energy based supply system for microgrids
Hafez, Omar; Bhattacharya, Kankar
2012-03-03
This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, an evaluation is performed to assess the electrical performance of the distribution circuit if the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DERs within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.
Sniffer Channel Selection for Monitoring Wireless LANs
NASA Astrophysics Data System (ADS)
Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling
Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.
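The second objective above, minimizing the total number of channels the sniffers listen to, admits a simple greedy sketch: repeatedly assign the (sniffer, channel) pair that monitors the most still-unmonitored APs. The topology and channel data below are hypothetical, and this captures the flavor of the paper's greedy heuristics rather than its LP-relaxation algorithm.

```python
# Greedy sniffer channel selection: every AP must be monitored by at least
# one sniffer that both hears it and listens on its channel.

def greedy_channel_selection(hears, ap_channel):
    uncovered = set(ap_channel)
    assignment = []  # chosen (sniffer, channel) pairs
    while uncovered:
        best, best_gain = None, set()
        for sniffer, aps in hears.items():
            # Group this sniffer's still-uncovered APs by their channel.
            by_channel = {}
            for ap in aps & uncovered:
                by_channel.setdefault(ap_channel[ap], set()).add(ap)
            for ch, covered in by_channel.items():
                if len(covered) > len(best_gain):
                    best, best_gain = (sniffer, ch), covered
        if best is None:
            raise ValueError("some AP is out of range of every sniffer")
        assignment.append(best)
        uncovered -= best_gain
    return assignment

hears = {"x": {"a", "b", "c"}, "y": {"c", "d"}}
ap_channel = {"a": 1, "b": 1, "c": 6, "d": 6}
print(greedy_channel_selection(hears, ap_channel))
```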
Memetic Algorithm-Based Multi-Objective Coverage Optimization for Wireless Sensor Networks
Chen, Zhi; Li, Shuai; Yue, Wenjing
2014-01-01
Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as a multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve objectives such as high network coverage, effective node utilization, and more residual energy. In MOCADMA, the alternative solutions are represented as chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experimental results show that MOCADMA maintains sensing coverage well, achieves higher network coverage while improving energy efficiency and effectively prolonging the network lifetime, and offers a significant improvement over some existing algorithms. PMID:25360579
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from different approaches are described and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
AFLP Variation in Populations of Podisus maculiventris
USDA-ARS?s Scientific Manuscript database
We are developing methods to reduce costs of mass producing beneficial insect species for biological control programs. One of our methods entails selecting beneficials for optimal production traits. Currently we are selecting for increased fecundity. Selection protocols, whether based on phenotyp...
Sex differences, sexual selection, and ageing: an experimental evolution approach.
Maklakov, Alexei A; Bonduriansky, Russell; Brooks, Robert C
2009-10-01
Life-history (LH) theory predicts that selection will optimize the trade-off between reproduction and somatic maintenance. Reproductive ageing and finite life span are direct consequences of such optimization. Sexual selection and conflict profoundly affect the reproductive strategies of the sexes and thus can play an important role in the evolution of life span and ageing. In theory, sexual selection can favor the evolution of either faster or slower ageing, but the evidence is equivocal. We used a novel selection experiment to investigate the potential of sexual selection to influence the adaptive evolution of age-specific LH traits. We selected replicate populations of the seed beetle Callosobruchus maculatus for age at reproduction ("Young" and "Old"), either with or without sexual selection. We found that LH selection resulted in the evolution of age-specific reproduction and mortality, but these changes were largely unaffected by sexual selection. Sexual selection depressed net reproductive performance and failed to promote adaptation. Nonetheless, the evolution of several traits differed between males and females. These data challenge the importance of current sexual selection in promoting rapid adaptation to environmental change, but support the hypothesis that sex differences in LH (a historical signature of sexual selection) are key in shaping trait responses to novel selection.
Huang, Jia Hang; Liu, Jin Fu; Lin, Zhi Wei; Zheng, Shi Qun; He, Zhong Sheng; Zhang, Hui Guang; Li, Wen Zhou
2017-01-01
Designing nature reserves is an effective approach to protecting biodiversity. Traditional approaches to designing nature reserves could only identify the core area for protecting species, without specifying an appropriate land area for the reserve. Site selection approaches, which are based on mathematical models, can select part of the land from the planning area to compose the nature reserve and protect specific species or ecosystems; they are useful approaches to alleviating the contradiction between ecological protection and development. Existing site selection methods do not consider the ecological differences between units, and their optimization algorithms face a computational-efficiency bottleneck. In this study, we first constructed an ecological value assessment system appropriate for forest ecosystems, which was used to calculate the ecological value of Daiyun Mountain and draw its distribution map. Then, the Ecological Set Covering Problem (ESCP) was established by integrating the ecological values, and the Space-ecology Set Covering Problem (SSCP) was generated by adding spatial compactness to the ESCP. Finally, the STS algorithm, which possesses good optimizing performance, was used to search for approximately optimal solutions under diverse protection targets, and an optimized solution for the built-up area of Daiyun Mountain was proposed. According to the experimental results, the difference in ecological values across the spatial distribution was obvious. The ecological value of the sites selected by ESCP was higher than that of SCP, and SSCP could aggregate sites with high ecological value based on ESCP; the level of aggregation increased with the weight of the perimeter. We suggest that the range of the existing reserve could be expanded by about 136 km² and that the site of Tsuga longibracteata, located in the northwest of the study area, should be included.
Our research aimed at providing an optimization scheme for the sustainable development of the Daiyun Mountain nature reserve and the optimal allocation of land resources, and a novel idea for designing nature reserves for forest ecosystems in China.
Duiverman, Marieke L; Windisch, Wolfram; Storre, Jan H; Wijkstra, Peter J
2016-04-01
Recently, clear benefits have been shown from long-term noninvasive ventilation (NIV) in stable chronic obstructive pulmonary disease (COPD) patients with chronic hypercapnic respiratory failure. In our opinion, these benefits are confirmed, and nocturnal NIV using sufficiently high inspiratory pressures should be considered in COPD patients with chronic hypercapnic respiratory failure in stable disease, preferably combined with pulmonary rehabilitation. In contrast, clear benefits from (continuing) NIV at home after an exacerbation in patients who remain hypercapnic have not been shown. In this review we discuss the results of five trials investigating the use of home nocturnal NIV in patients with prolonged hypercapnia after a COPD exacerbation with acute hypercapnic respiratory failure. Although some uncontrolled trials may have shown benefits of this therapy, the largest randomized controlled trial did not show benefits in terms of hospital readmission or death. Further studies are therefore necessary to select the patients who benefit optimally, the right moment to initiate home NIV, and the optimal ventilatory settings, and to design optimal follow-up programmes. Furthermore, there is insufficient knowledge about the optimal ventilatory settings in the post-exacerbation period, and we are not well informed about the exact reasons for readmission in patients on NIV, the course of the exacerbation, or the treatment instituted. Careful follow-up will probably be necessary to detect deterioration on NIV early. © The Author(s), 2016.
Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan
2016-01-01
We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first approach to financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable, and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies show better out-of-sample performance than selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.
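A minimal illustration of the safety-first ingredient of the model above: among candidate portfolios, maximize (E[r_p] − r_min)/σ_p, which by Chebyshev's inequality bounds the probability of falling below a disaster return r_min. The return statistics and two-asset grid are synthetic, and the paper's norm-regularization step is omitted.

```python
import numpy as np

# Roy-style safety-first selection over a grid of candidate portfolios.

def safety_first(weights_grid, mean, cov, r_min):
    best_w, best_ratio = None, -np.inf
    for w in weights_grid:
        mu = w @ mean                      # expected portfolio return
        sigma = np.sqrt(w @ cov @ w)       # portfolio standard deviation
        ratio = (mu - r_min) / sigma       # safety-first criterion
        if ratio > best_ratio:
            best_w, best_ratio = w, ratio
    return best_w, best_ratio

mean = np.array([0.08, 0.12])              # synthetic expected returns
cov = np.array([[0.04, 0.01],              # synthetic covariance matrix
                [0.01, 0.09]])
grid = [np.array([a, 1 - a]) for a in np.linspace(0, 1, 101)]
w, ratio = safety_first(grid, mean, cov, r_min=0.02)
print(np.round(w, 2), round(ratio, 3))
```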
Image registration via optimization over disjoint image regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitts, Todd; Hathaway, Simon; Karelitz, David B.
Technologies pertaining to registering a target image with a base image are described. In a general embodiment, the base image is selected from a set of images, and the target image is an image in the set that is to be registered to the base image. A set of disjoint regions of the target image is selected, and a transform to be applied to the target image is computed based on the optimization of a metric over the selected set of disjoint regions. The transform is applied to the target image so as to register the target image with the base image.
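A sketch of registration restricted to disjoint regions, under the assumed simplification that the transform is an integer translation and the metric is the sum of squared differences evaluated only inside the selected regions. The images and regions are synthetic.

```python
import numpy as np

# Brute-force translation search scored only inside disjoint regions.

def register_translation(base, target, regions, max_shift=3):
    mask = np.zeros(base.shape, bool)
    for (r0, r1, c0, c1) in regions:       # mark the disjoint regions
        mask[r0:r1, c0:c1] = True
    best, best_err = (0, 0), np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            shifted = np.roll(target, (dr, dc), axis=(0, 1))
            err = np.sum((base[mask] - shifted[mask]) ** 2)
            if err < best_err:
                best, best_err = (dr, dc), err
    return best

rng = np.random.default_rng(7)
base = rng.standard_normal((32, 32))
target = np.roll(base, (2, -1), axis=(0, 1))   # known shift to recover
regions = [(2, 10, 2, 10), (20, 30, 18, 28)]
print(register_translation(base, target, regions))
```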
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Du, Jinglei
2018-06-01
A method of selecting appropriate singular values of the transmission matrix to improve the precision of incident-wavefront retrieval in focusing light through scattering media is proposed. The optimal singular values selected by this method can effectively reduce the degree of ill-conditioning of the transmission matrix, which indicates that the incident wavefront retrieved from the optimal set of singular values is more accurate than one retrieved from other sets of singular values. The validity of this method is verified by numerical simulation and by actual measurements of the incident wavefront of coherent light passing through ground glass.
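The singular-value selection idea can be sketched with a truncated-SVD inversion: solving y = Tx using only the well-conditioned singular values avoids the noise amplification caused by the tiny ones. The matrix, noise level, and truncation index below are synthetic stand-ins, not a measured transmission matrix.

```python
import numpy as np

def truncated_svd_solve(T, y, keep):
    # Invert T only over its `keep` largest singular values; directions
    # with tiny singular values (which amplify noise) are discarded.
    U, s, Vh = np.linalg.svd(T, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < keep, 1.0 / s, 0.0)
    return Vh.conj().T @ (s_inv * (U.conj().T @ y))

rng = np.random.default_rng(3)
T = rng.standard_normal((40, 40))
T[:, -1] = T[:, 0] + 1e-8 * rng.standard_normal(40)  # nearly dependent columns
x_true = rng.standard_normal(40)
y = T @ x_true + 1e-4 * rng.standard_normal(40)      # noisy measurement

full = np.linalg.solve(T, y)                 # uses all singular values
trunc = truncated_svd_solve(T, y, keep=39)   # drops the ill-conditioned one
print(np.linalg.norm(full - x_true), np.linalg.norm(trunc - x_true))
```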
NASA Astrophysics Data System (ADS)
Min, Qing-xu; Zhu, Jun-zhen; Feng, Fu-zhou; Xu, Chao; Sun, Ji-wei
2017-06-01
In this paper, lock-in vibrothermography (LVT) is utilized for defect detection. Specifically, for a metal plate with an artificial fatigue crack, the temperature rise of the defective area is used to analyze the influence of different test conditions, i.e. engagement force, excitation intensity, and modulation frequency. Multivariate nonlinear and logistic regression models are employed to estimate the POD (probability of detection) and POA (probability of alarm) of the fatigue crack, respectively. The resulting optimal selection of test conditions is presented. The study aims to provide an optimized method for selecting test conditions in vibrothermography systems with enhanced detection ability.
Nelson, Carl A; Miller, David J; Oleynikov, Dmitry
2008-01-01
As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
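For a small tool set, the Traveling Salesman formulation above can be brute-forced: tools are graph nodes, tool-change frequencies are edge weights, and the best queue is the open path that maximizes the frequency weight carried by adjacent slots (so frequent changes are cheap). The tool names and frequency table are hypothetical.

```python
from itertools import permutations

# Brute-force open-path TSP over tool orderings, maximizing the total
# tool-change frequency placed on adjacent queue slots.

def best_tool_order(tools, freq):
    def path_weight(order):
        return sum(freq[frozenset(pair)] for pair in zip(order, order[1:]))
    return max(permutations(tools), key=path_weight)

tools = ["grasper", "scissors", "needle", "hook"]
freq = {frozenset(p): f for p, f in {
    ("grasper", "scissors"): 9, ("grasper", "needle"): 2,
    ("grasper", "hook"): 1, ("scissors", "needle"): 8,
    ("scissors", "hook"): 2, ("needle", "hook"): 7}.items()}
print(best_tool_order(tools, freq))
```

Brute force is only viable for a handful of tools; the paper's setting motivates proper TSP solvers as the tool count grows.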
She, Ji; Wang, Fei; Zhou, Jianjiang
2016-01-01
Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. This algorithm can minimize the total transmitted power of a radar network on the basis of a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate target parameters, and it can be calculated predictively from the estimate of the target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex; it can be solved by separating the power allocation problem from the sensor selection problem. To be specific, the power allocation problem can be solved using the bisection method for each sensor selection scheme, and the sensor selection problem can be solved by a lower-complexity algorithm based on the allocated powers. According to the simulation results, the proposed algorithm can effectively reduce the total transmitted power of a radar network, which is conducive to improving LPI performance. PMID:28009819
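The per-sensor power step described above, bisection under a monotone MI constraint, can be sketched as follows. The MI model (a simple Gaussian-channel form) and the gain/noise numbers are illustrative assumptions, not the paper's exact expressions.

```python
import math

# Minimum transmit power meeting an MI threshold, found by bisection.
# MI grows monotonically with power, so the root can be bracketed.

def min_power(mi_target, gain, noise, p_max=1e3, tol=1e-6):
    def mi(p):
        # Assumed Gaussian-channel MI model (illustrative only).
        return 0.5 * math.log2(1.0 + p * gain / noise)
    if mi(p_max) < mi_target:
        return None  # threshold unreachable for this sensor
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mi(mid) >= mi_target:
            hi = mid     # feasible: try lower power
        else:
            lo = mid     # infeasible: need more power
    return hi

p = min_power(mi_target=2.0, gain=0.5, noise=1.0)
print(round(p, 3))
```

With this model the closed-form answer is p = (2^(2·MI) − 1)·noise/gain = 30, which the bisection recovers to within the tolerance.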
Optimization methods for activities selection problems
NASA Astrophysics Data System (ADS)
Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia
2017-08-01
Co-curricular activities are compulsory for every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, students learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods: the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the three chosen criteria: soft skills, interesting activities, and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0. Two priorities were considered. The first priority, minimizing the budget for the activities, is achieved, since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: the results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.
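The AHP weighting step can be sketched as a power iteration on a pairwise comparison matrix (illustrative only; the matrix values in the usage below are made up, not the study's survey data):

```python
def ahp_weights(M, iters=100):
    """Approximate AHP criterion weights as the normalized principal
    eigenvector of the pairwise comparison matrix M, via power iteration.

    M[i][j] expresses how much more important criterion i is than j;
    a perfectly consistent matrix has M[i][j] = w_i / w_j.
    """
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # renormalize so weights sum to 1
    return w
```

For a consistent matrix the iteration recovers the underlying weights exactly; real questionnaire data is only approximately consistent, which is why AHP also defines a consistency ratio check.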
Genetics algorithm optimization of DWT-DCT based image Watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in image content is essential for establishing image ownership. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space; the layer in which the watermark is embedded can also be selected. Next, 2D-DWT transforms the selected layer, yielding four subbands, of which one is selected. Block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag scanning and range-based pixel selection. A delta parameter replacing pixels in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters optimized by the genetic algorithm (GA) are the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. Simulation results show that the GA is able to determine parameters that obtain optimal imperceptibility and robustness for any watermarked-image condition, whether attacked or not. The DWT step in DCT-based image watermarking, optimized by GA, has improved watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery with BER = 0, and the watermarked image quality measured by PSNR is also about 5 dB higher than that of the previous method.
Hocharoen, Lalintip; Joyner, Jeff C.; Cowan, J. A.
2014-01-01
The N- and C-terminal domains of human somatic Angiotensin I Converting Enzyme (sACE-1) demonstrate distinct physiological functions, with resulting interest in the development of domain-selective inhibitors for specific therapeutic applications. Herein, the activity of lisinopril-coupled transition metal chelates was tested for both reversible binding and irreversible catalytic inactivation of sACE-1. C/N domain binding selectivity ratios ranged from 1 to 350, while rates of irreversible catalytic inactivation of the N- and C-domains were found to be significantly greater for the N-domain, suggesting a more optimal orientation of the M-chelate-lisinopril complexes within the active site of the N-domain of sACE-1. Finally, the combined effect of binding selectivity and inactivation selectivity was assessed for each catalyst (double-filter selectivity factors), and several catalysts were found to cause domain-selective catalytic inactivation. The results of this study demonstrate the ability to optimize the target selectivity of catalytic metallopeptides through both binding and orientation factors (double-filter effect). PMID:24228790
Integrating Test-Form Formatting into Automated Test Assembly
ERIC Educational Resources Information Center
Diao, Qi; van der Linden, Wim J.
2013-01-01
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
2017-01-01
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as real-valued m-dimensional vectors, the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier for a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs on the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capability of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
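A minimal PSO loop of the kind described, with the DNN-training inner loop replaced here by an arbitrary objective function for illustration, might look like:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization of f over [lo, hi]^dim.

    w is the inertia weight; c1 and c2 weight the pull toward each
    particle's personal best (pbest) and the global best (gbest).
    In the paper's setting, f would evaluate a briefly trained DNN
    for the configuration encoded by the particle's position.
    """
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The paper's final step, retraining with more epochs from pbest and gbest, corresponds to a longer evaluation of the returned solutions rather than anything inside this loop.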
IEEE 802.21 Assisted Seamless and Energy Efficient Handovers in Mixed Networks
NASA Astrophysics Data System (ADS)
Liu, Huaiyu; Maciocco, Christian; Kesavan, Vijay; Low, Andy L. Y.
Network selection is the decision process by which a mobile terminal hands off between homogeneous or heterogeneous networks. With multiple available networks, the selection process must evaluate factors such as network services and conditions, monetary cost, system conditions, and user preferences. In this paper, we investigate network selection using a cost function and information provided by IEEE 802.21. The cost function provides flexibility to balance different factors in decision making, and our research is focused on improving both the seamlessness and the energy efficiency of handovers. Our solution is evaluated using real WiFi, WiMax, and 3G signal strength traces. The results show that appropriate networks were selected based on selection policies and that handovers were triggered at optimal times to increase overall network connectivity compared to traditional triggering schemes, while at the same time the energy consumption of multi-radio devices, both for on-going operations and during handovers, is optimized.
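A cost-function-based selection of the kind investigated can be sketched as a weighted sum over normalized factors (the factor names, weights, and values below are illustrative, not the paper's policy):

```python
def select_network(networks, weights):
    """Pick the network minimizing a weighted cost function.

    Each network is a dict of factors normalized to [0, 1],
    lower being better; `weights` assigns an importance to each
    factor. Adjusting the weights encodes the selection policy.
    """
    def cost(net):
        return sum(weights[k] * net[k] for k in weights)
    return min(networks, key=cost)
```

In a handover setting this evaluation would be re-run whenever IEEE 802.21 events report a change in link conditions.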
Penningroth, Suzanna L; Scott, Walter D
2012-01-01
Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two theories. Older adults and two groups of younger adults (college students and non-students) listed their current goals, which were then coded by independent raters. Observed age group differences in goals generally supported both theories. Specifically, when compared to younger adults, older adults reported more goals focused on maintenance/loss prevention, the present, emotion-focus, generativity, and social selection, and fewer goals focused on knowledge acquisition and the future. However, contrary to prediction, older adults also showed less goal focusing than younger adults, reporting goals from a broader set of life domains (e.g., health, property/possessions, friendship).
Lin, Wen-Bin; Tung, I-Wu; Chen, Mei-Jung; Chen, Mei-Yen
2011-08-01
Selection of a qualified pitcher has relied previously on qualitative indices; here, both quantitative and qualitative indices including pitching statistics, defense, mental skills, experience, and managers' recognition were collected, and an analytic hierarchy process was used to rank baseball pitchers. The participants were 8 experts who ranked characteristics and statistics of 15 baseball pitchers who comprised the first round of potential representatives for the Chinese Taipei National Baseball team. The results indicated a selection rate that was 91% consistent with the official national team roster, as 11 pitchers with the highest scores who were recommended as optimal choices to be official members of the Chinese Taipei National Baseball team actually participated in the 2009 Baseball World Cup. An analytic hierarchy process can aid in the selection of qualified pitchers, depending on situational and practical needs; the method could be extended to other sports and team-selection situations.
[Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-09-01
In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths; the minimum and maximum wavelengths in the visible region were also selected. The genetic algorithm was used as the inversion method under the dependent model. The computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and apply this optimal-wavelength selection method in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimization algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.
Automated sample plan selection for OPC modeling
NASA Astrophysics Data System (ADS)
Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas
2014-03-01
It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving the quality of the data collected with regard to how well it represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Forming the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of equivalent quality to the traditional plan of record (POR) set, but in less time.
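One common way to pose such a selection problem, shown here only as an illustrative stand-in for the paper's multi-objective formulation, is greedy coverage of pattern clusters under a sample budget:

```python
def greedy_sample_plan(candidates, budget):
    """Greedy pattern selection: repeatedly pick the candidate gauge
    covering the most not-yet-covered pattern clusters.

    candidates: dict mapping a candidate name to the set of
    pattern-cluster ids it represents (names and ids hypothetical).
    Returns the chosen names (at most `budget`) and the covered set.
    """
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break  # no candidate adds new coverage
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered
```

The paper's approach co-optimizes several such objectives at once rather than a single coverage criterion, but the greedy sketch shows why optimization beats arbitrary exclusion: every pick is justified by marginal coverage.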
A performance analysis in AF full duplex relay selection network
NASA Astrophysics Data System (ADS)
Ngoc, Long Nguyen; Hong, Nhu Nguyen; Loan, Nguyen Thi Phuong; Kieu, Tam Nguyen; Voznak, Miroslav; Zdralek, Jaroslav
2018-04-01
This paper studies relay selection in amplify-and-forward (AF) cooperative communication with full-duplex (FD) operation. Various relay selection schemes, assuming the availability of different instantaneous channel information, are investigated. We examine an optimal relay selection scheme that maximizes the instantaneous FD channel capacity and requires global channel state information (CSI), as well as a scheme based on partial CSI. For ease of comparison, exact outage probability expressions and asymptotic forms of these strategies, which yield the diversity order, are derived. From these, we can see clearly that the number of relays, the noise factor, the transmittance coefficient, and the transmit power all impact performance. Moreover, the optimal relay selection (ORS) scheme outperforms the partial relay selection (PRS) scheme.
A look at ligand binding thermodynamics in drug discovery.
Claveria-Gimeno, Rafael; Vega, Sonia; Abian, Olga; Velazquez-Campoy, Adrian
2017-04-01
Drug discovery is a challenging endeavor requiring the interplay of many different research areas. Gathering information on ligand binding thermodynamics may help considerably in reducing the risk within a high-uncertainty scenario, allowing early rejection of flawed compounds and pushing forward optimal candidates. In particular, the free energy, the enthalpy, and the entropy of binding provide fundamental information on the intermolecular forces driving such interaction. Areas covered: The authors review the current status and recent developments in the application of ligand binding thermodynamics in drug discovery. The thermodynamic binding profile (Gibbs energy, enthalpy, and entropy of binding) can be used for lead selection and optimization (binding enthalpy, selectivity, and adaptability). Expert opinion: Binding thermodynamics provides fundamental information on the forces driving the formation of the drug-target complex. It has been widely accepted that binding thermodynamics may be used as a decision criterion along the ligand optimization process in drug discovery and development. In particular, the binding enthalpy may be used as a guide when selecting and optimizing compounds over a set of potential candidates. However, this has recently been called into question, on the grounds of certain practical difficulties and in the light of certain experimental examples.
Mutturi, Sarma
2017-06-27
Although a handful of tools are available for constraint-based flux analysis to generate knockout strains, most of these are based either on bilevel MIP or its modifications. However, metaheuristic approaches, known for their flexibility and scalability, have been less studied. Moreover, in the existing tools, sectioning of the search space to find optimal knockouts has not been considered. Herein, a novel computational procedure, termed FOCuS (Flower-pOllination coupled Clonal Selection algorithm), was developed to find the optimal reaction knockouts in a metabolic network that maximize the production of specific metabolites. FOCuS derives its benefits from the nature-inspired flower pollination algorithm and the artificial immune system-inspired clonal selection algorithm to converge to an optimal solution. To evaluate the performance of FOCuS, reported results obtained from both MIP-based and other metaheuristic-based tools were compared in selected case studies. The results demonstrated the robustness of FOCuS irrespective of the size of the metabolic network and the number of knockouts. Moreover, sectioning of the search space, coupled with pooling of priority reactions based on their contribution to the objective function to generate a smaller search space, significantly reduced the computational time.
Optimization of cDNA-AFLP experiments using genomic sequence data.
Kivioja, Teemu; Arvas, Mikko; Saloheimo, Markku; Penttilä, Merja; Ukkonen, Esko
2005-06-01
cDNA amplified fragment length polymorphism (cDNA-AFLP) is one of the few genome-wide level expression profiling methods capable of finding genes that have not yet been cloned or even predicted from sequence but have interesting expression patterns under the studied conditions. In cDNA-AFLP, a complex cDNA mixture is divided into small subsets using restriction enzymes and selective PCR. A large cDNA-AFLP experiment can require a substantial amount of resources, such as hundreds of PCR amplifications and gel electrophoresis runs, followed by manual cutting of a large number of bands from the gels. Our aim was to test whether this workload can be reduced by rational design of the experiment. We used the available genomic sequence information to optimize cDNA-AFLP experiments beforehand so that as many transcripts as possible could be profiled with a given amount of resources. Optimization of the selection of both restriction enzymes and selective primers for cDNA-AFLP experiments has not been performed previously. The in silico tests performed suggest that substantial amounts of resources can be saved by the optimization of cDNA-AFLP experiments.
Deployable wavelength optimizer for multi-laser sensing and communication undersea
NASA Astrophysics Data System (ADS)
Neuner, Burton; Hening, Alexandru; Pascoguin, B. Melvin; Dick, Brian; Miller, Martin; Tran, Nghia; Pfetsch, Michael
2017-05-01
This effort develops and tests algorithms and a user-portable optical system designed to autonomously optimize the laser communication wavelength in open and coastal oceans. In situ optical meteorology and oceanography (METOC) data gathered and analyzed as part of the auto-selection process can be stored and forwarded. The system performs closed-loop optimization of three visible-band lasers within one minute by probing the water column via passive retroreflector and polarization optics, selecting the ideal wavelength, and enabling high-speed communication. Backscattered and stray light is selectively blocked by employing polarizers and wave plates, thus increasing the signal-to-noise ratio. As an advancement in instrumentation, we present autonomy software and portable hardware, and demonstrate this new system in two environments: ocean bay seawater and outdoor test pool freshwater. The next generation design is also presented. Once fully miniaturized, the optical payload and software will be ready for deployment on manned and unmanned platforms such as buoys and vehicles. Gathering timely and accurate ocean sensing data in situ will dramatically increase the knowledge base and capabilities for environmental sensing, defense, and industrial applications. Furthermore, communicating on the optimal channel increases transfer rates, propagation range, and mission length, all while reducing power consumption in undersea platforms.
Wing Configuration Impact on Design Optimums for a Subsonic Passenger Transport
NASA Technical Reports Server (NTRS)
Wells, Douglas P.
2014-01-01
This study sought to compare four aircraft wing configurations at a conceptual level using a multi-disciplinary optimization (MDO) process. The MDO framework used was created by Georgia Institute of Technology and Virginia Polytechnic Institute and State University. They created a multi-disciplinary design and optimization environment that could capture the unique features of the truss-braced wing (TBW) configuration. The four wing configurations selected for the study were a low wing cantilever installation, a high wing cantilever, a strut-braced wing, and a single jury TBW. The mission that was used for this study was a 160 passenger transport aircraft with a design range of 2,875 nautical miles at the design payload, flown at a cruise Mach number of 0.78. This paper includes discussion and optimization results for multiple design objectives. Five design objectives were chosen to illustrate the impact of selected objective on the optimization result: minimum takeoff gross weight (TOGW), minimum operating empty weight, minimum block fuel weight, maximum start of cruise lift-to-drag ratio, and minimum start of cruise drag coefficient. The results show that the design objective selected will impact the characteristics of the optimized aircraft. Although minimum life cycle cost was not one of the objectives, TOGW is often used as a proxy for life cycle cost. The low wing cantilever had the lowest TOGW followed by the strut-braced wing.
NASA Astrophysics Data System (ADS)
Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi
2017-07-01
The current manufacturing environment has changed from traditional single-plant operations to multi-site supply chains in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
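The Pareto-front step of such an approach can be sketched with a simple dominance filter (all objectives assumed to be minimized, so maximization objectives would be negated first; the sample points are illustrative):

```python
def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples.

    A solution a dominates b if it is no worse in every objective
    and strictly better in at least one (minimization assumed).
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

The article's second stage, AHP, would then rank the surviving front points by the decision maker's criterion weights rather than by dominance.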
Design of wide-angle solar-selective absorbers using aperiodic metal-dielectric stacks.
Sergeant, Nicholas P; Pincon, Olivier; Agrawal, Mukul; Peumans, Peter
2009-12-07
Spectral control of the emissivity of surfaces is essential in applications such as solar thermal and thermophotovoltaic energy conversion in order to achieve the highest conversion efficiencies possible. We investigated the spectral performance of planar aperiodic metal-dielectric multilayer coatings for these applications. The response of the coatings was optimized for a target operational temperature using needle-optimization based on a transfer matrix approach. Excellent spectral selectivity was achieved over a wide angular range. These aperiodic metal-dielectric stacks have the potential to significantly increase the efficiency of thermophotovoltaic and solar thermal conversion systems. Optimal coatings for concentrated solar thermal conversion were modeled to have a thermal emissivity <7% at 720K while absorbing >94% of the incident light. In addition, optimized coatings for solar thermophotovoltaic applications were modeled to have thermal emissivity <16% at 1750K while absorbing >85% of the concentrated solar radiation.
Cho, Ming-Yuan; Hoang, Thi Thom
2017-01-01
Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
Optimal experimental designs for fMRI when the model matrix is uncertain.
Kao, Ming-Hung; Zhou, Lin
2017-07-15
This study concerns optimal designs for functional magnetic resonance imaging (fMRI) experiments when the model matrix of the statistical model depends on both the selected stimulus sequence (fMRI design) and the subject's uncertain feedback (e.g. answer) to each mental stimulus (e.g. question) presented to her/him. While practically important, this design issue is challenging, mainly because the information matrix cannot be fully determined at the design stage, making it difficult to evaluate the quality of the selected designs. To tackle this issue, we propose an easy-to-use optimality criterion for evaluating the quality of designs, and an efficient approach for obtaining designs that optimize this criterion. Compared with a previously proposed method, our approach requires much less computing time to achieve designs with high statistical efficiencies.
NASA Astrophysics Data System (ADS)
Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji
2017-12-01
Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach for selecting strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations; these locations are selected at the extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
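The EOF-based candidate selection can be sketched with an SVD, under the assumption that scenario snapshots are rows and that the extrema of each leading spatial mode are taken as candidate locations (the synthetic field in the usage below is illustrative, not the Nankai Trough data):

```python
import numpy as np

def eof_candidate_points(field_snapshots, n_modes=3):
    """Candidate observation locations from EOF spatial-mode extrema.

    field_snapshots: (n_scenarios, n_locations) array of simulated
    fields (e.g. tsunami waveform features per location).
    Returns the sorted, de-duplicated location indices of the
    minimum and maximum of each of the n_modes leading modes.
    """
    X = field_snapshots - field_snapshots.mean(axis=0)
    # Rows of Vt are the EOF spatial modes, ordered by variance.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    picks = set()
    for mode in Vt[:n_modes]:
        picks.add(int(np.argmax(mode)))
        picks.add(int(np.argmin(mode)))
    return sorted(picks)
```

In the paper this initial set is then pruned by mesh adaptive direct search, which this sketch does not attempt to reproduce.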
Optimizing flurbiprofen-loaded NLC by central composite factorial design for ocular delivery.
Gonzalez-Mira, E; Egea, M A; Souto, E B; Calpena, A C; García, M L
2011-01-28
The purpose of this study was to design and optimize a new topical delivery system for ocular administration of flurbiprofen (FB), based on lipid nanoparticles. These particles, called nanostructured lipid carriers (NLC), were composed of a fatty acid (stearic acid (SA)) as the solid lipid and a mixture of Miglyol(®) 812 and castor oil (CO) as the liquid lipids, prepared by the hot high pressure homogenization method. After selecting the critical variables influencing the physicochemical characteristics of the NLC (the liquid lipid (i.e. oil) concentration with respect to the total lipid (cOil/L (wt%)), the surfactant and the flurbiprofen concentration, on particle size, polydispersity index and encapsulation efficiency), a three-factor five-level central rotatable composite design was employed to plan and perform the experiments. Morphological examination, crystallinity and stability studies were also performed to accomplish the optimization study. The results showed that increasing cOil/L (wt%) was followed by an enhanced tendency to produce smaller particles, but the liquid to solid lipid proportion should not exceed 30 wt% due to destabilization problems. Therefore, a 70:30 ratio of SA to oil (miglyol + CO) was selected to develop an optimal NLC formulation. The smaller particles obtained when increasing surfactant concentration led to the selection of 3.2 wt% of Tween(®) 80 (non-ionic surfactant). The positive effect of the increase in FB concentration on the encapsulation efficiency (EE) and its total solubilization in the lipid matrix led to the selection of 0.25 wt% of FB in the formulation. The optimal NLC showed an appropriate average size for ophthalmic administration (228.3 nm) with a narrow size distribution (0.156), negatively charged surface (-33.3 mV) and high EE (∼90%). The in vitro experiments proved that sustained release FB was achieved using NLC as drug carriers. Optimal NLC formulation did not show toxicity on ocular tissues.
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to that of guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing abilities of the guide to be optimized, going from perfectly focusing to no correlation between position and velocity. The second parameter controls neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytically predicted value, when assuming that the guide is dominated by multiple scattering events.
Optimized Chemical Probes for REV-ERBα
Trump, Ryan P.; Bresciani, Stefano; Cooper, Anthony W. J.; Tellam, James P.; Wojno, Justyna; Blaikley, John; Orband-Miller, Lisa A.; Kashatus, Jennifer A.; Dawson, Helen C.; Loudon, Andrew; Ray, David; Grant, Daniel; Farrow, Stuart N.; Willson, Timothy M.; Tomkinson, Nicholas C. O.
2015-01-01
REV-ERBα has emerged as an important target for regulation of circadian rhythm and its associated physiology. Herein, we report on the optimization of a series of REV-ERBα agonists based on GSK4112 (1) for potency, selectivity, and bioavailability. Potent REV-ERBα agonists 4, 10, 16, and 23 are detailed for their ability to suppress BMAL and IL-6 expression from human cells while also demonstrating excellent selectivity over LXRα. Amine 4 demonstrated in vivo bioavailability after either IV or oral dosing. PMID:23656296
Experimental study of high-performance cooling system pipeline diameter and working fluid amount
NASA Astrophysics Data System (ADS)
Nemec, Patrik; Malcho, Milan; Hrabovsky, Peter; Papučík, Štefan
2016-03-01
This work deals with heat transfer resulting from the operation of power electronic components. Heat is removed from the mounting plate, which serves as the evaporator of a loop thermosyphon, to the condenser, and is then transferred to the ambient by natural convection. The work includes the proposal of a cooling device, a loop thermosyphon, together with its construction and the subsequent optimization of its cooling effect. The optimization proceeds by selecting the quantity of working fluid and the diameters of the vapour and liquid lines of the loop thermosyphon.
Lentz, Christian S; Ordonez, Alvaro A; Kasperkiewicz, Paulina; La Greca, Florencia; O'Donoghue, Anthony J; Schulze, Christopher J; Powers, James C; Craik, Charles S; Drag, Marcin; Jain, Sanjay K; Bogyo, Matthew
2016-11-11
Although serine proteases are important mediators of Mycobacterium tuberculosis (Mtb) virulence, there are currently no tools to selectively block or visualize members of this family of enzymes. Selective reporter substrates or activity-based probes (ABPs) could provide a means to monitor infection and response to therapy using imaging methods. Here, we use a combination of substrate selectivity profiling and focused screening to identify optimized reporter substrates and ABPs for the Mtb "Hydrolase important for pathogenesis 1" (Hip1) serine protease. Hip1 is a cell-envelope-associated enzyme with minimal homology to host proteases, making it an ideal target for probe development. We identified substituted 7-amino-4-chloro-3-(2-bromoethoxy)isocoumarins as irreversible inhibitor scaffolds. Furthermore, we used specificity data to generate selective reporter substrates and to further optimize a selective chloroisocoumarin inhibitor. These new reagents are potentially useful in delineating the roles of Hip1 during pathogenesis or as diagnostic imaging tools for specifically monitoring Mtb infections.
PMID:27739665
Kassem, Mohammed A; Megahed, Mohamed A; Abu Elyazid, Sherif K; Abd-Allah, Fathy I; Abdelghany, Tamer M; Al-Abd, Ahmed M; El-Say, Khalid M
2018-05-01
Serious adverse effects and low selectivity to cancer cells are the main obstacles to long-term therapy with Tamoxifen (Tmx). This study aimed to develop Tmx-loaded span-based nano-vesicles for delivery to malignant tissues with maximum efficacy. The effects of three variables on vesicle size (Y1), zeta potential (Y2), entrapment efficiency (Y3) and the cumulative percent released after 24 h (Y4) were optimized using a Box-Behnken design. The optimized formula was prepared and tested for its stability under different storage conditions. The observed values for the optimized formula were 310.2 nm, -42.09 mV, 75.45% and 71.70% for Y1, Y2, Y3, and Y4, respectively. Examination by electron microscopy confirmed the formation of rounded vesicles with a distinctive bilayer structure. Moreover, the cytotoxic activity of the optimized formula on both breast cancer cells (MCF-7) and normal cells (BHK) showed enhanced selectivity (9.4-fold) for cancerous cells, with IC50 values of 4.7 ± 1.5 and 44.3 ± 1.3 μg/ml on cancer and normal cells, respectively. In contrast, free Tmx exhibited lower selectivity (2.5-fold) than the optimized nano-vesicles on cancer cells, with IC50 values of 9.0 ± 1.1 μg/ml and 22.5 ± 5.3 μg/ml on MCF-7 and BHK cells, respectively. The prepared vesicular system, with its greater efficacy and selectivity, provides a promising tool for overcoming the challenges of breast cancer treatment.
NASA Astrophysics Data System (ADS)
Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.
2017-09-01
Plastic injection moulding is a popular manufacturing method, not only because it is reliable but also because it is efficient and cost-saving. It is able to produce plastic parts with detailed features and complex geometry. However, defects arising in the injection moulding process degrade the quality and aesthetics of the moulded product. The most common defect occurring in the process is warpage. Inappropriate process parameter settings on the injection moulding machine are one of the reasons that lead to warpage. The aims of this study were to improve the quality of the injection moulded part by identifying the optimal parameters for minimizing warpage using Response Surface Methodology (RSM) and Glowworm Swarm Optimization (GSO). Subsequent to this, the most significant parameter was identified, and the recommended parameter setting was compared with the optimized parameter settings from RSM and GSO. In this research, a mobile phone case was selected as the case study. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as variables, and warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight 2012. In addition, the RSM was performed using Design Expert 7.0, and the GSO was implemented in MATLAB. The warpage in the y-direction recommended by RSM was reduced by 70%, and that recommended by GSO was reduced by 61%. The resulting warpages under the optimal parameter settings from RSM and GSO were validated by simulation in AMI 2012. RSM performed better than GSO in solving the warpage issue.
Sun, Yu; Tamarit, Daniel
2017-01-01
Abstract The major codon preference model suggests that codons read by tRNAs in high concentrations are preferentially utilized in highly expressed genes. However, the identity of the optimal codons differs between species although the forces driving such changes are poorly understood. We suggest that these questions can be tackled by placing codon usage studies in a phylogenetic framework and that bacterial genomes with extreme nucleotide composition biases provide informative model systems. Switches in the background substitution biases from GC to AT have occurred in Gardnerella vaginalis (GC = 32%), and from AT to GC in Lactobacillus delbrueckii (GC = 62%) and Lactobacillus fermentum (GC = 63%). We show that despite the large effects on codon usage patterns by these switches, all three species evolve under selection on synonymous sites. In G. vaginalis, the dramatic codon frequency changes coincide with shifts of optimal codons. In contrast, the optimal codons have not shifted in the two Lactobacillus genomes despite an increased fraction of GC-ending codons. We suggest that all three species are in different phases of an on-going shift of optimal codons, and attribute the difference to a stronger background substitution bias and/or longer time since the switch in G. vaginalis. We show that comparative and correlative methods for optimal codon identification yield conflicting results for genomes in flux and discuss possible reasons for the mispredictions. We conclude that switches in the direction of the background substitution biases can drive major shifts in codon preference patterns even under sustained selection on synonymous codon sites. PMID:27540085
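The codon-preference analysis summarized above rests on quantifying how unevenly synonymous codons are used. A minimal illustrative sketch (not the authors' pipeline; the codon counts below are invented) of relative synonymous codon usage (RSCU), a standard first step for spotting optimal codons:

```python
from collections import Counter

# RSCU: observed count of a codon divided by the count expected if all
# synonymous codons in its family were used equally. Values > 1 mark
# codons used more often than expected, i.e. candidate optimal codons.

def rscu(counts):
    """counts: mapping codon -> observed count within one synonymous family."""
    expected = sum(counts.values()) / len(counts)
    return {codon: c / expected for codon, c in counts.items()}

# Hypothetical counts for the four-codon glycine family in some gene set
gly = Counter({"GGU": 120, "GGC": 40, "GGA": 20, "GGG": 20})
print({c: round(v, 2) for c, v in rscu(gly).items()})
```

With these invented counts, GGU comes out strongly preferred (RSCU 2.4) while the other three fall below 1.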
NASA Astrophysics Data System (ADS)
Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng
2016-09-01
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0 Qd and 1.4 Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0 Qd and 1.4 Qd, respectively. A comparison of the internal flow in the original and optimized pumps illustrates the performance improvement. The optimization process can provide a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
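The final step described above, applying particle swarm optimization to a cheap surrogate model, can be sketched as follows. This is a generic PSO on an invented quadratic response surface, not the authors' CFX-based surrogate; the variable names b2 and phi and the location of the optimum are illustrative only.

```python
import random

def surrogate(x):
    # Hypothetical quadratic response surface (stand-in for the real
    # surrogate model), minimal at b2 = 22.0, phi = 120.0
    b2, phi = x
    return (b2 - 22.0) ** 2 + 0.01 * (phi - 120.0) ** 2

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over box `bounds` with a basic inertia-weight PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the design-variable bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, val = pso(surrogate, bounds=[(15.0, 30.0), (90.0, 150.0)])
print([round(v, 2) for v in best], round(val, 4))
```

On this smooth toy surface the swarm converges to the optimum; on a real fitted surrogate the same loop would return the best impeller-parameter combination.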
ToTem: a tool for variant calling pipeline optimization.
Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka
2018-06-26
High-throughput bioinformatics analyses of next-generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process and with the possibility of plugging in almost any tool or code. To prevent overfitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables allowing an optimal pipeline to be selected, based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.
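The core idea, benchmarking many pipeline settings and scoring each by a cross-validated F-measure so that no setting is tuned to a single data split, can be illustrated with a minimal sketch. The parameter grid, the `evaluate` stub and its numbers are invented stand-ins, not ToTem's actual interface:

```python
from itertools import product
from statistics import mean

def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def evaluate(setting, fold):
    # Placeholder for running a variant-calling pipeline on one data fold
    # and comparing calls to a truth set; returns (precision, recall).
    # Deterministic toy numbers stand in for real benchmarking results.
    p = 0.9 - 0.05 * setting["min_qual"] / 30 + 0.01 * fold
    r = 0.7 + 0.05 * setting["min_qual"] / 30 - 0.01 * fold
    return p, r

def select_best(grid, n_folds=3):
    """Grid-search settings, scoring each by mean F-measure across folds."""
    best, best_score = None, -1.0
    for values in product(*grid.values()):
        setting = dict(zip(grid.keys(), values))
        scores = [f_measure(*evaluate(setting, k)) for k in range(n_folds)]
        score = mean(scores)      # averaging across folds penalizes
        if score > best_score:    # settings that only fit one split
            best, best_score = setting, score
    return best, best_score

grid = {"min_qual": (10, 20, 30), "min_depth": (5, 10)}
best, score = select_best(grid)
print(best, round(score, 3))
```

In practice `evaluate` would invoke the actual caller and truth-set comparison; the selection loop itself is unchanged.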
Rigo, Vincent; Graas, Estelle; Rigo, Jacques
2012-07-01
Selected optimal respiratory cycles should allow calculation of respiratory mechanic parameters focusing on patient-ventilator interaction. New computer software that automatically selects optimal breaths, and the respiratory mechanics derived from those cycles, are evaluated. Retrospective study. University level III neonatal intensive care unit. Ten-minute synchronized intermittent mandatory ventilation and assist/control ventilation recordings from ten newborns. The ventilator provided respiratory mechanic data (ventilator respiratory cycles) every 10 secs. Pressure, flow, and volume waves and pressure-volume, pressure-flow, and volume-flow loops were reconstructed from continuous pressure-volume recordings. Visual assessment determined assisted, leak-free optimal respiratory cycles (selected respiratory cycles). New software graded the quality of cycles (automated respiratory cycles). Respiratory mechanic values were derived from both sets of optimal cycles. We evaluated quality selection and compared mean values and their variability according to ventilatory mode and respiratory mechanic provenance. To assess discriminating power, all 45 "t" values obtained from interpatient comparisons were compared for each respiratory mechanic parameter. A total of 11,724 breaths were evaluated. Agreement between automated respiratory cycle and selected respiratory cycle selections is high: 88% of maximal κ with linear weighting. Specificity and positive predictive values are 0.98 and 0.96, respectively. Averaged values are similar between automated respiratory cycles and ventilator respiratory cycles. C20/C alone is markedly decreased in automated respiratory cycles (1.27 ± 0.37 vs. 1.81 ± 0.67). The apparent similarity in tidal volume disappears in assist/control: automated respiratory cycle tidal volume (4.8 ± 1.0 mL/kg) is significantly lower than that of ventilator respiratory cycles (5.6 ± 1.8 mL/kg). Coefficients of variation decrease for all automated respiratory cycle parameters in all infants.
"t" values from ventilator respiratory cycle data are two to three times higher than ventilator respiratory cycles. Automated selection is highly specific. Automated respiratory cycle reflects most the interaction of both ventilator and patient. Improving discriminating power of ventilator monitoring will likely help in assessing disease status and following trends. Averaged parameters derived from automated respiratory cycles are more precise and could be displayed by ventilators to improve real-time fine tuning of ventilator settings.
Computational design of nanoparticle drug delivery systems for selective targeting
NASA Astrophysics Data System (ADS)
Duncan, Gregg A.; Bevan, Michael A.
2015-09-01
Ligand-functionalized nanoparticles capable of selectively binding to diseased versus healthy cell populations are attractive for improved efficacy of nanoparticle-based drug and gene therapies. However, nanoparticles functionalized with high affinity targeting ligands may lead to undesired off-target binding to healthy cells. In this work, Monte Carlo simulations were used to quantitatively determine net surface interactions, binding valency, and selectivity between targeted nanoparticles and cell surfaces. Dissociation constant, KD, and target membrane protein density, ρR, are explored over a range representative of healthy and cancerous cell surfaces. Our findings show highly selective binding to diseased cell surfaces can be achieved with multiple, weaker affinity targeting ligands that can be further optimized by varying the targeting ligand density, ρL. Using the approach developed in this work, nanomedicines can be optimally designed for exclusively targeting diseased cells and tissues.
Electronic supplementary information (ESI) available: Movie showing simulation renderings of targeted (ρL = 1820/μm2, KD = 120 μM) nanoparticle selective binding to cancer (ρR = 256/μm2) vs. healthy (ρR = 64/μm2) cell surfaces. Target membrane proteins have linear color scale depending on binding energy ranging from white when unbound (URL = 0) to red when tightly bound (URL = UM). See DOI: 10.1039/c5nr03691g
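The central point of the abstract above, that many weak ligand-receptor contacts discriminate receptor-rich from receptor-poor surfaces better than a few strong ones, can be illustrated with a toy avidity model. This is not the paper's Monte Carlo simulation: the independent-contact assumption, the single-bond probability form and all numbers are illustrative only.

```python
def bound_fraction(rho_R, rho_L, K_D, contact_area=1.0):
    """Fraction of particles bound, assuming each of the n potential
    ligand-receptor contacts binds independently with probability
    rho_R / (rho_R + K_D) (a Langmuir-like toy form)."""
    n_contacts = max(1, round(rho_L * contact_area))
    p_single = rho_R / (rho_R + K_D)
    # A particle stays bound if at least one contact holds
    return 1.0 - (1.0 - p_single) ** n_contacts

# Weak ligands (large K_D) on a multivalent particle: receptor-rich
# "diseased" surfaces bind far more particles than receptor-poor ones
diseased = bound_fraction(rho_R=256, rho_L=10, K_D=2000)
healthy = bound_fraction(rho_R=64, rho_L=10, K_D=2000)
print(round(diseased / healthy, 2))
```

Even this crude model shows selectivity emerging from multivalency rather than from per-bond affinity.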
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) procedures used as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of an applied MA. We describe the use of bias detection curves for MA optimization and of MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
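A minimal sketch of the kind of MA procedure being optimized, with truncation limits and control limits, counting how many results are processed before an introduced bias is flagged. All limits and the sodium-like values are invented for illustration; this is not the authors' software.

```python
from collections import deque
from statistics import mean

def ma_results_to_detection(results, bias, window=10,
                            trunc=(130.0, 150.0), limits=(136.0, 142.0)):
    """Return the number of results processed before the moving average of
    the last `window` accepted (truncation-filtered) values falls outside
    `limits`, or None if the bias is never detected."""
    buf = deque(maxlen=window)
    for n, x in enumerate(results, start=1):
        x += bias                      # simulate the introduced bias
        if trunc[0] <= x <= trunc[1]:  # truncation: drop extreme values
            buf.append(x)
        if len(buf) == window:
            ma = mean(buf)
            if not (limits[0] <= ma <= limits[1]):
                return n               # MA alarm fires here
    return None

# Steady sodium-like results around 139 mmol/L; a +4 bias should trip the MA
baseline = [139.0, 138.5, 139.5, 140.0, 138.0] * 20
print(ma_results_to_detection(baseline, bias=4.0))
print(ma_results_to_detection(baseline, bias=0.0))
```

Repeating this simulation over a grid of truncation limits, window sizes and control limits, and plotting the median detection count against the introduced bias, yields exactly the bias detection curves the abstract describes.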
Optimized positioning of autonomous surgical lamps
NASA Astrophysics Data System (ADS)
Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel
2017-03-01
We consider the problem of automatically finding optimal positions for surgical lamps throughout the whole surgical procedure, under the assumption that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, partly conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real-time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization to select the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; this data set is available for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
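The two-tiered scheme above, computing a Pareto front over conflicting objectives and then selecting one solution according to surgeon preferences, can be sketched as follows. The candidate positions, objective values and the simple weighted scalarization are illustrative assumptions, not the paper's implementation:

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o["obj"], c["obj"]) for o in candidates)]

def select(front, movement_weight):
    # Scalarize the two objectives (obstruction, movement) by a
    # surgeon-preference weight and pick the best Pareto solution
    return min(front, key=lambda c: movement_weight * c["obj"][1]
                                    + (1 - movement_weight) * c["obj"][0])

# Hypothetical lamp positions with (obstruction, movement) costs
candidates = [
    {"pos": "A", "obj": (0.10, 0.90)},  # little obstruction, much movement
    {"pos": "B", "obj": (0.40, 0.30)},
    {"pos": "C", "obj": (0.80, 0.10)},  # obstructed, but lamp barely moves
    {"pos": "D", "obj": (0.50, 0.50)},  # dominated by B
]
front = pareto_front(candidates)
print([c["pos"] for c in front])
print(select(front, movement_weight=0.8)["pos"])
```

A surgeon who strongly penalizes lamp movement (weight 0.8) gets the near-stationary solution C; lowering the weight shifts the choice along the front toward better lighting.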