Sample records for simultaneous optimization technique

  1. Multi-rendezvous low-thrust trajectory optimization using costate transforming and homotopic approach

    NASA Astrophysics Data System (ADS)

    Chen, Shiyu; Li, Haiyang; Baoyin, Hexi

    2018-06-01

    This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are the main challenges in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results are then used as an initial guess in the simultaneous optimization of multiple trajectory legs. The limitations of similar techniques in previous research are overcome, and a homotopic approach is employed to improve the convergence efficiency of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.

  2. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible but also advantageous. Such advantages come at the expense of complexities beyond those encountered in structure optimization alone, or control optimization alone: the design parameter space is larger, the optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Areas requiring more efficient tools than are currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed for optimization over combinatoric and continuous variables, and for truncation issues affecting structure and control parameters in both the model space and the design space.

  3. Simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1984-01-01

    Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
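
    The unified (simultaneous analysis and design, or SAND) idea in the abstract above can be sketched on a one-degree-of-freedom toy: treat both the stiffness k (design variable) and the displacement u (response variable) as optimization variables, and enforce the equilibrium equation k*u = F as a penalized equality constraint instead of solving it at every design step. The spring model, load, displacement limit, penalty weight, and coarse grid search below are illustrative assumptions, not from the paper.

```python
def sand_objective(k, u, load=10.0, u_max=0.5, rho=1e4):
    # Simultaneous analysis and design: mass (taken as k here) is the
    # objective, the equilibrium equation k*u = load is a penalized
    # equality constraint, and u <= u_max is a penalized inequality.
    mass = k
    eq = (k * u - load) ** 2
    viol = max(0.0, u - u_max) ** 2
    return mass + rho * (eq + viol)

def sand_grid_search():
    # Coarse search over the joint (design, response) space; a real SAND
    # formulation would hand both variables to a nonlinear programming solver.
    best = None
    for j in range(10, 401):      # k in 1.0 .. 40.0
        k = j / 10.0
        for i in range(1, 101):   # u in 0.01 .. 1.00
            u = i / 100.0
            val = sand_objective(k, u)
            if best is None or val < best[0]:
                best = (val, k, u)
    return best

val, k_opt, u_opt = sand_grid_search()
```

    The optimum sits at the displacement bound (u = 0.5) with the equilibrium satisfied exactly (k = load / u = 20), which is the behavior the unified formulation recovers without a nested analysis loop.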

  4. A technique for locating function roots and for satisfying equality constraints in optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
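
    The Kreisselmeier-Steinhauser aggregation described above can be sketched in a few lines: a smooth KS envelope of max(|f_i|), built over the set {f_i, -f_i}, descends to a minimum exactly at a simultaneous root of all the f_i. The two example functions, the aggregation parameter rho, and the coordinate-search minimizer below are illustrative assumptions, not the paper's algorithm (which merges the idea into nonlinear programming methods).

```python
import math

def ks_aggregate(f_vals, rho=50.0):
    # Smooth KS envelope of max(|f_i|): aggregate over {f_i, -f_i}.
    # The shift m guards against overflow; the result is minimized
    # exactly where every f_i vanishes.
    m = max(max(f_vals), -min(f_vals))
    s = sum(math.exp(rho * (v - m)) + math.exp(rho * (-v - m)) for v in f_vals)
    return m + math.log(s) / rho

def funcs(x):
    # Hypothetical pair of functions with a simultaneous root at x = (1, 2).
    return [x[0] ** 2 - 1.0, x[0] * x[1] - 2.0]

def minimize_ks(x, step=0.5, tol=1e-8):
    # Simple coordinate search on the KS aggregate (illustrative only).
    best = ks_aggregate(funcs(x))
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                val = ks_aggregate(funcs(trial))
                if val < best:
                    best, x, improved = val, trial, True
        if not improved:
            step *= 0.5
    return x

root = minimize_ks([0.5, 0.5])
```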

  5. A technique for locating function roots and for satisfying equality constraints in optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1992-01-01

    A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.

  6. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose a method for simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system; it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are obtained simultaneously based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization is a promising tool for identifying the model parameters of an HMLVS, whereas the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a stable solution, and it can be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
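
    The contrast drawn above — a local least-squares solver trapped by a multimodal cost landscape versus a population-based global search — can be illustrated with a minimal particle swarm optimizer. The Rastrigin surface stands in for the HMLVS calibration landscape, and the constriction coefficients are the standard textbook values; none of this is the authors' implementation.

```python
import math
import random

def rastrigin(x):
    # Standard multimodal test surface (a stand-in for the HMLVS cost landscape).
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def pso(f, dim=2, n_particles=30, iters=200, bounds=(-5.12, 5.12), seed=1):
    # Minimal global-best PSO with the standard constriction coefficients.
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7298 * vs[i][d]
                            + 1.4962 * r1 * (pbest[i][d] - xs[i][d])
                            + 1.4962 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            v = f(xs[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, list(xs[i])
                if v < gval:
                    gval, gbest = v, list(xs[i])
    return gbest, gval

best, val = pso(rastrigin)
```

    Unlike a gradient-based least-squares step, the swarm needs no initial guess near the true optimum; its personal-best and global-best attractors let it escape the many local minima of the surface.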

  7. Data Capture Technique for High Speed Signaling

    DOEpatents

    Barrett, Wayne Melvin; Chen, Dong; Coteus, Paul William; Gara, Alan Gene; Jackson, Rory; Kopcsay, Gerard Vincent; Nathanson, Ben Jesse; Vranas, Paylos Michael; Takken, Todd E.

    2008-08-26

    A data capture technique for high speed signaling to allow for optimal sampling of an asynchronous data stream. This technique allows for extremely high data rates and does not require that a clock be sent with the data as is done in source synchronous systems. The present invention also provides a hardware mechanism for automatically adjusting transmission delays for optimal two-bit simultaneous bi-directional (SiBiDi) signaling.

  8. Hybrid computer optimization of systems with random parameters

    NASA Technical Reports Server (NTRS)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  9. Simultaneous polarized neutron reflectometry and anisotropic magnetoresistance measurements.

    PubMed

    Demeter, J; Teichert, A; Kiefer, K; Wallacher, D; Ryll, H; Menéndez, E; Paramanik, D; Steitz, R; Van Haesendonck, C; Vantomme, A; Temst, K

    2011-03-01

    A novel experimental facility to carry out simultaneous polarized neutron reflectometry (PNR) and anisotropic magnetoresistance (AMR) measurements is presented. Performing both techniques at the same time increases their strength considerably. The proof of concept of this method is demonstrated on a CoO/Co bilayer exchange bias system. Although information on the same phenomena, such as the coercivity or the reversal mechanism, can be separately obtained from either of these techniques, the simultaneous application optimizes the consistency between both. In this way, possible differences in experimental conditions, such as applied magnetic field amplitude and orientation, sample temperature, magnetic history, etc., can be ruled out. Consequently, only differences in the fundamental sensitivities of the techniques can cause discrepancies in the interpretation between the two. The almost instantaneous information obtained from AMR can be used to reveal time-dependent effects during the PNR acquisition. Moreover, the information inferred from the AMR measurements can be used for optimizing the experimental conditions for the PNR measurements in a more efficient way than with the PNR measurements alone.

  10. Simultaneous beam sampling and aperture shape optimization for SPORT.

    PubMed

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, providing a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles along a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally with a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework seamlessly integrating three complementary algorithms (column generation, the subgradient method, and pattern search) was established. The proposed technique was applied to two previously treated clinical cases: a head-and-neck case and a prostate case. It significantly improved both target conformality and critical-structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head-and-neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem max doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.
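
    The column-generation loop described in this abstract — price every candidate station, add the most beneficial one, re-optimize the weights of the selected set, and stop when no candidate can still improve the objective — can be illustrated on a toy least-squares dose-matching problem. The candidate columns, the pricing rule, and the projected-gradient inner solver below are illustrative assumptions, not the authors' implementation.

```python
def column_generation(columns, target, max_stations=5, inner_iters=500, lr=0.01):
    # Toy dose matching: pick columns (stations) and nonnegative weights so
    # that sum_j w_j * a_j approximates `target` in the least-squares sense.
    chosen, weights = [], []

    def residual():
        # r = (current weighted sum) - target
        r = [-t for t in target]
        for w, j in zip(weights, chosen):
            for k, a in enumerate(columns[j]):
                r[k] += w * a
        return r

    for _ in range(max_stations):
        r = residual()
        # Pricing step: the gradient of ||.||^2 w.r.t. a new weight is
        # 2 * <a_j, r>; the most beneficial column is the most negative one.
        scores = {j: sum(a * rk for a, rk in zip(columns[j], r))
                  for j in range(len(columns)) if j not in chosen}
        if not scores or scores[min(scores, key=scores.get)] >= -1e-6:
            break  # plan quality improvement has saturated
        chosen.append(min(scores, key=scores.get))
        weights.append(0.0)
        # Re-optimize the selected weights by projected gradient (w >= 0).
        for _ in range(inner_iters):
            r = residual()
            for i, j in enumerate(chosen):
                g = 2.0 * sum(a * rk for a, rk in zip(columns[j], r))
                weights[i] = max(0.0, weights[i] - lr * g)
    return chosen, weights

cols = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
chosen, weights = column_generation(cols, [2.0, 2.0, 0.0])
```

    On this toy problem the pricing step immediately selects the combined column [1, 1, 0] with weight 2, and the saturation test then stops the loop: one station suffices, mirroring how the method "automatically determines the number of the stations required."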

  11. Simultaneous beam sampling and aperture shape optimization for SPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, providing a good starting point for the subsequent optimization; it also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles along a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally with a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework seamlessly integrating three complementary algorithms (column generation, the subgradient method, and pattern search) was established. The proposed technique was applied to two previously treated clinical cases: a head-and-neck case and a prostate case. It significantly improved both target conformality and critical-structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head-and-neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem max doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. Conclusions: The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.

  12. Simultaneous structural and control optimization via linear quadratic regulator eigenstructure assignment

    NASA Technical Reports Server (NTRS)

    Becus, G. A.; Lui, C. Y.; Venkayya, V. B.; Tischler, V. A.

    1987-01-01

    A method for the simultaneous structural and control design of large flexible space structures (LFSS) to reduce vibration generated by disturbances is presented. Desired natural frequencies and damping ratios for the closed-loop system are achieved by using a combination of linear quadratic regulator (LQR) synthesis and numerical optimization techniques. The state and control weighting matrices (Q and R) are expressed in terms of structural parameters such as mass and stiffness. The design parameters are selected by numerical optimization so as to minimize the weight of the structure and to achieve the desired closed-loop eigenvalues. An illustrative example of the design of a two-bar truss is presented.
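
    The link between the LQR weights and the closed-loop eigenvalues, which the abstract exploits, is explicit in the scalar case: for dx/dt = a*x + b*u with cost integrand q*x^2 + r*u^2, the algebraic Riccati equation has a closed form, so the weight q can be solved for directly from a desired pole. The scalar system and the numbers below are illustrative assumptions, not the paper's truss model.

```python
import math

def scalar_lqr_pole(a, b, q, r=1.0):
    # Scalar CARE  2*a*P - (b**2/r)*P**2 + q = 0; its stabilizing root gives
    # gain K = b*P/r and closed-loop pole a - b*K = -sqrt(a**2 + q*b**2/r).
    P = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    K = b * P / r
    return a - b * K

def q_for_target_pole(a, b, target, r=1.0):
    # Invert the pole formula: choose the state weight q that places the
    # closed-loop eigenvalue at `target` (target < 0).
    return (target * target - a * a) * r / (b * b)

q = q_for_target_pole(1.0, 1.0, -2.0)
pole = scalar_lqr_pole(1.0, 1.0, q)
```

    In the paper's setting Q and R additionally depend on structural parameters, so the same inversion becomes part of the outer numerical optimization rather than a one-line formula.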

  13. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s(2) plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.

  14. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, M; Li, R; Xing, L

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans as compared with conventional VMAT or IMRT treatments.

  15. Optimization of the volume reconstruction for classical Tomo-PIV algorithms (MART, BIMART and SMART): synthetic and experimental studies

    NASA Astrophysics Data System (ADS)

    Thomas, L.; Tremblais, B.; David, L.

    2014-03-01

    Optimization of the multiplicative algebraic reconstruction technique (MART), simultaneous MART (SMART), and block-iterative MART (BIMART) was carried out on synthetic and experimental data. Different criteria were defined to improve the preprocessing of the initial images. Knowing how each reconstruction parameter influences the quality of the particle volume reconstruction and the computing time is key in Tomo-PIV. These criteria were applied to a real case, a jet in cross-flow, and were validated.
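
    The MART update at the heart of these reconstruction techniques is compact: each measurement row applies a multiplicative correction x_j <- x_j * (b_i / <A_i, x>)^(mu * A_ij) to the voxel intensities, keeping them positive by construction. The two-voxel, three-ray toy system below is an illustrative assumption, not the paper's tomographic setup.

```python
def mart(A, b, sweeps=5, mu=1.0):
    # Multiplicative ART: start from a uniform positive field and apply the
    # row-wise correction x_j <- x_j * (b_i / <A_i, x>) ** (mu * A_ij).
    n = len(A[0])
    x = [1.0] * n
    for _ in range(sweeps):
        for Ai, bi in zip(A, b):
            proj = sum(a * xj for a, xj in zip(Ai, x))
            if proj <= 0.0 or bi <= 0.0:
                continue  # MART requires positive data and projections
            ratio = bi / proj
            x = [xj * ratio ** (mu * a) for a, xj in zip(Ai, x)]
    return x

# Two "voxels" seen by three "rays": a consistent system with solution (2, 3).
x = mart([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [2.0, 3.0, 5.0])
```

    The relaxation parameter mu and the number of sweeps are exactly the kind of reconstruction parameters whose influence on quality and computing time the study quantifies; SMART and BIMART differ mainly in applying the corrections simultaneously or block by block.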

  16. Pilot-optimal augmentation synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1978-01-01

    An augmentation synthesis method usable in the absence of quantitative handling qualities specifications, and yet explicitly including design objectives based on pilot-rating concepts, is presented. The algorithm involves the unique approach of simultaneously solving for the stability augmentation system (SAS) gains, pilot equalization and pilot rating prediction via optimal control techniques. Simultaneous solution is required in this case since the pilot model (gains, etc.) depends upon the augmented plant dynamics, and the augmentation is obviously not a priori known. Another special feature is the use of the pilot's objective function (from which the pilot model evolves) to design the SAS.

  17. Simultaneous Voltammetric Detection of Carbaryl and Paraquat Pesticides on Graphene-Modified Boron-Doped Diamond Electrode

    PubMed Central

    Pop, Aniela; Manea, Florica; Flueras, Adriana; Schoonman, Joop

    2017-01-01

    Monitoring of pesticide residues in food, beverages, and the environment requires fast, versatile, and sensitive analyzing methods. Direct electrochemical detection of pesticides could represent an efficient solution. Adequate electrode material, electrochemical technique, and optimal operation parameters define the detection method for practical application. In this study, cyclic voltammetric and differential pulse voltammetric techniques were used in order to individually and simultaneously detect two pesticides, i.e., carbaryl (CR) and paraquat (PQ), from an acetate buffer solution and also from natural apple juice. A graphene-modified boron-doped diamond electrode, denoted BDDGR, was obtained and successfully applied in the simultaneous detection of CR and PQ pesticides, using the differential pulse voltammetric technique with remarkable electroanalytical parameters in terms of sensitivity: 33.27 μA μM−1 cm−2 for CR and 31.83 μA μM−1 cm−2 for PQ. These outstanding results obtained in the acetate buffer supporting electrolyte allowed us to simultaneously detect the targeted pesticides in natural apple juice. PMID:28878151

  18. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    NASA Astrophysics Data System (ADS)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective genetic algorithms (MGA) have previously been proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective, multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing the operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
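
    Pareto optimality, the organizing concept of the abstract above, reduces to a simple dominance test: a candidate stays on the front unless some other candidate is at least as good in every objective and strictly better in one. A minimal sketch (maximization in all objectives assumed; the sample points are hypothetical, not the study's solutions):

```python
def dominates(q, p):
    # q dominates p when q is at least as good in every objective
    # and strictly better in at least one (maximization assumed).
    return all(a >= b for a, b in zip(q, p)) and any(a > b for a, b in zip(q, p))

def pareto_front(points):
    # Keep exactly the nondominated points.
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 2), (2, 1), (2, 2), (0, 0)])
```

    An MGA maintains a population and repeatedly applies this ranking (plus diversity pressure) so the survivors spread along the front instead of collapsing to a single weighted-sum optimum.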

  19. Shape and Reinforcement Optimization of Underground Tunnels

    NASA Astrophysics Data System (ADS)

    Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang

    Design of the support system and selection of an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, shape selection and support design are based mainly on the designer's judgment and experience. Both problems can be viewed as material distribution problems, in which one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved useful in solving these kinds of problems in structural design. Recently, the application of topology optimization techniques to reinforcement design around underground excavations has been studied by several researchers. In this paper a three-phase material model is introduced, switching between normal rock, reinforced rock, and void. Using such a material model, the problems of shape and reinforcement design can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique is extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. The validity and capability of the proposed approach are investigated through several examples.
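
    A single BESO update can be sketched as sensitivity-ranked switching of elements between material states while the kept volume fraction moves gradually toward its target. The scalar sensitivities, the two-state simplification (solid/void rather than the paper's three phases), and the evolution rate below are illustrative assumptions.

```python
def beso_update(sens, design, target_frac, er=0.2):
    # One BESO step: rank elements by sensitivity and switch states so the
    # kept volume fraction moves toward target_frac by at most `er` per step.
    n = len(design)
    cur = sum(design) / n
    if cur > target_frac:
        new_frac = max(target_frac, cur * (1.0 - er))
    else:
        new_frac = min(target_frac, cur * (1.0 + er))
    keep = round(new_frac * n)
    order = sorted(range(n), key=lambda i: sens[i], reverse=True)
    new_design = [0] * n
    for i in order[:keep]:
        new_design[i] = 1  # highest-sensitivity elements stay (or become) solid
    return new_design

# Five elements, all solid; evolve toward a 40% volume fraction.
design = beso_update([5.0, 1.0, 4.0, 2.0, 3.0], [1, 1, 1, 1, 1], 0.4)
```

    Iterating this step with sensitivities recomputed from a finite-element analysis is what lets the method carve out the opening shape and place reinforcement in one loop.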

  20. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
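
    The adjoint-variable idea the abstract relies on can be shown on a scalar toy: with a state equation R(u, p) = 0 and objective J(u, p), one adjoint solve replaces a flow re-solve per design variable, via lambda = -(dJ/du)/(dR/du) and dJ/dp = dJ/dp|_explicit + lambda * dR/dp. The scalar "flow" equation and objective below are hypothetical stand-ins, not the paper's nozzle model.

```python
def solve_state(p):
    # Hypothetical scalar "flow" equation: R(u, p) = u - p**2 = 0.
    return p * p

def adjoint_gradient(p):
    # Adjoint gradient of J(u) = u**2 subject to R(u, p) = 0:
    #   solve (dR/du) * lam = -dJ/du, then dJ/dp = dJ/dp|_explicit + lam * dR/dp.
    u = solve_state(p)
    dJ_du, dR_du = 2.0 * u, 1.0
    lam = -dJ_du / dR_du
    dR_dp = -2.0 * p
    return 0.0 + lam * dR_dp  # J has no explicit p-dependence here

grad = adjoint_gradient(1.5)   # analytic check: d(p**4)/dp = 4 * p**3 = 13.5
```

    The payoff scales with dimension: the adjoint system is solved once per objective, independently of how many design variables p contains, which is what "nullifies the effect of the increased number of design variables."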

  1. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probabilistic classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate an optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
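
    The threshold-generation step above — sweep a grid of candidate thresholds, convert probability outputs into binary mode decisions at each, and keep the threshold with the best F1-measure — can be sketched directly. The toy probabilities and labels below are illustrative assumptions, not the paper's final drive data.

```python
def f1_score(pred, true):
    # Micro-averaged F1 over a flat list of (prediction, label) pairs.
    tp = sum(1 for p, t in zip(pred, true) if p and t)
    fp = sum(1 for p, t in zip(pred, true) if p and not t)
    fn = sum(1 for p, t in zip(pred, true) if not p and t)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2.0 * prec * rec / (prec + rec)

def grid_search_threshold(probs, labels, steps=100):
    # Exhaustive grid over the decision threshold that converts classifier
    # probability outputs into mode decisions, keeping the best-F1 threshold.
    best_t, best_f1 = 0.0, -1.0
    for k in range(steps + 1):
        t = k / steps
        pred = [p >= t for p in probs]
        score = f1_score(pred, labels)
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t, best_f1

t, f1 = grid_search_threshold([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

    Because the grid covers the whole [0, 1] range, this search cannot be trapped the way a local threshold update can, which is the "global optimization" advantage the abstract claims for grid search.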

  2. Strategies of experiment standardization and response optimization in a rat model of hemorrhagic shock and chronic hypertension.

    PubMed

    Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne

    2010-04-01

    Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. A response surface regression model was applied to determine optimal levels of two inputs, dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, 8 weeks), to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing a systolic blood pressure of 208 mmHg, a critical oxygen delivery of 4.03 mL/min, and an M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods help ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
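
    The "desirability" aggregation mentioned above maps each response onto a 0-1 scale and combines the scales by a geometric mean, so a single number can be maximized in place of three conflicting responses. The larger-is-better ramp and the specific bounds below follow the common Derringer-Suich form as an illustrative assumption, not the study's exact specification.

```python
def desirability_max(y, lo, hi, s=1.0):
    # Larger-is-better desirability: 0 below `lo`, 1 above `hi`,
    # a power ramp in between (Derringer-Suich form).
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** s

def overall_desirability(ds):
    # Geometric mean of the per-response desirabilities; any response at
    # zero vetoes the whole candidate, which drives the joint optimization.
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

D = overall_desirability([desirability_max(5.0, 0.0, 10.0),
                          desirability_max(7.5, 0.0, 10.0)])
```

    The geometric mean is the key design choice: unlike an arithmetic average, it forces every response to be acceptable at once, which matches the study's goal of simultaneously realistic blood pressure, oxygen delivery, and body mass.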

  3. Low order H∞ optimal control for ACFA blended wing body aircraft

    NASA Astrophysics Data System (ADS)

    Haniš, T.; Kucera, V.; Hromčík, M.

    2013-12-01

    Advanced nonconvex, nonsmooth optimization techniques for fixed-order H∞ robust control are proposed in this paper for the design of flight control systems (FCS) with a prescribed structure. Compared to classical techniques, i.e., tuning and successive closure of particular single-input single-output (SISO) loops such as dampers, attitude stabilizers, etc., all loops are designed simultaneously by means of quite intuitive weighting filter selection. In contrast to standard optimization techniques (H2, H∞ optimization), the resulting controller respects the prescribed structure in terms of engaged channels and orders (e.g., proportional (P), proportional-integral (PI), and proportional-integral-derivative (PID) controllers). In addition, robustness with regard to multimodel uncertainty is addressed, which is of great importance for aerospace applications. In this way, robust controllers for various Mach numbers, altitudes, or mass cases can be obtained directly, based only on the mathematical models for the respective combinations of flight parameters.

  4. Design Optimization of Composite Structures under Uncertainty

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2003-01-01

    Design optimization under uncertainty is computationally expensive and is also challenging in terms of alternative formulation. The work under the grant focused on developing methods for design against uncertainty that are applicable to composite structural design with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and simultaneous design of structure and inspection periods for fail-safe structures.

  5. Simultaneous deterministic control of distant qubits in two semiconductor quantum dots.

    PubMed

    Gamouras, A; Mathew, R; Freisem, S; Deppe, D G; Hall, K C

    2013-10-09

    In optimal quantum control (OQC), a target quantum state of matter is achieved by tailoring the phase and amplitude of the control Hamiltonian through femtosecond pulse-shaping techniques and powerful adaptive feedback algorithms. Motivated by recent applications of OQC in quantum information science as an approach to optimizing quantum gates in atomic and molecular systems, here we report the experimental implementation of OQC in a solid-state system consisting of distinguishable semiconductor quantum dots. We demonstrate simultaneous high-fidelity π and 2π single qubit gates in two different quantum dots using a single engineered infrared femtosecond pulse. These experiments enhance the scalability of semiconductor-based quantum hardware and lay the foundation for applications of pulse shaping to optimize quantum gates in other solid-state systems.
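
    The π and 2π single-qubit gates demonstrated above correspond to resonant Rabi rotations whose pulse area fixes the final state population: a π pulse fully inverts the qubit, a 2π pulse returns it to the ground state. A minimal sketch assuming an ideal resonant rotation about the x axis (the shaped infrared pulses and two-dot selectivity of the experiment are not modeled):

```python
import math

def rabi_gate(theta):
    # Resonant Rabi rotation by pulse area `theta` about the x axis:
    # U = [[cos(t/2), -i sin(t/2)], [-i sin(t/2), cos(t/2)]].
    c, s = math.cos(theta / 2.0), math.sin(theta / 2.0)
    return [[c, -1j * s], [-1j * s, c]]

def inverted_population(theta):
    # Probability that a qubit starting in |0> ends in |1> after the pulse.
    U = rabi_gate(theta)
    return abs(U[1][0]) ** 2

pi_flip = inverted_population(math.pi)        # pi pulse: complete inversion
two_pi = inverted_population(2.0 * math.pi)   # 2*pi pulse: back to |0>
```

    The experimental challenge the paper addresses is achieving both areas simultaneously in two distinguishable dots from one engineered pulse, which is where the adaptive pulse shaping of OQC enters.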

  6. SU-F-T-387: A Novel Optimization Technique for Field in Field (FIF) Chestwall Radiation Therapy Using a Single Plan to Improve Delivery Safety and Treatment Planning Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabibian, A; Kim, A; Rose, J

    Purpose: A novel optimization technique was developed for field-in-field (FIF) chestwall radiotherapy using bolus every other day. Its dosimetry was compared to the currently used optimization. Methods: The prior five patients treated at our clinic to the chestwall and supraclavicular nodes with a mono-isocentric four-field arrangement were selected for this study. The prescription was 5040 cGy in 28 fractions, with 5 mm bolus every other day on the tangent fields, 6 and/or 10 MV x-rays, and multileaf collimation. As a novel step, the tangent FIF segments were forward-planned and optimized based on the composite bolus and non-bolus dose distributions simultaneously. The prescription was split into 14 fractions for both bolus and non-bolus tangents. The same segments and monitor units were used for the bolus and non-bolus treatment. The plan was optimized until the desired coverage was achieved, 105% hotspots were minimized, and the maximum dose was below 108%. Each tangential field had fewer than 5 segments. Comparison plans were generated using FIF optimization with the same dosimetric goals, but using only the non-bolus calculation for FIF optimization. The non-bolus fields were then copied and bolus was applied. The same segments and monitor units were used for the bolus and non-bolus segments. Results: The prescription coverage of the chestwall, as defined by RTOG guidelines, was on average 51.8% for the plans that optimized bolus and non-bolus treatments simultaneously (SB) and 43.8% for the plans optimized to the non-bolus treatments (NB). Chestwall coverage at 90% of the prescription averaged 80.4% for SB and 79.6% for NB plans. The volume receiving 105% of the prescription was 1.9% for SB and 0.8% for NB plans on average. Conclusion: Simultaneously optimizing for bolus and non-bolus treatments noticeably improves prescription coverage of the chestwall while maintaining similar hotspots and 90% prescription coverage in comparison to optimizing only to non-bolus treatments.

  7. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over large thermal gradients and thus provide better performance (reported efficiency up to 11%) than traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against those obtained using the traditional full-factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
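
    The orthogonal-array reasoning behind the 25-versus-3125 reduction can be sketched in miniature. The toy below uses a hypothetical L9-style array (3 factors at 3 levels: 9 runs instead of the 27 a full factorial needs) with made-up output-power data, and picks the best level of each factor from Taguchi "larger-the-better" signal-to-noise ratios; it illustrates the method, not the paper's TEG model.

```python
import math

# Hypothetical L9 orthogonal array: 3 factors at 3 levels, 9 runs.
# (The paper uses an L25 array: 5 factors at 5 levels, 25 vs 3125 runs.)
runs = [
    # (levels of factors A, B, C), measured output power (arbitrary units)
    ((0, 0, 0), 4.0), ((0, 1, 1), 5.8), ((0, 2, 2), 4.8),
    ((1, 0, 1), 5.8), ((1, 1, 2), 6.3), ((1, 2, 0), 5.5),
    ((2, 0, 2), 6.3), ((2, 1, 0), 7.0), ((2, 2, 1), 7.3),
]

def sn_larger_is_better(y):
    # Taguchi "larger-the-better" signal-to-noise ratio for a single run.
    return -10.0 * math.log10(1.0 / y ** 2)

# Average S/N per (factor, level); the best level is the one with the
# highest mean S/N -- the Taguchi "main effects" analysis.
n_factors, n_levels = 3, 3
best = []
for f in range(n_factors):
    means = []
    for lvl in range(n_levels):
        vals = [sn_larger_is_better(y) for levels, y in runs if levels[f] == lvl]
        means.append(sum(vals) / len(vals))
    best.append(max(range(n_levels), key=lambda l: means[l]))

print(best)  # -> [2, 1, 1]: near-optimum level index for each factor
```

ANOVA would additionally apportion the S/N variance among the factors to rank their influence; only the level-selection step is shown here.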

  8. Debiasing comparative optimism and increasing worry for health outcomes.

    PubMed

    Rose, Jason P

    2012-11-01

    Comparative optimism - feeling at less personal risk for negative outcomes than one's peers - has been linked to reduced prevention efforts. This study examined a novel debiasing technique aimed at simultaneously reducing both indirectly and directly measured comparative optimism. Before providing direct comparative estimates, participants provided absolute self and peer estimates in a joint format (same computer screen) or a separate format (different computer screens). Relative to the separate format condition, participants in the joint format condition showed (1) lower comparative optimism in absolute/indirect measures, (2) lower direct comparative optimism, and (3) heightened worry. Implications for risk perception screening are discussed.

  9. Optimization of brushless direct current motor design using an intelligent technique.

    PubMed

    Shabanian, Alireza; Tousiwas, Armin Amini Poustchi; Pourmandi, Massoud; Khormali, Aminollah; Ataei, Abdolhay

    2015-07-01

    This paper presents a method for the optimal design of a slotless permanent-magnet brushless DC (BLDC) motor with surface-mounted magnets using an improved bee algorithm (IBA). The characteristics of the motor are expressed as functions of the motor geometries. The objective function is a combination of losses, volume, and cost to be minimized simultaneously. This method is based on the capability of swarm-based algorithms to find the optimal solution. One sample case is used to illustrate the performance of the design approach and optimization technique. The IBA has better performance and speed of convergence than the bee algorithm (BA). Simulation results show that the proposed method performs very efficiently. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type neuro-fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of group hunting in animals such as lions, wolves, and dolphins searching for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle, so the optimal structure and parameters are obtained simultaneously. The proposed method was demonstrated on modeling and control problems, and the results were compared with other optimization techniques. The comparisons indicate that the proposed method is a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.

  11. The art of spacecraft design: A multidisciplinary challenge

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Levine, M.; Austel, L.

    1989-01-01

    Design turn-around time has become shorter due to the optimization techniques that have been introduced into the design process. What, how, and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by simple mathematical equations. The new, powerful multilevel methodology significantly reduces time-consuming analysis while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of the input variables. The use of Taylor series expansions and finite-differencing techniques for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant ones. In this study, current computational fluid dynamics (CFD) aerodynamic and sensitivity-derivative/optimization techniques are applied to a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.

  12. The Auburn Engineering Technical Assistance Program investigation of polyvinyl alcohol film developments pertaining to radioactive particle decontamination and industrial waste minimization

    NASA Astrophysics Data System (ADS)

    Mole, Tracey Lawrence

    In this work, an effective and systematic model is devised to synthesize the optimal formulation for an explicit engineering application in the nuclear industry, i.e., radioactive decontamination and waste reduction. Identification of an optimal formulation that is suitable for the desired system requires integration of all the interlacing behaviors of the product constituents. This work is unique not only in product design, but also in its design techniques. The common practice in new product development is to design the optimized product for a particular industrial niche; subsequent research on the production process is then conducted, developed, and optimized separately from the product formulation. In the proposed optimization design technique, the development process, disposal technique, and product formulation are optimized simultaneously to improve production profit, product behavior, and disposal emissions. This "cradle to grave" optimization approach allowed a complex product formulation development process to be drastically simplified. The use of these modeling techniques took an industrial idea to full-scale testing and production in under 18 months by reducing the number of subsequent laboratory trials required to optimize the formula, production, and waste treatment aspects of the product simultaneously. This particular development material involves the use of a polymer matrix that is applied to surfaces as part of a decontamination system. The polymer coating serves to initially "fix" the contaminants in place for detection and ultimate elimination. Upon mechanical entrapment and removal, the polymer coating containing the radioactive isotopes can be dissolved in a solvent processor, where separation of the radioactive metallic particles can take place. Ultimately, only the collection of divided solids should be disposed of as nuclear waste. This creates an attractive alternative to direct landfilling or incineration. 
This philosophy also provides waste generators a way to significantly reduce waste and associated costs, and helps meet regulatory, safety, and environmental requirements. In order for the polymeric film to exhibit the desired performance, a combination of discrete constraints must be fulfilled. These interacting characteristics include the choice of polymer used for construction, drying time, storage constraints, decontamination ability, removal behavior, application process, coating strength, and dissolvability processes. Identification of an optimized formulation that is suitable for this entire decontamination system requires integration of all the interlacing characteristics of the coating composition that affect the film behavior. A novel systematic method for developing quantitative values for these qualitative characteristics is being developed in order to simultaneously optimize the design formulation subject to the discrete product specifications. This synthesis procedure encompasses intrinsic characteristics vital to successful product development, which allows the derived model optimizations to operate independently of the polymer film application. This contribution illustrates an optimized synthesis example that can be extended to a large range of polymeric compounds and mixtures. (Abstract shortened by UMI.)

  13. Technical note: Simultaneous carotenoid and vitamin analysis of milk from total mixed ration-fed cows optimized for xanthophyll detection.

    PubMed

    Stout, M A; Benoist, D M; Drake, M A

    2018-06-01

    Concentrations of retinol, α-tocopherol, and major carotenoids in dairy products are often determined simultaneously by liquid chromatography. These compounds have different polarity and solubility; thus, extracting them simultaneously can be difficult and inefficient. In milks with low carotenoid concentrations, the xanthophylls lutein and zeaxanthin may not be completely resolved using common extraction techniques. A simplified method was developed to optimize the extraction efficiency and the limit of detection (LoD) and limit of quantification (LoQ) of lutein and zeaxanthin in bovine milk without decreasing sensitivity to other vitamins or carotenoids. The developed method evaluates lutein, zeaxanthin, β-carotene, retinol, and α-tocopherol simultaneously by ultra-high performance liquid chromatography-photodiode array detection. Common saponification temperatures (40-60°C) and concentrations of KOH in water (10-50% KOH wt/vol) were evaluated. Multiple solvents were evaluated for optimal xanthophyll extraction (diethyl ether, dichloromethane, hexane, and tetrahydrofuran) following saponification. The LoD and LoQ were defined as 3:1 and 10:1 signal-to-noise ratios, respectively. All experiments were performed in triplicate. The optimal saponification procedure was a concentration of 25% KOH at either 40 or 50°C. Saponified extracts solubilized in solutions containing diethyl ether had greater concentrations of lutein than hexane- or tetrahydrofuran-based solutions, with peak areas above LoQ values. The solution containing diethyl ether solubilized similar concentrations of retinol, α-tocopherol, and β-carotene when compared with other solutions. The proposed optimized method allows for the simultaneous determination of carotenoids from milk with increased lutein and zeaxanthin sensitivity without sacrificing recovery of retinol, α-tocopherol, and β-carotene. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shankar; Mitra, Ankan

    2018-05-01

    Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.

  15. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective techniques with complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained, multiple-objective-function problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.

  16. Inter-slice Leakage Artifact Reduction Technique for Simultaneous Multi-Slice Acquisitions

    PubMed Central

    Cauley, Stephen F.; Polimeni, Jonathan R.; Bhat, Himanshu; Wang, Dingxin; Wald, Lawrence L.; Setsompop, Kawin

    2015-01-01

    Purpose Controlled aliasing techniques for simultaneously acquired EPI slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging (DWI) and fMRI studies. The “slice-GRAPPA” (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Methods Split slice-GRAPPA (SP-SG) is proposed as an alternative kernel optimization method. The performance of SP-SG is compared to standard SG using data collected on a spherical phantom and in-vivo on two subjects at 3T. Slice accelerated and non-accelerated data were collected for a spin-echo diffusion weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. Results The SP-SG optimization strategy significantly reduces leakage artifacts for both phantom and in-vivo acquisitions. In addition, a significant boost in time-series SNR for in-vivo diffusion weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the SP-SG approach. Conclusion By minimizing the influence of leakage artifacts during the training of slice-GRAPPA kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. PMID:23963964

  17. Near-optimal strategies for sub-decimeter satellite tracking with GPS

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P.; Wu, Sien-Chong; Wu, Jiun-Tsong

    1986-01-01

    Decimeter tracking of low Earth orbiters using differential Global Positioning System (GPS) techniques is discussed. A precisely known global network of GPS ground receivers and a receiver aboard the user satellite are needed, and all techniques simultaneously estimate the user and GPS satellite orbits. Strategies include a purely geometric, a fully dynamic, and a hybrid strategy. The last combines dynamic GPS solutions with a geometric user solution. Two powerful extensions of the hybrid strategy show the most promise. The first uses an optimized synthesis of dynamics and geometry in the user solution, while the second uses a gravity adjustment method to exploit data from repeat ground tracks. These techniques promise to deliver subdecimeter accuracy down to the lowest satellite altitudes.

  18. Optimized mode-field adapter for low-loss fused fiber bundle signal and pump combiners

    NASA Astrophysics Data System (ADS)

    Koška, Pavel; Baravets, Yauhen; Peterka, Pavel; Písařík, Michael; Bohata, Jan

    2015-03-01

    In this contribution we report a novel mode-field adapter incorporated inside a bundled tapered pump-and-signal combiner. Pump and signal combiners are crucial components of contemporary double-clad high-power fiber lasers. The proposed combiner allows simultaneous matching to a single-mode core at both input and output. We used advanced optimization techniques to achieve this matching and to minimize the losses in the combiner's signal branch. We designed two arrangements of the combiner's mode-field adapters. Our numerical simulations estimate the losses in the signal branches of the optimized combiners at 0.23 dB for the first design and 0.16 dB for the second design, for an SMF-28 input fiber and an SMF-28-matched output double-clad fiber at a wavelength of 2000 nm. The splice losses of the actual combiner are expected to be even lower thanks to dopant diffusion during the splicing process.

  19. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
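
    The joint-search idea can be sketched in miniature: score every combination of a preprocessing choice and a classifier parameter against labelled data, rather than tuning each stage in isolation. Everything below (the 1-D "image" features, the preprocessor names, the threshold classifier) is a hypothetical stand-in, not the Auto-SEIA implementation.

```python
from itertools import product

# Toy labelled dataset: 1-D intensity features with binary labels (hypothetical).
data = [(0.1, 0), (0.2, 0), (0.35, 0), (0.55, 1),
        (0.6, 1), (0.9, 1), (0.4, 0), (0.7, 1)]

# Joint configuration space: a preprocessing choice and a classifier
# parameter, searched together rather than one stage at a time.
preprocessors = {"identity": lambda v: v, "square": lambda v: v * v}
thresholds = [0.2, 0.3, 0.4, 0.5]

def accuracy(pre, thr):
    # Classify each sample as 1 if its preprocessed feature exceeds thr.
    preds = [(1 if preprocessors[pre](x) > thr else 0) for x, _ in data]
    return sum(p == y for p, (_, y) in zip(preds, data)) / len(data)

# Exhaustive search over the combined space; real systems would use an
# evolutionary or randomized search for larger spaces.
best = max(product(preprocessors, thresholds), key=lambda cfg: accuracy(*cfg))
print(best, accuracy(*best))  # -> ('identity', 0.4) 1.0
```
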

  20. Optimization of binary thermodynamic and phase diagram data

    NASA Astrophysics Data System (ADS)

    Bale, Christopher W.; Pelton, A. D.

    1983-03-01

    An optimization technique based upon least squares regression is presented to permit the simultaneous analysis of diverse experimental binary thermodynamic and phase diagram data. Coefficients of polynomial expansions for the enthalpy and excess entropy of binary solutions are obtained which can subsequently be used to calculate the thermodynamic properties or the phase diagram. In an interactive computer-assisted analysis employing this technique, one can critically analyze a large number of diverse data in a binary system rapidly, in a manner which is fully self-consistent thermodynamically. Examples of applications to the Bi-Zn, Cd-Pb, PbCl2-KCl, LiCl-FeCl2, and Au-Ni binary systems are given.
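
    The regression step can be illustrated with a toy fit. The sketch below assumes a two-term Redlich-Kister expansion for the excess Gibbs energy of a binary solution (the paper fits enthalpy and excess-entropy polynomials in the same least-squares spirit) and synthetic data for a hypothetical system:

```python
# Fit the coefficients of G_ex = x1*x2*(L0 + L1*(x1 - x2)) by linear
# least squares, with x1 = x and x2 = 1 - x. Data are synthetic.
xs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
L0_true, L1_true = -8000.0, 1500.0  # J/mol, assumed for the demo
g_ex = [x * (1 - x) * (L0_true + L1_true * (2 * x - 1)) for x in xs]

# Design-matrix columns: c1 = x1*x2 and c2 = x1*x2*(x1 - x2);
# normal equations for the 2-parameter fit (note x1 - x2 = 2x - 1).
a11 = sum((x * (1 - x)) ** 2 for x in xs)
a12 = sum((x * (1 - x)) ** 2 * (2 * x - 1) for x in xs)
a22 = sum((x * (1 - x) * (2 * x - 1)) ** 2 for x in xs)
b1 = sum(x * (1 - x) * g for x, g in zip(xs, g_ex))
b2 = sum(x * (1 - x) * (2 * x - 1) * g for x, g in zip(xs, g_ex))
det = a11 * a22 - a12 * a12
L0 = (b1 * a22 - b2 * a12) / det
L1 = (a11 * b2 - a12 * b1) / det

print(round(L0, 3), round(L1, 3))  # recovers -8000.0 and 1500.0
```

Once the coefficients are known, the same expansion evaluates activities or feeds a common-tangent construction for the phase diagram, which is what makes the simultaneous analysis self-consistent thermodynamically.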

  1. Process Mining-Based Method of Designing and Optimizing the Layouts of Emergency Departments in Hospitals.

    PubMed

    Rismanchian, Farhood; Lee, Young Hoon

    2017-07-01

    This article proposes an approach to help designers analyze complex care processes and identify the optimal layout of an emergency department (ED) considering several objectives simultaneously. These objectives include minimizing the distances traveled by patients, maximizing design preferences, and minimizing the relocation costs. Rising demand for healthcare services leads to increasing demand for new hospital buildings as well as renovating existing ones. Operations management techniques have been successfully applied in both manufacturing and service industries to design more efficient layouts. However, high complexity of healthcare processes makes it challenging to apply these techniques in healthcare environments. Process mining techniques were applied to address the problem of complexity and to enhance healthcare process analysis. Process-related information, such as information about the clinical pathways, was extracted from the information system of an ED. A goal programming approach was then employed to find a single layout that would simultaneously satisfy several objectives. The layout identified using the proposed method improved the distances traveled by noncritical and critical patients by 42.2% and 47.6%, respectively, and minimized the relocation costs. This study has shown that an efficient placement of the clinical units yields remarkable improvements in the distances traveled by patients.
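
    The goal-programming step can be sketched on a toy instance. All unit names, scores, aspiration levels, and weights below are hypothetical; the point is that a single layout is chosen by minimizing weighted one-sided deviations from target values for travel distance, design preference, and relocation cost.

```python
from itertools import permutations

units = ("triage", "imaging", "resus")
# Per-(unit, location) scores for 3 candidate locations -- all hypothetical.
travel = {"triage": (10, 25, 40), "imaging": (30, 15, 20), "resus": (50, 35, 10)}
prefer = {"triage": (9, 4, 2), "imaging": (5, 8, 6), "resus": (3, 5, 9)}
reloc  = {"triage": (0, 5, 8), "imaging": (6, 0, 4), "resus": (7, 5, 0)}

goals = {"travel": 40, "prefer": 24, "reloc": 5}      # aspiration levels
weights = {"travel": 1.0, "prefer": 2.0, "reloc": 0.5}

def deviation(assign):
    t = sum(travel[u][loc] for u, loc in zip(units, assign))
    p = sum(prefer[u][loc] for u, loc in zip(units, assign))
    r = sum(reloc[u][loc] for u, loc in zip(units, assign))
    # One-sided deviations, as in goal programming: penalize overshooting
    # the travel/relocation goals and undershooting the preference goal.
    return (weights["travel"] * max(0, t - goals["travel"])
            + weights["prefer"] * max(0, goals["prefer"] - p)
            + weights["reloc"] * max(0, r - goals["reloc"]))

best = min(permutations(range(3)), key=deviation)
print(best, round(deviation(best), 2))  # -> (0, 1, 2) 0.0
```

A real ED layout problem would be solved as an integer program over many more units and candidate cells, with the clinical-pathway frequencies mined from the event log supplying the travel weights.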

  2. A mathematical model for the generation and control of a pH gradient in an immobilized enzyme system involving acid generation.

    PubMed

    Chen, G; Fournier, R L; Varanasi, S

    1998-02-20

    An optimal pH control technique has been developed for multistep enzymatic synthesis reactions where the optimal pH differs by several units for each step. This technique separates an acidic environment from a basic environment by the hydrolysis of urea within a thin layer of immobilized urease. With this technique, a two-step enzymatic reaction can take place simultaneously, in proximity to each other, and at their respective optimal pH. Because a reaction system involving an acid generation represents a more challenging test of this pH control technique, a number of factors that affect the generation of such a pH gradient are considered in this study. The mathematical model proposed is based on several simplifying assumptions and represents a first attempt to provide an analysis of this complex problem. The results show that, by choosing appropriate parameters, the pH control technique still can generate the desired pH gradient even if there is an acid-generating reaction in the system. Copyright 1998 John Wiley & Sons, Inc.

  3. Genetic programming assisted stochastic optimization strategies for optimization of glucose to gluconic acid fermentation.

    PubMed

    Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D

    2002-01-01

    This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
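
    Of the two SO formalisms, SPSA is the easier to sketch: the full gradient is estimated from only two function evaluations per iteration, regardless of dimension, using a random ±1 perturbation. The minimal implementation below runs on a smooth stand-in objective; the gain constants and the test function are illustrative choices, not the bioprocess model.

```python
import random

def spsa_minimize(f, theta, a=0.1, c=0.1, n_iter=2000, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA).

    Each step estimates the whole gradient from two evaluations of f,
    whatever the dimension, via a random +/-1 perturbation vector."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602          # standard SPSA gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g_scale = (f(plus) - f(minus)) / (2.0 * ck)
        theta = [t - ak * g_scale / d for t, d in zip(theta, delta)]
    return theta

# Hypothetical smooth objective standing in for the GP process model.
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
opt = spsa_minimize(f, [0.0, 0.0])
print([round(x, 2) for x in opt])  # close to [3.0, -1.0]
```

In the hybrid scheme, f would be the GP-evolved input-output model, so no gradient of the true process is ever needed.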

  4. Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    2001-01-01

    A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWT's are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWT's employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with respect to frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broad-band design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). 
The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.
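
    The simulated-annealing core that the broadband algorithms build on can be sketched generically: uphill moves are accepted with probability exp(-ΔE/T) under a cooling schedule, which is what lets the search escape local optima. The parameters and the multimodal stand-in objective below are illustrative choices, not the NASA coupled-cavity TWT model.

```python
import math, random

def simulated_annealing(cost, x0, step=0.1, t0=1.0, alpha=0.997,
                        n_iter=4000, seed=1):
    """Generic simulated annealing: accept uphill moves with probability
    exp(-dE/T) so the search can escape local minima that trap
    conventional gradient-based optimizers."""
    rng = random.Random(seed)
    x, e = list(x0), cost(x0)
    best_x, best_e = list(x), e
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        ec = cost(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = list(x), e
        t *= alpha  # geometric cooling schedule
    return best_x, best_e

# Hypothetical multimodal stand-in for "minimum efficiency over the band":
# many local optima; the global minimum value is 0 at x = [0, 0].
cost = lambda v: sum(xi * xi + 0.5 * (1 - math.cos(4 * math.pi * xi))
                     for xi in v)
x, e = simulated_annealing(cost, [1.3, -1.1])
print(round(e, 3))  # best cost found; the global minimum value is 0
```

In the broadband algorithms, the decision variables would be the phase-velocity taper parameters and the cost the negated minimum efficiency across the frequency band.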

  5. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design carried a weight penalty. Optimization techniques were to be applied to determine whether there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed from existing software that coupled structural analysis with optimization and executed on a network of computer workstations. To shorten turnaround time, this system took advantage of the parallelism offered by the finite-difference technique of computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the time to complete one optimization cycle from two hours to one-half hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires a one-hour turnaround per optimization cycle; the equivalent sequential run would take four hours.
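
    The parallelism the abstract exploits comes from the fact that the n+1 analyses needed for a forward-difference gradient are mutually independent. A minimal sketch (a thread pool stands in for the network of workstations, and the objective is a hypothetical stand-in for one structural analysis run):

```python
from concurrent.futures import ThreadPoolExecutor

def analysis(x):
    # Hypothetical stand-in for one structural analysis; in the SRB study
    # each such evaluation could run on a separate workstation.
    return (x[0] - 2.0) ** 2 + 3.0 * (x[1] + 1.0) ** 2

def finite_difference_gradient(f, x, h=1e-6, workers=4):
    """Forward-difference gradient: the baseline point plus one perturbed
    point per variable, all dispatched to workers in parallel."""
    points = [list(x)] + [
        [xj + (h if i == j else 0.0) for j, xj in enumerate(x)]
        for i in range(len(x))
    ]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        f0, *fs = list(pool.map(f, points))
    return [(fi - f0) / h for fi in fs]

grad = finite_difference_gradient(analysis, [0.0, 0.0])
print([round(g, 3) for g in grad])  # -> [-4.0, 6.0]
```

With p workstations, each gradient costs roughly ceil((n+1)/p) analysis times instead of n+1, which is the two-hours-to-half-hour speedup the abstract reports.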

  6. Automated simultaneous multiple feature classification of MTI data

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.

    2002-08-01

    Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.

  7. Constrained Burn Optimization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.; Jones, Brandon A.

    2017-01-01

    In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; they do not employ gradient-based search techniques and are not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.

  8. Building Development Monitoring in Multitemporal Remotely Sensed Image Pairs with Stochastic Birth-Death Dynamics.

    PubMed

    Benedek, C; Descombes, X; Zerubia, J

    2012-01-01

    In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions on three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building descriptions to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.

  9. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    In some form recognition applications which require multiple images (facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature. In order to use optical correlation, however, encryption and compression techniques cannot be deployed independently and in a cascaded manner; otherwise the system suffers from two major problems. First, these techniques cannot simply be cascaded without considering the impact of one technique on the other. Second, a standard compression can affect the correlation decision, because the correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach is to multiplex the spectra of the different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; random phase keys are widely used in optical encryption approaches. Finally, many simulations have been conducted, and the obtained results corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantification technique to improve the overall compression rate.

  10. Active distribution network planning considering linearized system loss

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Wang, Mingqiang; Xu, Hao

    2018-02-01

    In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. The method assumes that the locations of the DGs and the topology of the network are fixed. The proposed model optimizes the DG capacities and the distribution line capacities simultaneously through a cost/benefit analysis, in which the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is analyzed explicitly: for simplicity, it is approximated as a quadratic function of the voltage phase-angle difference and then piecewise linearized. A piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed planning model with the elaborated linearization technique is tested on the IEEE 33-bus distribution network system.
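    Piecewise linearization replaces a nonlinear loss curve with chords between breakpoints so the planning model stays linear; choosing the segment lengths controls the approximation error. A small sketch, with the loss taken as a bare quadratic in the phase-angle difference and illustrative breakpoints (neither is the paper's actual model):

```python
import bisect

def pwl(f, breakpoints):
    """Piecewise-linear interpolant of f on (possibly nonuniform) breakpoints."""
    ys = [f(b) for b in breakpoints]
    def approx(x):
        j = bisect.bisect_right(breakpoints, x) - 1
        j = max(0, min(j, len(breakpoints) - 2))  # clamp to a valid segment
        x0, x1 = breakpoints[j], breakpoints[j + 1]
        return ys[j] + (x - x0) / (x1 - x0) * (ys[j + 1] - ys[j])
    return approx

# Loss taken as quadratic in the phase-angle difference (coefficient dropped).
loss = lambda d: d * d
uniform = pwl(loss, [0.1 * k for k in range(6)])            # 5 equal segments
nonuniform = pwl(loss, [0.0, 0.05, 0.12, 0.21, 0.33, 0.5])  # varied lengths

# Maximum approximation error over the operating range, used to compare
# segment-length choices.
err = lambda g: max(abs(g(0.005 * i) - loss(0.005 * i)) for i in range(101))
```

For a chord over a segment of length h, the worst-case error of a quadratic is h²/4 here, so the error budget directly dictates how many (and how long) segments a planner needs.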

  11. A pilot study for determining the optimal operation condition for simultaneously controlling the emissions of PCDD/Fs and PAHs from the iron ore sintering process.

    PubMed

    Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh; Kuo, Yu-Chieh; Wang, Shih-Min; Young, Li-Hao; Wang, Ya-Fen

    2012-09-01

    In this study, a cost-benefit analysis technique was developed and incorporated into the Taguchi experimental design to determine the optimal operating combination, providing a technical solution for controlling the emissions of both PCDD/Fs and PAHs while simultaneously increasing the sinter productivity (SP) and sinter strength (SS). Four operating parameters, including the water content, suction pressure, bed height, and type of hearth layer, were selected, and all experimental campaigns were conducted on a pilot-scale sinter pot to simulate various sintering conditions of a real-scale sinter plant. The resultant optimal combination could reduce the total carcinogenic emissions arising from both PCDD/Fs and PAHs by 49.8% and increase the sinter benefit associated with the increase in both SP and SS by 10.1%, in comparison with the operating condition currently used in the real plant. The ANOVA results indicate that the suction pressure was the most dominant parameter in determining the optimal combination. This result is theoretically plausible, since the higher suction pressure provided more oxygen, leading to the decrease in both PCDD/F and PAH emissions. It should be noted, however, that the results of the present study were based on pilot-scale experiments; confirmation tests in a real-scale plant are still necessary in the future. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Rapid alkali catalyzed transesterification of microalgae lipids to biodiesel using simultaneous cooling and microwave heating and its optimization.

    PubMed

    Chee Loong, Teo; Idris, Ani

    2014-12-01

    Biodiesel with improved yield was produced from microalgae biomass under simultaneous cooling and microwave heating (SCMH). Nannochloropsis sp. and Tetraselmis sp., which are known to contain higher lipid species, were used. The yield obtained using this novel technique was compared with conventional heating (CH) and microwave heating (MWH) as the control methods. The results revealed that the yields obtained using the novel SCMH were higher, 83.33% for Nannochloropsis sp. and 77.14% for Tetraselmis sp., than those of the control methods. Maximum yields were obtained using SCMH with the microwave set at 50°C and 800W for 16h of reaction with simultaneous cooling at 15°C, and with the water content and the lipid-to-methanol ratio in the reaction mixture kept at 0 and 1:12, respectively. GC analysis depicted that the biodiesel produced by this technique has lower carbon components (<19 C) and has both reasonable CN and IV, reflecting good ignition and lubricating properties. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Simultaneous parameter optimization of x-ray and neutron reflectivity data using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Surendra, E-mail: surendra@barc.gov.in; Basu, Saibal

    2016-05-23

    X-ray and neutron reflectivity are two non-destructive techniques which provide a wealth of information on thickness, structure, and interfacial properties at nanometer length scales. The combination of X-ray and neutron reflectivity is well suited for obtaining the physical parameters of nanostructured thin films and superlattices. Neutrons provide a different contrast between the elements than X-rays and are also sensitive to the magnetization depth profile in thin films and superlattices. The real-space information is extracted by fitting a model for the structure of the thin film sample in reflectometry experiments. We have applied a genetic algorithm technique to extract the depth-dependent structure and magnetism in thin film and multilayer systems by simultaneously fitting X-ray and neutron reflectivity data.
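    The key move in simultaneous fitting is a joint objective: one set of structural parameters must explain both datasets at once. A sketch under toy assumptions: an exponential decay stands in for the real reflectivity calculation, the contrast factor 0.7 mimics the different neutron sensitivity, and the GA operators are generic, not the authors' specific implementation.

```python
import math
import random

def combined_chi2(t, data_x, data_n):
    """Joint misfit: a single structural parameter t must fit both curves."""
    cx = sum((math.exp(-q * t) - y) ** 2 for q, y in data_x)
    cn = sum((0.7 * math.exp(-q * t) - y) ** 2 for q, y in data_n)
    return cx + cn

def ga_minimize(fitness, lo, hi, pop=40, gens=60, seed=1):
    """Tiny real-coded genetic algorithm: truncation selection, blend
    crossover, Gaussian mutation, elitism via retained parents."""
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.05 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        population = parents + children
    return min(population, key=fitness)

# Synthetic "measurements" generated from a true parameter of 25 (arb. units).
qs = [0.01 * k for k in range(1, 11)]
data_x = [(q, math.exp(-q * 25.0)) for q in qs]
data_n = [(q, 0.7 * math.exp(-q * 25.0)) for q in qs]
best = ga_minimize(lambda t: combined_chi2(t, data_x, data_n), 10.0, 40.0)
```

Because the two datasets constrain the same parameter through different contrasts, the joint fit is typically better conditioned than fitting either dataset alone.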

  14. Optimal spectral structure for simultaneous Stimulated Brillouin Scattering suppression and coherent property preservation in high power coherent beam combination system

    NASA Astrophysics Data System (ADS)

    Han, Kai; Xu, Xiaojun; Liu, Zejin

    2013-05-01

    Based on the spectral manipulation technique, the Stimulated Brillouin Scattering (SBS) suppression effect and the coherent beam combination (CBC) effect in a multi-tone CBC system are investigated theoretically and experimentally. To get satisfactory SBS suppression, the frequency interval of the multi-tone seed laser should be large enough, at least larger than the SBS gain bandwidth. To attain an excellent CBC effect, the spectra of the multi-tone seed laser need to be matched with the optical path differences among the amplifier chains. Hence, a sufficiently separated, matched spectrum is capable of both SBS mitigation and coherent property preservation. By comparing the SBS suppression effect and the CBC effect at various spectra, the optimal spectral structure for simultaneous SBS suppression and excellent CBC is found.

  15. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    PubMed

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can be easily trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.

  16. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm

    PubMed Central

    Tamjidy, Mehran; Baharudin, B. T. Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-01-01

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic multi-objective algorithm based on biogeography-based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon’s entropy. PMID:28772893

  17. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm.

    PubMed

    Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-05-15

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic multi-objective algorithm based on biogeography-based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.
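    TOPSIS picks a trade-off point from a Pareto frontier by scoring each alternative on its closeness to an ideal (best-in-every-criterion) solution and distance from the anti-ideal one. A minimal sketch; the decision matrix in the usage line is made up, not the paper's FSW data:

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives: score = dist-to-worst / (dist-to-best + dist-to-worst).
    benefit[j] is True for criteria to maximize, False for those to minimize."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[row[j] / norms[j] * weights[j] for j in range(n)] for row in matrix]
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        dp = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(n)))
        dm = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(n)))
        scores.append(dm / (dp + dm))
    return scores

# Hypothetical Pareto points: (tensile strength, elongation, HAZ hardness),
# all treated as benefit criteria with equal weights.
scores = topsis([[250.0, 8.0, 60.0],
                 [300.0, 6.0, 55.0],
                 [280.0, 9.0, 58.0]],
                weights=[1 / 3, 1 / 3, 1 / 3],
                benefit=[True, True, True])
```

The alternative with the highest score is the recommended trade-off; Shannon's entropy, the paper's other method, differs only in how the column weights are derived.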

  18. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to make this worst field point better until such a time as a different field point becomes worse. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.

  19. Optimization of combined electron and photon beams for breast cancer

    NASA Astrophysics Data System (ADS)

    Xiong, W.; Li, J.; Chen, L.; Price, R. A.; Freedman, G.; Ding, M.; Qin, L.; Yang, J.; Ma, C.-M.

    2004-05-01

    Recently, intensity-modulated radiation therapy and modulated electron radiotherapy have attracted growing interest for the treatment of breast and head and neck tumours. In this work, we carried out a study combining electron and photon beams to achieve differential dose distributions for multiple target volumes simultaneously. A Monte Carlo based treatment planning system was investigated, which consists of a set of software tools to perform accurate dose calculation, treatment optimization, leaf sequencing and plan analysis. We compared breast treatment plans generated using this home-grown optimization and dose calculation software for different treatment techniques. Five planning techniques were developed for this study based on a standard photon beam whole-breast treatment and an electron beam tumour bed cone-down. Technique 1 includes two 6 MV tangential wedged photon beams followed by an anterior boost electron field. Technique 2 includes two 6 MV tangential intensity-modulated photon beams and the same boost electron field. Technique 3 optimizes two intensity-modulated photon beams based on a boost electron field. Technique 4 optimizes two intensity-modulated photon beams and the weight of the boost electron field. Technique 5 combines two intensity-modulated photon beams with an intensity-modulated electron field. Our results show that technique 2 can reduce hot spots both in the breast and in the tumour bed compared to technique 1 (dose inhomogeneity is reduced from 34% to 28% for the target). Techniques 3, 4 and 5 can deliver a more homogeneous dose distribution to the target (with dose inhomogeneities of 22%, 20% and 9%, respectively). In many cases techniques 3, 4 and 5 can also reduce the dose to the lung and heart.
It is concluded that combined photon and electron beam therapy may be advantageous for treating breast cancer compared to conventional treatment techniques using tangential wedged photon beams followed by a boost electron field.

  20. Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS

    NASA Astrophysics Data System (ADS)

    Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.

    2013-02-01

    Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with dynamic nuclear polarization hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by facilitating anatomic shaped region-of-interest (ROI) single metabolite signals available for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of iteratively fast providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.

  1. Artificial neural networks in evaluation and optimization of modified release solid dosage forms.

    PubMed

    Ibrić, Svetlana; Djuriš, Jelena; Parojčić, Jelena; Djurić, Zorica

    2012-10-18

    Implementation of the Quality by Design (QbD) approach in pharmaceutical development has compelled researchers in the pharmaceutical industry to employ Design of Experiments (DoE) as a statistical tool, in product development. Among all DoE techniques, response surface methodology (RSM) is the one most frequently used. Progress of computer science has had an impact on pharmaceutical development as well. Simultaneous with the implementation of statistical methods, machine learning tools took an important place in drug formulation. Twenty years ago, the first papers describing application of artificial neural networks in optimization of modified release products appeared. Since then, a lot of work has been done towards implementation of new techniques, especially Artificial Neural Networks (ANN) in modeling of production, drug release and drug stability of modified release solid dosage forms. The aim of this paper is to review artificial neural networks in evaluation and optimization of modified release solid dosage forms.

  2. Artificial Neural Networks in Evaluation and Optimization of Modified Release Solid Dosage Forms

    PubMed Central

    Ibrić, Svetlana; Djuriš, Jelena; Parojčić, Jelena; Djurić, Zorica

    2012-01-01

    Implementation of the Quality by Design (QbD) approach in pharmaceutical development has compelled researchers in the pharmaceutical industry to employ Design of Experiments (DoE) as a statistical tool, in product development. Among all DoE techniques, response surface methodology (RSM) is the one most frequently used. Progress of computer science has had an impact on pharmaceutical development as well. Simultaneous with the implementation of statistical methods, machine learning tools took an important place in drug formulation. Twenty years ago, the first papers describing application of artificial neural networks in optimization of modified release products appeared. Since then, a lot of work has been done towards implementation of new techniques, especially Artificial Neural Networks (ANN) in modeling of production, drug release and drug stability of modified release solid dosage forms. The aim of this paper is to review artificial neural networks in evaluation and optimization of modified release solid dosage forms. PMID:24300369

  3. Review of optimization techniques of polygeneration systems for building applications

    NASA Astrophysics Data System (ADS)

    Y, Rong A.; Y, Su; R, Lahdelma

    2016-08-01

    Polygeneration means the simultaneous production of two or more energy products in a single integrated process. Polygeneration is an energy-efficient technology and plays an important role in the transition to future low-carbon energy systems. It can find wide applications in utilities, different types of industrial sectors, and building sectors. This paper mainly focuses on polygeneration applications in the building sector. The scales of polygeneration systems in the building sector range from the micro level for a single home to the large level for residential districts. The development of polygeneration microgrids is also related to building applications. The paper aims at giving a comprehensive review of optimization techniques for designing, synthesizing, and operating different types of polygeneration systems for building applications.

  4. Optimizing Dynamical Network Structure for Pinning Control

    NASA Astrophysics Data System (ADS)

    Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo

    2016-04-01

    Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
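    Pinning controllability is commonly quantified by the smallest eigenvalue of the grounded Laplacian, the network Laplacian with the pinned (driver) rows and columns deleted: the larger that eigenvalue, the faster the remaining nodes can be driven to the reference state. A sketch of driver-placement optimization under that measure, with brute-force search standing in for the cat swarm optimizer used in the paper:

```python
import numpy as np

def grounded_lambda(adj, drivers):
    """Smallest eigenvalue of the grounded Laplacian (pinned rows/cols removed).
    A larger value indicates an easier-to-pin network."""
    L = np.diag(adj.sum(axis=1)) - adj
    keep = [i for i in range(len(adj)) if i not in set(drivers)]
    Lg = L[np.ix_(keep, keep)]
    return np.linalg.eigvalsh(Lg)[0]  # eigvalsh returns ascending eigenvalues

def best_single_driver(adj):
    """Exhaustively score every single-driver placement."""
    return max(range(len(adj)), key=lambda i: grounded_lambda(adj, [i]))

# 5-node star: node 0 is the hub. Pinning the hub yields lambda = 1,
# the best of all single-driver placements.
star = np.zeros((5, 5))
star[0, 1:] = 1.0
star[1:, 0] = 1.0
```

On this star the highest-degree node is also the optimal driver, but on general weighted networks the two can differ, which is why the paper's optimized placement beats degree and other centrality heuristics.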

  5. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Some objectives of this geodynamics program are: (1) optimal utilization of laser and VLBI observations as reference frames for geodynamics, (2) utilization of range difference observations in geodynamics, and (3) estimation techniques in crustal deformation analysis. The determination of Earth rotation parameters from different space geodetic systems is studied. Also reported on is the utilization of simultaneous laser range differences for the determination of baseline variation. An algorithm for the analysis of regional or local crustal deformation measurements is proposed along with other techniques and testing procedures. Some results of the reference frame comparisons, in terms of the pole coordinates from different techniques, are presented.

  6. Simultaneous identification of optical constants and PSD of spherical particles by multi-wavelength scattering-transmittance measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming

    2018-04-01

    An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals by using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by particles and the spectral collimated transmittance. Sensitivity and objective-function distribution analyses were conducted to evaluate the mathematical properties (i.e., ill-posedness and multimodality) of the inverse problems under three different combinations of optical signals (i.e., the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signal; and the multi-wavelength multi-angle light scattering and spectral transmittance signal). It was found that the best global convergence performance is obtained by using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inverse technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.

  7. SYSTEM OPTIMIZATION FOR THE AUTOMATIC SIMULTANEOUS DETERMINATION OF ARSENIC, SELENIUM, AND ANTIMONY, USING HYDRIDE GENERATION INTRODUCTION TO AN INDUCTIVELY COUPLED PLASMA.

    USGS Publications Warehouse

    Pyen, Grace S.; Browner, Richard F.; Long, Stephen

    1986-01-01

    A fixed-size simplex has been used to determine the optimum conditions for the simultaneous determination of arsenic, selenium, and antimony by hydride generation and inductively coupled plasma emission spectrometry. The variables selected for the simplex were carrier gas flow rate, rf power, viewing height, and reagent conditions. The detection limit for selenium was comparable to the preoptimized case, but there were twofold and fourfold improvements in the detection limits for arsenic and antimony, respectively. Precision of the technique was assessed with the use of artificially prepared water samples.
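    A fixed-size sequential simplex moves by reflecting its worst vertex through the centroid of the remaining vertices; unlike Nelder-Mead, the simplex never expands or contracts, so it marches toward the optimum and then circles it. A sketch on a toy two-variable response (the real work optimized carrier gas flow, rf power, viewing height, and reagent conditions against detection limits):

```python
def fixed_simplex(f, simplex, steps=60):
    """Fixed-size sequential simplex minimization: reflect the worst vertex
    through the centroid of the others; the simplex size never changes."""
    pts = [list(p) for p in simplex]
    last = -1  # index of the vertex produced by the previous reflection
    for _ in range(steps):
        vals = [f(p) for p in pts]
        w = max(range(len(pts)), key=lambda i: vals[i])
        if w == last:
            # Reflecting the just-created vertex would undo the move and
            # oscillate forever; reflect the next-worst vertex instead.
            w = sorted(range(len(pts)), key=lambda i: vals[i])[-2]
        centroid = [sum(p[j] for i, p in enumerate(pts) if i != w) / (len(pts) - 1)
                    for j in range(len(pts[0]))]
        pts[w] = [2 * c - x for c, x in zip(centroid, pts[w])]
        last = w
    return min(pts, key=f)

# Toy response surface with its optimum at (1, 2); simplex edge stays ~0.3.
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best = fixed_simplex(f, [[0.0, 0.0], [0.3, 0.0], [0.0, 0.3]])
```

The fixed step size is what made this attractive for instrument tuning: each "vertex" is one physical experiment at a new combination of settings, and the simplex needs only a rank ordering of responses, not gradients.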

  8. Optimal platform design using non-dominated sorting genetic algorithm II and technique for order of preference by similarity to ideal solution; application to automotive suspension system

    NASA Astrophysics Data System (ADS)

    Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud

    2018-03-01

    Unlike conventional approaches where optimization is performed on a unique component of a specific product, optimum design of a set of components for employing in a product family can cause significant reduction in costs. Increasing commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods are reported to solve these MOPs. However, what is less discussed is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using non-dominated sorting genetic algorithm II (NSGA-II) and proposes the employment of technique for order of preference by similarity to ideal solution (TOPSIS) method to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method to obtain the trade-off points with the best possible performance while maximizing the common parts.

  9. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D brain image datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images with an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
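    A warp of this family displaces each point by a sum of landmark displacement vectors, each attenuated by an exponential of the distance to that landmark and scaled by a per-landmark factor (the factors are what the evolution strategy tunes). A sketch; the exact kernel form, `alpha`, and the weight semantics are illustrative assumptions, not the paper's implementation:

```python
import math

def warp_point(p, landmarks_src, landmarks_dst, weights, alpha=1.0):
    """Displace p by a distance-weighted sum of landmark displacement vectors.
    Near a source landmark the full displacement applies; far away the
    exponential kernel suppresses it."""
    out = list(p)
    for s, d, w in zip(landmarks_src, landmarks_dst, weights):
        k = w * math.exp(-alpha * math.dist(p, s))  # landmark-specific weight w
        for j in range(len(p)):
            out[j] += k * (d[j] - s[j])
    return out

# A point sitting exactly on a source landmark is carried to its target.
moved = warp_point([0.0, 0.0], [[0.0, 0.0]], [[1.0, 2.0]], [1.0])  # -> [1.0, 2.0]
```

Optimizing the per-landmark factors then amounts to maximizing a similarity score (correlation, overlap) between the warped source volume and the target volume.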

  10. Simultaneous detection of resolved glutamate, glutamine, and γ-aminobutyric acid at 4 T

    NASA Astrophysics Data System (ADS)

    Hu, Jiani; Yang, Shaolin; Xuan, Yang; Jiang, Quan; Yang, Yihong; Haacke, E. Mark

    2007-04-01

    A new approach is introduced to simultaneously detect resolved glutamate (Glu), glutamine (Gln), and γ-aminobutyric acid (GABA) using a standard STEAM localization pulse sequence with the optimized sequence timing parameters. This approach exploits the dependence of the STEAM spectra of the strongly coupled spin systems of Glu, Gln, and GABA on the echo time TE and the mixing time TM at 4 T to find an optimized sequence parameter set, i.e., {TE, TM}, where the outer-wings of the Glu C4 multiplet resonances around 2.35 ppm, the Gln C4 multiplet resonances around 2.45 ppm, and the GABA C2 multiplet resonance around 2.28 ppm are significantly suppressed and the three resonances become virtual singlets simultaneously and thus resolved. Spectral simulation and optimization were conducted to find the optimized sequence parameters, and phantom and in vivo experiments (on normal human brains, one patient with traumatic brain injury, and one patient with brain tumor) were carried out for verification. The results have demonstrated that the Gln, Glu, and GABA signals at 2.2-2.5 ppm can be well resolved using a standard STEAM sequence with the optimized sequence timing parameters around {82 ms, 48 ms} at 4 T, while the other main metabolites, such as N-acetyl aspartate (NAA), choline (tCho), and creatine (tCr), are still preserved in the same spectrum. The technique can be easily implemented and should prove to be a useful tool for the basic and clinical studies associated with metabolism of Glu, Gln, and/or GABA.

  11. A Simultaneous Approach to Optimizing Treatment Assignments with Mastery Scores. Research Report 89-5.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    An approach to simultaneous optimization of assignments of subjects to treatments followed by an end-of-mastery test is presented using the framework of Bayesian decision theory. Focus is on demonstrating how rules for the simultaneous optimization of sequences of decisions can be found. The main advantages of the simultaneous approach, compared…

  12. Simultaneous spectrophotometric determination of synthetic dyes in food samples after cloud point extraction using multiple response optimizations.

    PubMed

    Heidarizadi, Elham; Tabaraki, Reza

    2016-01-01

    A sensitive cloud point extraction method for the simultaneous determination of trace amounts of sunset yellow (SY), allura red (AR) and brilliant blue (BB) by spectrophotometry was developed. Experimental parameters such as Triton X-100 concentration, KCl concentration and initial pH were optimized for extraction efficiency using response surface methodology (RSM) with a Doehlert design. Experimental data were evaluated by applying RSM integrated with a desirability function approach. The optimum conditions for the simultaneous extraction of SY, AR and BB were: Triton X-100 concentration 0.0635 mol L(-1), KCl concentration 0.11 mol L(-1) and pH 4, with a maximum overall desirability D of 0.95. Correspondingly, the maximum extraction efficiencies of SY, AR and BB were 100%, 92.23% and 95.69%, respectively. At the optimal conditions, the experimental extraction efficiencies were 99.8%, 92.48% and 95.96% for SY, AR and BB, respectively. These values differed by only 0.2%, 0.25% and 0.27% from the predicted values, suggesting that the desirability function approach with RSM is a useful technique for simultaneous dye extraction. Linear calibration curves were obtained in the ranges of 0.02-4 μg mL(-1) for SY, 0.025-2.5 μg mL(-1) for AR and 0.02-4 μg mL(-1) for BB under the optimum conditions. Detection limits based on three times the standard deviation of the blank (3Sb) were 0.009, 0.01 and 0.007 μg mL(-1) (n=10) for SY, AR and BB, respectively. The method was successfully used for the simultaneous determination of the dyes in different food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
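
The multiple-response (desirability) step can be illustrated with a toy grid search; the response surfaces below are hypothetical stand-ins, not the fitted Doehlert-design models from the study.

```python
from itertools import product

def desirability(y, lo=70.0, hi=100.0):
    """Derringer larger-the-better desirability: 0 at/below lo, 1 at/above hi."""
    return min(max((y - lo) / (hi - lo), 0.0), 1.0)

def overall_D(ds):
    """Overall desirability: geometric mean of the individual values."""
    p = 1.0
    for d in ds:
        p *= d
    return p ** (1.0 / len(ds))

# Hypothetical extraction-efficiency surfaces (%) for the three dyes as
# functions of surfactant concentration x1 (mol/L) and pH x2.
def eff_sy(x1, x2): return 100 - 3e3 * (x1 - 0.063) ** 2 - 2.0 * (x2 - 4.0) ** 2
def eff_ar(x1, x2): return 93 - 2e3 * (x1 - 0.060) ** 2 - 1.5 * (x2 - 4.2) ** 2
def eff_bb(x1, x2): return 96 - 2.5e3 * (x1 - 0.065) ** 2 - 1.8 * (x2 - 3.8) ** 2

grid = product((i / 1000 for i in range(40, 91)),    # x1: 0.040 .. 0.090
               (3.0 + i / 10 for i in range(21)))    # x2: 3.0 .. 5.0
best = max(grid, key=lambda p: overall_D([desirability(eff_sy(*p)),
                                          desirability(eff_ar(*p)),
                                          desirability(eff_bb(*p))]))
x1, x2 = best
D_best = overall_D([desirability(eff_sy(x1, x2)),
                    desirability(eff_ar(x1, x2)),
                    desirability(eff_bb(x1, x2))])
```

Maximizing the single scalar D trades the three efficiencies off against one another, exactly as in the multi-response optimization above.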

  13. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions.
The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
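
The Kreisselmeier-Steinhauser (KS) aggregation mentioned above can be sketched as follows; the two objective surrogates are hypothetical, standing in for drag and sonic-boom measures.

```python
import math

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative upper
    bound on max(values); larger rho tightens the approximation."""
    m = max(values)                       # shift for numerical stability
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

# Two hypothetical competing objectives of one design variable x.
def f_drag(x): return (x - 1.0) ** 2
def f_boom(x): return 0.5 * (x + 0.5) ** 2

def composite(x): return ks([f_drag(x), f_boom(x)], rho=30.0)

# crude 1-D search: minimize the single smooth KS envelope
x_best = min((i / 100 for i in range(-100, 201)), key=composite)
```

Because KS smoothly approximates the worst objective, a single-objective optimizer applied to it balances the competing criteria, which is the role it plays in the multiobjective formulation above.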

  14. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    PubMed

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.

  15. Interferometric at-wavelength flare characterization of EUV optical systems

    DOEpatents

    Naulleau, Patrick P.; Goldberg, Kenneth Alan

    2001-01-01

    The extreme ultraviolet (EUV) phase-shifting point diffraction interferometer (PS/PDI) provides the high-accuracy wavefront characterization critical to the development of EUV lithography systems. Enhancing the implementation of the PS/PDI can significantly extend its spatial-frequency measurement bandwidth. The enhanced PS/PDI is capable of simultaneously characterizing both wavefront and flare. The enhanced technique employs a hybrid spatial/temporal-domain point diffraction interferometer (referred to as the dual-domain PS/PDI) that is capable of suppressing the scattered-reference-light noise that hinders the conventional PS/PDI. Using the dual-domain technique in combination with a flare-measurement-optimized mask and an iterative calculation process for removing flare contribution caused by higher order grating diffraction terms, the enhanced PS/PDI can be used to simultaneously measure both figure and flare in optical systems.

  16. Potentiometric chip-based multipumping flow system for the simultaneous determination of fluoride, chloride, pH, and redox potential in water samples.

    PubMed

    Chango, Gabriela; Palacio, Edwin; Cerdà, Víctor

    2018-08-15

    A simple potentiometric chip-based multipumping flow system (MPFS) has been developed for the simultaneous determination of fluoride, chloride, pH, and redox potential in water samples. The proposed system was built around a poly(methyl methacrylate) microfluidic chip, combining the advantages of flow techniques with potentiometric detection. For this purpose, an automatic system was designed and built by optimizing the variables involved in the process, such as pH, ionic strength, stirring and sample volume. Applied to water samples, the system proved versatile, with an analysis frequency of 12 samples per hour. Good correlation between the chloride and fluoride concentrations measured with the ISEs and by ion chromatography suggests satisfactory reliability of the system. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Simultaneous delivery time and aperture shape optimization for the volumetric-modulated arc therapy (VMAT) treatment planning problem

    NASA Astrophysics Data System (ADS)

    Mahnam, Mehdi; Gendreau, Michel; Lahrichi, Nadia; Rousseau, Louis-Martin

    2017-07-01

    In this paper, we propose a novel heuristic algorithm for the volumetric-modulated arc therapy treatment planning problem, optimizing the trade-off between delivery time and treatment quality. We present a new mixed integer programming model in which the multi-leaf collimator leaf positions, gantry speed, and dose rate are determined simultaneously. Our heuristic is based on column generation; the aperture configuration is modeled in the columns and the dose distribution and time restriction in the rows. To reduce the number of voxels and increase the efficiency of the master model, we aggregate similar voxels using a clustering technique. The efficiency of the algorithm and the treatment quality are evaluated on a benchmark clinical prostate cancer case. The computational results show that a high-quality treatment is achievable using a four-thread CPU. Finally, we analyze the effects of the various parameters and two leaf-motion strategies.
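
The voxel-aggregation idea above (grouping similar voxels so the master problem sees fewer rows) can be sketched with a crude rounding-based clustering; the dose-influence values are hypothetical.

```python
from collections import defaultdict

# Hypothetical dose-influence values of three apertures on eight voxels.
voxels = [(0.11, 0.52, 0.31), (0.12, 0.50, 0.30), (0.43, 0.12, 0.22),
          (0.44, 0.11, 0.21), (0.13, 0.51, 0.29), (0.81, 0.06, 0.02),
          (0.42, 0.13, 0.23), (0.79, 0.06, 0.03)]

clusters = defaultdict(list)
for i, v in enumerate(voxels):
    key = tuple(round(x, 1) for x in v)   # coarse similarity signature
    clusters[key].append(i)

# each cluster enters the master problem once, weighted by its size
aggregated = [(key, len(ids)) for key, ids in clusters.items()]
```

A production implementation would use a proper clustering algorithm rather than rounding, but the effect is the same: eight voxel rows collapse to three representative rows.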

  18. Simultaneous assimilation of satellite NO2, O3, CO, and HNO3 data for the analysis of the tropospheric chemical composition

    NASA Astrophysics Data System (ADS)

    Miyazaki, K.; Eskes, H.; Sudo, K.

    2012-04-01

    Carbon monoxide (CO) and nitrogen oxides (NOx) play an important role in tropospheric chemistry through their influences on ozone and the hydroxyl radical (OH). The simultaneous optimization of various chemical components is expected to improve the emission inversion through a better description of the chemical feedbacks in the NOx and CO chemistry. This study aims to reproduce chemical composition distributions in the troposphere by combining information obtained from multiple satellite data sets. The emissions of CO and NOx, together with the 3D concentration fields of all forecasted chemical species in the global CTM CHASER, have been simultaneously optimized using the ensemble Kalman filter (EnKF) data assimilation technique and NO2, O3, CO, and HNO3 data obtained from OMI, TES, MOPITT, and MLS satellite measurements. The performance is evaluated against independent data from ozone sondes, aircraft measurements, and GOME-2 and SCIAMACHY satellite data. Observing System Experiments (OSEs) have been carried out to quantify the relative importance of each data set in constraining the emissions and concentrations. We confirmed that the simultaneous data assimilation improved the agreement with these independent data sets. The combined analysis of multiple data sets by means of an advanced data assimilation system can provide a useful framework for air quality research.
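
A minimal sketch of a stochastic EnKF analysis step, on a hypothetical two-variable state (an emission scaling factor and a tracer concentration), shows how observing only the concentration can still update the unobserved emission through the ensemble cross-covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """One stochastic EnKF analysis step.
    X: (n, N) state ensemble; y: (m,) observations;
    H: (m, n) observation operator; R: (m, m) observation-error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, N).T
    return X + K @ (Y - H @ X)                     # perturbed-obs update

# Hypothetical 2-variable state: [emission scaling, tracer concentration].
N = 200
X = np.vstack([1.0 + 0.3 * rng.standard_normal(N),
               2.0 + 0.2 * rng.standard_normal(N)])
X[1] += 0.8 * (X[0] - 1.0)          # chemistry couples concentration to emissions
H = np.array([[0.0, 1.0]])          # only the concentration is observed
R = np.array([[0.01]])
y = np.array([2.6])

Xa = enkf_update(X, y, H, R)
```

The observed concentration is pulled toward y, and the correlated emission factor moves with it, which is the mechanism by which concentration retrievals constrain emissions in the assimilation above.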

  19. Artificial neural network assisted kinetic spectrophotometric technique for simultaneous determination of paracetamol and p-aminophenol in pharmaceutical samples using localized surface plasmon resonance band of silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Khodaveisi, Javad; Dadfarnia, Shayessteh; Haji Shabani, Ali Mohammad; Rohani Moghadam, Masoud; Hormozi-Nezhad, Mohammad Reza

    2015-03-01

    Spectrophotometric analysis method based on the combination of the principal component analysis (PCA) with the feed-forward neural network (FFNN) and the radial basis function network (RBFN) was proposed for the simultaneous determination of paracetamol (PAC) and p-aminophenol (PAP). This technique relies on the difference between the kinetic rates of the reactions between the analytes and silver nitrate as the oxidizing agent in the presence of polyvinylpyrrolidone (PVP) as the stabilizer. The reactions are monitored at the analytical wavelength of 420 nm of the localized surface plasmon resonance (LSPR) band of the formed silver nanoparticles (Ag-NPs). Under the optimized conditions, linear calibration graphs were obtained in the concentration range of 0.122-2.425 μg mL-1 for PAC and 0.021-5.245 μg mL-1 for PAP. The limits of detection in terms of the standard approach (LODSA) and the upper limit approach (LODULA) were calculated to be 0.027 and 0.032 μg mL-1 for PAC and 0.006 and 0.009 μg mL-1 for PAP. The important parameters were optimized for the artificial neural network (ANN) models. Statistical parameters indicated that the abilities of both methods are comparable. The proposed method was successfully applied to the simultaneous determination of PAC and PAP in pharmaceutical preparations.

  20. Bayer Digester Optimization Studies using Computer Techniques

    NASA Astrophysics Data System (ADS)

    Kotte, Jan J.; Schleider, Victor H.

    Theoretically required heat transfer performance by the multistaged flash heat reclaim system of a high pressure Bayer digester unit is determined for various conditions of discharge temperature, excess flash vapor and indirect steam addition. Solution of simultaneous heat balances around the digester vessels and the heat reclaim system yields the magnitude of available heat for representation of each case on a temperature-enthalpy diagram, where graphical fit of the number of flash stages fixes the heater requirements. Both the heat balances and the trial-and-error graphical solution are adapted to solution by digital computer techniques.

  1. Quantitative Analysis of Drugs with Highly Different Concentrations of Pharmaceutical Components Using Spectral Subtraction Techniques

    NASA Astrophysics Data System (ADS)

    Ayoub, B. M.

    2017-11-01

    Two simple spectrophotometric methods were developed for determination of empagliflozin and metformin by manipulating their ratio spectra with application on a recently approved pharmaceutical combination, Synjardy® tablets. A spiking technique was used to increase the concentration of empagliflozin after extraction from the tablets to allow its simultaneous determination with metformin. Validation parameters according to ICH guidelines were acceptable over the concentration range of 2-12 μg/mL for both drugs using constant multiplication and spectrum subtraction methods. The optimized methods are suitable for QC labs.
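
The ratio-spectra manipulation (constant multiplication followed by spectrum subtraction) can be illustrated numerically; the five-point spectra and concentrations below are hypothetical, not data from the paper.

```python
# Hypothetical unit spectra of two drugs on a 5-point wavelength grid;
# the second component does not absorb at the first two wavelengths.
spec_a = [1.00, 0.80, 0.60, 0.50, 0.40]
spec_b = [0.00, 0.00, 0.30, 0.60, 0.90]
ca, cb = 2.0, 1.5                                  # "unknown" concentrations
mix = [ca * x + cb * y for x, y in zip(spec_a, spec_b)]

# constant multiplication: the ratio spectrum mix/spec_a plateaus at ca
# in the region where the second component does not absorb
ratio = [m / x for m, x in zip(mix, spec_a)]
ca_est = ratio[0]

# spectrum subtraction: remove ca_est * spec_a to isolate the second drug
residual = [m - ca_est * x for m, x in zip(mix, spec_a)]
cb_est = residual[-1] / spec_b[-1]
```

Reading the plateau of the ratio spectrum recovers one concentration, and subtracting that component's contribution leaves a spectrum from which the other is determined, mirroring the two manipulations named in the abstract.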

  2. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  3. Improved conventional and microwave-assisted silylation protocols for simultaneous gas chromatographic determination of tocopherols and sterols: Method development and multi-response optimization.

    PubMed

    Poojary, Mahesha M; Passamonti, Paolo

    2016-12-09

    This paper reports on improved conventional thermal silylation (CTS) and microwave-assisted silylation (MAS) methods for the simultaneous determination of tocopherols and sterols by gas chromatography. Reaction parameters in each of the methods developed were systematically optimized using a full factorial design followed by a central composite design. Initially, experimental conditions for CTS were optimized using a block heater. Further, a rapid MAS method was developed and optimized. To understand the microwave heating mechanisms, MAS was optimized using two distinct modes of microwave heating: temperature-controlled MAS and power-controlled MAS, using dedicated instruments where the reaction temperature and microwave power level were controlled and monitored online. The developed methods were compared with routine overnight derivatization. On a comprehensive level, while both CTS and MAS were found to be efficient derivatization techniques, MAS significantly reduced the reaction time. The optimal derivatization temperature and time for CTS were found to be 55°C and 54 min, versus 87°C and 1.2 min for temperature-controlled MAS. Further, a microwave power of 300 W and a derivatization time of 0.5 min were found to be optimal for power-controlled MAS. The use of an appropriate derivatization solvent, such as pyridine, was found to be critical for successful determination. Catalysts, such as potassium acetate and 4-dimethylaminopyridine, enhanced the efficiency slightly. The developed methods showed excellent analytical performance in terms of linearity, accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Simultaneous optimization method for absorption spectroscopy postprocessing.

    PubMed

    Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T

    2015-05-10

    A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
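
The contrast with step-wise postprocessing can be sketched on a toy two-line absorption model: concentration and temperature are fitted jointly against the whole spectrum rather than one parameter per step. The Boltzmann-factor line strengths and energies are hypothetical.

```python
import math

# Hypothetical two-line absorption model: line strength follows a Boltzmann
# factor in temperature T (K), scaled by a common concentration c.
E = (300.0, 1500.0)                  # illustrative lower-state energies (K)

def model(c, T):
    return [c * math.exp(-Ei / T) for Ei in E]

meas = model(2.0, 900.0)             # synthetic "measured" absorbances

# Simultaneous optimization: search concentration and temperature jointly,
# instead of fixing one parameter per step as in a step-wise scheme.
best = min(((c / 10, float(T)) for c in range(10, 41)
            for T in range(500, 1501, 5)),
           key=lambda p: sum((m - y) ** 2 for m, y in zip(model(*p), meas)))
```

Minimizing a single residual over both unknowns at once avoids propagating the error of an early step into later ones, which is the advantage the comparison test above attributes to the simultaneous method.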

  5. Static inverter with synchronous output waveform synthesized by time-optimal-response feedback

    NASA Technical Reports Server (NTRS)

    Kernick, A.; Stechschulte, D. L.; Shireman, D. W.

    1976-01-01

    Time-optimal-response 'bang-bang' or 'bang-hang' technique, using four feedback control loops, synthesizes static-inverter sinusoidal output waveform by self-oscillatory but yet synchronous pulse-frequency-modulation (SPFM). A single modular power stage per phase of ac output entails the minimum of circuit complexity while providing by feedback synthesis individual phase voltage regulation, phase position control and inherent compensation simultaneously for line and load disturbances. Clipped sinewave performance is described under off-limit load or input voltage conditions. Also, approaches to high power levels, 3-phase arraying and parallel modular connection are given.

  6. A hybrid algorithm optimization approach for machine loading problem in flexible manufacturing system

    NASA Astrophysics Data System (ADS)

    Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.

    2012-05-01

    The production planning problem of flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning deals with the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems, but also non-deterministic polynomial-time-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a hybrid meta-heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
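
The combined objective can be made concrete on a toy instance; the six jobs and two machine capacities below are hypothetical, and exhaustive enumeration stands in for the GA/PSO hybrid, which becomes necessary only at realistic problem sizes.

```python
from itertools import product

# Hypothetical instance: each job needs machine time and yields some units.
jobs = [(12, 40), (7, 25), (9, 30), (15, 55), (5, 18), (11, 38)]
CAP = (30, 30)          # available machining time on two machines

def fitness(assign):
    """assign[i] in {-1, 0, 1}: job rejected, or placed on machine 0 / 1.
    Returns None if machining-time capacity is violated, else a scalarized
    score rewarding throughput and penalizing system unbalance (idle time)."""
    load, thr = [0, 0], 0
    for (t, u), m in zip(jobs, assign):
        if m >= 0:
            load[m] += t
            thr += u
    if any(l > c for l, c in zip(load, CAP)):
        return None
    unbalance = sum(c - l for l, c in zip(load, CAP))
    return thr - unbalance

feasible = ((a, fitness(a)) for a in product((-1, 0, 1), repeat=len(jobs)))
best_assign, best_fit = max(((a, f) for a, f in feasible if f is not None),
                            key=lambda af: af[1])
```

On this instance all six jobs fit (loads 29 and 30 against capacities of 30), so the optimum schedules everything with one unit of idle time.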

  7. Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association

    DTIC Science & Technology

    2011-03-24

    based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of... up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to... required for location identification than that of other methods. This work can then be extended into a vision-SLAM implementation to subsequently

  8. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  9. A mathematical tool to generate complex whole body motor tasks and test hypotheses on underlying motor planning.

    PubMed

    Tagliabue, Michele; Pedrocchi, Alessandra; Pozzo, Thierry; Ferrigno, Giancarlo

    2008-01-01

    In spite of the complexity of human motor behavior, difficulties in mathematical modeling have restricted attempts to identify the motor planning criterion used by the central nervous system to rather simple movements. This paper presents a novel simulation technique able to predict the "desired trajectory" corresponding to a wide range of kinematic and kinetic optimality criteria for tasks involving many degrees of freedom and the coordination between goal achievement and balance maintenance. Proper time discretization, inverse dynamics methods and constrained optimization techniques are combined. The application of this simulator to a planar whole body pointing movement shows its effectiveness in managing system nonlinearities and instability as well as in ensuring the anatomo-physiological feasibility of predicted motor plans. In addition, the simulator's capability to simultaneously optimize competing movement aspects represents an interesting opportunity for the motor control community, in which the coexistence of several controlled variables has been hypothesized.

  10. Optimization of microwave-assisted extraction (MAE) of coriander phenolic antioxidants - response surface methodology approach.

    PubMed

    Zeković, Zoran; Vladić, Jelena; Vidović, Senka; Adamović, Dušan; Pavlić, Branimir

    2016-10-01

    Microwave-assisted extraction (MAE) of polyphenols from coriander seeds was optimized by simultaneous maximization of total phenolic (TP) and total flavonoid (TF) yields, as well as maximized antioxidant activity determined by 1,1-diphenyl-2-picrylhydrazyl and reducing power assays. Box-Behnken experimental design with response surface methodology (RSM) was used for optimization of MAE. Extraction time (X1 , 15-35 min), ethanol concentration (X2 , 50-90% w/w) and irradiation power (X3 , 400-800 W) were investigated as independent variables. Experimentally obtained values of investigated responses were fitted to a second-order polynomial model, and multiple regression analysis and analysis of variance were used to determine fitness of the model and optimal conditions. The optimal MAE conditions for simultaneous maximization of polyphenol yield and increased antioxidant activity were an extraction time of 19 min, an ethanol concentration of 63% and an irradiation power of 570 W, while predicted values of TP, TF, IC50 and EC50 at optimal MAE conditions were 311.23 mg gallic acid equivalent per 100 g dry weight (DW), 213.66 mg catechin equivalent per 100 g DW, 0.0315 mg mL(-1) and 0.1311 mg mL(-1) respectively. RSM was successfully used for multi-response optimization of coriander seed polyphenols. Comparison of optimized MAE with conventional extraction techniques confirmed that MAE provides significantly higher polyphenol yields and extracts with increased antioxidant activity. © 2016 Society of Chemical Industry.
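
The second-order RSM fit can be sketched on a hypothetical single-factor slice of such data (yield versus extraction time); the numbers are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical single-factor slice: total phenolic yield vs time (min).
t = np.array([15.0, 20.0, 25.0, 30.0, 35.0])
y = np.array([262.0, 298.0, 310.0, 296.0, 258.0])

# Second-order polynomial model y = b0 + b1*t + b2*t^2, fitted by least
# squares, as in the RSM regression step.
X = np.column_stack([np.ones_like(t), t, t ** 2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

t_opt = -b1 / (2.0 * b2)   # stationary point of the fitted response surface
```

With three factors the design matrix simply gains linear, quadratic, and interaction columns for each variable; the stationary point is then found from the fitted coefficients in the same way.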

  11. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts; in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.


  13. Optimizing Photosynthetic and Respiratory Parameters Based on the Seasonal Variation Pattern in Regional Net Ecosystem Productivity Obtained from Atmospheric Inversion

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.

    2014-12-01

In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its net ecosystem productivity (NEP) simulation in the Boreal North America (BNA) region. Simultaneously, in-situ net ecosystem exchange (NEE) observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of BEPS with the default parameter values. These results demonstrate the potential of using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
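
    The parameter-estimation idea can be sketched with a deliberately toy model: synthetic seasonal drivers, a two-parameter flux model, and a grid search that recovers both parameters from the seasonal pattern of the net flux. Every function and number below is a hypothetical stand-in, not BEPS:

```python
import math

# Hypothetical toy (not BEPS): recover two parameters (here called vcmax
# and q10) from the seasonal pattern of a net carbon flux by grid search.
# nep(t) = photosynthesis - respiration, with made-up seasonal drivers.

def nep(vcmax, q10, t):
    light = max(0.0, math.sin(2 * math.pi * t / 365))        # seasonal light proxy
    temp = 10 + 15 * math.sin(2 * math.pi * (t - 30) / 365)  # seasonal temperature
    photo = vcmax * light
    resp = 2.0 * q10 ** ((temp - 10) / 10)                   # Q10 respiration model
    return photo - resp

true_vcmax, true_q10 = 60.0, 2.0
days = range(0, 365, 5)
obs = [nep(true_vcmax, true_q10, t) for t in days]  # stands in for inverted seasonal NEP

best = min(
    ((v, q) for v in range(40, 81, 5) for q in (1.5, 2.0, 2.5, 3.0)),
    key=lambda p: sum((nep(p[0], p[1], t) - o) ** 2 for t, o in zip(days, obs)),
)
print(best)  # (60, 2.0): the true parameters, since the data are noiseless
```

    With noisy data and a real model, the grid search would give way to a proper optimization or data assimilation scheme, but the information flow is the same: the seasonal shape of the net flux constrains both parameters jointly.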

  14. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, while restricting work on a design variable only to grids on which its changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constrained optimization.

  15. Study of information transfer optimization for communication satellites

    NASA Technical Reports Server (NTRS)

    Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.

    1973-01-01

The results of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high-data-rate digital communication satellite links are presented. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes, and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, a limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.

  16. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. The novel formalism introduced here not only transforms the otherwise hopelessly expensive brute-force combinatorial search into a linear system with integer constraints, but is also readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.

  17. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    PubMed

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating a high selectivity of over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 × 256 pixel area.
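
    The downconversion exploited here is controlled aliasing: a component above the Nyquist limit, sampled too slowly, reappears at a predictable lower frequency. A minimal demonstration with arbitrary example frequencies:

```python
import math

# Illustrative sketch of undersampling frequency conversion: a tone above
# the sampling rate is deliberately sampled "too slowly", and its energy
# appears at a predictable downconverted (aliased) frequency.
# Frequencies here are arbitrary example values.

f_signal = 90.0   # Hz, well above the 32 Hz Nyquist limit below
fs = 64.0         # Hz sampling rate
n_samples = 64

samples = [math.cos(2 * math.pi * f_signal * n / fs) for n in range(n_samples)]

def dft_mag(x, k):
    """Magnitude of bin k via a direct DFT (fine for 64 points)."""
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = -sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return math.hypot(re, im)

peak_bin = max(range(1, n_samples // 2), key=lambda k: dft_mag(samples, k))
print(peak_bin)  # 26: the 90 Hz tone aliases to 90 - 64 = 26 Hz
```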

  18. SU-F-T-349: Dosimetric Comparison of Three Different Simultaneous Integrated Boost Irradiation Techniques for Multiple Brain Metastases: Intensity-Modulated Radiotherapy, Hybrid Intensity-Modulated Radiotherapy and Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, X; Sun, T; Yin, Y

Purpose: To study the dosimetric impact of intensity-modulated radiotherapy (IMRT), hybrid intensity-modulated radiotherapy (h-IMRT) and volumetric modulated arc therapy (VMAT) for whole-brain radiotherapy (WBRT) with simultaneous integrated boost in patients with multiple brain metastases. Methods: Ten patients with multiple brain metastases were included in this analysis. The prescribed dose was 45 Gy to the whole brain (PTVWBRT) and 55 Gy to individual brain metastases (PTVboost), delivered simultaneously in 25 fractions. Three treatment techniques were designed: a 7-field equally spaced IMRT plan, a hybrid IMRT plan, and VMAT with two 358° arcs. In the hybrid IMRT plan, two fields (90° and 270°) were planned to the whole brain. This was used as a base-dose plan, and a 5-field IMRT plan was then optimized on top of it. The dose distribution in the target, the dose to the organs at risk, and the total MU of the three techniques were compared. Results: For the target dose, conformity, and homogeneity in the PTV, no statistically significant differences were observed among the three techniques. For the maximum dose in the bilateral lenses and the mean dose in the bilateral eyes, the IMRT and h-IMRT plans showed the highest and lowest values, respectively. No statistically significant differences were observed in the doses to the optic nerves and brainstem. For the monitor units, the IMRT and VMAT plans showed the highest and lowest values, respectively. Conclusion: For WBRT with simultaneous integrated boost in patients with multiple brain metastases, hybrid IMRT could reduce the doses to the lenses and eyes. It is feasible for patients with brain metastases.

  19. Inference of a Nonlinear Stochastic Model of the Cardiorespiratory Interaction

    NASA Astrophysics Data System (ADS)

    Smelyanskiy, V. N.; Luchinsky, D. G.; Stefanovska, A.; McClintock, P. V.

    2005-03-01

    We reconstruct a nonlinear stochastic model of the cardiorespiratory interaction in terms of a set of polynomial basis functions representing the nonlinear force governing system oscillations. The strength and direction of coupling and noise intensity are simultaneously inferred from a univariate blood pressure signal. Our new inference technique does not require extensive global optimization, and it is applicable to a wide range of complex dynamical systems subject to noise.

  20. Generation of structural topologies using efficient technique based on sorted compliances

    NASA Astrophysics Data System (ADS)

    Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention as large computational resources have become available to designers. This progress is stimulated simultaneously by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper, the engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on the implementation of a special function utilizing information on the compliance distribution within the design space. With a view to coping with engineering problems, the algorithm has been combined with the structural analysis system Ansys.
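
    The flavor of a compliance-sorted update rule can be caricatured without any finite-element analysis: material is shifted from the elements with the lowest compliance to those with the highest, while the total volume is conserved. The sketch below is hypothetical and illustrative only, not the authors' algorithm:

```python
# Hypothetical sketch of a sorted-compliance update rule (not the authors'
# algorithm): elements with the highest compliance receive more material,
# the lowest lose it, while the total material volume stays fixed.

def update_densities(density, compliance, step=0.1, x_min=0.1, x_max=1.0):
    order = sorted(range(len(density)), key=lambda i: compliance[i])
    low, high = order[: len(order) // 2], order[len(order) // 2:]
    new = density[:]
    for i in low:
        new[i] = max(x_min, new[i] - step)
    # Redistribute exactly the removed material so the volume constraint holds.
    removed = sum(density[i] - new[i] for i in low)
    for i in high:
        new[i] = min(x_max, new[i] + removed / len(high))
    return new

density = [0.5] * 6
compliance = [1.0, 8.0, 2.0, 9.0, 1.5, 7.0]  # made-up element compliances
new = update_densities(density, compliance)
print(new)
print(abs(sum(new) - sum(density)) < 1e-12)  # volume preserved (no cap hit here)
```

    In a real topology generator, the compliances would come from a finite-element solve at every iteration and the update would be repeated until the layout converges.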

  1. Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    2013-01-01

We propose an algorithm to directly synthesize in the time domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate these compromising characteristics of the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve a near-optimum detection performance.

  2. Zymography Methods to Simultaneously Analyze Superoxide Dismutase and Catalase Activities: Novel Application for Yeast Species Identification.

    PubMed

    Gamero-Sandemetrio, Esther; Gómez-Pastor, Rocío; Matallana, Emilia

    2017-01-01

We provide an optimized protocol for a double staining technique to analyze the superoxide dismutase enzymatic isoforms Cu-Zn SOD (Sod1) and Mn-SOD (Sod2) and catalase in the same polyacrylamide gel. The use of NaCN, which specifically inhibits the yeast Sod1 isoform, allows the analysis of the Sod2 isoform, while the use of H2O2 allows the analysis of catalase. The identification of different zymography profiles of the SOD and catalase isoforms in different yeast species allowed us to propose this technique as a novel yeast identification and classification strategy.

  3. Primary surgical excision for pediatric orbital capillary hemangioma.

    PubMed

    Krema, Hatem

    2015-05-01

    We report the technique and outcome of surgical excision of subcutaneous orbital capillary hemangioma causing eye globe displacement in two children. Primary surgical excision was performed with blunt dissection along the tumor walls using a cotton-tipped applicator as the dissecting tool with simultaneous outward gentle traction on the tumor wall. Despite the deep and extensive orbital involvement, complete excision of the hemangiomas was achievable with this technique, which permitted excellent visualization of the surgical planes throughout the procedures. Deep and extensive pediatric orbital capillary hemangioma can be surgically excised with the suggested technique, which obviates the need for intralesional or systemic medical therapy, yielding optimal cosmetic and functional outcomes, shortly after surgery.

  4. Multi-Frequency Harmonics Technique for HIFU Tissue Treatment

    NASA Astrophysics Data System (ADS)

    Rybyanets, Andrey N.; Lugovaya, Maria A.; Rybyanets, Anastasia A.

    2010-03-01

A new technique for enhancing tissue lysis and enlarging the treatment volume during a single HIFU sonication is proposed. The technique consists of simultaneous or alternating (at an optimal repetition frequency) excitation of a single-element HIFU transducer at frequencies corresponding to odd natural harmonics of the piezoceramic element, at ultrasound energy levels sufficient to produce cavitational, thermal, or mechanical damage of fat cells at each of the aforementioned frequencies. Calculations and FEM modeling of transducer vibrations and acoustic field patterns for different frequency sets were performed. The acoustic pressure in the focal plane was measured in water using a calibrated hydrophone and a 3D acoustic scanning system. In vitro experiments on different tissues and phantoms confirmed the advantages of the multifrequency harmonic method.

  5. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw a comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
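
    The core sampling idea can be illustrated with a minimal Metropolis chain: with a symmetric proposal and a uniform target, the chain visits every state equally often in the long run. The sketch below uses four integer states as hypothetical stand-ins for distinct optimal sorting paths (the paper's method is a parallel, far more elaborate variant):

```python
import random

# Minimal sketch of the core idea (not MC4Inversion itself): a Metropolis
# chain with a symmetric proposal samples *uniformly* from a discrete set
# of states; here, the integers 0..3 stand in for distinct optimal
# sorting paths.

random.seed(0)
n_states, n_steps = 4, 40000
state = 0
counts = [0] * n_states
for _ in range(n_steps):
    proposal = state + random.choice((-1, 1))
    if 0 <= proposal < n_states:   # uniform target: always accept in-range moves
        state = proposal
    counts[state] += 1

print(counts)  # each state visited roughly n_steps / 4 times
```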

  6. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
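
    The ε-constraint mechanism underlying the method is simple to sketch on a toy discrete bi-objective problem: minimize one objective subject to a bound ε on the other, then sweep ε to trace the trade-off curve. The objectives and candidate set below are hypothetical, not the trajectory formulation:

```python
# Hedged sketch of the epsilon-constraint idea on a toy discrete problem
# (not the paper's pseudospectral trajectory formulation): minimize f1
# subject to f2 <= eps, and sweep eps to trace out the Pareto front.

def f1(x):
    return x * x                 # e.g. a "fuel" objective

def f2(x):
    return (x - 2.0) ** 2        # e.g. a "time" objective

candidates = [i / 10 for i in range(0, 21)]   # discretized design space

pareto = []
for eps in (0.25, 1.0, 2.25, 4.0):
    feasible = [x for x in candidates if f2(x) <= eps]
    best = min(feasible, key=f1)
    pareto.append((f1(best), f2(best)))

print(pareto)
# Tightening eps on f2 forces f1 up: the classic trade-off curve.
```

    The adaptive bisection part of the paper's method concerns placing the ε values efficiently rather than sweeping them on a fixed grid.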

  7. Development and validation of a magnetic solid-phase extraction with high-performance liquid chromatography method for the simultaneous determination of amphetamine and methadone in urine.

    PubMed

    Taghvimi, Arezou; Hamishehkar, Hamed; Ebrahimi, Mahmoud

    2016-06-01

The simultaneous determination of amphetamine and methadone was carried out using magnetic graphene oxide nanoparticles, a magnetic solid-phase extraction adsorbent, as a new sample treatment technique. The main factors influencing the extraction efficiency (sample volume, amount of adsorbent, type and amount of extraction organic solvent, extraction and desorption time, pH, ionic strength of the extraction medium, and agitation rate) were investigated and optimized. Under the optimized conditions, good linearity was observed in the range of 100-1500 ng/mL for amphetamine and 100-1000 ng/mL for methadone. The method was evaluated for the determination of amphetamine and methadone in positive urine samples; satisfactory results were obtained, and therefore magnetic solid-phase extraction can be applied as a novel method for the determination of drugs of abuse in forensic laboratories. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer

    PubMed Central

    Yu, Hongyan; Zhang, Yongqiang; Yang, Yuanyuan; Ji, Luyue

    2017-01-01

Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem subject to the constraints of minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize a non-convex optimization problem that takes into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no or only partial knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithm can achieve optimal solutions within a small number of iterations and various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively. PMID:28820496

  9. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer.

    PubMed

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-08-18

Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem subject to the constraints of minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize a non-convex optimization problem that takes into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no or only partial knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithm can achieve optimal solutions within a small number of iterations and various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively.
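
    The nonlinear fractional programming step is commonly handled with Dinkelbach's method, which replaces the ratio maximization with a sequence of parametric subproblems. A minimal sketch on a hypothetical finite set of transmit powers (the toy rate and power models below are stand-ins):

```python
# Hedged illustration of the fractional-programming idea: Dinkelbach's
# method turns maximizing a ratio f(x)/g(x) (e.g. bits-per-joule energy
# efficiency) into a sequence of easier parametric problems
# max f(x) - q*g(x). The candidate set below is a toy stand-in for the
# power-allocation policies.

candidates = [0.5, 1.0, 2.0, 3.5, 5.0]       # hypothetical transmit powers

def rate(p):         # f: achievable rate (toy concave curve)
    return (1 + p) ** 0.5

def power_used(p):   # g: total consumed power incl. fixed circuit power
    return p + 1.0

q = 0.0
for _ in range(50):                           # Dinkelbach iterations
    x = max(candidates, key=lambda p: rate(p) - q * power_used(p))
    q_new = rate(x) / power_used(x)
    if abs(q_new - q) < 1e-12:
        break
    q = q_new

brute = max(rate(p) / power_used(p) for p in candidates)
print(abs(q - brute) < 1e-12)  # True: Dinkelbach found the best ratio
```

    Each subproblem is much easier than the original ratio problem, which is why the fractional-programming reformulation pairs naturally with Lagrangian dual decomposition in the paper.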

  10. Quantifying viable Vibrio parahaemolyticus and Listeria monocytogenes simultaneously in raw shrimp.

    PubMed

    Zhang, Zhaohuan; Liu, Haiquan; Lou, Yang; Xiao, Lili; Liao, Chao; Malakar, Pradeep K; Pan, Yingjie; Zhao, Yong

    2015-08-01

A novel TaqMan-based multiplex real-time PCR method combined with propidium monoazide (PMA) treatment was first developed for the simultaneous quantification of viable Vibrio parahaemolyticus and Listeria monocytogenes in raw shrimp. Optimization of the PMA concentration showed that 100 μM was optimal to effectively inhibit 10(8) CFU/mL of dead cells of both bacteria. The high specificity of this method was confirmed in tests using 96 target and non-target strains. The optimized assay could detect as little as 10(1)-10(2) CFU/g of each strain in artificially contaminated shrimp, and its amplification efficiencies were up to 100 and 106% for V. parahaemolyticus and L. monocytogenes, respectively. Furthermore, this assay has been successfully applied to describe the behavior of these two pathogens in raw shrimp stored at 4 °C. In conclusion, this PMA TaqMan-based multiplex real-time PCR technique, in which the whole procedure takes less than 5 h, provides an effective and rapid tool for monitoring contamination of viable V. parahaemolyticus and L. monocytogenes in seafood, improving seafood safety and protecting public health.

  11. Voltage sweep ion mobility spectrometry.

    PubMed

    Davis, Eric J; Williams, Michael D; Siems, William F; Hill, Herbert H

    2011-02-15

Ion mobility spectrometry (IMS) is a rapid, gas-phase separation technique that exhibits excellent separation of ions as a standalone instrument. However, IMS cannot achieve optimal separation power with both small and large ions simultaneously. Similar to the general elution problem in chromatography, fast ions are well resolved using a low electric field (50-150 V/cm), whereas slow drifting molecules are best separated using a higher electric field (250-500 V/cm). While using a low electric field, IMS systems tend to suffer from low ion transmission and low signal-to-noise ratios. Through the use of a novel voltage algorithm, some of these effects can be alleviated. The electric field was swept from low to high while monitoring a specific drift time, and the resulting data were processed to create a 'voltage-sweep' spectrum. If an optimal drift time is calculated for each voltage and scanned simultaneously, a spectrum may be obtained with optimal separation throughout the mobility range. This increased the resolving power up to the theoretical maximum for every peak in the spectrum and extended the peak capacity of the IMS system, while maintaining accurate drift time measurements. These advantages may be extended to any IMS, requiring only a change in software.

  12. Multidisciplinary design optimization of aircraft wing structures with aeroelastic and aeroservoelastic constraints

    NASA Astrophysics Data System (ADS)

    Jung, Sang-Young

Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines, such as stress analysis, structural vibration, aerodynamics, and controls, are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data, including those in the transonic regime, are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built-up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.

  13. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  14. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  15. New nonlinear control algorithms for multiple robot arms

    NASA Technical Reports Server (NTRS)

    Tarn, T. J.; Bejczy, A. K.; Yun, X.

    1988-01-01

Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and a simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output-decoupled system. Linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that by choosing a general output equation it became possible to simultaneously superimpose the position and velocity error feedback with the force-torque error feedback in the task space.
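
    Feedback linearization, the core of the control method, can be sketched on a single one-link arm: a nonlinear control term cancels the plant nonlinearity, after which simple linear error feedback applies. The model and gains below are illustrative only (the paper treats multiple coupled arms with output decoupling):

```python
import math

# Hedged single-arm sketch of the feedback-linearization idea (the paper
# handles multiple coupled arms): for a pendulum
#   m*l^2*thdd = u - m*g*l*sin(th),
# the control  u = m*l^2*v + m*g*l*sin(th)  cancels the nonlinearity, so
# choosing  v = -kp*e - kd*ed  gives linear, decoupled error dynamics.

m, l, g = 1.0, 1.0, 9.81
kp, kd = 9.0, 6.0                 # critically damped closed-loop poles at -3
th, thd = 0.0, 0.0
th_ref = 1.0                      # desired joint angle (rad)
dt = 0.001

for _ in range(5000):             # 5 s of Euler integration
    e, ed = th - th_ref, thd
    v = -kp * e - kd * ed
    u = m * l * l * v + m * g * l * math.sin(th)   # cancels the gravity term
    thdd = (u - m * g * l * math.sin(th)) / (m * l * l)
    thd += thdd * dt
    th += thd * dt

print(abs(th - th_ref) < 1e-3)  # True: the arm settles at the reference
```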

  16. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip.

    PubMed

    Davison, James A

    2015-01-01

    To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer.

  17. The use of computer imaging techniques to visualize cardiac muscle cells in three dimensions.

    PubMed

    Marino, T A; Cook, P N; Cook, L T; Dwyer, S J

    1980-11-01

    Atrial muscle cells and atrioventricular bundle cells were reconstructed using a computer-assisted three-dimensional reconstruction system. This reconstruction technique permitted these cells to be viewed from any direction. The cell surfaces were approximated using triangular tiles, and this optimization technique for cell reconstruction allowed for the computation of cell surface area and cell volume. A transparent mode is described which enables the investigator to examine internal cellular features such as the shape and location of the nucleus. In addition, more than one cell can be displayed simultaneously, and, therefore, spatial relationships are preserved and intercellular relationships viewed directly. The use of computer imaging techniques allows for a more complete collection of quantitative morphological data and also the visualization of the morphological information gathered.
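
    The tile-based surface-area and volume computation the abstract describes can be sketched in a few lines. The helper names and the toy tetrahedral mesh below are illustrative assumptions, not taken from the paper; the only requirement is a closed, consistently oriented triangle mesh.

```python
# Sketch of area/volume from a triangle-tiled cell surface (toy mesh).
import math

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def surface_area(verts, faces):
    """Sum of triangular-tile areas (half the cross-product norm each)."""
    return sum(0.5 * math.sqrt(dot(n, n))
               for n in (cross(sub(verts[j], verts[i]), sub(verts[k], verts[i]))
                         for i, j, k in faces))

def volume(verts, faces):
    """Enclosed volume via the divergence theorem (outward-facing normals)."""
    return abs(sum(dot(verts[i], cross(verts[j], verts[k]))
                   for i, j, k in faces)) / 6.0

# Toy closed surface: a unit right tetrahedron (exact volume 1/6).
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
```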

  18. Concept of combinatorial de novo design of drug-like molecules by particle swarm optimization.

    PubMed

    Hartenfeller, Markus; Proschak, Ewgenij; Schüller, Andreas; Schneider, Gisbert

    2008-07-01

    We present a fast stochastic optimization algorithm for fragment-based molecular de novo design (COLIBREE, Combinatorial Library Breeding). The search strategy is based on a discrete version of particle swarm optimization. Molecules are represented by a scaffold, which remains constant during optimization, and variable linkers and side chains. Different linkers represent virtual chemical reactions. Side-chain building blocks were obtained from pseudo-retrosynthetic dissection of large compound databases. Here, ligand-based design was performed using chemically advanced template search (CATS) topological pharmacophore similarity to reference ligands as fitness function. A weighting scheme was included for particle swarm optimization-based molecular design, which permits the use of many reference ligands and allows for positive and negative design to be performed simultaneously. In a case study, the approach was applied to the de novo design of potential peroxisome proliferator-activated receptor subtype-selective agonists. The results demonstrate the ability of the technique to cope with large combinatorial chemistry spaces and its applicability to focused library design. The technique was able to perform exploitation of a known scheme and at the same time explorative search for novel ligands within the framework of a given molecular core structure. It thereby represents a practical solution for compound screening in the early hit and lead finding phase of a drug discovery project.
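
    The discrete particle swarm at the core of this approach can be illustrated with a toy stand-in fitness: matching a fixed "reference" fragment assignment plays the role of the CATS pharmacophore similarity, and slots play the role of linker/side-chain positions. All names and parameter values below are illustrative assumptions, not COLIBREE's actual settings.

```python
# Minimal discrete PSO over fragment choices (toy fitness, fixed seed).
import random
random.seed(0)

N_SLOTS, N_FRAGS = 6, 8          # side-chain positions x building blocks
TARGET = [3, 1, 4, 1, 5, 2]      # toy "reference ligand" assignment

def fitness(x):                  # higher = more similar to the reference
    return sum(a == b for a, b in zip(x, TARGET))

def move(x, pbest, gbest, w=0.1, c1=0.3, c2=0.3):
    """Discrete PSO step: per slot, explore randomly or copy from the
    personal/global best with fixed probabilities, else keep current."""
    new = []
    for i in range(N_SLOTS):
        r = random.random()
        if r < w:
            new.append(random.randrange(N_FRAGS))   # exploration
        elif r < w + c1:
            new.append(pbest[i])                    # personal best
        elif r < w + c1 + c2:
            new.append(gbest[i])                    # swarm best
        else:
            new.append(x[i])                        # keep current fragment
    return new

swarm = [[random.randrange(N_FRAGS) for _ in range(N_SLOTS)] for _ in range(20)]
pbest = [s[:] for s in swarm]
gbest = max(pbest, key=fitness)
for _ in range(100):
    for k, x in enumerate(swarm):
        swarm[k] = x = move(x, pbest[k], gbest)
        if fitness(x) > fitness(pbest[k]):
            pbest[k] = x[:]
    gbest = max(pbest, key=fitness)
```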

  19. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

    Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks and thereby increase the number of attainable requirements. Such a non-linear optimization is a well-studied problem; however, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate; KEYS runs much faster, e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.
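
    The published idea behind KEYS (score random configurations, rank the free decisions by how they appear in the best-scoring runs, then fix the strongest decision and repeat) can be sketched roughly as follows. The mitigation/requirement scoring model here is invented for illustration and is not one of the JPL models.

```python
# KEYS-style "fix the key decisions" loop on a toy mitigations model.
import random
random.seed(1)

N = 10                                # candidate mitigations
GOOD = {1, 3, 7}                      # mitigations that retire the big risks

def score(cfg):                       # requirements attained minus cost
    attained = 5 * len(GOOD & {i for i, on in enumerate(cfg) if on})
    return attained - sum(cfg)        # each selected mitigation has unit cost

fixed = {}                            # decisions "keyed" so far
for _round in range(N):
    runs = []
    for _ in range(100):              # Monte Carlo sample of the free choices
        cfg = [fixed.get(i, random.random() < 0.5) for i in range(N)]
        runs.append((score(cfg), cfg))
    runs.sort(reverse=True)
    top = runs[:10]                   # best-scoring sample
    free = [i for i in range(N) if i not in fixed]
    if not free:
        break
    # the "key" is the free decision most often switched on in the top runs;
    # fix it to its majority value there
    key = max(free, key=lambda i: sum(cfg[i] for _, cfg in top))
    fixed[key] = sum(cfg[key] for _, cfg in top) > 5

best = [fixed.get(i, False) for i in range(N)]
```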

  20. Applications of fuzzy theories to multi-objective system optimization

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Dhingra, A. K.

    1991-01-01

    Most of the computer-aided design techniques developed so far deal with the optimization of a single objective function over the feasible design space. However, many engineering design problems require the simultaneous consideration of several objective functions. This work presents several techniques of multiobjective optimization. In addition, a new formulation, based on fuzzy theories, is introduced for the solution of multiobjective system optimization problems. The fuzzy formulation is useful in dealing with systems which are described imprecisely using fuzzy terms such as 'sufficiently large', 'very strong', or 'satisfactory'. The proposed theory translates the imprecise linguistic statements and multiple objectives into equivalent crisp mathematical statements using fuzzy logic. The effectiveness of all the methodologies and theories presented is illustrated by formulating and solving two different engineering design problems. The first involves the flight trajectory optimization and main rotor design of helicopters. The second concerns the integrated kinematic-dynamic synthesis of planar mechanisms. The use and effectiveness of nonlinear membership functions in the fuzzy formulation is also demonstrated. The numerical results indicate that the fuzzy formulation can yield results which are qualitatively different from those provided by the crisp formulation. It is felt that the fuzzy formulation will handle real-life design problems on a more rational basis.
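
    A common crisp translation of such fuzzy goals is the max-min (Bellman-Zadeh) formulation: each imprecise goal becomes a membership function, and the crisp problem maximizes the smallest membership. The one-variable objectives below are invented stand-ins, not the helicopter or mechanism problems of the paper.

```python
# Max-min fuzzy formulation on a toy two-goal design problem.
def membership(value, bad, good):
    """Linear membership: 0 at 'bad', 1 at 'good' (works in either order)."""
    t = (value - bad) / (good - bad)
    return max(0.0, min(1.0, t))

def overall_satisfaction(x):
    weight = 10.0 + 2.0 * x          # "sufficiently light": lower is better
    stiffness = 3.0 * x              # "very stiff": higher is better
    mu_light = membership(weight, bad=30.0, good=10.0)
    mu_stiff = membership(stiffness, bad=0.0, good=24.0)
    return min(mu_light, mu_stiff)   # max-min aggregation

# Crisp equivalent: maximize the minimum membership over the design space.
best_x = max((i * 0.01 for i in range(1001)), key=overall_satisfaction)
```

    The analytic optimum here is where the two memberships intersect (x = 40/9), which the grid search recovers.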

  1. Optimal placement of actuators and sensors in control augmented structural optimization

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A., Jr.

    1990-01-01

    A control-augmented structural synthesis methodology is presented in which actuator and sensor placement is treated in terms of (0,1) variables. Structural member sizes and control variables are treated simultaneously as design variables. A multiobjective utopian approach is used to obtain a compromise solution for inherently conflicting objective functions such as structural mass, control effort, and the number of actuators. Constraints are imposed on transient displacements, natural frequencies, actuator forces, and dynamic stability, as well as controllability and observability of the system. The combinatorial aspects of the mixed (0,1)-continuous variable design optimization problem are made tractable by combining approximation concepts with branch-and-bound techniques. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.

  2. Prediction of protein-protein interaction network using a multi-objective optimization approach.

    PubMed

    Chowdhury, Archana; Rakshit, Pratyusha; Konar, Amit

    2016-06-01

    Protein-Protein Interactions (PPIs) are very important as they coordinate almost all cellular processes. This paper attempts to formulate PPI prediction problem in a multi-objective optimization framework. The scoring functions for the trial solution deal with simultaneous maximization of functional similarity, strength of the domain interaction profiles, and the number of common neighbors of the proteins predicted to be interacting. The above optimization problem is solved using the proposed Firefly Algorithm with Nondominated Sorting. Experiments undertaken reveal that the proposed PPI prediction technique outperforms existing methods, including gene ontology-based Relative Specific Similarity, multi-domain-based Domain Cohesion Coupling method, domain-based Random Decision Forest method, Bagging with REP Tree, and evolutionary/swarm algorithm-based approaches, with respect to sensitivity, specificity, and F1 score.
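
    The nondominated-sorting step of such an algorithm is easy to sketch: candidate predictions are split into successive Pareto fronts over the three objectives, all maximized. The objective vectors below are toy values, not real PPI scores.

```python
# Nondominated sorting of candidate solutions (maximization on every axis).
def dominates(a, b):
    """a dominates b if it is at least as good everywhere, better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def nondominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Objectives: (functional similarity, domain-profile strength, shared neighbors)
scores = [(0.9, 0.2, 3), (0.8, 0.6, 5), (0.7, 0.1, 2), (0.9, 0.6, 5)]
fronts = nondominated_sort(scores)
```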

  3. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: in the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
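
    The modified-penalty idea can be sketched schematically: the constrained fit becomes an unconstrained one, and the penalty weight is damped by a reduction factor across GA generations so that early-stage parameter estimates survive later search. The model, data, and minimal GA below are toy stand-ins, not the paper's unit-hydrograph/infiltration formulation.

```python
# Penalty-with-reduction-factor fitness inside a tiny elitist GA (toy problem).
import random
random.seed(2)

def unconstrained_fitness(params, generation, penalty0=1e3, reduction=0.9):
    a, b = params                             # stand-in model parameters
    misfit = (a - 2.0) ** 2 + (b - 0.5) ** 2  # pretend data misfit
    violation = max(0.0, -a) + max(0.0, -b)   # parameters must stay positive
    penalty = penalty0 * reduction ** generation   # damped penalty weight
    return misfit + penalty * violation

pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]
for gen in range(60):                         # minimal (mu+lambda)-style GA
    pop.sort(key=lambda p: unconstrained_fitness(p, gen))
    parents = pop[:10]                        # elitist truncation selection
    pop = parents + [(a + random.gauss(0, 0.3), b + random.gauss(0, 0.3))
                     for a, b in parents for _ in range(2)]
best = min(pop, key=lambda p: unconstrained_fitness(p, 60))
```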

  4. Constrained simultaneous multi-state reconfigurable wing structure configuration optimization

    NASA Astrophysics Data System (ADS)

    Snyder, Matthew

    A reconfigurable aircraft is capable of in-flight shape change to increase mission performance or provide multi-mission capability. Reconfigurability has always been a consideration in aircraft design, from the Wright Flyer, to the F-14, and most recently the Lockheed-Martin folding wing concept. The Wright Flyer used wing-warping for roll control, the F-14 had a variable-sweep wing to improve supersonic flight capabilities, and the Lockheed-Martin folding wing demonstrated radical in-flight shape change. This dissertation will examine two questions that aircraft reconfigurability raises, especially as reconfiguration increases in complexity. First, is there an efficient method to develop a lightweight structure which supports all the loads generated by each configuration? Second, can this method include the capability to propose a sub-structure topology that weighs less than other considered designs? The first question requires a method that will design and optimize multiple configurations of a reconfigurable aerostructure. Three options exist; this dissertation will show that one is better than the others. Simultaneous optimization considers all configurations and their respective load cases and constraints at the same time. Another method is sequential optimization, which considers each configuration of the vehicle one after the other, with the optimum design variable values from the first configuration becoming the lower bounds for subsequent configurations. This process repeats for each considered configuration and the lower bounds update as necessary. The third approach is aggregate combination, which keeps the thickness or area of each member for the most critical configuration, that is, the configuration that requires the largest cross-section. This research will show that simultaneous optimization produces a lower weight and different topology for the considered structures when compared to the sequential and aggregate techniques.
To answer the second question, the developed optimization algorithm combines simultaneous optimization with a new method for determining the optimum location of the structural members of the sub-structure. The method proposed here considers an over-populated structural model, one in which there are initially more members than necessary. Using a unique iterative process, the optimization algorithm removes members from the design if they do not carry enough load to justify their presence. The initial set of members includes ribs, spars and a series of cross-members that diagonally connect the ribs and spars. The final result is a different structure, which is lower weight than one developed from sequential optimization or aggregate combination, and suggests the primary load paths. Chapter 1 contains background information on reconfigurable aircraft and a description of the new reconfigurable air vehicle being considered by the Air Vehicles Directorate of the Air Force Research Laboratory. This vehicle serves as a platform to test the proposed optimization process. Chapters 2 and 3 overview the optimization method and Chapter 4 provides some background analysis which is unique to this particular reconfigurable air vehicle. Chapter 5 contains the results of the optimizations and demonstrates how changing constraints or initial configuration impacts the final weight and topology of the wing structure. The final chapter contains conclusions and comments on some future work which would further enhance the effectiveness of the simultaneous reconfigurable structural topology optimization process developed and used in this dissertation.
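
    The iterative member-removal loop described above can be caricatured in a few lines; here the "analysis" is a fake proportional load redistribution standing in for the finite-element solve, and all member names, scores, and the threshold are invented for illustration.

```python
# Toy member-removal topology loop: delete under-loaded members, re-analyze.
def analyze(members):
    """Fake structural analysis: redistribute a unit load in proportion to
    each member's usefulness score (a stand-in for stiffness routing)."""
    total = sum(usefulness for _, usefulness in members)
    return {name: usefulness / total for name, usefulness in members}

members = [("spar1", 5.0), ("spar2", 4.0), ("rib1", 2.0),
           ("diag1", 0.3), ("diag2", 0.2), ("diag3", 1.5)]
THRESHOLD = 0.05                       # minimum load fraction to keep a member

while True:
    loads = analyze(members)
    weak = [m for m in members if loads[m[0]] < THRESHOLD]
    if not weak:
        break
    # remove the single least-loaded member, then re-analyze (loads shift)
    members.remove(min(weak, key=lambda m: loads[m[0]]))

surviving = [name for name, _ in members]
```

    Removing one member at a time before re-analyzing matters: once the weakest diagonal is gone, the loads redistribute and the next candidate may no longer be below the threshold.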

  5. An Introduction to System-Level, Steady-State and Transient Modeling and Optimization of High-Power-Density Thermoelectric Generator Devices Made of Segmented Thermoelectric Elements

    NASA Astrophysics Data System (ADS)

    Crane, D. T.

    2011-05-01

    High-power-density, segmented, thermoelectric (TE) elements have been intimately integrated into heat exchangers, eliminating many of the loss mechanisms of conventional TE assemblies, including the ceramic electrical isolation layer. Numerical models comprising simultaneously solved, nonlinear, energy balance equations have been created to simulate these novel architectures. Both steady-state and transient models have been created in a MATLAB/Simulink environment. The models predict data from experiments in various configurations and applications over a broad range of temperature, flow, and current conditions for power produced, efficiency, and a variety of other important outputs. Using the validated models, devices and systems are optimized using advanced multiparameter optimization techniques. Devices optimized for particular steady-state operating conditions can then be dynamically simulated in a transient operating model. The transient model can simulate a variety of operating conditions including automotive and truck drive cycles.

  6. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  7. Development of a novel optimization tool for electron linacs inspired by artificial intelligence techniques in video games

    NASA Astrophysics Data System (ADS)

    Meier, E.; Biedron, S. G.; LeBlanc, G.; Morgan, M. J.

    2011-03-01

    This paper reports the results of an advanced algorithm for the optimization of electron beam parameters in Free Electron Laser (FEL) linacs. In the novel approach presented in this paper, the system uses state-of-the-art developments in video games to mimic an operator's decisions to perform an optimization task when no prior knowledge, other than constraints on the actuators, is available. The system was tested for the simultaneous optimization of the energy spread and the transmission of the Australian Synchrotron Linac. The proposed system successfully increased the transmission of the machine from 90% to 97% and decreased the energy spread of the beam from 1.04% to 0.91%. Results of a control experiment performed at the new FERMI@Elettra FEL are also reported, suggesting the adaptability of the scheme for beam-based control.

  8. Carbon coated magnetic nanoparticles as a novel magnetic solid phase extraction adsorbent for simultaneous extraction of methamphetamine and ephedrine from urine samples.

    PubMed

    Taghvimi, Arezou; Hamishehkar, Hamed

    2017-01-15

    This paper develops a highly selective, specific and efficient method for the simultaneous determination of ephedrine and methamphetamine using new carbon-coated magnetic nanoparticles (C/MNPs) as a magnetic solid phase extraction (MSPE) adsorbent in a biological urine medium. The synthesized magnetic nanoadsorbent was fully characterized by techniques such as Fourier transform infrared (FT-IR) spectroscopy, powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and vibrating sample magnetometry (VSM). Nine important parameters influencing extraction efficiency, including amount of adsorbent, sample volume, pH, type and amount of extraction organic solvent, extraction and desorption time, agitation rate, and ionic strength of the extraction medium, were studied and optimized. Under optimized extraction conditions, good linearity was observed in the concentration range of 100-2000 ng/mL for ephedrine and 100-2500 ng/mL for methamphetamine. Analysis of positive urine samples was carried out by the proposed method with recoveries of 98.71 and 97.87% for ephedrine and methamphetamine, respectively. The results indicated that carbon-coated magnetic nanoparticles could be applied in clinical and forensic laboratories for the simultaneous determination of abused drugs in urine media. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Development of a 3-wire probe for the simultaneous measurement of turbulent velocity, concentration and temperature fields

    NASA Astrophysics Data System (ADS)

    Hewes, Alaïs; Mydlarski, Laurent

    2015-11-01

    The present work focuses on the design and optimization of a probe used to simultaneously measure the velocity, concentration and temperature fields in a turbulent jet. The underlying principles of this sensor are based on thermal-anemometry techniques, and the design of this 3-wire probe builds on the previous work of Sirivat and Warhaft, J. Fluid Mech., 1982. In the first part of this study, the effect of different overheat ratios in the first two wires (called the ``interference'' or ``Way-Libby'' probe, used to infer velocity and concentration) is investigated. Of particular interest is their effect on the quality of the resulting calibration, as well as the measured velocity and concentration data. Four different overheat ratio pairs for the two wires comprising the interference probe are studied. In the second part of this work, a third wire, capable of detecting temperature fluctuations, is added to the 3-wire probe. The optimal configuration of this probe, including wire type and overheat ratio for the third wire, is studied, and the simultaneously-measured velocity, concentration, and temperature data (e.g. spectra, PDFs) for different probe configurations are presented. Supported by the Natural Sciences and Engineering Research Council of Canada (Grant 217184).

  10. Decision making based on data analysis and optimization algorithm applied for cogeneration systems integration into a grid

    NASA Astrophysics Data System (ADS)

    Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan

    2018-05-01

    Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in the residential and tertiary sectors, by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) and a less polluting profile than conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced and divided into two parts: the first is a strategy based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to prove the compatibility of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique shows compatibility with the thermal demand for district heating.

  11. MULTI-OBJECTIVE OPTIMIZATION OF MICROSTRUCTURE IN WROUGHT MAGNESIUM ALLOYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Gorti, Sarma B; Simunovic, Srdjan

    2013-01-01

    The microstructural features that govern the mechanical properties of wrought magnesium alloys include grain size, crystallographic texture, and twinning. Several processes based on shear deformation have been developed that promote grain refinement, weakening of the basal texture, as well as the shift of the peak intensity away from the center of the basal pole figure - features that promote room temperature ductility in Mg alloys. At ORNL, we are currently exploring the concept of introducing nano-twins within sub-micron grains as a possible mechanism for simultaneously improving strength and ductility by exploiting a potential dislocation glide along the twin-matrix interface, a mechanism that was originally proposed for face-centered cubic materials. Specifically, we have developed an integrated modeling and optimization framework in order to identify the combinations of grain size, texture and twin spacing that can maximize strength-ductility combinations. A micromechanical model that relates microstructure to material strength is coupled with a failure model that relates ductility to a critical shear strain and a critical hydrostatic stress. The micromechanical model is combined with an optimization tool based on a genetic algorithm. A multi-objective optimization technique is used to explore the strength-ductility space in a systematic fashion and identify optimum combinations of the microstructural parameters that will simultaneously maximize strength and ductility in the alloy.

  12. Investigation on pretreatment of centrifugal mother liquid produced in the production of polyvinyl chloride by air-Fenton technique.

    PubMed

    Sun, Yingying; Hua, Xiuyi; Ge, Rui; Guo, Aitong; Guo, Zhiyong; Dong, Deming; Sun, Wentian

    2013-08-01

    Centrifugal mother liquid (CML) is one of the main sources of wastewater produced during the production of polyvinyl chloride in the chlor-alkali industry. CML is a typical poorly biodegradable organic wastewater, containing many kinds of refractory pollutants. Specifically, it contains dissolved refractory polymers, especially polyvinyl alcohol (PVA), which can pass through the biotreatment processes and clog the membranes used for further treatment. In this study, to make the CML amenable to biotreatment and membrane treatment, a novel, efficient and mild technique, air-Fenton treatment, was employed as a pretreatment to improve the biodegradability of the CML and to break down the polymers it contains. First, the technique was optimized for CML treatment by tuning the main parameters, including the dosage of ferrous sulfate, initial pH of the wastewater, [H2O2]/[Fe(2+)], aeration rate, reaction time, and temperature, based on the removal efficiency of COD and PVA from the CML. Then, the optimized technique was tested and evaluated. The results indicated that under the optimized conditions, the air-Fenton treatment could remove 66, 98, and 55 % of the COD, PVA, and TOC, respectively, from the CML. After the treatment, the biodegradability of the wastewater increased significantly (BOD/COD increased from 0.31 to 0.68), and almost all of the PVA polymers were removed or broken down. Meanwhile, the concentration of the remaining iron ions, which were added during the treatment, was also quite low (only 2.9 mg/L). Furthermore, most of the suspended materials and ammonia nitrogen, and some of the phosphorus in the wastewater, were removed simultaneously.

  13. Linear theory for filtering nonlinear multiscale systems with model error

    PubMed Central

    Berry, Tyrus; Harlim, John

    2014-01-01

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. 
In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether it is online or offline. PMID:25002829

  14. Simultaneous separation of copper, cadmium and cobalt from sea-water by co-flotation with octadecylamine and ferric hydroxide as collectors.

    PubMed

    Cabezon, L M; Caballero, M; Cela, R; Perez-Bustamante, J A

    1984-08-01

    A method is proposed for the simultaneous quantitative separation of traces of Cu(II), Cd(II) and Co(II) from sea-water samples by means of the co-flotation (adsorbing colloid flotation) technique, with ferric hydroxide as co-precipitant and octadecylamine as collector. The experimental parameters have been studied and optimized. The drawbacks arising from the low solubility of octadecylamine and the corresponding sublates in water have been avoided by use of a 6M hydrochloric acid-MIBK-ethanol (1:2:2 v/v) mixture. The results obtained by means of the proposed method have been compared with those given by the usual ammonium pyrrolidine dithiocarbamate/MIBK extraction method.

  15. Evolutionary Bi-objective Optimization for Bulldozer and Its Blade in Soil Cutting

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak; Barakat, Nada

    2018-02-01

    An evolutionary optimization approach is adopted in this paper for simultaneously achieving economic and productive soil cutting. The economic aspect is addressed by minimizing the power requirement of the bulldozer, and the soil cutting is made productive by minimizing the time of soil cutting. For determining the power requirement, two force models are adopted from the literature to quantify the cutting force on the blade. Three domain-specific constraints are also proposed: limiting the power drawn from the bulldozer, limiting the maximum force on the bulldozer blade, and achieving the desired production rate. The bi-objective optimization problem is solved using five benchmark multi-objective evolutionary algorithms and one classical technique, the ɛ-constraint method. The Pareto-optimal solutions are obtained, together with the knee region. Further, post-optimal analysis is performed on the obtained solutions to decipher relationships among the objectives and decision variables. Such relationships are later used to formulate guidelines for selecting the optimal set of input parameters. The obtained results are then compared with experimental results from the literature, showing close agreement between the two.
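
    The ε-constraint step can be sketched directly: one objective is minimized while the other is capped at ε, and sweeping ε traces out the Pareto front. The power and cutting-time models below are invented toy functions, not the force models from the paper.

```python
# ε-constraint method on a toy power-vs-time bi-objective problem.
def power(v):          # power demand grows with cutting speed
    return 2.0 + 3.0 * v ** 2

def cut_time(v):       # cutting time falls with speed
    return 10.0 / v

speeds = [0.1 * k for k in range(1, 51)]        # candidate blade speeds
pareto = []
for eps in [5.0, 10.0, 20.0, 40.0]:             # cap on allowable power
    feasible = [v for v in speeds if power(v) <= eps]
    v_star = min(feasible, key=cut_time)        # fastest cut within the cap
    pareto.append((power(v_star), cut_time(v_star)))
```

    Each ε yields one trade-off point; as the power cap loosens, power rises and cutting time falls, tracing the front.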

  16. Recent developments of axial flow compressors under transonic flow conditions

    NASA Astrophysics Data System (ADS)

    Srinivas, G.; Raghunandana, K.; Satish Shenoy, B.

    2017-05-01

    The objective of this paper is to give a holistic view of the most advanced technology and procedures practiced in the field of turbomachinery design. A compressor flow solver relies on a turbulence model in CFD to resolve viscous effects. Popular techniques like Jameson's rotated difference scheme were used to solve the potential flow equation under transonic conditions, first for two-dimensional aerofoils and later for three-dimensional wings. Gradient-based methods are also popular, especially for compressor blade shape optimization. Other available optimization techniques include evolutionary algorithms (EAs) and response surface methodology (RSM). To improve a compressor flow solver and obtain agreeable results, careful attention must be paid to viscous relations, grid resolution, turbulence modeling, and artificial viscosity in CFD. Advanced techniques like Jameson's rotated difference scheme have had a substantial impact on aerofoil and wing design. For compressor blade shape optimization, evolutionary algorithms are simpler than gradient-based techniques because they search the design space from multiple points simultaneously. Response surface methodology builds empirical models of an observed response and systematically analyzes the experimental data, characterizing the relationship between expected responses (outputs) and design variables (inputs) through a series of mathematical and statistical procedures. RSM has recently been applied successfully to turbomachinery blade optimization. Well-designed, high-performance axial flow compressors find application in air-breathing jet engines.
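
    The response-surface idea can be illustrated in miniature: sample a (toy) blade-efficiency response at a few design points, fit the quadratic surrogate y = c0 + c1*x + c2*x**2, and read the optimum off the surrogate instead of the expensive solver. The response function and sample points are invented; with three samples the least-squares fit reduces to exact interpolation (Newton's divided-difference form).

```python
# Minimal response-surface step: quadratic surrogate + its stationary point.
def response(x):                       # stand-in for a CFD run or experiment
    return 0.90 - 0.05 * (x - 0.6) ** 2

def quadratic_fit(xs, ys):
    """Interpolating quadratic through three samples, returned as c0, c1, c2."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    d1 = (y1 - y0) / (x1 - x0)                     # divided differences
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    c2 = d2
    c1 = d1 - d2 * (x0 + x1)
    c0 = y0 - d1 * x0 + d2 * x0 * x1
    return c0, c1, c2

xs = (0.0, 0.5, 1.0)                   # design-variable samples (e.g. camber)
c0, c1, c2 = quadratic_fit(xs, tuple(response(x) for x in xs))
x_opt = -c1 / (2.0 * c2)               # stationary point of the surrogate
```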

  17. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte, which is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity varies with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (e.g., bilinear) velocity gradients along the separation channel can improve peak capacity and separation resolution simultaneously, which cannot be achieved with a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical effort, which can be reduced significantly with analytical models that guide design optimization and experimental studies. This study therefore focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, the model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) than the linear case. The model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
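The balance underlying the focusing mechanism, the counter-flow velocity canceling the electrophoretic velocity at one channel position, can be sketched for a bilinear field profile. The field parameters and mobilities below are illustrative assumptions, not the paper's model constants.

```python
def bilinear_E(x, E0, g1, g2, xb):
    # field magnitude rising with slope g1 up to the breakpoint xb, then g2
    if x < xb:
        return E0 + g1 * x
    return E0 + g1 * xb + g2 * (x - xb)

def focus_position(u_bulk, mu, E0, g1, g2, xb):
    """Position where an analyte of mobility mu focuses: the point at which
    the counter-flow u_bulk balances the electrophoretic velocity mu*E(x).
    Inverts the bilinear field profile analytically."""
    E_target = u_bulk / mu
    if E_target <= E0 + g1 * xb:          # balance reached on the first segment
        return (E_target - E0) / g1
    return xb + (E_target - (E0 + g1 * xb)) / g2

# two hypothetical analytes focused at distinct positions by the same gradient
x_fast = focus_position(u_bulk=2.0, mu=1.0, E0=1.0, g1=1.0, g2=4.0, xb=2.0)
x_slow = focus_position(u_bulk=7.0, mu=1.0, E0=1.0, g1=1.0, g2=4.0, xb=2.0)
```

In the scanning format, sweeping `u_bulk` in time moves each analyte's focus point past the detector in mobility order; the steeper second segment compresses the later-eluting bands, which is the lever the paper optimizes.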

  18. Complexity Optimization and High-Throughput Low-Latency Hardware Implementation of a Multi-Electrode Spike-Sorting Algorithm

    PubMed Central

    Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix

    2017-01-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989

  19. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    PubMed

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.
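A minimal sketch of one stage of such a pipeline, threshold detection followed by nearest-template assignment, with only time stamps and unit labels retained (echoing the bandwidth-reduction idea), is shown below. The templates, threshold, and alignment scheme are simplified assumptions, not the published algorithm.

```python
def detect_and_sort(signal, templates, threshold, pre=1):
    """Toy spike sorter: scan for threshold crossings, extract a window
    aligned so the crossing sits at offset `pre`, and assign the spike to
    the template with the smallest squared distance. Emits only
    (time stamp, unit id) pairs, not waveforms."""
    w = len(templates[0])
    events = []
    i = pre
    while i <= len(signal) - (w - pre):
        if abs(signal[i]) >= threshold:
            window = signal[i - pre:i - pre + w]
            dists = [sum((a - b) ** 2 for a, b in zip(window, t))
                     for t in templates]
            events.append((i, dists.index(min(dists))))
            i += w - pre                  # skip past the detected spike
        else:
            i += 1
    return events

templates = [[0, 5, 0], [0, -5, 0]]       # two hypothetical unit waveforms
signal = [0, 0, 0, 5, 0, 0, 0, -5, 0, 0]
events = detect_and_sort(signal, templates, threshold=3)
```

The folded hardware architecture in the paper effectively time-multiplexes the distance computations across units; the sequential loop here is the software analogue.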

  20. Design and fabrication of planar structures with graded electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Good, Brandon Lowell

Successfully integrating electromagnetic properties into planar structures offers numerous benefits to the microwave and optical communities. This work aims at formulating new analytic and optimized design methods, creating new fabrication techniques for achieving those methods, and matching methods to appropriate fabrication techniques. The analytic method consists of modifying an approach that realizes perfect antireflective properties from graded profiles. This method is shown for all-dielectric and magneto-dielectric grading profiles. The optimized design methods are applied to transformer (discrete) or taper (continuous) designs. From these methods, a subtractive and an additive manufacturing technique were established and are described. The additive method, dry powder dot deposition, enables three-dimensionally varying electromagnetic properties in a structural composite. The combination of methods and fabrication is shown in two applied methodologies. The first uses dry powder dot deposition to design one-dimensionally graded electromagnetic profiles in a planar fiberglass composite. The second simultaneously applies antireflective properties and adjusts directivity through a slab by means of subwavelength structures, achieving a flat antireflective lens. The end result of this work is a complete set of methods, formulations, and fabrication techniques to achieve integrated electromagnetic properties in planar structures.
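A standard way to evaluate such graded antireflective profiles numerically is the characteristic (transfer) matrix method applied to a layered approximation of the grade. The sketch below computes normal-incidence reflectance of a lossless, non-magnetic dielectric stack; the index values are illustrative assumptions, not the dissertation's materials.

```python
import cmath

def stack_reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a dielectric layer stack via
    characteristic matrices (lossless, non-magnetic media)."""
    B, C = 1.0 + 0j, n_sub + 0j                 # start from the substrate side
    for n, d in reversed(list(zip(n_layers, d_layers))):
        delta = 2 * cmath.pi * n * d / wavelength
        cosd, sind = cmath.cos(delta), cmath.sin(delta)
        B, C = cosd * B + 1j * sind / n * C, 1j * n * sind * B + cosd * C
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# bare air/substrate interface (n = 1 -> 4) versus a quarter-wave match
R_bare = stack_reflectance([], [], 1.0, 4.0, 1.0)
R_qw = stack_reflectance([2.0], [0.125], 1.0, 4.0, 1.0)  # n1 = sqrt(4), d = lambda/(4 n1)
```

Sampling a continuous grading profile into many thin layers and passing the sampled indices and thicknesses to `stack_reflectance` approximates the graded structure; the quarter-wave example reproduces the classical zero-reflectance match.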

  1. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.
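The one-shot idea, advancing the state solve and the design update together rather than converging the state equation for every design iterate, can be illustrated on a toy fixed-point problem. The state equation, objective, and step sizes below are invented for illustration and are unrelated to the paper's PDE formulation.

```python
def one_shot(iters=500, lr=0.05):
    """One-shot (simultaneous) optimization sketch.
    Toy state equation: u = 0.5*u + d, whose converged state is u* = 2d.
    Objective: J(u, d) = (u - 1)^2 + d^2, minimized at d = 0.4, u = 0.8.
    Each iteration does ONE state-update sweep and ONE design step,
    instead of nesting a full state solve inside every design step."""
    u, d = 0.0, 0.0
    for _ in range(iters):
        u = 0.5 * u + d                           # single fixed-point sweep
        grad = 2.0 * (u - 1.0) * 2.0 + 2.0 * d    # dJ/dd using du*/dd = 2
        d -= lr * grad                            # single design update
    return u, d

u, d = one_shot()
```

At the joint fixed point the state residual and the design gradient vanish together, which is the defining property of a one-shot method; the paper's contribution is making the analogous coupled iteration well-behaved for chaotic unsteady constraints.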

  2. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.

  3. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    ERIC Educational Resources Information Center

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  4. Development and validation of simple spectrophotometric and chemometric methods for simultaneous determination of empagliflozin and metformin: Applied to recently approved pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Ayoub, Bassam M.

    2016-11-01

A new univariate spectrophotometric method and a multivariate chemometric approach were developed and compared for the simultaneous determination of empagliflozin and metformin, manipulating their zero-order absorption spectra, with application to their pharmaceutical preparation. A sample enrichment technique was used to increase the concentration of empagliflozin after extraction from tablets, allowing its simultaneous determination with metformin without prior separation. Validation parameters according to ICH guidelines were satisfactory over the concentration range of 2-12 μg mL-1 for both drugs using the simultaneous equation method, with LOD values of 0.20 μg mL-1 and 0.19 μg mL-1 and LOQ values of 0.59 μg mL-1 and 0.58 μg mL-1 for empagliflozin and metformin, respectively. The optimum results for the chemometric approach using the partial least squares method (PLS-2) were obtained over the concentration range of 2-10 μg mL-1. The optimized, validated methods are suitable for quality control laboratories, enabling fast and economical determination of the recently approved pharmaceutical combination Synjardy® tablets.
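The simultaneous equation (Vierordt) method referred to above reduces, for two analytes measured at two wavelengths, to solving a 2×2 linear system given by the Beer-Lambert law. The absorptivity values below are invented for illustration, not calibration data from the paper.

```python
def vierordt(A1, A2, e_drug1, e_drug2):
    """Simultaneous-equation (Vierordt) method for a binary mixture.
    A1, A2: mixture absorbances at wavelengths 1 and 2 (1 cm path).
    e_drugN: (absorptivity at wavelength 1, absorptivity at wavelength 2).
    Solves  A1 = a11*c1 + a12*c2,  A2 = a21*c1 + a22*c2  by Cramer's rule."""
    a11, a21 = e_drug1
    a12, a22 = e_drug2
    det = a11 * a22 - a12 * a21
    c1 = (A1 * a22 - A2 * a12) / det
    c2 = (a11 * A2 - a21 * A1) / det
    return c1, c2

# invented absorptivities (mL ug^-1 cm^-1) for empagliflozin and metformin
e_empa, e_met = (0.08, 0.02), (0.03, 0.09)
c_empa, c_met = vierordt(0.60, 0.48, e_empa, e_met)
```

The method only works when the two absorptivity vectors are not proportional (nonzero determinant), i.e., when the spectra are sufficiently different at the chosen wavelengths, which is why wavelength selection matters.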

  5. A Wavelet Neural Network Optimal Control Model for Traffic-Flow Prediction in Intelligent Transport Systems

    NASA Astrophysics Data System (ADS)

    Huang, Darong; Bai, Xing-Rong

Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in the optimal control of an intelligent transport system is constructed. First, the scale and wavelet coefficients are extracted from raw online traffic-flow measurements via the wavelet transform. Second, an artificial neural network traffic-flow prediction model is constructed and trained, using the coefficient sequences as inputs and the raw data as outputs. Simultaneously, the operating principle of the optimal control system for the traffic-flow forecasting model, the network topology, and the data transmission model are designed. Finally, a simulated example shows that the technique is effective and accurate. The theoretical results indicate that the wavelet neural network prediction model and algorithms have broad prospects for practical application.
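The coefficient-extraction step can be sketched with a one-level Haar transform, the simplest wavelet. The paper does not specify its wavelet or network details, so this is only an illustrative stand-in producing the scale (approximation) and wavelet (detail) sequences that would feed the network.

```python
def haar_decompose(x):
    """One-level Haar transform of an even-length series: returns the
    scale (approximation) and wavelet (detail) coefficient sequences."""
    s = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return s, d

def haar_reconstruct(s, d):
    # exact inverse of the one-level transform
    x = []
    for si, di in zip(s, d):
        x += [(si + di) / 2 ** 0.5, (si - di) / 2 ** 0.5]
    return x

flow = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 2.0]  # hypothetical counts per interval
scale_c, wavelet_c = haar_decompose(flow)
```

The scale coefficients carry the smooth trend and the wavelet coefficients the fluctuations; feeding both to the predictor lets the network treat the two time scales separately, which is the motivation for the wavelet front end.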

  6. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2017-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369

  7. Optimal experiment design for magnetic resonance fingerprinting.

    PubMed

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
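The CRB computation described above can be sketched on a toy signal model: with i.i.d. Gaussian noise, the Fisher information is JᵀJ/σ² for the Jacobian J of the signal with respect to the parameters, and the CRB is its inverse. The mono-exponential model and acquisition times below are illustrative stand-ins for the fingerprinting signal model.

```python
import math

def crb_mono_exp(times, M0, T2, sigma):
    """Cramer-Rao bound for theta = (M0, T2) in S(t) = M0 * exp(-t / T2)
    with i.i.d. Gaussian noise of standard deviation sigma.
    Returns the CRB variances (diagonal of the inverse Fisher matrix)."""
    # Jacobian of S with respect to (M0, T2) at each acquisition time
    J = [(math.exp(-t / T2), M0 * t / T2 ** 2 * math.exp(-t / T2)) for t in times]
    F00 = sum(a * a for a, _ in J) / sigma ** 2
    F01 = sum(a * b for a, b in J) / sigma ** 2
    F11 = sum(b * b for _, b in J) / sigma ** 2
    det = F00 * F11 - F01 * F01               # invert the 2x2 Fisher matrix
    return F11 / det, F00 / det

short = [0.2, 0.5, 1.0, 2.0]                  # illustrative acquisition times
longer = short + [0.1, 0.8, 1.5, 3.0]
v_short = crb_mono_exp(short, 1.0, 1.0, 0.05)
v_long = crb_mono_exp(longer, 1.0, 1.0, 0.05)
```

Consistent with the estimation-theoretic argument, enlarging the set of acquisition times only adds (positive semi-definite) Fisher information, so the bound tightens; the paper's design problem is choosing the acquisition parameters that tighten it fastest per unit scan time.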

  8. Time-resolved multicolor two-photon excitation fluorescence microscopy of cells and tissues

    NASA Astrophysics Data System (ADS)

    Zheng, Wei

    2014-11-01

Multilabeling, which maps the distributions of different targets, is an indispensable technique in many biochemical and biophysical studies. Two-photon excitation fluorescence (TPEF) microscopy of endogenous fluorophores, combined with conventional fluorescence labeling techniques such as genetically encoded fluorescent proteins (FPs) and fluorescent dye staining, can be a powerful tool for imaging living cells. The challenge, however, is that the excitation and emission wavelengths of these endogenous fluorophores and fluorescent labels are very different, so a multicolor ultrafast source is required to excite the multiple fluorescent molecules. In this study, we developed a two-photon imaging system with excitation from the pump femtosecond laser and a selected supercontinuum generated in a photonic crystal fiber (PCF). Multiple endogenous fluorophores, fluorescent proteins and fluorescent dyes were excited at their optimal wavelengths simultaneously. A time- and spectral-resolved detection system was used to record the TPEF signals, separating the signals from multiple sources in the time and wavelength domains. Cellular organelles such as the nucleus, mitochondria, microtubules and endoplasmic reticulum were clearly revealed in the TPEF images. The simultaneous imaging of multiple fluorophores in cells will greatly aid the study of subcellular compartments and protein localization.

  9. A rapid method for the simultaneous determination of gross alpha and beta activities in water samples using a low background liquid scintillation counter.

    PubMed

    Sanchez-Cabeza, J A; Pujol, L

    1995-05-01

    The radiological examination of water requires a rapid screening technique that permits the determination of the gross alpha and beta activities of each sample in order to decide if further radiological analyses are necessary. In this work, the use of a low background liquid scintillation system (Quantulus 1220) is proposed to simultaneously determine the gross activities in water samples. Liquid scintillation is compared to more conventional techniques used in most monitoring laboratories. In order to determine the best counting configuration of the system, pulse shape discrimination was optimized for 6 scintillant/vial combinations. It was concluded that the best counting configuration was obtained with the scintillation cocktail Optiphase Hisafe 3 in Zinsser low diffusion vials. The detection limits achieved were 0.012 Bq L-1 and 0.14 Bq L-1 for gross alpha and beta activity respectively, after a 1:10 concentration process by simple evaporation and for a counting time of only 360 min. The proposed technique is rapid, gives spectral information, and is adequate to determine gross activities according to the World Health Organization (WHO) guideline values.
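Detection limits of the kind quoted above are commonly computed with Currie's formula; a sketch follows, with the counting efficiency and effective sample volume given as illustrative assumptions rather than the paper's calibration.

```python
import math

def mda_bq_per_litre(background_counts, efficiency, count_time_s, volume_l):
    """Currie detection limit (95% confidence, paired blank counting):
    L_D = 2.71 + 4.65 * sqrt(B) counts, converted to activity in Bq/L.
    volume_l is the effective volume of original sample counted (a 1:10
    evaporative preconcentration raises it tenfold)."""
    ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return ld_counts / (efficiency * count_time_s * volume_l)

# illustrative screening setup: 360 min count, assumed 90% efficiency,
# 0.08 L of concentrate representing 0.8 L of original sample
mda = mda_bq_per_litre(background_counts=100, efficiency=0.9,
                       count_time_s=360 * 60, volume_l=0.8)
```

The formula makes the screening trade-offs explicit: longer counting times, lower backgrounds, and preconcentration all push the detection limit down, which is how the proposed 360 min protocol reaches the sub-guideline values reported.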

  10. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structural element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.

  11. Mapping of the human upper arm muscle activity with an electrode matrix.

    PubMed

    Côté, J; Mathieu, P A

    2000-06-01

Surface electrode matrices allow the measurement of muscle activity while avoiding the hazards and inconvenience associated with invasive techniques. Major challenges for such equipment are optimizing spatial resolution and designing simple acquisition systems able to record many potentials simultaneously over large anatomical areas. We present a surface electromyography acquisition system comprising 3 × 8 Ag-AgCl electrodes mounted on an elastic band, which can be adjusted to fit an entire human upper limb segment. Using this equipment, we acquired a simultaneous representation of muscular activity from a segment of the upper limb surface of 6 healthy subjects during isometric contractions at various intensities. We found that the location of the regions of highest activity depended on elbow torque direction but also varied among subjects. Signals obtained with such equipment can be used to solve the inverse problem and help optimize the electrode configuration in volume conduction studies. The efficacy of the decision algorithms of multifunctional myoelectric prostheses can be tested with the global muscle activity patterns gathered. The electrode cuff could also be used to investigate fatigue and injury mechanisms during occupational activities.

  12. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first-arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first-arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial-model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
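The simulated annealing loop at the heart of such a joint optimization can be sketched generically: perturb the model, accept by the Metropolis rule, and cool geometrically. The two quadratic misfits and the weight below are toy stand-ins for the seismic and gravity objective terms, not the paper's physics.

```python
import math, random

def anneal(objective, m0, step=0.5, T0=1.0, cooling=0.995, iters=4000, seed=1):
    """Plain simulated annealing: random perturbation, Metropolis acceptance,
    geometric cooling. No accurate initial model is required."""
    random.seed(seed)
    m, E = list(m0), objective(m0)
    best_m, best_E = list(m), E
    T = T0
    for _ in range(iters):
        cand = [mi + random.uniform(-step, step) for mi in m]
        Ec = objective(cand)
        # always accept improvements; accept uphill moves with Boltzmann odds
        if Ec < E or random.random() < math.exp(-(Ec - E) / T):
            m, E = cand, Ec
            if E < best_E:
                best_m, best_E = list(m), E
        T *= cooling
    return best_m, best_E

w = 0.6                                   # assumed seismic-vs-gravity weight
seismic_misfit = lambda m: (m[0] - 2.0) ** 2
gravity_misfit = lambda m: (m[1] + 1.0) ** 2
joint = lambda m: w * seismic_misfit(m) + (1 - w) * gravity_misfit(m)
best_m, best_E = anneal(joint, [0.0, 0.0])
```

Balancing the weight `w` is the step the paper addresses with Pareto charts: too seismic-heavy and the gravity data are ignored, too gravity-heavy and the traveltime fit degrades.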

  13. Simultaneous Extraction Optimization and Analysis of Flavonoids from the Flowers of Tabernaemontana heyneana by High Performance Liquid Chromatography Coupled to Diode Array Detector and Electron Spray Ionization/Mass Spectrometry

    PubMed Central

    Sathishkumar, Thiyagarajan; Baskar, Ramakrishnan; Aravind, Mohan; Tilak, Suryanarayanan; Deepthi, Sri; Bharathikumar, Vellalore Maruthachalam

    2013-01-01

Flavonoids are exploited as antioxidant, antimicrobial, antithrombogenic, antiviral, and antihypercholesterolemic agents. Conventional extraction techniques such as the Soxhlet or shake flask methods provide low yields of flavonoids with structural loss, and may therefore be considered inefficient. In this regard, an attempt was made to optimize flavonoid extraction using an orthogonal design of experiment, with subsequent structural elucidation by high-performance liquid chromatography-diode array detector-electron spray ionization/mass spectrometry (HPLC-DAD-ESI/MS). The shake flask method of flavonoid extraction provided a yield of 1.2 ± 0.13 mg/g tissue. Of the two solvents tried for extraction optimization, ethanol (80.1 mg/g tissue) proved better than ethyl acetate (20.5 mg/g tissue). The optimal extraction conditions were found to be 85°C for 3 hours, a material ratio of 1:20, 75% ethanol, and 1 cycle of extraction. Seven phenolics, including robinin, quercetin, rutin, sinapoyl-hexoside, dicaffeic acid, and two unknown compounds, were identified for the first time in the flowers of T. heyneana. The study also concluded that the L16 orthogonal design of experiment is a more effective method for flavonoid extraction than the shake flask method. PMID:25969771

  14. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

A unified computer-aided design method incorporating an optimal design technique with an analytical model of the composite cure process was studied for cure cycle design. Preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived using the direct differentiation method and are also solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized, and various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  15. Hybridization of Strength Pareto Multiobjective Optimization with Modified Cuckoo Search Algorithm for Rectangular Array

    NASA Astrophysics Data System (ADS)

    Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A. Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah

    2017-04-01

This research proposes several versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Specifically, the MCS algorithm incorporates a Roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the exploration of the positions of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard SPEA, forming MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined on Zitzler-Deb-Thiele's (ZDT's) MO test functions. Pareto-optimal trade-offs are made to generate a set of three non-dominated solutions: the locations, excitation amplitudes, and excitation phases of the array elements. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms its competitors, simultaneously gaining high antenna directivity, a small half-power beamwidth (HPBW), a low average side lobe level (SLL), and/or significant mitigation of predefined nulls.
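A stripped-down, single-objective cuckoo search (Lévy flights around the best nest plus abandonment of nests with probability `pa`) is sketched below on a sphere test function. The paper's MCS additionally uses Roulette wheel selection, an adaptive inertia weight, a dynamic discovery rate, and the SPEA multiobjective machinery, none of which are reproduced here.

```python
import math, random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed (Levy-distributed) step length
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, pa=0.25, iters=300, seed=2):
    random.seed(seed)
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    scale = 0.5                                # Levy step scale, slowly shrunk
    for _ in range(iters):
        b = min(range(n_nests), key=lambda k: fit[k])
        # generate a cuckoo egg by a Levy flight around the current best nest
        cand = [x + scale * levy_step() for x in nests[b]]
        j = random.randrange(n_nests)          # drop it in a random nest
        fc = f(cand)
        if fc < fit[j]:
            nests[j], fit[j] = cand, fc
        # a host discovers and abandons each non-best nest with probability pa
        for k in range(n_nests):
            if k != b and random.random() < pa:
                nests[k] = [random.uniform(-5, 5) for _ in range(dim)]
                fit[k] = f(nests[k])
        scale *= 0.99
    b = min(range(n_nests), key=lambda k: fit[k])
    return nests[b], fit[b]

best, fb = cuckoo_search(lambda x: sum(v * v for v in x))
```

The heavy Lévy tail is the distinguishing feature: most steps are local refinements around the best nest, but occasional long jumps keep the search from stagnating, which is what the paper's dynamic discovery rate further tunes.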

  16. Evolutionary optimization of radial basis function classifiers for data mining applications.

    PubMed

    Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard

    2005-10-01

    In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.
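The simultaneous feature-selection idea, with soft constraints folded into the fitness as a penalty term, can be sketched with a tiny bitmask GA. The toy fitness function stands in for a trained-classifier score and is not the paper's RBF pipeline; the paper's EA also evolves the network structure, which is omitted here.

```python
import random

def ga_feature_select(fitness, n_feat=6, pop_size=20, gens=30, p_mut=0.1, seed=3):
    """Tiny genetic algorithm over feature bitmasks: truncation selection,
    one-point crossover, bit-flip mutation, and two-individual elitism."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = [scored[0][:], scored[1][:]]            # elitism
        while len(pop) < pop_size:
            p1, p2 = random.sample(scored[:10], 2)    # parents from top half
            cut = random.randrange(1, n_feat)
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]
            pop.append(child)
    return max(pop, key=fitness)

def fitness(mask):
    # toy classifier quality: features 0 and 3 are informative; every selected
    # feature costs 0.02 (the penalty term standing in for soft constraints)
    return 0.5 + 0.2 * mask[0] + 0.3 * mask[3] - 0.02 * sum(mask)

best = ga_feature_select(fitness)
```

Because the penalty scales with the number of selected features, the GA is pushed toward the smallest mask that retains the informative inputs, mirroring the paper's joint feature-and-model selection objective.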

  17. Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik

    2012-12-01

Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication aggregation, threaded wavefront-based DRAM communication avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
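The V-cycle that miniGMG proxies can be sketched in one dimension: smooth, restrict the residual, recurse on the coarse-grid error equation, interpolate the correction back, and smooth again. This is a textbook sketch for the 1-D Poisson problem, not miniGMG's implementation.

```python
def residual(u, f, h):
    # r = f - A u for the 1-D Poisson stencil (-u'' = f, Dirichlet boundaries)
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h ** 2
    return r

def smooth(u, f, h, sweeps=2, w=2.0 / 3.0):
    # weighted-Jacobi smoother
    for _ in range(sweeps):
        un = u[:]
        for i in range(1, len(u) - 1):
            un[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h ** 2 * f[i])
        u = un
    return u

def v_cycle(u, f, h):
    n = len(u)
    if n <= 5:
        return smooth(u, f, h, sweeps=50)         # coarsest grid: near-exact solve
    u = smooth(u, f, h)                           # pre-smoothing
    r = residual(u, f, h)
    nc = (n + 1) // 2                             # grids of size 2^k + 1
    rc = [0.0] * nc
    for i in range(1, nc - 1):                    # full-weighting restriction
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = v_cycle([0.0] * nc, rc, 2 * h)           # coarse-grid error equation
    for i in range(nc - 1):                       # linear interpolation of the error
        u[2 * i] += ec[i]
        u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
    u[n - 1] += ec[nc - 1]
    return smooth(u, f, h)                        # post-smoothing

n, h = 65, 1.0 / 64
f = [1.0] * n                                     # -u'' = 1, u(0) = u(1) = 0
u = [0.0] * n
for _ in range(10):
    u = v_cycle(u, f, h)
```

Each phase here (smooth, residual, restrict, interpolate) corresponds to a phase miniGMG instruments, which is why the report quantifies its optimizations per V-cycle phase.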

  18. Preparation of nanomaterials for the ultrasound-enhanced removal of Pb2+ ions and malachite green dye: Chemometric optimization and modeling.

    PubMed

    Dil, Ebrahim Alipanahpour; Ghaedi, Mehrorang; Asfaram, Arash; Hajati, Shaaker; Mehrabi, Fatemeh; Goudarzi, Alireza

    2017-01-01

Copper oxide nanoparticle-loaded activated carbon (CuO-NP-AC) was synthesized and characterized using different techniques such as FE-SEM, XRD and FT-IR. It was successfully applied for the ultrasound-assisted simultaneous removal of Pb2+ ions and malachite green (MG) dye in a binary system from aqueous solution. The effects of the important parameters were modeled and optimized by artificial neural network (ANN) and response surface methodology (RSM). Maximum simultaneous removal percentages (>99.0%) were found at 25 mg L-1 initial Pb2+ concentration, 20 mg L-1 initial MG concentration, 0.02 g CuO-NP-AC, 5 min ultrasonication time and pH 6.0. The precision of the equation obtained by RSM was confirmed by analysis of variance and by the correlation coefficient relating the predicted and experimental values of the ultrasound-assisted simultaneous removal of the analytes; a good agreement between experimental and predicted values was observed. A feed-forward neural network with a topology optimized by response surface methodology was successfully applied to predict the ultrasound-assisted simultaneous removal of Pb2+ ions and MG dye in the binary system by CuO-NP-AC. The number of hidden neurons, MSE, R2, number of epochs and error histogram were chosen for ANN modeling. The Langmuir, Freundlich, Temkin and D-R isotherm models were then applied to fit the experimental data. The Langmuir model was found to describe the isotherm data well, with maximum adsorption capacities of 98.328 and 87.719 mg g-1 for Pb2+ and MG, respectively. Kinetic studies at the optimum condition showed that maximum Pb2+ and MG adsorption is achieved within 5 min of the start of most experiments. The combination of the pseudo-second-order rate equation and the intraparticle diffusion model explained the experimental data of the ultrasound-assisted simultaneous removal of Pb2+ and MG at the optimum condition obtained from RSM.
Copyright © 2016 Elsevier B.V. All rights reserved.
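    The Langmuir fit reported in this record can be illustrated with a short sketch: the isotherm q = qmax·K·C/(1 + K·C) is linearized as C/q = C/qmax + 1/(K·qmax) and fit by ordinary least squares. The data below are synthetic and the parameter values are round numbers chosen in the spirit of the record, not the authors' measurements:

```python
def fit_langmuir(C, q):
    """Fit q = qmax*K*C / (1 + K*C) via the linearised form
    C/q = C/qmax + 1/(K*qmax), using ordinary least squares."""
    y = [c / qi for c, qi in zip(C, q)]
    n = len(C)
    mx, my = sum(C) / n, sum(y) / n
    slope = sum((x - mx) * (yi - my) for x, yi in zip(C, y)) \
          / sum((x - mx) ** 2 for x in C)
    intercept = my - slope * mx
    qmax = 1.0 / slope
    K = 1.0 / (intercept * qmax)
    return qmax, K

# synthetic isotherm generated from round-number parameters (hypothetical units)
qmax_true, K_true = 98.3, 0.5
C = [1.0, 2.0, 5.0, 10.0, 20.0, 40.0]
q = [qmax_true * K_true * c / (1.0 + K_true * c) for c in C]
qmax_fit, K_fit = fit_langmuir(C, q)
```

With noise-free synthetic data the linearized regression recovers the generating parameters exactly; with real isotherm data a nonlinear fit is usually preferred because the linearization distorts the error weighting.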

  19. Overlapped Fourier coding for optical aberration removal

    PubMed Central

    Horstmeyer, Roarke; Ou, Xiaoze; Chung, Jaebum; Zheng, Guoan; Yang, Changhuei

    2014-01-01

    We present an imaging procedure that simultaneously optimizes a camera’s resolution and retrieves a sample’s phase over a sequence of snapshots. The technique, termed overlapped Fourier coding (OFC), first digitally pans a small aperture across a camera’s pupil plane with a spatial light modulator. At each aperture location, a unique image is acquired. The OFC algorithm then fuses these low-resolution images into a full-resolution estimate of the complex optical field incident upon the detector. Simultaneously, the algorithm utilizes redundancies within the acquired dataset to computationally estimate and remove unknown optical aberrations and system misalignments via simulated annealing. The result is an imaging system that can computationally overcome its optical imperfections to offer enhanced resolution, at the expense of taking multiple snapshots over time. PMID:25321982

  20. Immersion ultrasonography: simultaneous A-scan and B-scan.

    PubMed

    Coleman, D J; Dallow, R L; Smith, M E

    1979-01-01

    In eyes with opaque media, ophthalmic ultrasound provides a unique source of information that can dramatically affect the course of patient management. In addition, when an ocular abnormality can be visualized, ultrasonography provides information that supplements and complements other diagnostic testing. It provides documentation and differentiation of abnormal states, such as vitreous hemorrhage and intraocular tumor, as well as differentiation of orbital tumors from inflammatory causes of exophthalmos. Additional capabilities of ultrasound are biometric determinations for calculation of intraocular lens implant powers and drug-effectiveness studies. Maximal information is derived from ultrasonography when A-scan and B-scan techniques are employed simultaneously. Flexibility of electronics, variable-frequency transducers, and the use of several different manual scanning patterns aid in detection and interpretation of results. The immersion system of ultrasonography provides these features optimally.

  1. An ultra-low-power filtering technique for biomedical applications.

    PubMed

    Zhang, Tan-Tan; Mak, Pui-In; Vai, Mang-I; Mak, Peng-Un; Wan, Feng; Martins, R P

    2011-01-01

    This paper describes an ultra-low-power filtering technique for biomedical applications such as T-wave sensing in heart-activity detection systems. The topology is based on a source-follower-based biquad operating in the sub-threshold region. With the intrinsic simplicity and high linearity of the source-follower, ultra-low-cutoff filtering can be achieved simultaneously with ultra-low power consumption and good linearity. An 8th-order 2.4-Hz lowpass filter design example, optimized in a 0.35-μm CMOS process, achieves over 85-dB dynamic range and 74-dB stopband attenuation while consuming only 0.36 nW from a 3-V supply.
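    The record's filter is an analog sub-threshold circuit; as a rough numerical companion (an assumption for illustration, not the paper's design), a single digital lowpass biquad with RBJ-cookbook coefficients and a 2.4 Hz cutoff shows the passband/stopband behavior that an 8th-order cascade of such sections would sharpen:

```python
import cmath
import math

def lowpass_biquad(fc, fs, Q=0.707):
    """Second-order digital lowpass (RBJ audio-EQ-cookbook coefficients)."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * Q)
    cw = math.cos(w0)
    a0 = 1.0 + alpha
    b = [(1.0 - cw) / 2.0 / a0, (1.0 - cw) / a0, (1.0 - cw) / 2.0 / a0]
    a = [1.0, -2.0 * cw / a0, (1.0 - alpha) / a0]
    return b, a

def gain_db(b, a, f, fs):
    """Magnitude response at frequency f; z1 stands in for z^-1."""
    z1 = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = a[0] + a[1] * z1 + a[2] * z1 * z1
    return 20.0 * math.log10(abs(num / den))

b, a = lowpass_biquad(2.4, 1000.0)      # 2.4 Hz cutoff at an assumed 1 kHz rate
dc_gain = gain_db(b, a, 0.0, 1000.0)    # passband: ~0 dB
stop_gain = gain_db(b, a, 50.0, 1000.0) # well above cutoff: strongly attenuated
```

A second-order section rolls off at 40 dB/decade, so at 50 Hz (about 1.3 decades above 2.4 Hz) the attenuation already exceeds 40 dB; four cascaded sections would give the 8th-order behavior described in the abstract.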

  2. Simultaneous Rapid Determination of the Solubility and Diffusion Coefficients of a Poorly Water-Soluble Drug Based on a Novel UV Imaging System.

    PubMed

    Lu, Yan; Li, Mingzhong

    2016-01-01

    The solubility and diffusion coefficient are two of the most important physicochemical properties of a drug compound. In practice, the two have been measured separately, which is time consuming. This work utilizes a novel UV imaging technique to determine the solubility and diffusion coefficients of poorly water-soluble drugs simultaneously. A two-step optimization method is proposed to determine the solubility and diffusion coefficient of a poorly water-soluble pharmaceutical substance based on Fick's second law of diffusion and UV imaging measurements. Experimental results demonstrate that the proposed method can determine the solubility and diffusion coefficient of a drug with reasonable accuracy, indicating that UV imaging may provide a new opportunity to measure these properties of a poorly water-soluble drug simultaneously, rapidly and accurately. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
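    A minimal sketch of the diffusion-coefficient step, assuming a semi-infinite dissolution geometry so that Fick's second law gives C(x,t) = Cs·erfc(x/(2√(Dt))); D is then recovered from a single concentration reading by bisection. The geometry and all numbers are illustrative assumptions, not the paper's imaging setup:

```python
import math

def concentration(x, t, D, Cs=1.0):
    """Fick's-second-law profile for dissolution from a surface held at
    saturation Cs into a semi-infinite stagnant medium."""
    return Cs * math.erfc(x / (2.0 * math.sqrt(D * t)))

def estimate_D(x, t, c_obs, Cs=1.0, lo=1e-12, hi=1e-6):
    """Recover D from one (x, t, C) reading by bisection; at fixed x and t
    the concentration is monotonically increasing in D."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if concentration(x, t, mid, Cs) < c_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

D_true = 5e-10          # m^2/s, an assumed small-molecule diffusivity
x, t = 1e-4, 300.0      # 0.1 mm from the interface, after 5 min
c_obs = concentration(x, t, D_true)
D_est = estimate_D(x, t, c_obs)
```

In the paper's UV-imaging context a whole concentration map is fit at once, which is far more robust than this single-point inversion; the sketch only shows the underlying physics.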

  3. Solution to automatic generation control problem using firefly algorithm optimized I(λ)D(µ) controller.

    PubMed

    Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul

    2014-03-01

    The present work focuses on automatic generation control (AGC) of three unequal-area thermal systems considering reheat turbines and appropriate generation rate constraints (GRC). A fractional-order (FO) controller, named the I(λ)D(µ) controller and based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters, such as the order of the integrator (λ) and differentiator (μ) of the I(λ)D(µ) controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I(λ)D(µ) controller gains, λ, μ, and R are compared with those of classical integer-order (IO) controllers such as I, PI and PID controllers. Simulation results show that the proposed I(λ)D(µ) controller provides improved dynamic responses and outperforms the IO-based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I(λ)D(µ) controller to wide changes in system loading conditions and in the size and position of the step load perturbation (SLP). The proposed controller also performs well compared to IO-based controllers when the SLP takes place simultaneously in any two areas or in all areas. Robustness of the proposed I(λ)D(µ) controller is also tested against system parameter variations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
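    The firefly algorithm itself is easy to sketch. Below is a minimal version minimizing a toy sphere function — not the AGC objective, and with illustrative parameter choices; in the paper the cost would be a time-domain performance index of the simulated multi-area system:

```python
import math
import random

def firefly_minimize(f, dim, n=20, iters=300, alpha=0.2, beta0=1.0,
                     gamma=0.01, seed=1):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones, with a randomisation step that decays over time."""
    rng = random.Random(seed)
    X = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    for it in range(iters):
        a = alpha * (0.95 ** it)          # shrinking random step
        vals = [f(x) for x in X]
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:     # j is brighter: attract i toward j
                    r2 = sum((xi - xj) ** 2 for xi, xj in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [xi + beta * (xj - xi) + a * (rng.random() - 0.5)
                            for xi, xj in zip(X[i], X[j])]
    best = min(X, key=f)
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, val = firefly_minimize(sphere, dim=3)
```

The attraction strength decays with squared distance (the exp(-γr²) term), which is what distinguishes the firefly algorithm from plain particle-swarm-style updates.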

  4. CATO: a CAD tool for intelligent design of optical networks and interconnects

    NASA Astrophysics Data System (ADS)

    Chlamtac, Imrich; Ciesielski, Maciej; Fumagalli, Andrea F.; Ruszczyk, Chester; Wedzinga, Gosse

    1997-10-01

    Increasing communication speed requirements have created great interest in very high speed optical and all-optical networks and interconnects. The design of these optical systems is a highly complex task, requiring the simultaneous optimization of various parts of the system, ranging from optical components' characteristics to access protocol techniques. Currently there are no computer-aided design (CAD) tools on the market to support the interrelated design of all parts of optical communication systems, so the designer has to rely on costly and time-consuming testbed evaluations. The objective of the CATO (CAD tool for optical networks and interconnects) project is to develop a prototype of an intelligent CAD tool for the specification, design, simulation and optimization of optical communication networks. CATO allows the user to build an abstract, possibly incomplete, model of the system and determine its expected performance. Based on design constraints provided by the user, CATO automatically completes an optimum design, using mathematical programming techniques, intelligent search methods and artificial intelligence (AI). Initial design and testing of a CATO prototype (CATO-1) has recently been completed. The objective was to prove the feasibility of combining AI techniques, simulation techniques, an optical device library and a graphical user interface into a flexible CAD tool for obtaining optimal communication network designs in terms of system cost and performance. CATO-1 is an experimental tool for designing packet-switching, wavelength-division-multiplexing, all-optical communication systems using a LAN/MAN ring topology as the underlying network. The two specific AI algorithms incorporated are simulated annealing and a genetic algorithm. CATO-1 finds the optimal number of transceivers for each network node, using an objective function that includes the cost of the devices and the overall system performance.
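    A minimal simulated-annealing sketch in the spirit of CATO-1's per-node transceiver optimization: the cost trades device count against a made-up per-node load penalty. All numbers and the cost form are illustrative assumptions, not CATO's objective function:

```python
import math
import random

def anneal(cost, init, neighbor, T0=10.0, cooling=0.995, iters=4000, seed=7):
    """Generic simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    x, fx = init, cost(init)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        # always accept improvements; accept worsenings with Boltzmann probability
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling
    return best, fbest

# toy objective: per-node transceiver counts trade device cost against a
# hypothetical load/contention penalty (all numbers are illustrative)
load = [4.0, 9.0, 16.0, 25.0]
def cost(t):
    return sum(t) + sum(l / ti for l, ti in zip(load, t))
def neighbor(t, rng):
    i = rng.randrange(len(t))
    t2 = list(t)
    t2[i] = max(1, t2[i] + rng.choice((-1, 1)))   # add/remove one transceiver
    return t2

best, fbest = anneal(cost, [1, 1, 1, 1], neighbor)  # optimum here: t_i = sqrt(load_i)
```

Because this toy cost is separable and unimodal per node, annealing reliably settles on [2, 3, 4, 5]; real network objectives (like CATO's cost-plus-performance function) are not separable, which is exactly why stochastic search is used.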

  5. Regularization iteration imaging algorithm for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, converting the image reconstruction task into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate convergence. Numerical experiments verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
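    The full split-Bregman/FISTA machinery is beyond a snippet, but the core sparsity step — iterative shrinkage-thresholding (plain ISTA here, the unaccelerated relative of FISTA) — can be sketched on a tiny synthetic system:

```python
def soft(v, t):
    """Elementwise soft-thresholding: the proximal operator of t*||.||_1."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, b, lam=0.01, iters=500):
    """Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (dense lists)."""
    m, n = len(A), len(A[0])
    # crude Lipschitz bound: ||A||_F^2 >= lambda_max(A^T A), so 1/L is a safe step
    L = sum(aij * aij for row in A for aij in row)
    step = 1.0 / L
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # A^T r
        x = soft([xj - step * gj for xj, gj in zip(x, g)], step * lam)
    return x

# tiny synthetic problem: recover a sparse x from b = A @ x_true
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [0.5, 0.5, 1.0]]
x_true = [2.0, 0.0, 0.0]
b = [sum(aij * xj for aij, xj in zip(row, x_true)) for row in A]
x_hat = ista(A, b)
```

FISTA adds a momentum term to this exact update; the paper additionally splits off a low-rank (nuclear-norm) term, whose proximal step is singular-value thresholding rather than the elementwise `soft` above.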

  6. Optimization of laser butt welding parameters with multiple performance characteristics

    NASA Astrophysics Data System (ADS)

    Sathiya, P.; Abdul Jaleel, M. Y.; Katherasan, D.; Shanmugarajan, B.

    2011-04-01

    This paper presents a study of 3.5 kW cooled slab laser welding of 904 L super austenitic stainless steel. Butt joints were welded with different shielding gases, namely argon, helium and nitrogen, at a constant flow rate. Super austenitic stainless steel (SASS) normally contains high amounts of Mo, Cr, Ni, N and Mn. The mechanical properties are controlled to obtain good welded joints. The quality of a joint is evaluated by studying the features of the weld bead geometry, such as bead width (BW) and depth of penetration (DOP). In this paper, the tensile strength and bead profiles (BW and DOP) of laser-welded butt joints made of AISI 904 L SASS are investigated. The Taguchi approach is used as a statistical design of experiments (DOE) technique for optimizing the selected welding parameters. Grey relational analysis and the desirability approach are applied to optimize the input parameters by considering multiple output variables simultaneously. Confirmation experiments have also been conducted for both analyses to validate the optimized parameters.
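    Grey relational analysis reduces several responses to a single grade per experimental run, which can then be optimized like a single objective. A compact sketch with hypothetical tensile-strength/bead-width runs (numbers invented for illustration, not the paper's data):

```python
def grey_relational_grades(runs, larger_better, zeta=0.5):
    """Grey relational analysis: min-max normalise each response, take the
    deviation from the ideal (=1), convert to grey relational coefficients
    and average them into one grade per experimental run."""
    n_resp = len(runs[0])
    cols = list(zip(*runs))
    norm = []
    for k in range(n_resp):
        lo, hi = min(cols[k]), max(cols[k])
        if larger_better[k]:
            norm.append([(v - lo) / (hi - lo) for v in cols[k]])
        else:
            norm.append([(hi - v) / (hi - lo) for v in cols[k]])
    grades = []
    for i in range(len(runs)):
        deltas = [1.0 - norm[k][i] for k in range(n_resp)]
        # after min-max normalisation, delta_min = 0 and delta_max = 1
        coeffs = [zeta / (d + zeta) for d in deltas]
        grades.append(sum(coeffs) / n_resp)
    return grades

# hypothetical runs: (tensile strength in MPa [maximise], bead width in mm [minimise])
runs = [(520, 2.8), (560, 2.4), (540, 3.0), (575, 2.6)]
grades = grey_relational_grades(runs, larger_better=[True, False])
best_run = grades.index(max(grades))
```

The distinguishing coefficient ζ = 0.5 is the conventional default; the run with the highest grade is the one closest, in the grey relational sense, to being best on every response at once.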

  7. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.

    PubMed

    Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt

    2008-07-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.

  8. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from such data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them, along with robust statistics techniques, to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion, in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.

  9. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work sponsored by JPL and other organizations to develop a unified control/structures modeling and design capability for large space structures are presented. Recent analytical results demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model and control design are all optimized simultaneously. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology be incorporated into computer-aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization. Recommendations are also presented for near-term research activities at JPL. The key recommendation is to continue the development of integrated dynamic modeling/control design techniques, with special attention given to the development of structural models specially tailored to support design.

  10. Flight Control Development for the ARH-70 Armed Reconnaissance Helicopter Program

    NASA Technical Reports Server (NTRS)

    Christensen, Kevin T.; Campbell, Kip G.; Griffith, Carl D.; Ivler, Christina M.; Tischler, Mark B.; Harding, Jeffrey W.

    2008-01-01

    In July 2005, Bell Helicopter won the U.S. Army's Armed Reconnaissance Helicopter competition to produce a replacement for the OH-58 Kiowa Warrior capable of performing the armed reconnaissance mission. To meet the U.S. Army requirement that the ARH-70A have Level 1 handling qualities for the scout rotorcraft mission task elements defined by ADS-33E-PRF, Bell equipped the aircraft with their generic automatic flight control system (AFCS). Under the constraints of the tight ARH-70A schedule, the development team used modern parameter identification and control law optimization techniques to optimize the AFCS gains to simultaneously meet multiple handling qualities design criteria. This paper shows how linear modeling, control law optimization, and simulation have been used to produce a Level 1 scout rotorcraft for the U.S. Army, while minimizing the amount of flight testing required for AFCS development and handling qualities evaluation of the ARH-70A.

  11. An optimal general type-2 fuzzy controller for Urban Traffic Network.

    PubMed

    Khooban, Mohammad Hassan; Vafamand, Navid; Liaghat, Alireza; Dragicevic, Tomislav

    2017-01-01

    The urban traffic network model is illustrated by state charts and object diagrams. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state-space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with the least waiting times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  12. FPGA-based protein sequence alignment : A review

    NASA Astrophysics Data System (ADS)

    Isa, Mohd. Nazrin Md.; Muhsen, Ku Noor Dhaniah Ku; Saiful Nurdin, Dayana; Ahmad, Muhammad Imran; Anuar Zainol Murad, Sohiful; Nizam Mohyar, Shaiful; Harun, Azizi; Hussin, Razaidi

    2017-11-01

    Sequence alignment has been optimized using several techniques to accelerate computation of the optimal score, by implementing DP-based algorithms on hardware such as FPGA-based platforms. Hardware implementation faces performance challenges such as frequent memory accesses and the highly data-dependent computation process. Therefore, this paper focuses on processing element (PE) configuration, which involves memory accesses to load the data (substitution matrix, query sequence characters), and on the PE configuration time. Various approaches to enhancing PE configuration performance have been explored in previous work, such as serial and parallel configuration chains, in which the configuration data are loaded into the PEs sequentially or simultaneously, respectively. Some researchers have shown that a parallel configuration chain optimizes both the configuration time and area.

  13. Modeling and Optimization for Morphing Wing Concept Generation II. Part 1; Morphing Wing Modeling and Structural Sizing Techniques

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report documents a series of investigations to develop an approach for structural sizing of various morphing wing concepts. For the purposes of this report, a morphing wing is one whose planform can make significant shape changes in flight: increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. These significant changes in geometry mean that the underlying load-bearing structure changes geometry. While most finite element analysis packages provide some sort of structural optimization capability, these codes are not amenable to making significant changes in the stiffness matrix to reflect the large morphing wing planform changes. The investigations presented here use a finite element code capable of aeroelastic analysis in three different optimization approaches: a "simultaneous analysis" approach, a "sequential" approach, and an "aggregate" approach.

  14. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters of a single-cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on a central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multi-objective optimization problem is formulated. The non-dominated sorting genetic algorithm-II is used to predict the Pareto-optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto-optimal sets of solutions can be used as guidelines for end users to select the optimal combination of engine output and emission parameters depending upon their own requirements.
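    The Pareto machinery underlying NSGA-II starts from non-dominated sorting: points are peeled off into successive fronts, the first of which is the Pareto-optimal set. A minimal sketch on hypothetical (BSFC, NOx) pairs, both minimized (values invented for illustration):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts; front 0 is the Pareto-optimal set."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# hypothetical (BSFC, NOx) pairs, both to be minimised
pts = [(250, 8.0), (240, 9.5), (260, 7.0), (255, 9.0), (245, 8.5)]
fronts = non_dominated_sort(pts)
```

NSGA-II combines this sorting with crowding-distance selection and genetic operators; the quadratic-time sort above is the conceptual core, not the paper's full algorithm.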

  15. Optimal fault-tolerant control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2017-10-01

    For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraint, high efficiency, low cost and fault diagnosis are six key issues. However, no published study combines optimization and fault diagnosis in control of the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a control loop. The fault diagnosis part identifies the current SOFC fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers under the normal and air-compressor-fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show that the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost both under normal SOFC operation and under the air compressor fault.
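    TOPSIS, used here to pick one operating point from the NSGA-II Pareto set, ranks candidates by relative closeness to an ideal solution. A compact sketch with hypothetical (efficiency, unit cost) alternatives and equal weights — the numbers and weights are assumptions, not the paper's setup:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: alternatives x criteria; benefit[k] is True if larger is better."""
    ncrit = len(weights)
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    # vector-normalise each criterion, then weight
    V = [[weights[k] * row[k] / norms[k] for k in range(ncrit)] for row in matrix]
    vcols = list(zip(*V))
    ideal = [max(vcols[k]) if benefit[k] else min(vcols[k]) for k in range(ncrit)]
    worst = [min(vcols[k]) if benefit[k] else max(vcols[k]) for k in range(ncrit)]
    scores = []
    for row in V:
        d_plus = math.sqrt(sum((row[k] - ideal[k]) ** 2 for k in range(ncrit)))
        d_minus = math.sqrt(sum((row[k] - worst[k]) ** 2 for k in range(ncrit)))
        scores.append(d_minus / (d_plus + d_minus))   # closeness in [0, 1]
    return scores

# hypothetical operating points: (efficiency [maximise], unit cost [minimise])
alts = [(0.52, 0.11), (0.48, 0.09), (0.55, 0.14)]
scores = topsis(alts, weights=[0.5, 0.5], benefit=[True, False])
best = scores.index(max(scores))
```

With equal weights the low-cost alternative wins here; shifting weight toward efficiency would move the choice, which is exactly the knob such a supervisory optimizer exposes.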

  16. Taguchi optimization of bismuth-telluride based thermoelectric cooler

    NASA Astrophysics Data System (ADS)

    Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank

    2017-07-01

    In the last few decades, considerable effort has been made to enhance the figure of merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices still remains low because the module figure of merit depends not only on the material ZT, but also on the operating conditions and configuration of the TE modules. This study takes into account a comprehensive set of parameters to conduct a numerical performance analysis of the thermoelectric cooler (TEC) using a Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 experiments needed by the conventional optimization method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.

  17. Model-independent particle accelerator tuning

    DOE PAGES

    Scheinker, Alexander; Pang, Xiaoying; Rybarcyk, Larry

    2013-10-21

    We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: 1) it can handle unknown, time-varying systems; 2) it gives known bounds on parameter update rates; 3) we give an analytic proof of its convergence and stability; and 4) it has a simple digital implementation through a control system such as the Experimental Physics and Industrial Control System (EPICS). Because this technique is model independent, it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multi-particle simulation results demonstrating the scheme's ability to simultaneously and adaptively adjust the set points of twenty-two quadrupole magnets and two RF buncher cavities in the transport region of the Los Alamos Neutron Science Center linear accelerator, while the beam properties and RF phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also outline how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
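    The flavor of such dithering-based, model-independent tuning can be sketched with a toy discrete-time loop in which each parameter oscillates at its own frequency and the measured cost shifts the oscillation phase, so the parameters drift downhill on average. The gains, frequencies, and the quadratic "beam current" surrogate below are illustrative assumptions, not the paper's accelerator model:

```python
import math

def tune(cost, theta, iters=40000, dt=0.0005, k=2.0, alpha=1.5):
    """Model-independent dithering update: each parameter follows
    theta_i' = sqrt(alpha*w_i) * cos(w_i*t + k*cost(theta)),
    which needs only scalar cost readings, no model of the system."""
    omegas = [200.0 * (1.0 + 0.3 * i) for i in range(len(theta))]  # distinct dithers
    t = 0.0
    for _ in range(iters):
        C = cost(theta)
        theta = [th + dt * math.sqrt(alpha * w) * math.cos(w * t + k * C)
                 for th, w in zip(theta, omegas)]
        t += dt
    return theta

# toy "beam current" surrogate: quadratic bowl with an assumed optimum
target = [0.7, -0.4]
cost = lambda th: sum((a - b) ** 2 for a, b in zip(th, target))
theta = tune(cost, [0.0, 0.0])
```

The update rate is bounded by sqrt(alpha*w) regardless of the cost landscape — one of the advantages the record lists — while on average the dynamics descend the cost gradient; a small residual oscillation of order sqrt(alpha/w) remains around the optimum.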

  18. Wireless Medical Devices for MRI-Guided Interventions

    NASA Astrophysics Data System (ADS)

    Venkateswaran, Madhav

    Wireless techniques can play an important role in next-generation, image-guided surgical techniques, with integration strategies being the key. We present our investigations on three wireless applications. First, we validate a position- and orientation-independent method to noninvasively monitor wireless power delivery using current perturbation measurements of switched load modulation of the RF carrier. This is important for safe and efficient powering without bulky batteries or invasive cables. Use of MRI transmit RF pulses for simultaneous powering is investigated in the second part. We develop system models for the MRI transmit chain, wireless powering circuits and a typical load. Detailed analysis and validation of nonlinear and cascaded modeling strategies are performed, useful for decoupled optimization of the harvester coil and RF-DC converter. MRI pulse sequences are investigated for suitability for simultaneous powering. Simulations indicate that a 1.8 V, 2 mA load can be powered with a 100% duty cycle using a 30° fGRE sequence, despite the RF duty cycle being 44 mW for a 30° flip angle, consistent with model predictions. Investigations on imaging artifacts indicate that distortion is mostly restricted to within the physical span of the harvester coil in the imaging volume, with the homogeneous B1+ transmit field providing positioning flexibility to minimize this for simultaneous powering. The models are potentially valuable in designing wireless powering solutions for implantable devices with simultaneous real-time imaging in MRI-guided surgical suites. Finally, we model endovascular MRI coil coupling during RF transmit. FEM models for a series-resonant multimode coil and quadrature birdcage coil fields are developed, and computationally efficient circuit and full-wave simulations are used to model inductive coupling. The Bloch-Siegert B1 mapping sequence is used for validation at 24, 28 and 34 microT background excitation.
Quantitative performance metrics are successfully predicted and the role of simulation in geometric optimization is demonstrated. In a pig study, we demonstrate navigation of a catheter, with tip-tracking and high-resolution intravascular imaging, through the vasculature into the heart, followed by contextual visualization. A potentially significant application is in MRI-guided cardiac ablation procedures.

  19. Evolving a Behavioral Repertoire for a Walking Robot.

    PubMed

    Cully, A; Mouret, J-B

    2016-01-01

    Numerous algorithms have been proposed to allow legged robots to learn to walk. However, most of these algorithms are devised to learn walking in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundred simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution introduces a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.

  20. Antibody Microarray for E. coli O157:H7 and Shiga Toxin in Microtiter Plates.

    PubMed

    Gehring, Andrew G; Brewster, Jeffrey D; He, Yiping; Irwin, Peter L; Paoli, George C; Simons, Tawana; Tu, Shu-I; Uknalis, Joseph

    2015-12-04

    Antibody microarray is a powerful analytical technique because of its inherent ability to simultaneously discriminate and measure numerous analytes, making the technique conducive to both the multiplexed detection and identification of bacterial analytes (i.e., whole cells, as well as associated metabolites and/or toxins). We developed a sandwich fluorescent immunoassay combined with a high-throughput, multiwell plate microarray detection format. Inexpensive polystyrene plates were employed containing passively adsorbed, array-printed capture antibodies. During sample reaction, centrifugation was the only strategy found to significantly improve capture, and hence detection, of bacteria (pathogenic Escherichia coli O157:H7) on planar capture surfaces containing printed antibodies, whereas several other sample incubation techniques (e.g., static vs. agitation) had minimal effect. Immobilized bacteria were labeled with a red-orange-fluorescent dye (Alexa Fluor 555) conjugated antibody to allow quantitative detection of the captured bacteria with a laser scanner. Shiga toxin 1 (Stx1) could be simultaneously detected along with the cells, but none of the agitation techniques employed during incubation improved detection of this relatively small biomolecule. Under optimal conditions, the assay demonstrated limits of detection of ~5.8 × 10⁵ cells/mL and 110 ng/mL for E. coli O157:H7 and Stx1, respectively, in a ~75 min total assay time.

  1. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip

    PubMed Central

    Davison, James A

    2015-01-01

    Purpose To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. Patients and methods A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Results Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. Conclusion A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer. PMID:26355695

  2. Reliability- and performance-based robust design optimization of MEMS structures considering technological uncertainties

    NASA Astrophysics Data System (ADS)

    Martowicz, Adam; Uhl, Tadeusz

    2012-10-01

The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now commonly applied, especially in the automotive industry, taking advantage of having both the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices which is based on the theory of reliability-based robust design optimization. It takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each examined design configuration, the assessment of uncertainty propagation is performed with a meta-modeling technique. The described procedure is illustrated with an example optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed several physical phenomena to be introduced to correctly model the electrostatic actuation and the squeezing effect present between the electrodes. The optimization was preceded by a sensitivity analysis to establish the design and uncertainty domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function while simultaneously satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced via a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.
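
    The conditionally formulated penalty objective described above can be sketched in a few lines; the two criteria, the stress surrogate, and the random-search stand-in for the paper's genetic algorithm are all hypothetical illustrations, not the micro-mirror model:

    ```python
    import random

    STRESS_LIMIT = 1.0   # hypothetical normalized material-strength limit

    def stress(x):
        """Illustrative surrogate for the maximum equivalent stress of design x."""
        return 0.5 + 0.8 * x[0]

    def objective(x):
        performance = (x[0] - 0.4) ** 2      # illustrative performance criterion
        robustness = (x[1] - 0.6) ** 2       # illustrative robustness criterion
        # conditionally formulated penalty component, active only on violation
        penalty = 1e3 * max(0.0, stress(x) - STRESS_LIMIT) ** 2
        return performance + robustness + penalty

    # minimal random-search stand-in for the paper's genetic algorithm
    random.seed(0)
    population = [[random.random(), random.random()] for _ in range(200)]
    best = min(population, key=objective)
    ```

    Because the penalty term is large only when the constraint is violated, the search is steered toward designs that are both well-performing and feasible.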

  3. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction for the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
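
    The max-min fuzzy-satisfaction idea behind such an MFLP formulation can be sketched with a toy one-variable problem; the objective models, membership bounds, and grid search below are illustrative assumptions, not the paper's OPF model:

    ```python
    def membership(value, worst, best):
        """Linear satisfaction level: 0 at the worst value, 1 at the best value."""
        if best == worst:
            return 1.0
        mu = (worst - value) / (worst - best)
        return max(0.0, min(1.0, mu))

    # Hypothetical single decision variable x in [0, 1] (e.g., a capacitor
    # setting) with two conflicting objectives: losses fall with x while
    # voltage deviation rises with x.
    def loss(x):
        return 10.0 - 6.0 * x          # MW, illustrative

    def deviation(x):
        return 0.02 + 0.03 * x         # p.u., illustrative

    # Crisp max-min problem: maximize the smallest membership (satisfaction).
    best_x, best_lam = 0.0, -1.0
    for i in range(1001):
        x = i / 1000.0
        lam = min(membership(loss(x), worst=10.0, best=4.0),
                  membership(deviation(x), worst=0.05, best=0.02))
        if lam > best_lam:
            best_x, best_lam = x, lam
    ```

    In a real MFLP the grid search is replaced by a linear program that maximizes the common satisfaction level subject to the membership and hard constraints.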

  4. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction for the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  5. Multi-Target Tracking via Mixed Integer Optimization

    DTIC Science & Technology

    2016-05-13

    solving these two problems separately, however few algorithms attempt to solve these simultaneously and even fewer utilize optimization. In this paper we...introduce a new mixed integer optimization (MIO) model which solves the data association and trajectory estimation problems simultaneously by minimizing...Kalman filter [5], which updates the trajectory estimates before the algorithm progresses forward to the next scan. This process repeats sequentially
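
    The sequential baseline that the abstract contrasts with MIO — associate each detection to a track, then update that track with a Kalman filter before the next scan — can be sketched as follows; the scalar state model and detection values are illustrative:

    ```python
    def kalman_update(x, p, z, r=1.0):
        """Scalar Kalman measurement update for estimate x with variance p."""
        k = p / (p + r)                  # Kalman gain
        return x + k * (z - x), (1.0 - k) * p

    # Two 1-D tracks and three scans of unordered detections (illustrative).
    tracks = [{"x": 0.0, "p": 4.0}, {"x": 10.0, "p": 4.0}]
    scans = [[0.4, 9.7], [9.9, 1.1], [1.3, 10.2]]

    for detections in scans:
        for z in detections:
            # data association: greedy nearest-neighbour assignment
            t = min(tracks, key=lambda tr: abs(tr["x"] - z))
            # trajectory estimation: Kalman update of the associated track
            t["x"], t["p"] = kalman_update(t["x"], t["p"], z)
    ```

    The MIO approach instead optimizes the assignment and the trajectory estimates jointly over all scans rather than committing to each association greedily.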

  6. Ionic liquid-based ultrasonic/microwave-assisted extraction combined with UPLC-MS-MS for the determination of tannins in Galla chinensis.

    PubMed

    Lu, Chunxia; Wang, Hongxin; Lv, Wenping; Ma, Chaoyang; Lou, Zaixiang; Xie, Jun; Liu, Bo

    2012-01-01

Ionic liquid was used as the extraction solvent for tannins from Galla chinensis in a simultaneous ultrasonic- and microwave-assisted extraction (UMAE) technique. Several parameters of UMAE were optimised, and the results were compared with those of conventional extraction techniques. Under optimal conditions, the content of tannins was 630.2 ± 12.1 mg g⁻¹. Compared with conventional heat-reflux extraction, maceration extraction, and regular ultrasound- and microwave-assisted extraction, the proposed approach exhibited higher efficiency (11.7-22.0% enhancement) and a shorter extraction time (from 6 h down to 1 min). The tannins were then identified by ultraperformance liquid chromatography tandem mass spectrometry. This study suggests that ionic liquid-based UMAE is an efficient, rapid, simple and green sample preparation technique.

  7. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
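
    A minimal 1-D sketch of the iterative (Richardson-Lucy-style) deconvolution family the paper builds on, assuming the motion blur can be represented as a known convolution kernel; the signals below are illustrative, not PET data:

    ```python
    def convolve(signal, kernel):
        """Truncated 1-D convolution (the 'system matrix' of the blur)."""
        half = len(kernel) // 2
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for j, k in enumerate(kernel):
                idx = i + j - half
                if 0 <= idx < len(signal):
                    acc += k * signal[idx]
            out.append(acc)
        return out

    kernel = [0.2, 0.6, 0.2]                     # known (motion) blur kernel
    truth = [0.0, 0.0, 4.0, 0.0, 0.0, 2.0, 0.0]  # unknown sharp signal
    blurred = convolve(truth, kernel)            # what is 'measured'

    estimate = [1.0] * len(truth)                # flat initial guess
    for _ in range(500):                         # multiplicative RL updates
        model = convolve(estimate, kernel)
        ratio = [b / m if m > 0 else 0.0 for b, m in zip(blurred, model)]
        # the kernel is symmetric, so the adjoint is the same convolution
        correction = convolve(ratio, kernel)
        estimate = [e * c for e, c in zip(estimate, correction)]
    ```

    An ordered-subset variant applies the same multiplicative correction using only a subset of the data per sub-iteration, trading a little accuracy per pass for much faster convergence.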

  8. Development of multiplex PCR assay for simultaneous detection of Salmonella genus, Salmonella subspecies I, Salm. Enteritidis, Salm. Heidelberg and Salm. Typhimurium.

    PubMed

    Park, S H; Ricke, S C

    2015-01-01

The aim of this research was to develop a multiplex PCR assay that could simultaneously detect Salmonella genus, Salmonella subsp. I, Salm. Enteritidis, Heidelberg and Typhimurium, because these Salmonella serovars are the most common isolates associated with poultry products. Five primers were utilized to establish the multiplex PCR, which was applied to Salmonella isolates from chickens and farm environments. These isolates were identified as Salmonella subsp. I, and 16 of 66 isolates were classified as Salm. Enteritidis, while neither Heidelberg nor Typhimurium was detected. We also spiked three Salmonella strains onto chicken breast meat to evaluate the specificity and sensitivity of the multiplex PCR, as well as qPCR to optimize quantification of Salmonella in these samples. The optimized multiplex PCR and qPCR could detect approx. 2·2 CFU of Salmonella per gram after 18 h enrichment. The multiplex PCR and qPCR provide rapid and consistent results, and these techniques should be useful for the detection and quantification of Salmonella in contaminated poultry, foods and environmental samples. A strategy for the rapid detection of Salmonella serovars in poultry is needed to further reduce the incidence of salmonellosis in humans. The optimized multiplex PCR will be useful for detecting prevalent Salmonella serovars in poultry products. © 2014 The Society for Applied Microbiology.

  9. Using Medical-Device Wearable to Improve Hemodialysis Patient’s Live and Access the Holistic Health

    NASA Astrophysics Data System (ADS)

    Chen, W. L.; Wu, C.-C.; Kan, C. D.

    2017-06-01

The increasing incidence of end-stage renal disease (ESRD) is a major burden on health budgets and a threat to public health worldwide. For many years, Taiwan has ranked first in the world in the number of hemodialysis patients. To address this situation, we present a project whose goal is to construct holistic health care for hemodialysis patients. The project designs a wearable medical device that can simultaneously measure and monitor vital signs, including heart rate (HR), pulse oximetry (SPO2), continuous non-invasive blood pressure (c-NIBP), and total body water (TBW), of hemodialysis patients. With the aid of this device, hemodialysis patients will receive better health care than before. The device comprises three techniques. The first is named “Using phonoangiography technique to early detect the dysfunction of arteriovenous access by arteriovenous access (AVA) stenosis detector”. The stenosis detector, based on an autoregressive model, is employed to simultaneously estimate the status of the AVA life cycle and to track changes in frequency spectra. It helps hemodialysis patients detect AVA dysfunction early and alerts them to make a return visit. The second technique is named “Physiological detecting device for wearable medical device and encoding algorithm development”. Its feature is to optimize the prognosis by simultaneously analyzing physiological signals, including a water content index, pulse oximetry, and blood pressure. The third technique is named “Intelligent and smart tourniquet”. This technique aims to prevent AVA dysfunction caused by inappropriate hemostasis.

  10. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
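
    The augmented-Lagrangian scheme can be sketched on a toy equality-constrained problem; the quadratic objective and the plain gradient inner solver below stand in for the shallow-water invariants and the conjugate-gradient codes compared in the paper:

    ```python
    def f(x, y):
        return x * x + y * y            # objective (illustrative)

    def c(x, y):
        return x + y - 1.0              # conserved-quantity constraint, c = 0

    x = y = 0.0
    lam, mu = 0.0, 10.0                 # multiplier and penalty weight
    for _ in range(20):                 # outer multiplier updates
        for _ in range(500):            # inner unconstrained minimization
            gx = 2.0 * x + lam + mu * c(x, y)   # gradient of augmented Lagrangian
            gy = 2.0 * y + lam + mu * c(x, y)
            x -= 0.01 * gx
            y -= 0.01 * gy
        lam += mu * c(x, y)             # first-order multiplier update
    ```

    Each outer pass tightens constraint satisfaction without driving the penalty weight to infinity, which is why the inner unconstrained problems stay well conditioned enough for conjugate-gradient codes in the large-scale setting.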

  11. Application of level set method to optimal vibration control of plate structures

    NASA Astrophysics Data System (ADS)

    Ansari, M.; Khajepour, A.; Esmailzadeh, E.

    2013-02-01

Vibration control plays a crucial role in many structures, especially lightweight ones. One of the most commonly practiced methods to suppress the undesirable vibration of structures is to attach patches of constrained layer damping (CLD) onto the surface of the structure. To preserve the weight efficiency of a structure, the best shapes and locations of the CLD patches should be determined to achieve optimum vibration suppression with minimum usage of the CLD patches. This paper proposes a novel topology optimization technique that determines the best shape and location of the applied CLD patches simultaneously. Passive vibration control is formulated in the context of the level set method, a numerical technique for tracking shapes and locations concurrently. The optimal damping set is found for a structure in its fundamental vibration mode, such that the maximum modal loss factor of the system is achieved. Two different plate structures are considered, and the damping patches are optimally located on them; at the same time, the best shapes of the damping patches are determined. In one example, the numerical results are compared with those obtained from experimental tests to validate the accuracy of the proposed method. This comparison reveals the effectiveness of the level set approach in finding the optimum shape and location of the CLD patches.

  12. Improving 130nm node patterning using inverse lithography techniques for an analog process

    NASA Astrophysics Data System (ADS)

    Duan, Can; Jessen, Scott; Ziger, David; Watanabe, Mizuki; Prins, Steve; Ho, Chi-Chien; Shu, Jing

    2018-03-01

Developing a new lithographic process routinely requires lithographic toolsets and substantial engineering time for data analysis. Process transfers between fabs occur quite often. A key assumption is that lithographic settings are equivalent from one fab to another and that the transfer is fluid. In some cases, that is far from the truth. Differences in tools can change the proximity effects seen in low-k1 imaging processes. With model-based optical proximity correction (MBOPC), a model built in one fab will not work under the same conditions at another fab. This results in many wafers being patterned to try to match a baseline response. Even if matching is achieved, there is no guarantee that optimal lithographic responses are met. In this paper, we discuss the approach used to transfer and develop new lithographic processes and to define MBOPC builds for the new lithographic process in Fab B, which was transferred from a similar lithographic process in Fab A. By using PROLITH™ simulations to match OPC models for each level, minimal downtime in wafer processing was observed. Source Mask Optimization (SMO) was also used to optimize lithographic processes using novel inverse lithography techniques (ILT) to simultaneously optimize mask bias, depth of focus (DOF), exposure latitude (EL), and mask error enhancement factor (MEEF) for critical designs at each level.

  13. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
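
    A minimal example of the kind of proximal operator such solvers compose: the proximity operator of t‖·‖₁ is elementwise soft-thresholding, a standard sparsity-promoting step in splitting methods like SDMM and PPXA (the vector below is illustrative):

    ```python
    def prox_l1(x, t):
        """Proximity operator of t*||.||_1: elementwise soft-thresholding."""
        return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

    # shrink each coefficient toward zero by the threshold t
    shrunk = prox_l1([3.0, -0.2, 0.5, -2.5], t=0.5)
    ```

    Splitting solvers alternate such proximal steps for each regularization term with a data-fitting step, which is what makes the per-term operators easy to parallelize on GPUs.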

  14. Stationary nonimaging lenses for solar concentration.

    PubMed

    Kotsidas, Panagiotis; Chatzi, Eleni; Modi, Vijay

    2010-09-20

    A novel approach for the design of refractive lenses is presented, where the lens is mounted on a stationary aperture and the Sun is tracked by a moving solar cell. The purpose of this work is to design a quasi-stationary concentrator by replacing the two-axis tracking of the Sun with internal motion of the miniaturized solar cell inside the module. Families of lenses are designed with a variation of the simultaneous multiple surface technique in which the sawtooth genetic algorithm is implemented to optimize the geometric variables of the optic in order to produce high fluxes for a range of incidence angles. Finally, we show examples of the technique for lenses with 60° and 30° acceptance half-angles, with low to medium attainable concentrations.

  15. Integration of fragment screening and library design.

    PubMed

    Siegal, Gregg; Ab, Eiso; Schultz, Jan

    2007-12-01

    With more than 10 years of practical experience and theoretical analysis, fragment-based drug discovery (FBDD) has entered the mainstream of the pharmaceutical and biotech industries. An array of biophysical techniques has been used to detect the weak interaction between a fragment and the target. Each technique presents its own requirements regarding the fragment collection and the target; therefore, in order to optimize the potential of FBDD, the nature of the target should be a driving factor for simultaneous development of both the library and the screening technology. A roadmap is now available to guide fragment-to-lead evolution when structural information is available. The next challenge is to apply FBDD to targets for which high-resolution structural information is not available.

  16. Dual cloud point extraction coupled with hydrodynamic-electrokinetic two-step injection followed by micellar electrokinetic chromatography for simultaneous determination of trace phenolic estrogens in water samples.

    PubMed

    Wen, Yingying; Li, Jinhua; Liu, Junshen; Lu, Wenhui; Ma, Jiping; Chen, Lingxin

    2013-07-01

    A dual cloud point extraction (dCPE) off-line enrichment procedure coupled with a hydrodynamic-electrokinetic two-step injection online enrichment technique was successfully developed for simultaneous preconcentration of trace phenolic estrogens (hexestrol, dienestrol, and diethylstilbestrol) in water samples followed by micellar electrokinetic chromatography (MEKC) analysis. Several parameters affecting the extraction and online injection conditions were optimized. Under optimal dCPE-two-step injection-MEKC conditions, detection limits of 7.9-8.9 ng/mL and good linearity in the range from 0.05 to 5 μg/mL with correlation coefficients R(2) ≥ 0.9990 were achieved. Satisfactory recoveries ranging from 83 to 108% were obtained with lake and tap water spiked at 0.1 and 0.5 μg/mL, respectively, with relative standard deviations (n = 6) of 1.3-3.1%. This method was demonstrated to be convenient, rapid, cost-effective, and environmentally benign, and could be used as an alternative to existing methods for analyzing trace residues of phenolic estrogens in water samples.

  17. A method for aircraft concept exploration using multicriteria interactive genetic algorithms

    NASA Astrophysics Data System (ADS)

    Buonanno, Michael Alexander

    2005-08-01

    The problem of aircraft concept selection has become increasingly difficult in recent years due to changes in the primary evaluation criteria of concepts. In the past, performance was often the primary discriminator, whereas modern programs have placed increased emphasis on factors such as environmental impact, economics, supportability, aesthetics, and other metrics. The revolutionary nature of the vehicles required to simultaneously meet these conflicting requirements has prompted a shift from design using historical data regression techniques for metric prediction to the use of sophisticated physics-based analysis tools that are capable of analyzing designs outside of the historical database. The use of optimization methods with these physics-based tools, however, has proven difficult because of the tendency of optimizers to exploit assumptions present in the models and drive the design towards a solution which, while promising to the computer, may be infeasible due to factors not considered by the computer codes. In addition to this difficulty, the number of discrete options available at this stage may be unmanageable due to the combinatorial nature of the concept selection problem, leading the analyst to select a sub-optimum baseline vehicle. Some extremely important concept decisions, such as the type of control surface arrangement to use, are frequently made without sufficient understanding of their impact on the important system metrics due to a lack of historical guidance, computational resources, or analysis tools. This thesis discusses the difficulties associated with revolutionary system design, and introduces several new techniques designed to remedy them. First, an interactive design method has been developed that allows the designer to provide feedback to a numerical optimization algorithm during runtime, thereby preventing the optimizer from exploiting weaknesses in the analytical model. 
This method can be used to account for subjective criteria, or as a crude measure of un-modeled quantitative criteria. Other contributions of the work include a modified Structured Genetic Algorithm that enables the efficient search of large combinatorial design hierarchies and an improved multi-objective optimization procedure that can effectively optimize several objectives simultaneously. A new conceptual design method has been created by drawing upon each of these new capabilities and aspects of more traditional design methods. The ability of this new technique to assist in the design of revolutionary vehicles has been demonstrated using a problem of contemporary interest: the concept exploration of a supersonic business jet. This problem was found to be a good demonstration case because of its novelty and unique requirements, and the results of this proof of concept exercise indicate that the new method is effective at providing additional insight into the relationship between a vehicle's requirements and its favorable attributes.
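
    The multi-objective machinery described above rests on Pareto dominance; a minimal sketch of a non-dominated filter follows, with illustrative objective vectors (both objectives minimized) rather than the thesis's aircraft metrics:

    ```python
    def dominates(a, b):
        """a dominates b: no worse in every objective, strictly better in one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(points):
        """Keep only designs that no other design dominates."""
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # illustrative objective vectors, e.g. [drag, cost], both minimized
    designs = [[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]]
    front = pareto_front(designs)
    ```

    A multi-objective genetic algorithm maintains such a non-dominated set across generations instead of collapsing the objectives into a single weighted sum.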

  18. Adaptive HIFU noise cancellation for simultaneous therapy and imaging using an integrated HIFU/imaging transducer

    PubMed Central

    Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk

    2010-01-01

It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than −40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of −20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe ‘ripples’ when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to the change in targets or the media resulting from motion or different acoustic properties even during one sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. 
Two HIFU elements of 14.4 mm × 20.0 mm dimensions could increase the temperature of the soft biological tissue from 55 °C to 71 °C within 60 s. Two types of experiments for simultaneous therapy and imaging were conducted to acquire a single scan-line and B-mode image with an aluminum plate and a slice of porcine muscle, respectively. The B-mode image was obtained using the single element imaging system during HIFU beam transmission. The experimental results proved that the combination of the traditional short-pulse excitation and the adaptive noise canceling method could significantly reduce therapeutic interference and remnant ripples and thus may be a better way to implement real-time simultaneous therapy and imaging. PMID:20224162
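
    The adaptive noise canceling idea can be sketched with a single-tap LMS filter: a reference copy of the narrowband therapeutic interference is adaptively scaled and subtracted from the received imaging signal. The signals and the 0.9 coupling gain are illustrative assumptions, not the paper's transducer data:

    ```python
    import math

    n = 400
    ref = [math.sin(0.3 * i) for i in range(n)]           # HIFU reference
    echo = [1.0 if i == 250 else 0.0 for i in range(n)]   # imaging echo of interest
    received = [e + 0.9 * r for e, r in zip(echo, ref)]   # echo buried in interference

    w, mu = 0.0, 0.05                                     # adaptive weight, step size
    cleaned = []
    for r, d in zip(ref, received):
        y = w * r                # current estimate of the interference
        e = d - y                # cleaned output sample
        w += mu * e * r          # LMS weight update
        cleaned.append(e)
    ```

    Because the weight tracks the actual coupling gain, the canceller adapts to changing targets or media instead of relying on a fixed notch attenuation value.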

  19. Adaptive HIFU noise cancellation for simultaneous therapy and imaging using an integrated HIFU/imaging transducer.

    PubMed

    Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk

    2010-04-07

It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than -40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of -20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe 'ripples' when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to the change in targets or the media resulting from motion or different acoustic properties even during one sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. 
Two HIFU elements of 14.4 mm x 20.0 mm dimensions could increase the temperature of the soft biological tissue from 55 degrees C to 71 degrees C within 60 s. Two types of experiments for simultaneous therapy and imaging were conducted to acquire a single scan-line and B-mode image with an aluminum plate and a slice of porcine muscle, respectively. The B-mode image was obtained using the single element imaging system during HIFU beam transmission. The experimental results proved that the combination of the traditional short-pulse excitation and the adaptive noise canceling method could significantly reduce therapeutic interference and remnant ripples and thus may be a better way to implement real-time simultaneous therapy and imaging.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igarashi, Noriyuki, E-mail: noriyuki.igarashi@kek.jp; Nitani, Hiroaki; Takeichi, Yasuo

BL-15A is a new x-ray undulator beamline at the Photon Factory. It will be dedicated to two independent research activities: simultaneous XAFS/XRF/XRD experiments, and SAXS/WAXS/GI-SAXS studies. In order to supply a choice of micro-focus, low-divergence and collimated beams, a double surface bimorph mirror was recently developed. To achieve further mirror surface optimization, the pencil beam scanning method was applied for “in-situ” beam inspection and the Inverse Matrix method was used for determination of optimal voltages on the piezoelectric actuators. The corrected beam profiles at every focal spot gave good agreement with the theoretical values, and the resultant beam performance is promising for both techniques. Quick and stable switching between highly focused and intense collimated beams was established using this new mirror with the simple motorized stages.
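
    The Inverse Matrix step can be sketched as a small linear solve: pencil-beam scanning gives each actuator's influence on the beam (an influence matrix A), and the corrective voltages solve A v = −e. The 2×2 matrix and error vector below are illustrative stand-ins for measured data:

    ```python
    def solve2(a, b):
        """Solve the 2x2 linear system a @ v = b by Cramer's rule."""
        det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
        return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
                (a[0][0] * b[1] - a[1][0] * b[0]) / det]

    influence = [[0.8, 0.1],    # slope change per volt for actuators 1 and 2
                 [0.2, 0.7]]
    error = [1.5, -0.4]         # slope errors measured by the pencil-beam scan

    voltages = solve2(influence, [-e for e in error])
    ```

    With more scan points than actuators the same idea becomes a least-squares solve via the pseudoinverse of the influence matrix.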

  1. Adaptive Power Control for Space Communications

    NASA Technical Reports Server (NTRS)

    Thompson, Willie L., II; Israel, David J.

    2008-01-01

This paper investigates the implementation of power control techniques for crosslink communications during a rendezvous scenario of the Crew Exploration Vehicle (CEV) and the Lunar Surface Access Module (LSAM). During the rendezvous, NASA requires that the CEV support two communication links simultaneously: a space-to-ground link and a crosslink. The crosslink will generate excess interference to the space-to-ground link as the distance between the two vehicles decreases if the output power is fixed and optimized for the worst-case link analysis at the maximum distance range. As a result, power control is required to maintain the optimal power level for the crosslink without interfering with the space-to-ground link. A proof-of-concept is described and implemented with the Goddard Space Flight Center (GSFC) Communications, Standards, and Technology Lab (CSTL).
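
    The underlying idea can be sketched with a free-space link-budget calculation: received power falls as 1/d², so the crosslink transmit power can be backed off as the vehicles close while holding the received level at its target. The numbers below are illustrative, not CEV/LSAM link-budget values:

    ```python
    import math

    def tx_power_dbm(target_rx_dbm, distance_km, path_loss_ref_db=130.0):
        """Transmit power (dBm) needed so the receiver sees target_rx_dbm.

        path_loss_ref_db is an assumed free-space path loss at 1 km; loss then
        grows by 20 dB per decade of distance (inverse-square law).
        """
        path_loss_db = path_loss_ref_db + 20.0 * math.log10(distance_km)
        return target_rx_dbm + path_loss_db

    far = tx_power_dbm(-90.0, 100.0)   # fixed worst-case setting at maximum range
    near = tx_power_dbm(-90.0, 1.0)    # backed-off power during final approach
    ```

    The 40 dB gap between the two settings is exactly the excess power a fixed worst-case transmitter would radiate at close range, which is what couples interference into the space-to-ground link.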

  2. The association between optimal lifestyle-related health behaviors and employee productivity.

    PubMed

    Katz, Abigail S; Pronk, Nicolaas P; Lowry, Marcia

    2014-07-01

    To investigate the association between lifestyle-related health behaviors, including sleep and the cluster of physical activity, no tobacco use, fruit and vegetable intake, and alcohol consumption termed the "Optimal Lifestyle Metric" (OLM), and employee productivity. Data were obtained from employee health assessments (N = 18,079). Regression techniques were used to study the association between OLM and employee productivity, between sleep and employee productivity, and the interaction of both OLM and sleep on employee productivity. Employees who slept less than 7 or more than 8 hours per night experienced significantly more productivity loss. Employees who adhered to all four OLM behaviors simultaneously experienced less productivity loss compared with those who did not. Adequate sleep and adherence to the OLM cluster of behaviors are associated with significantly less productivity loss.

  3. MO-FG-CAMPUS-TeP2-05: Optimizing Stereotactic Radiosurgery Treatment of Multiple Brain Metastasis Lesions with Individualized Rotational Arc Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, P; Xing, L; Ma, L

    Purpose: Radiosurgery of multiple (n>4) brain metastasis lesions requires 3–4 noncoplanar VMAT arcs with excessively high monitor units and long delivery times. We investigated whether an improved optimization technique could decrease the number of arcs needed and increase delivery efficiency, while improving or maintaining plan quality. Methods: The proposed 4pi arc space optimization algorithm consists of two steps: automatic couch angle selection followed by aperture generation for each arc with an optimized control-point distribution. We use a greedy algorithm to select the couch angles. Starting from a single-coplanar-arc plan, we search through the candidate noncoplanar arcs to pick the single noncoplanar arc that brings the best plan quality when added to the existing treatment plan. Each time, only one additional noncoplanar arc is considered, keeping the calculation time tractable. This process repeats until the desired number of arcs is reached. The technique was first evaluated in a coplanar arc delivery scheme with testing cases and then applied to noncoplanar treatment of a case with 12 brain metastasis lesions. Results: Clinically acceptable plans were created within minutes. For the coplanar testing cases, the algorithm yields single-arc plans with better dose distributions than those of two-arc VMAT, simultaneously with a 12–17% reduction in delivery time and a 14–21% reduction in MUs. For the treatment of the 12 brain metastasis lesions, while the Paddick conformity indexes of the two plans were comparable, the SCG optimization with 2 arcs (1 noncoplanar and 1 coplanar) significantly improved on conventional VMAT with 3 arcs (2 noncoplanar and 1 coplanar). Specifically, V16, V10 and V5 of the brain were reduced by 11%, 11% and 12%, respectively. The beam delivery time was shortened by approximately 30%.
Conclusion: The proposed 4pi arc space optimization technique promises to significantly reduce brain toxicity while greatly improving treatment efficiency.
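
    The greedy couch-angle selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the arc names and the toy plan-quality score are hypothetical stand-ins for a real dose-based quality metric.

```python
# Sketch of greedy arc selection: starting from a single coplanar arc,
# repeatedly add the one candidate noncoplanar arc that most improves
# a plan-quality score, until the desired arc count is reached.

def greedy_arc_selection(candidates, score, n_arcs):
    """candidates: arc identifiers; score: callable mapping a list of
    selected arcs to a plan-quality value (higher is better)."""
    selected = ["coplanar"]            # start from a single coplanar arc
    remaining = list(candidates)
    while len(selected) < n_arcs and remaining:
        # evaluate each candidate added alone to the current plan
        best = max(remaining, key=lambda a: score(selected + [a]))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy score: each arc contributes a fixed gain, with diminishing returns
gains = {"couch_30": 5.0, "couch_60": 3.0, "couch_90": 4.0}

def toy_score(arcs):
    return sum(gains.get(a, 1.0) / (i + 1) for i, a in enumerate(arcs))

print(greedy_arc_selection(list(gains), toy_score, 3))
# → ['coplanar', 'couch_30', 'couch_90']
```

    Considering only one additional arc per round is what keeps the search tractable: each round costs one plan evaluation per remaining candidate instead of evaluating all arc combinations.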

  4. A simultaneous analysis method of polycyclic aromatic hydrocarbons, nicotine, cotinine and metals in human hair.

    PubMed

    Li, Zhenjiang; Wang, Bin; Ge, Shufang; Yan, Lailai; Liu, Yingying; Li, Zhiwen; Ren, Aiguo

    2016-12-01

    Polycyclic aromatic hydrocarbons (PAHs), nicotine, cotinine, and metals in human hair have been used as important environmental exposure markers. We aimed to develop a simple method to simultaneously analyze these pollutants using a small quantity of hair. The digestion performances of tetramethylammonium hydroxide (TMAH) and sodium hydroxide (NaOH) for human hair were compared. Various solvents and their mixtures, including n-hexane (HEX), dichloromethane (DCM), trichloromethane (TCM), HEX:DCM (3/2) and HEX:TCM (7/3), were adopted to extract the organics. The recoveries of metals were determined under the optimal digestion and extraction conditions. Our results showed that TMAH performed well in dissolving human hair, even better than NaOH. Overall, the recoveries with the five solutions were acceptable for PAHs and nicotine, in the range of 80%-110%. Except for HEX, the other four extraction solutions had acceptable extraction efficiency for cotinine, from HEX:TCM (7/3) (88 ± 4.1%) to HEX:DCM (3/2) (100 ± 2.8%). HEX:DCM (3/2) was chosen as the optimal solvent in consideration of its extraction efficiency and lower density than water. The recoveries of 12 typical major or trace metals were mainly in the range of 90%-110%, and some of them were close to 100%. In conclusion, the simultaneous analysis of PAHs, nicotine, cotinine, and metals was feasible. Our study provides a simple and low-cost technique for environmental epidemiological studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Zhang, Yuzhen; Chen, Qian; Zuo, Chao; Li, Rubin; Shen, Guochen

    2014-08-01

    This paper presents a general solution for realizing high dynamic range three-dimensional (3-D) shape measurement based on fringe projection. Three concrete techniques are involved in the solution for measuring objects with a large range of reflectivity (LRR) or with shiny specular surfaces. In the first technique, the measured surface reflectivities are subdivided into several groups based on their histogram distribution, and the optimal exposure time for each group is predicted adaptively so that both the bright and the dark areas on the measured surface can be handled without compromise. Phase-shifted images are then captured at the calculated exposure times, and a composite phase-shifted image is generated by extracting the optimally exposed pixels from the raw fringe images. The second technique extends the first by introducing two orthogonal polarizers placed in front of the camera and the projector, respectively, and the third combines the second with the strategy of properly altering the angle between the transmission axes of the two polarizers. Experimental results show that the first technique effectively improves the measurement accuracy of diffuse objects with LRR, the second is capable of measuring objects with weak specular reflection (WSR: e.g., a shiny plastic surface), and the third can precisely inspect surfaces with strong specular reflection (SSR: e.g., highlights on aluminum alloy). Further, more complex scenes, such as one combining LRR and WSR, or even one simultaneously involving LRR, WSR and SSR, can be measured accurately by the proposed solution.
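
    The compositing idea in the first technique — keep, for each pixel, the optimally exposed value across the exposure stack — can be sketched numerically. This is a simplified illustration (the function name and the "brightest non-saturated value" rule are assumptions; the paper selects optimally exposed pixels per reflectivity group):

```python
import numpy as np

def composite_fringe(stack, saturation=250):
    """stack: (n_exposures, H, W) intensity images. For each pixel keep
    the brightest value that is not saturated, so dark regions come from
    long exposures and bright regions from short ones."""
    s = np.asarray(stack, dtype=np.int32)
    masked = np.where(s >= saturation, -1, s)   # discard saturated pixels
    return masked.max(axis=0)

# two exposures of a 1x2 image: pixel 0 saturates in the long exposure,
# pixel 1 is well exposed only in the long exposure
stack = [[[255, 100]], [[200, 5]]]
print(composite_fringe(stack))  # [[200 100]]
```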

  6. Simultaneous Optimization of Decisions Using a Linear Utility Function.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1990-01-01

    An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)

  7. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B(o) field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.

  8. Precise tracking of remote sensing satellites with the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P.; Wu, Sien-Chong; Wu, Jiun-Tsong; Thornton, Catherine L.

    1990-01-01

    The Global Positioning System (GPS) can be applied in a number of ways to track remote sensing satellites at altitudes below 3000 km with accuracies of better than 10 cm. All techniques use a precise global network of GPS ground receivers operating in concert with a receiver aboard the user satellite, and all estimate the user orbit, GPS orbits, and selected ground locations simultaneously. The GPS orbit solutions are always dynamic, relying on the laws of motion, while the user orbit solution can range from purely dynamic to purely kinematic (geometric). Two variations show considerable promise. The first one features an optimal synthesis of dynamics and kinematics in the user solution, while the second introduces a novel gravity model adjustment technique to exploit data from repeat ground tracks. These techniques, to be demonstrated on the Topex/Poseidon mission in 1992, will offer subdecimeter tracking accuracy for dynamically unpredictable satellites down to the lowest orbital altitudes.

  9. Simultaneous acoustic and dielectric real time curing monitoring of epoxy systems

    NASA Astrophysics Data System (ADS)

    Gkikas, G.; Saganas, Ch.; Grammatikos, S. A.; Aggelis, D. G.; Paipetis, A. S.

    2012-04-01

    The attainment of structural integrity of the reinforcing matrix in composite materials is of primary importance for the final properties of the composite structure. Detailed monitoring of the curing process, in turn, is paramount (i) for defining the optimal conditions for the impregnation of the reinforcement by the matrix, (ii) for limiting the effects of the exotherm produced by the polymerization reaction, which creates unwanted thermal stresses, and (iii) for securing optimal behavior in matrix-controlled properties, such as off-axis or shear properties and, in general, the durability of the composite. Dielectric curing monitoring is a well-known technique for distinguishing between the different stages of the polymerization of a typical epoxy system. The technique successfully predicts the gelation and vitrification of the epoxy and has been extended to the monitoring of prepregs. Recent work has shown that distinct changes in the properties of sound propagated in an epoxy undergoing polymerization are likewise directly related to the gelation and vitrification of the resin, as well as to the attainment of the final properties of the resin system. In this work, a typical epoxy is simultaneously monitored using acoustic and dielectric methods. The system is isothermally cured in an oven to avoid effects from the polymerization exotherm. Typical broadband sensors are employed for the acoustic monitoring, while flat interdigital sensors are employed for the dielectric scans. All stages of the polymerization process were successfully monitored, and the validity of both methods was cross-checked and verified.

  10. Optimal simultaneous superpositioning of multiple structures with missing data.

    PubMed

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.
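
    The expectation-maximization idea for incomplete data can be sketched in a highly simplified form. The sketch below estimates only a mean structure from pre-aligned coordinates (the real THESEUS EM also estimates rotations, translations and covariances); the E-step imputes missing points from the current mean, the M-step re-estimates the mean from the completed data:

```python
import numpy as np

def em_mean_structure(structs, mask, n_iter=50):
    """structs: (n_structs, n_points, 3) coordinates, values at missing
    points are arbitrary; mask: (n_structs, n_points) bool, True where
    the point is observed."""
    X = np.asarray(structs, dtype=float)
    obs = np.asarray(mask, dtype=bool)
    counts = obs.sum(axis=0)[:, None]
    mean = (X * obs[..., None]).sum(axis=0) / counts   # observed-only init
    for _ in range(n_iter):
        filled = np.where(obs[..., None], X, mean)     # E-step: impute
        mean = filled.mean(axis=0)                     # M-step: re-estimate
    return mean

X = [[[0, 0, 0], [1, 1, 1]],
     [[2, 2, 2], [9, 9, 9]]]            # second point of struct 2 missing
mask = [[True, True], [True, False]]
print(em_mean_structure(X, mask))       # the placeholder 9s are ignored
```

    Note how the missing point never influences the estimate: unlike dropping the whole column, though, the imputation generalizes directly once rotations and covariance weights enter the M-step.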

  11. Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.

    PubMed

    Zhu, Zhiwei; To, Suet; Zhang, Shaojian

    2015-08-10

    Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens arrays (MLAs). However, it remains a challenge to process MLAs at large scale due to certain inherent limitations of this technique. In the present study, a novel ultra-precision diamond cutting method, the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLAs. After a detailed discussion of its characteristic advantages for processing MLAs, the optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography and machining efficiency are critically investigated. The results indicate that an MLA with homogeneous quality over the whole area is obtained. Besides, high machining efficiency, an extremely small volume of control points for the toolpath, and optimal usage of the system dynamics of the machine tool during cutting can be achieved simultaneously.

  12. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
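
    The Sherman-Morrison-Woodbury step can be illustrated numerically. The sketch below (hypothetical dimensions and factors, not the telescope pipeline) shows the identity that makes the covariance inversion tractable when the matrix is a cheap-to-invert part plus a low-rank update:

```python
import numpy as np

# (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
# Only a small k x k system is solved, instead of a full n x n inverse,
# when A is diagonal (e.g. a noise covariance) and U C V is low rank.
def woodbury_inverse(Ainv_diag, U, C, V):
    Ainv = np.diag(Ainv_diag)
    inner = np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U)
    return Ainv - Ainv @ U @ inner @ V @ Ainv

rng = np.random.default_rng(0)
n, k = 6, 2
a = rng.uniform(1.0, 2.0, n)          # diagonal "noise" part A
U = rng.standard_normal((n, k))       # low-rank "signal" factors
C = np.eye(k)
full = np.diag(a) + U @ C @ U.T
print(np.allclose(woodbury_inverse(1.0 / a, U, C, U.T),
                  np.linalg.inv(full)))  # True
```

    For n much larger than k the saving is substantial: the dense inverse costs O(n^3), while the Woodbury route costs O(n k^2) plus the k x k inverse.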

  13. Analytical performance of a low-gas-flow torch optimized for inductively coupled plasma atomic emission spectrometry

    USGS Publications Warehouse

    Montaser, A.; Huse, G.R.; Wax, R.A.; Chan, S.-K.; Golightly, D.W.; Kane, J.S.; Dorrzapf, A.F.

    1984-01-01

    An inductively coupled Ar plasma (ICP), generated in a low-flow torch, was investigated by the simplex optimization technique for simultaneous, multielement, atomic emission spectrometry (AES). The variables studied included forward power, observation height, gas flows (outer, intermediate, and nebulizer carrier) and sample uptake rate. When the ICP was operated at 720-W forward power with a total gas flow of 5 L/min, the signal-to-background ratios (S/B) of spectral lines from 20 elements were either comparable or inferior, by a factor ranging from 1.5 to 2, to the results obtained from a conventional Ar ICP. Matrix-effect studies on the Ca-PO4 system revealed that the plasma generated in the low-flow torch was as free of vaporization-atomization interferences as the conventional ICP, but easily ionizable elements produced a greater level of suppression or enhancement effects, which could be reduced at higher forward powers. Electron number densities, as determined via the series-limit line-merging technique, were lower in the plasma sustained in the low-flow torch than in the conventional ICP. © 1984 American Chemical Society.
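
    The simplex method used here searches the operating-parameter space without derivatives, which suits responses that come from measurements rather than formulas. The sketch below runs Nelder-Mead on a hypothetical smooth surrogate for the signal-to-background ratio as a function of forward power and observation height; in the study the response surface came from instrument measurements, not from any closed-form expression:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical surrogate: S/B modeled as a smooth peak centered at
# 720 W forward power and 15 mm observation height (assumed values)
def neg_sb(x):
    power, height = x
    return -np.exp(-((power - 720) / 200) ** 2
                   - ((height - 15) / 5) ** 2)

res = minimize(neg_sb, x0=[900.0, 10.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9})
print(res.x)  # converges near the modeled optimum (720 W, 15 mm)
```

    Minimizing the negated response is the usual way to maximize with a minimizer; in practice each simplex evaluation would correspond to one measurement at the candidate operating point.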

  14. A closed-form solution to tensor voting: theory and applications.

    PubMed

    Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard

    2012-08-01

    We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimension, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed the structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require the random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation of its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. Extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.

  15. Bisphenol A, 4-t-octylphenol, and 4-nonylphenol determination in serum by Hybrid Solid Phase Extraction-Precipitation Technology technique tailored to liquid chromatography-tandem mass spectrometry.

    PubMed

    Asimakopoulos, Alexandros G; Thomaidis, Nikolaos S

    2015-04-01

    A rapid liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and optimized for the simultaneous determination of bisphenol A, 4-t-octylphenol and 4-nonylphenol in human blood serum. For the first time, the electrospray ionization (ESI) parameters of probe position, voltage potential, sheath gas flow rate, auxiliary gas flow rate, and ion transfer tube temperature were thoroughly studied and optimized for each phenol by a univariate approach. As a consequence, low instrumental limits of detection were reported, demonstrating at 0.2 ng/mL (in solvent matrix) excellent injection repeatability (RSD<14.5%) and a confirmation peak for all target phenols. Extraction and purification of serum was performed by the novel Hybrid Solid Phase Extraction-Precipitation Technology technique (Hybrid SPE-PPT). The limits of detection in human blood serum were 0.80, 1.3 and 1.4 ng/mL for BPA, 4-t-OP and 4-NP, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
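
    The two ingredients named above can be shown in a toy form (this is not the EMOSA algorithm itself, just a minimal sketch on an assumed bi-objective problem): a weighted-aggregation function that turns a multi-objective vector into one scalar subproblem, and a simulated-annealing acceptance rule that occasionally accepts worse solutions.

```python
import math, random

def weighted_sum(objs, w):
    """aggregate a multi-objective vector into one scalar subproblem"""
    return sum(wi * fi for wi, fi in zip(w, objs))

def sa_accept(delta, T):
    """always accept improvements; accept a worse move with
    probability exp(-delta / T), which shrinks as T cools"""
    return delta <= 0 or random.random() < math.exp(-delta / T)

random.seed(1)
w = (0.5, 0.5)                      # one weight vector = one subproblem
x, T = 5.0, 1.0                     # current solution and temperature
cur = weighted_sum((x ** 2, (x - 2) ** 2), w)
for _ in range(2000):
    cand = x + random.uniform(-0.5, 0.5)
    val = weighted_sum((cand ** 2, (cand - 2) ** 2), w)
    if sa_accept(val - cur, T):
        x, cur = cand, val
    T *= 0.995                      # geometric cooling schedule
print(round(x, 2))  # near x = 1, the optimum of this subproblem
```

    EMOSA's refinement is to adapt the weight vector w itself at low temperature, steering each subproblem toward unexplored parts of the Pareto front rather than keeping the decomposition fixed.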

  17. Digital robust control law synthesis using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivekananda

    1989-01-01

    Development of digital robust control laws for active control of high performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to implement on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and a constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity study and may be integrated into a simultaneous structure and control optimization scheme.
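
    The ingredient emphasized above — analytic gradients of both the cost and the constraints driving the search — can be sketched on a toy problem. Everything here is hypothetical (a 2-parameter quadratic stand-in for the LQG cost, a simple bound constraint, and a quadratic penalty with gradient descent standing in for the paper's constrained-optimization procedure):

```python
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])       # assumed cost weighting

def cost_grad(k):       return 2 * Q @ k              # d/dk of k^T Q k
def constraint(k):      return k[0] - 0.5             # require k[0] >= 0.5
def constraint_grad(k): return np.array([1.0, 0.0])   # analytic gradient

k, mu, lr = np.array([2.0, 2.0]), 50.0, 0.01
for _ in range(2000):
    g = cost_grad(k)
    if constraint(k) < 0:            # quadratic penalty when violated
        g += 2 * mu * constraint(k) * constraint_grad(k)
    k -= lr * g
print(k)  # settles near the constrained optimum (~0.48, ~-0.24)
```

    Because both gradients are analytic, each step costs two matrix-vector products; a finite-difference version would need one extra cost evaluation per design variable per step, which is what the paper avoids.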

  18. A rapid space-resolved solid-phase microextraction method as a powerful tool to determine contaminants in wine based on their volatility.

    PubMed

    Liu, Min; Peng, Qing-Qing; Chen, Yu-Feng; Tang, Qian; Feng, Qing

    2015-06-01

    A novel space-resolved solid-phase microextraction (SR-SPME) technique was developed to facilitate simultaneous monitoring of analytes within heterogeneous samples. Graphene (G) and graphene oxide (GO) were coated separately onto segmented fibers, which were successfully used for the solid-phase microextraction of two contaminants with dramatically different volatility: 2,4,6-trichloroanisole (TCA) and dibutyl phthalate (DBP). The space-resolved fiber showed good precision (5.4%, 6.8%), low detection limits (0.3 ng/L, 0.3 ng/L), and wide linearity (1.0-250.0 ng/L, 1.0-250.0 ng/L) under the optimized conditions for TCA and DBP, respectively. The method was applied to the simultaneous analysis of the two contaminants with satisfactory recoveries, which were 96.96% and 98.20% for wine samples. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Simultaneous acquisition of 3D shape and deformation by combination of interferometric and correlation-based laser speckle metrology.

    PubMed

    Dekiff, Markus; Berssenbrügge, Philipp; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2015-12-01

    A metrology system combining three laser speckle measurement techniques for simultaneous determination of 3D shape and micro- and macroscopic deformations is presented. While microscopic deformations are determined by a combination of Digital Holographic Interferometry (DHI) and Digital Speckle Photography (DSP), macroscopic 3D shape, position and deformation are retrieved by photogrammetry based on digital image correlation of a projected laser speckle pattern. The photogrammetrically obtained data extend the measurement range of the DHI-DSP system and also increase the accuracy of the calculation of the sensitivity vector. Furthermore, a precise assignment of microscopic displacements to the object's macroscopic shape for enhanced visualization is achieved. The approach allows for fast measurements with a simple setup. Key parameters of the system are optimized, and its precision and measurement range are demonstrated. As application examples, the deformation of a mandible model and the shrinkage of dental impression material are measured.

  20. Theoretical analysis of the all-fiberized, dispersion-managed regenerator for simultaneous processing of WDM channels

    NASA Astrophysics Data System (ADS)

    Kouloumentas, Christos

    2011-09-01

    The concept of the all-fiberized multi-wavelength regenerator is analyzed, and the design methodology for operation at 40 Gb/s is presented. The specific methodology has been applied in the past for the experimental proof-of-principle of the technique, but it has never been reported in detail. The regenerator is based on a strong dispersion map that is implemented using alternating dispersion compensating fibers (DCF) and single-mode fibers (SMF), and minimizes the nonlinear interaction between the wavelength-division multiplexing (WDM) channels. The optimized regenerator design with + 0.86 ps/nm/km average dispersion of the nonlinear fiber section is further investigated. The specific design is capable of simultaneously processing five WDM channels with 800 GHz channel spacing and providing Q-factor improvement higher than 1 dB for each channel. The cascadeability of the regenerator is also indicated using a 6-node metropolitan network simulation model.

  1. Optimization of the Divergent method for genotyping single nucleotide variations using SYBR Green-based single-tube real-time PCR.

    PubMed

    Gentilini, Fabio; Turba, Maria E

    2014-01-01

    A novel technique, called Divergent, for single-tube real-time PCR genotyping of point mutations without the use of fluorescently labeled probes has recently been reported. This PCR technique utilizes a set of four primers and a particular denaturation temperature to simultaneously amplify two different amplicons which extend in opposite directions from the point mutation. The two amplicons can readily be detected using melt curve analysis following closed-tube real-time PCR. In the present study, some critical aspects of the original method were specifically addressed to further implement the technique for genotyping the DNM1 c.G767T mutation responsible for exercise-induced collapse in Labrador retriever dogs. The improved Divergent assay was easily set up using a standard two-step real-time PCR protocol. The melting temperature difference between the mutated and the wild-type amplicons was approximately 5°C, which could be promptly detected by all the thermal cyclers. The upgraded assay yielded accurate results with 157 pg of genomic DNA per reaction. This optimized technique represents a flexible and inexpensive alternative to the minor groove binder fluorescently labeled method and to high-resolution melt analysis for high-throughput, robust and cheap genotyping of single nucleotide variations. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Use of chemometrics to compare NIR and HPLC for the simultaneous determination of drug levels in fixed-dose combination tablets employed in tuberculosis treatment.

    PubMed

    Teixeira, Kelly Sivocy Sampaio; da Cruz Fonseca, Said Gonçalves; de Moura, Luís Carlos Brigido; de Moura, Mario Luís Ribeiro; Borges, Márcia Herminia Pinheiro; Barbosa, Euzébio Guimaraes; De Lima E Moura, Túlio Flávio Accioly

    2018-02-05

    The World Health Organization recommends that TB treatment be administered using combination therapy. The methodologies for quantifying simultaneously associated drugs are highly complex, being costly, extremely time consuming and producing chemical residues harmful to the environment. The need to seek alternative techniques that minimize these drawbacks is widely discussed in the pharmaceutical industry. Therefore, the objective of this study was to develop and validate a multivariate calibration model in association with the near infrared spectroscopy technique (NIR) for the simultaneous determination of rifampicin, isoniazid, pyrazinamide and ethambutol. These models allow the quality control of these medicines to be optimized using simple, fast, low-cost techniques that produce no chemical waste. In the NIR-PLS method, spectra readings were acquired in the 10,000-4000 cm-1 range using an infrared spectrophotometer (IRPrestige-21, Shimadzu) with a resolution of 4 cm-1, 20 sweeps, under controlled temperature and humidity. For construction of the model, the central composite experimental design was employed on the program Statistica 13 (StatSoft Inc.). All spectra were treated by computational tools for multivariate analysis using partial least squares regression (PLS) on the software program Pirouette 3.11 (Infometrix, Inc.). Variable selections were performed by the QSAR modeling program. The models developed by NIR in association with multivariate analysis provided good prediction of the APIs for the external samples and were therefore validated. For the tablets, however, the slightly different quantitative compositions of excipients compared to the mixtures prepared for building the models led to results that were not statistically similar, despite having prediction errors considered acceptable in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
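
    The calibration idea — learn a linear map from spectra to concentrations on a calibration set, then predict external samples — can be sketched numerically. All data below are synthetic stand-ins, and plain least squares (with small singular values truncated) substitutes for PLS, which additionally compresses the wavelength channels onto a few latent variables before regression:

```python
import numpy as np

rng = np.random.default_rng(0)
conc = rng.uniform(0.1, 1.0, (40, 2))        # two "drug" concentrations
pure = rng.standard_normal((2, 200))         # pure-component "spectra"
spectra = conc @ pure + 0.001 * rng.standard_normal((40, 200))

# calibrate on 30 mixtures; rcond truncates tiny singular values
# (noise directions), a crude analogue of PLS latent-variable selection
B, *_ = np.linalg.lstsq(spectra[:30], conc[:30], rcond=1e-2)
pred = spectra[30:] @ B                      # external validation set
print(np.abs(pred - conc[30:]).max())        # small prediction error
```

    The central-composite design mentioned in the abstract serves the same purpose as the random mixtures here: spreading calibration samples through concentration space so the regression is well conditioned.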

  3. CO2 water-alternating-gas injection for enhanced oil recovery: Optimal well controls and half-cycle lengths

    DOE PAGES

    Chen, Bailian; Reynolds, Albert C.

    2018-03-11

    We report that CO2 water-alternating-gas (WAG) injection is an enhanced oil recovery method designed to improve sweep efficiency during CO2 injection, with the injected water controlling the mobility of CO2 and stabilizing the gas front. Optimization of CO2-WAG injection is widely regarded as a viable technique for controlling the CO2-oil miscible process. Poor recovery from CO2-WAG injection can be caused by inappropriately designed WAG parameters. In a previous study (Chen and Reynolds, 2016), we proposed an algorithm to optimize the well controls which maximize the life-cycle net-present-value (NPV). However, the effect of the injection half-cycle length of each injector on oil recovery or NPV has not been well investigated. In this paper, an optimization framework based on the augmented Lagrangian method and the newly developed stochastic-simplex-approximate-gradient (StoSAG) algorithm is proposed to explore the possibility of simultaneously optimizing the WAG half-cycle lengths together with the well controls. Finally, the proposed framework is demonstrated with three reservoir examples.

  4. Optimal design of leak-proof SRAM cell using MCDM method

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Kang, Sung-Mo

    2003-04-01

    As deep-submicron CMOS technology advances, on-chip cache has become a bottleneck for microprocessor performance. It also occupies a large fraction of processor area and consumes substantial power. Speed, power and area of SRAM are mutually contradictory and difficult to satisfy simultaneously. Many leakage-suppression techniques have been proposed, but they limit the circuit's performance. We apply a Multi-Criteria Decision Making strategy to perform a minimum delay-power-area optimization of an SRAM circuit under certain constraints. Based on an integrated device- and circuit-level approach, we search for a process that yields a targeted composite performance. Given the huge simulation workload involved in seeking the optimal design, most of this process is automated. With varying emphasis placed on delay, power or area, different optimal SRAM designs are derived and a gate-oxide thickness scaling limit is projected. The results indicate that a better composite performance could be achieved with a thinner oxide. At the derived optimal oxide thickness, static leakage power contributes less than 1% of the total power dissipation.
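    The weighted composite-performance selection at the heart of such an MCDM step can be sketched as a normalized weighted sum over candidate designs; the design points, metric values and weights below are invented for illustration only.

```python
# Hypothetical SRAM design points: name -> (delay_ns, power_mW, area_um2).
designs = {
    "tox_12A": (0.45, 1.8, 9.0),
    "tox_15A": (0.55, 1.1, 9.5),
    "tox_18A": (0.70, 0.8, 10.0),
}

def composite_score(metrics, weights):
    """Weighted sum of min-normalized criteria (lower is better for all)."""
    best = [min(m[i] for m in metrics.values()) for i in range(3)]
    return {name: sum(w * metrics[name][i] / best[i]
                      for i, w in enumerate(weights))
            for name in metrics}

# Emphasize delay, then pick the design with the minimum composite score.
scores = composite_score(designs, (0.8, 0.1, 0.1))
best_design = min(scores, key=scores.get)
```

    Re-running with power- or area-heavy weights selects a different design, which is how varying the emphasis yields the family of optima described above.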

  5. CO₂ water-alternating-gas injection for enhanced oil recovery: Optimal well controls and half-cycle lengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Bailian; Reynolds, Albert C.

    We report that CO₂ water-alternating-gas (WAG) injection is an enhanced oil recovery method designed to improve sweep efficiency during CO₂ injection, with the injected water controlling the mobility of the CO₂ and stabilizing the gas front. Optimization of CO₂-WAG injection is widely regarded as a viable technique for controlling the CO₂-oil miscible process. Poor recovery from CO₂-WAG injection can be caused by inappropriately designed WAG parameters. In a previous study (Chen and Reynolds, 2016), we proposed an algorithm to optimize the well controls that maximize the life-cycle net-present-value (NPV). However, the effect of the injection half-cycle lengths for each injector on oil recovery or NPV has not been well investigated. In this paper, an optimization framework based on the augmented Lagrangian method and the newly developed stochastic-simplex-approximate-gradient (StoSAG) algorithm is proposed to explore the possibility of simultaneous optimization of the WAG half-cycle lengths together with the well controls. Finally, the proposed framework is demonstrated with three reservoir examples.

  6. Optimization of High-Throughput Sequencing Kinetics for determining enzymatic rate constants of thousands of RNA substrates

    PubMed Central

    Niland, Courtney N.; Jankowsky, Eckhard; Harris, Michael E.

    2016-01-01

    Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High Throughput Sequencing Kinetics (HTS-Kin) uses high-throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigate the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we find that high-throughput sequencing and experimental reproducibility contribute their own sources of error, and these are the main sources of imprecision in the quantified results when otherwise optimized guidelines are followed. PMID:27296633

  7. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
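    The Wiener filter used as the baseline in this comparison is simply the least-squares linear map from spike counts to kinematics. A minimal sketch on synthetic data, with all sizes, names and the noise level assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 time bins, 8 "neurons", and a hand velocity that is
# a fixed linear function of the spike counts plus small noise.
T, N = 500, 8
spikes = rng.poisson(lam=3.0, size=(T, N)).astype(float)
true_w = rng.normal(size=N)
velocity = spikes @ true_w + 0.01 * rng.normal(size=T)

# Wiener filter = least-squares solution of spikes @ w ~= velocity,
# here with an added bias column.
X = np.column_stack([spikes, np.ones(T)])
w_hat, *_ = np.linalg.lstsq(X, velocity, rcond=None)
pred = X @ w_hat
mse = float(np.mean((pred - velocity) ** 2))
```

    The linear models in the paper extend this idea with lagged inputs (tap delays), regularization, or subspace projections; the nonlinear models replace the linear map entirely.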

  8. Use of a microscope-mounted wide-angle point of view camera to record optimal hand position in ocular surgery.

    PubMed

    Gooi, Patrick; Ahmed, Yusuf; Ahmed, Iqbal Ike K

    2014-07-01

    We describe the use of a microscope-mounted wide-angle point-of-view camera to record optimal hand positions in ocular surgery. The camera is mounted close to the objective lens beneath the surgeon's oculars and faces the same direction as the surgeon, providing a surgeon's view. A wide-angle lens enables viewing of both hands simultaneously and does not require repositioning the camera during the case. Proper hand positioning and instrument placement through microincisions are critical for effective and atraumatic handling of tissue within the eye. Our technique has potential in the assessment and training of optimal hand position for surgeons performing intraocular surgery. It is an innovative way to routinely record instrument and operating hand positions in ophthalmic surgery and has minimal requirements in terms of cost, personnel, and operating-room space. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  9. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  10. Joint Resource Optimization for Cognitive Sensor Networks with SWIPT-Enabled Relay.

    PubMed

    Lu, Weidang; Lin, Yuanrong; Peng, Hong; Nan, Tian; Liu, Xin

    2017-09-13

    Energy-constrained wireless networks, such as wireless sensor networks (WSNs), are usually powered by fixed energy supplies (e.g., batteries), which limits the operation time of networks. Simultaneous wireless information and power transfer (SWIPT) is a promising technique to prolong the lifetime of energy-constrained wireless networks. This paper investigates the performance of an underlay cognitive sensor network (CSN) with SWIPT-enabled relay node. In the CSN, the amplify-and-forward (AF) relay sensor node harvests energy from the ambient radio-frequency (RF) signals using power splitting-based relaying (PSR) protocol. Then, it helps forward the signal of source sensor node (SSN) to the destination sensor node (DSN) by using the harvested energy. We study the joint resource optimization including the transmit power and power splitting ratio to maximize CSN's achievable rate with the constraint that the interference caused by the CSN to the primary users (PUs) is within the permissible threshold. Simulation results show that the performance of our proposed joint resource optimization can be significantly improved.
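    The power-splitting trade-off can be illustrated with a toy two-hop model: a larger splitting ratio powers the relay but leaves less received power for information, so the achievable rate peaks at an interior ratio. The SNR model and numbers below are assumptions for illustration, not the paper's system model or constraints.

```python
import math

def end_to_end_rate(rho, snr1=20.0, snr2=15.0):
    """Toy power-splitting model: a fraction rho of received power is
    harvested (powering the relay), the rest carries information.  The
    AF end-to-end SNR uses the two-hop approximation g1*g2/(g1+g2+1)."""
    g1 = (1.0 - rho) * snr1        # information branch, first hop
    g2 = rho * snr2                # relay hop, powered by harvested energy
    gamma = g1 * g2 / (g1 + g2 + 1.0)
    return 0.5 * math.log2(1.0 + gamma)   # 1/2: two-phase relaying

# One-dimensional grid search over the power-splitting ratio.
grid = [i / 1000.0 for i in range(1, 1000)]
rho_opt = max(grid, key=end_to_end_rate)
```

    The paper's joint optimization additionally searches over transmit power under the interference constraint toward the primary users; this sketch shows only the splitting-ratio dimension.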

  11. High-fidelity spin entanglement using optimal control.

    PubMed

    Dolde, Florian; Bergholm, Ville; Wang, Ya; Jakobi, Ingmar; Naydenov, Boris; Pezzagna, Sébastien; Meijer, Jan; Jelezko, Fedor; Neumann, Philipp; Schulte-Herbrüggen, Thomas; Biamonte, Jacob; Wrachtrup, Jörg

    2014-02-28

    Precise control of quantum systems is of fundamental importance in quantum information processing, quantum metrology and high-resolution spectroscopy. When scaling up quantum registers, several challenges arise: individual addressing of qubits while suppressing cross-talk, entangling distant nodes and decoupling unwanted interactions. Here we experimentally demonstrate optimal control of a prototype spin qubit system consisting of two proximal nitrogen-vacancy centres in diamond. Using engineered microwave pulses, we demonstrate single electron spin operations with a fidelity F≈0.99. With additional dynamical decoupling techniques, we further realize high-quality, on-demand entangled states between two electron spins with F>0.82, mostly limited by the coherence time and imperfect initialization. Crosstalk in a crowded spectrum and unwanted dipolar couplings are simultaneously eliminated to a high extent. Finally, by high-fidelity entanglement swapping to nuclear spin quantum memory, we demonstrate nuclear spin entanglement over a length scale of 25 nm. This experiment underlines the importance of optimal control for scalable room temperature spin-based quantum information devices.

  12. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    PubMed

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.
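    The underlying policy-iteration loop (policy evaluation via a Lyapunov equation, then policy improvement) can be sketched for an unconstrained scalar continuous-time LQR problem. This illustrates the classical PI scheme the paper builds on, not its constrained, nonquadratic actor-critic implementation.

```python
def policy_iteration(a, b, q, r, k0, iters=25):
    """Policy iteration for dx/dt = a*x + b*u with cost
    integral(q*x^2 + r*u^2) dt and linear policy u = -k*x."""
    k = k0                      # initial stabilizing gain (a - b*k0 < 0)
    for _ in range(iters):
        a_c = a - b * k         # closed-loop dynamics
        # Policy evaluation: scalar Lyapunov equation
        # 2*a_c*p + q + r*k^2 = 0 for the value-function weight p.
        p = -(q + r * k * k) / (2.0 * a_c)
        # Policy improvement: u = -(b*p/r) * x
        k = b * p / r
    return p, k

p, k = policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
# The Riccati equation 2p - p^2 + 1 = 0 gives p = 1 + sqrt(2).
```

    The constrained case in the paper replaces the quadratic input cost with a nonquadratic functional and the closed-form evaluation/improvement steps with online actor and critic networks.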

  13. Arsenic and iron removal from groundwater by oxidation-coagulation at optimized pH: laboratory and field studies.

    PubMed

    Bordoloi, Shreemoyee; Nath, Suresh K; Gogoi, Sweety; Dutta, Robin K

    2013-09-15

    A three-step treatment process involving (i) mild alkaline pH-conditioning with NaHCO₃; (ii) oxidation of arsenite and ferrous ions by KMnO₄, which itself precipitates as insoluble MnO₂ under this pH condition; and (iii) coagulation by FeCl₃ has been used for simultaneous removal of arsenic and iron ions from water. The treated water is filtered after a residence time of 1-2 h. Laboratory batch experiments were performed to optimize the doses. A field trial was performed with an optimized recipe at 30 households and 5 schools in some highly arsenic-affected villages in Assam, India. Simultaneous removal of arsenic from an initial 0.1-0.5 mg/L to about 5 μg/L and of iron from an initial 0.3-5.0 mg/L to less than 0.1 mg/L was achieved, with a final pH between 7.0 and 7.5, after a residence time of 1 h. The process also removes other heavy elements, if present, without leaving any additional toxic residue. The small quantity of solid sludge, containing mainly ferrihydrite with adsorbed arsenate, passes the toxicity characteristic leaching procedure (TCLP) test. The estimated recurring cost is approximately USD 0.16 per m³ of purified water. High efficiency, extremely low cost, safety, no power requirement and simplicity of operation make the technique a strong candidate for rural application. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Simultaneous quantification of the boar-taint compounds skatole and androstenone by surface-enhanced Raman scattering (SERS) and multivariate data analysis.

    PubMed

    Sørensen, Klavs M; Westley, Chloe; Goodacre, Royston; Engelsen, Søren Balling

    2015-10-01

    This study investigates the feasibility of using surface-enhanced Raman scattering (SERS) for the quantification of absolute levels of the boar-taint compounds skatole and androstenone in porcine fat. By investigation of different types of nanoparticles, pH and aggregating agents, an optimized environment that promotes SERS of the analytes was developed and tested with different multivariate spectral pre-processing techniques, and this was combined with variable selection on a series of analytical standards. The resulting method exhibited prediction errors (root mean square error of cross validation, RMSECV) of 2.4 × 10⁻⁶ M for skatole and 1.2 × 10⁻⁷ M for androstenone, with limits of detection corresponding to approximately 2.1 × 10⁻¹¹ M for skatole and approximately 1.8 × 10⁻¹⁰ M for androstenone. The method was subsequently tested on porcine fat extract, leading to prediction errors (RMSECV) of 0.17 μg/g for skatole and 1.5 μg/g for androstenone. It is clear that this optimized SERS method, when combined with multivariate analysis, shows great potential for optimization into an on-line application, which will be the first of its kind, and opens up possibilities for simultaneous detection of other meat-quality metabolites or pathogen markers. Graphical abstract Artistic rendering of a laser-illuminated gold colloid sphere with skatole and androstenone adsorbed on the surface.

  15. SiC nanoparticles-modified glassy carbon electrodes for simultaneous determination of purine and pyrimidine DNA bases.

    PubMed

    Ghavami, Raouf; Salimi, Abdollah; Navaee, Aso

    2011-05-15

    For the first time, a novel and simple electrochemical method was used for simultaneous detection of the DNA bases (guanine, adenine, thymine and cytosine) without any pretreatment or separation process. A glassy carbon electrode modified with silicon carbide nanoparticles (SiCNP/GC) has been used for the electrocatalytic oxidation of the purine bases (guanine and adenine) and the pyrimidine bases (thymine and cytosine). Field emission scanning electron microscopy (FE-SEM) and transmission electron microscopy (TEM) techniques were used to examine the structure of the SiCNP/GC modified electrode. The modified electrode shows excellent electrocatalytic activity toward guanine, adenine, thymine and cytosine. Differential pulse voltammetry (DPV) was proposed for simultaneous determination of the four DNA bases. The effects of different parameters, such as the thickness of the SiC layer, pulse amplitude, scan rate, supporting electrolyte composition and pH, were optimized to obtain the best peak potential separation and higher sensitivity. The detection limit, sensitivity and linear concentration range of the modified electrode were calculated for guanine, adenine, thymine and cytosine, respectively. As shown, this sensor can be used for nanomolar or micromolar detection of different DNA bases, simultaneously or individually. This sensor also exhibits good stability, reproducibility and a long lifetime. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Noise of a superconducting magnetic flux sensor based on a proximity Josephson junction.

    PubMed

    Jabdaraghi, R N; Golubev, D S; Pekola, J P; Peltonen, J T

    2017-08-14

    We demonstrate simultaneous measurements of DC transport properties and flux noise of a hybrid superconducting magnetometer based on the proximity effect (superconducting quantum interference proximity transistor, SQUIPT). The noise is probed by a cryogenic amplifier operating in the frequency range of a few MHz. In our non-optimized device, we achieve a minimum flux noise of ~4 μΦ₀/√Hz, set by the shot noise of the probe tunnel junction. The flux noise performance can be improved by further optimization of the SQUIPT parameters, primarily minimization of the proximity junction length and cross section. Furthermore, the experiment demonstrates that the setup can be used to investigate shot noise in other nonlinear devices with high impedance. This technique opens the opportunity to measure sensitive magnetometers including SQUIPT devices with very low dissipation.

  17. Minimum deltaV Burn Planning for the International Space Station Using a Hybrid Optimization Technique, Level 1

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2015-01-01

    The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and each burn's magnitude (i.e. deltaV) such that these constraints are met. The current process for targeting these burns is manually intensive and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e. solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems, one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two aforementioned levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, or the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, or the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints.
The total deltaV from Level 1 is then used as a fitness function by the genetic algorithm in Level 2 to select the number of burns and their TIGs for the next generation. In this manner, the two levels solve their respective sub-problems separately but collaboratively until a burn solution is found that globally minimizes the deltaV across the entire trajectory. Feasible solutions can also be found by simply using the SQP algorithm in Level 1 with a zero cost function. This paper discusses the formulation of the Level 1 sub-problem and the development of a prototype software tool to solve it. The Level 2 sub-problem will be discussed in a future work. Following the Level 1 formulation and solution, several look-ahead trajectory examples for the ISS are explored. In each case, the burn targeting results using the current process are compared against a feasible solution found using Level 1 in the proposed technique. Level 1 is then used to find a minimum deltaV solution given the fixed number of burns and burn TIGs. The optimal solution is compared with the previously found feasible solution to determine the deltaV (and therefore propellant) savings. The proposed technique seeks to both improve the current process for targeting ISS burns, and to add the capability to optimize ISS burns in a novel fashion. The optimal solutions found using this technique can potentially save hundreds of kilograms of propellant over the course of the ISS mission compared to feasible solutions alone. While the software tool being developed to implement this technique is specific to ISS, the concept is extensible to other long-duration, central-body orbiting missions that must perform orbit maintenance burns to meet operational trajectory constraints.
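    The two-level division of labor can be sketched with a toy stand-in: an outer discrete search over burn-time slots whose fitness is the cost returned by an inner continuous minimum-deltaV subproblem, here solvable in closed form. The "efficiency" model and the exhaustive outer search are illustrative assumptions (the paper's Level 2 uses a genetic algorithm and its Level 1 uses SQP on the real constrained problem).

```python
import itertools

TIMES = list(range(10))            # candidate TIG slots (toy)

def eff(t):
    """Invented burn-effectiveness model as a function of the TIG slot."""
    return 1.0 + 0.1 * t

def inner_min_dv(tigs, target=1.0):
    """Inner level (toy): min sum(dv_i^2) s.t. sum(eff(t_i)*dv_i) = target
    has the closed-form optimum dv_i = target*w_i/sum(w_j^2), with
    minimum cost target^2 / sum(w_j^2)."""
    s = sum(eff(t) ** 2 for t in tigs)
    return target ** 2 / s

def outer_search(n_burns):
    """Outer level: choose the n_burns TIGs minimizing the inner cost
    (exhaustive here; a genetic algorithm in the paper's Level 2)."""
    return min(itertools.combinations(TIMES, n_burns), key=inner_min_dv)

best = outer_search(3)
```

    Because the inner cost decreases with burn effectiveness, the outer search selects the latest (most effective) slots in this toy model; the real Level 1 would instead enforce the operational trajectory constraints.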

  18. Using Fenton Oxidation to Simultaneously Remove Different Estrogens from Cow Manure

    PubMed Central

    Sun, Minxia; Xu, Defu; Ji, Yuefei; Liu, Juan; Ling, Wanting; Li, Shunyao; Chen, Mindong

    2016-01-01

    The presence of estrogens in livestock excrement has raised concerns about their potential negative influence on animals and the overall food cycle. This is the first investigation to simultaneously remove estrogens, including estriol (E3), bisphenol A (BPA), diethylstilbestrol (DES), estradiol (E2), and ethinyl estradiol (EE2), from cow manure using a Fenton oxidation technique. Based on the residual concentrations and removal efficiency of the estrogens, the Fenton oxidation reaction conditions were optimized as follows: a H2O2 dosage of 2.56 mmol/g, a Fe(II) to H2O2 molar ratio of 0.125 M/M, a solid to water mass ratio of 2 g/mL, an initial pH of 3, and a reaction time of 24 h. Under these conditions, the simultaneous removal efficiencies of E3, BPA, DES, E2, and EE2, with initial concentrations in cow manure of 97.40, 96.54, 100.22, 95.01, and 72.49 mg/kg, were 84.9%, 99.5%, 99.1%, 97.8%, and 84.5%, respectively. We clarified the possible Fenton oxidation reaction mechanisms that governed the degradation of the estrogens. We concluded that the Fenton oxidation technique could be effective for efficient removal of estrogens from livestock excrement. The results are of great importance for cow manure reuse in agricultural management and can be used to reduce the threat of environmental estrogens to human health and ecological safety. PMID:27649223

  19. Superiorization-based multi-energy CT image reconstruction

    PubMed Central

    Yang, Q; Cong, W; Wang, G

    2017-01-01

    The recently-developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with the regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. Then, we compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and the Split-Bregman algorithms generate good results with weak noise and reduced artefacts. PMID:28983142
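    A superiorized SART iteration interleaves a feasibility-seeking SART step with a shrinking perturbation that reduces a prior term. The 1-D sketch below substitutes a total-variation-style penalty for the PRISM regularizer and a toy linear system for a CT projector; all sizes and step parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy consistent system A x = b standing in for the projection model,
# with a piecewise-constant ground truth (TV-friendly).
n = 20
x_true = np.repeat([1.0, 3.0], n // 2)
A = rng.normal(size=(40, n))
b = A @ x_true

def sart_step(x, lam=0.9):
    """Simultaneous algebraic reconstruction technique update
    x + lam * V^{-1} A^T W^{-1} (b - A x) with |A| row/column sums."""
    row_sums = np.abs(A).sum(axis=1)
    col_sums = np.abs(A).sum(axis=0)
    return x + lam * (A.T @ ((b - A @ x) / row_sums)) / col_sums

def tv_grad(x):
    """Subgradient of the total variation sum(|x[i+1]-x[i]|)."""
    d = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

x = np.zeros(n)
beta = 1.0
for _ in range(200):
    g = tv_grad(x)
    norm = np.linalg.norm(g)
    if norm > 0:
        x = x - beta * g / norm     # superiorization perturbation
    beta *= 0.95                    # summable step sizes
    x = sart_step(x)                # feasibility-seeking SART step

residual = float(np.linalg.norm(A @ x - b))
```

    Because the perturbation steps are summable, the perturbed iteration retains SART's convergence toward the feasible set while steering it toward lower-TV (here, smoother) solutions.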

  20. Hall effect spintronics for gas detection

    NASA Astrophysics Data System (ADS)

    Gerber, A.; Kopnov, G.; Karpovski, M.

    2017-10-01

    We present the concept of magnetic gas detection by the extraordinary Hall effect. The technique is compatible with existing conductometric gas detection technologies and allows the simultaneous measurement of two independent parameters: resistivity and magnetization affected by the target gas. Feasibility of the approach is demonstrated by detecting low-concentration hydrogen using thin CoPd films as the sensor material. The Hall effect sensitivity of the optimized samples exceeds 240% per 10⁴ ppm at hydrogen concentrations below 0.5% in the hydrogen/nitrogen atmosphere, which is more than two orders of magnitude higher than the sensitivity of the conductance detection.

  1. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
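    In its simplest scalar form, the per-frame correction described above reduces to the standard Kalman measurement update, which also yields the a posteriori accuracy (variance) that registration passes downstream. The numbers below are illustrative only.

```python
def kalman_update(x_prior, p_prior, z, r_meas):
    """Fuse a prior state estimate (x_prior, variance p_prior) with a
    measurement z of variance r_meas; returns the posterior estimate
    and its (always smaller) posterior variance."""
    k_gain = p_prior / (p_prior + r_meas)
    x_post = x_prior + k_gain * (z - x_prior)
    p_post = (1.0 - k_gain) * p_prior
    return x_post, p_post

# A priori pointing angle 1.00 deg (variance 0.04) fused with a
# registration-derived measurement of 1.10 deg (variance 0.01).
x, p = kalman_update(1.00, 0.04, 1.10, 0.01)
```

    The full registration problem is multivariate and correlated across frames, which is why the paper chooses between Kalman filtering and batch weighted least squares depending on the quality of the a priori trajectory and attitude data.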

  2. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    PubMed

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  3. Quantification of asymmetric lung pathophysiology as a guide to the use of simultaneous independent lung ventilation in posttraumatic and septic adult respiratory distress syndrome.

    PubMed Central

    Siegel, J H; Stoklosa, J C; Borg, U; Wiles, C E; Sganga, G; Geisler, F H; Belzberg, H; Wedel, S; Blevins, S; Goh, K C

    1985-01-01

    The management of impaired respiratory gas exchange in patients with nonuniform posttraumatic and septic adult respiratory distress syndrome (ARDS) contains its own therapeutic paradox, since the need for volume-controlled ventilation and PEEP in the lung with the most reduced compliance increases pulmonary barotrauma to the better lung. A computer-based system has been developed by which respiratory pressure-flow-volume relations and gas exchange characteristics can be obtained and respiratory dynamic and static compliance curves computed and displayed for each lung, as a means of evaluating the effectiveness of ventilation therapy in ARDS. Using these techniques, eight patients with asymmetrical posttraumatic or septic ARDS, or both, have been managed using simultaneous independent lung ventilation (SILV). The computer assessment technique allows quantification of the nonuniform ARDS pattern between the two lungs. This enabled SILV to be utilized using two synchronized servo-ventilators at different pressure-flow-volumes, inspiratory/expiratory ratios, and PEEP settings to optimize the ventilatory volumes and gas exchange of each lung, without inducing excess barotrauma in the better lung. In the patients with nonuniform ARDS, conventional ventilation was not effective in reducing shunt (QS/QT) or in permitting a lower FIO2 to be used for maintenance of an acceptable PaO2. SILV reduced per cent v-a shunt and permitted a higher PaO2 at lower FIO2. Also, there was x-ray evidence of ARDS improvement in the poorer lung. While the ultimate outcome was largely dependent on the patient's injury and the adequacy of the septic host defense, by utilizing the SILV technique to match the quantitative aspects of respiratory dysfunction in each lung at specific times in the clinical course, it was possible to optimize gas exchange, to reduce barotrauma, and often to reverse apparently fixed ARDS changes. 
In some instances, this type of physiologically directed ventilatory therapy appeared to contribute to a successful recovery. PMID:3901940

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guida, K; Qamar, K; Thompson, M

    Purpose: The RTOG 1005 trial offered a hypofractionated arm in delivering WBRT+SIB. Traditionally, treatments were planned at our institution using field-in-field (FiF) tangents with a concurrent 3D conformal boost. With the availability of VMAT, it is possible that a hybrid VMAT-3D planning technique could provide another avenue for treating WBRT+SIB. Methods: A retrospective study of nine patients previously treated using RTOG 1005 guidelines was performed to compare FiF+3D plans with the hybrid technique. A combination of static tangents and partial VMAT arcs was used in base-dose optimization. The hybrid plans were optimized to deliver 4005 cGy to the breast PTVeval and 4800 cGy to the lumpectomy PTVeval over 15 fractions. Plans were optimized to meet the planning goals dictated by RTOG 1005. Results: Hybrid plans yielded similar coverage of the breast and lumpectomy PTVs (average D95 of 4013 cGy compared to 3990 cGy for conventional), while reducing the volume of high dose within the breast; the average D30 and D50 for the hybrid technique were 4517 cGy and 4288 cGy, compared to 4704 cGy and 4377 cGy for conventional planning. Hybrid plans also increased conformity, yielding CI95% values of 1.22 and 1.54 for the breast and lumpectomy PTVeval volumes; in contrast, conventional plans averaged 1.49 and 2.27, respectively. The nearby organs at risk (OARs) received more low dose with the hybrid plans due to low-dose spray from the partial arcs, but all hybrid plans met, at a minimum, the acceptable constraints of the protocol. Treatment planning time was also reduced, as plans were inversely optimized (VMAT) rather than forward optimized. Conclusion: Hybrid-VMAT could be a solution for delivering WB+SIB, as the plans are very conformal and maintain clinical standards in OAR sparing.
For treating breast cancer patients with a simultaneously-integrated boost, Hybrid-VMAT offers superiority in dosimetric conformity and planning time as compared to FiF techniques.

  5. A Compensatory Approach to Optimal Selection with Mastery Scores. Research Report 94-2.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Vos, Hans J.

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious example is the use of individualized instruction in…

  6. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for the conflicting data sets, while simultaneously including parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm.
JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
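
JuPOETs itself couples simulated annealing with Pareto ranking in Julia; as an illustration of the core idea only (a minimal stdlib sketch in Python, not the JuPOETs implementation), the non-domination test that decides which candidate parameter sets are kept on the tradeoff surface can be written as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical error values for five candidate parameter sets under
# two competing training objectives.
candidates = [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
front = pareto_front(candidates)
print(front)  # [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)]
```

An ensemble drawn from such a front trades off the competing objectives rather than collapsing them into a single weighted score.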

  7. Direct aperture optimization: a turnkey solution for step-and-shoot IMRT.

    PubMed

    Shepard, D M; Earl, M A; Li, X A; Naqvi, S; Yu, C

    2002-06-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
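
The simulated annealing at the heart of direct aperture optimization can be illustrated with a toy sketch (hypothetical aperture dose patterns and prescription; only the aperture weights are perturbed here, whereas the authors also optimize leaf positions, and their dose engine is EGS4/BEAM Monte Carlo):

```python
import math
import random

random.seed(0)

# Toy problem: three fixed aperture dose patterns over five voxels; optimize
# the aperture weights so the summed dose matches a prescription.
apertures = [[1, 1, 0, 0, 0],
             [0, 1, 1, 1, 0],
             [0, 0, 0, 1, 1]]
target = [2, 3, 1, 2, 1]

def cost(w):
    dose = [sum(wi * a[v] for wi, a in zip(w, apertures)) for v in range(5)]
    return sum((d - t) ** 2 for d, t in zip(dose, target))

w = [1.0, 1.0, 1.0]
T = 1.0
for step in range(5000):
    i = random.randrange(3)
    trial = w[:]
    trial[i] = max(0.0, trial[i] + random.uniform(-0.2, 0.2))  # weights stay non-negative
    dc = cost(trial) - cost(w)
    if dc < 0 or random.random() < math.exp(-dc / T):  # Metropolis acceptance rule
        w = trial
    T *= 0.999  # geometric cooling schedule

print([round(v, 2) for v in w])
```

Machine constraints (here just non-negativity; in the paper, MLC deliverability) are enforced inside the move generator, which is what removes the need for a separate leaf-sequencing step.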

  8. Co-production of hydrogen and carbon nanotubes on nickel foam via methane catalytic decomposition

    NASA Astrophysics Data System (ADS)

    Ping, Dan; Wang, Chaoxian; Dong, Xinfa; Dong, Yingchao

    2016-04-01

The co-production of COx-free hydrogen and carbon nanotubes (CNTs) was achieved on 3-dimensional (3D) macroporous nickel foam (NF) via methane catalytic decomposition (MCD) over nano-Ni catalysts using the chemical vapor deposition (CVD) technique. By a simple coating of a NiO-Al2O3 binary mixture sol followed by a drying-calcination-reduction treatment, NF-supported composite catalysts (denoted as NiyAlOx/NF) with an Al2O3 transition layer incorporating well-dispersed nano-Ni catalysts were successfully prepared. The effects of Ni loading, calcination temperature and reaction temperature on the performance for simultaneous production of COx-free hydrogen and CNTs were investigated in detail. Catalysts before and after MCD were characterized by XRD, TPR, SEM, TEM, TG and Raman spectroscopy. Results show that increasing Ni loading, lowering calcination temperature and optimizing MCD reaction temperature resulted in high production efficiency of COx-free H2 and carbon, but a broader diameter distribution of CNTs. Through detailed parameter optimization, the catalyst with a Ni/Al molar ratio of 0.1, calcination temperature of 550 °C and MCD temperature of 650 °C was most favorable for simultaneously producing COx-free hydrogen, with a growth rate as high as 10.3%, and uniformly sized CNTs on NF.

  9. Optimization of pilot high rate algal ponds for simultaneous nutrient removal and lipids production.

    PubMed

    Arbib, Zouhayr; de Godos, Ignacio; Ruiz, Jesús; Perales, José A

    2017-07-01

Special attention must be paid to the removal of nitrogen and phosphorus from treated wastewaters. Although a wide range of techniques for nutrient uptake is commercially available, these processes entail high investment and operational costs. On the other hand, microalgae growth can simultaneously remove inorganic constituents of wastewater and produce energy-rich biomass. Among the cultivation technologies, High Rate Algae Ponds (HRAPs) are accepted as the most appropriate system. However, the operating conditions that maximize productivity, nutrient removal and the lipid content of the generated biomass have not been established. In this study, the effects of two depths and of CO2 addition were evaluated. Batch assays were used to calculate the kinetic parameters of microbial growth that determine the optimum conditions for continuous operation. Nutrient removal and the lipid content of the generated biomass were analyzed. The best conditions were found at a depth of 0.3 m with CO2 addition (biomass productivity of 26.2 g TSS m⁻² d⁻¹ and lipid productivity of 6.0 g lipids m⁻² d⁻¹) in continuous mode. The concentration of nutrients was in all cases below the discharge limits established by the most restrictive regulation for wastewater discharge. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. MITIE: Simultaneous RNA-Seq-based transcript identification and quantification in multiple samples.

    PubMed

    Behr, Jonas; Kahles, André; Zhong, Yi; Sreedharan, Vipin T; Drewe, Philipp; Rätsch, Gunnar

    2013-10-15

High-throughput sequencing of mRNA (RNA-Seq) has led to tremendous improvements in the detection of expressed genes and reconstruction of RNA transcripts. However, the extensive dynamic range of gene expression, technical limitations and biases, as well as the observed complexity of the transcriptional landscape, pose profound computational challenges for transcriptome reconstruction. We present the novel framework MITIE (Mixed Integer Transcript IdEntification) for simultaneous transcript reconstruction and quantification. We define a likelihood function based on the negative binomial distribution, use a regularization approach to select a few transcripts collectively explaining the observed read data and show how to find the optimal solution using Mixed Integer Programming. MITIE can (i) take advantage of known transcripts, (ii) reconstruct and quantify transcripts simultaneously in multiple samples, and (iii) resolve the location of multi-mapping reads. It is designed for genome- and assembly-based transcriptome reconstruction. We present an extensive study based on realistic simulated RNA-Seq data. When compared with state-of-the-art approaches, MITIE proves to be significantly more sensitive and overall more accurate. Moreover, MITIE yields substantial performance gains when used with multiple samples. We applied our system to 38 Drosophila melanogaster modENCODE RNA-Seq libraries and estimated the sensitivity of reconstructing omitted transcript annotations and the specificity with respect to annotated transcripts. Our results corroborate that a well-motivated objective paired with appropriate optimization techniques leads to significant improvements over the state-of-the-art in transcriptome reconstruction. MITIE is implemented in C++ and is available from http://bioweb.me/mitie under the GPL license.
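
MITIE solves transcript selection exactly with Mixed Integer Programming; the following toy sketch (an invented exon-by-transcript matrix and read counts, solved by brute-force enumeration rather than a MIP solver, with a crude clip in place of proper non-negative least squares) illustrates the underlying idea of picking a small transcript set that collectively explains the observed reads under a sparsity penalty:

```python
from itertools import combinations

import numpy as np

# Toy exon-by-transcript indicator matrix: which exons each candidate
# transcript contains (hypothetical data, three candidates, four exons).
E = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
reads = np.array([12.0, 8.0, 10.0, 15.0])  # observed reads per exon
penalty = 5.0                              # regularization cost per transcript kept

def fit(subset):
    """Squared residual of the best abundance fit using only `subset`."""
    if not subset:
        return float(np.sum(reads ** 2))
    A = E[:, list(subset)]
    theta, *_ = np.linalg.lstsq(A, reads, rcond=None)
    theta = np.clip(theta, 0, None)        # abundances must be non-negative
    return float(np.sum((A @ theta - reads) ** 2))

# Enumerate all transcript subsets and keep the best penalized fit.
best = min((fit(s) + penalty * len(s), s)
           for k in range(4) for s in combinations(range(3), k))
print(best[1])  # (0, 1, 2): here all three transcripts are needed
```

The regularization term plays the role of MITIE's sparsity objective: a subset is only kept if the improvement in fit outweighs the per-transcript cost.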

  11. Simultaneous bilateral stereotactic procedure for deep brain stimulation implants: a significant step for reducing operation time.

    PubMed

    Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras

    2016-07-01

OBJECT Currently, bilateral procedures involve 2 sequential implants, one in each hemisphere. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time was observed for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with the traditional consecutive approach (220.3 ± 27.58 minutes). Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time.
CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional stereotactic procedures.

  12. Optimal graph search segmentation using arc-weighted graph for simultaneous surface detection of bladder and prostate.

    PubMed

    Song, Qi; Wu, Xiaodong; Liu, Yunlong; Smith, Mark; Buatti, John; Sonka, Milan

    2009-01-01

We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. A soft smoothness term enforcing a priori shape compliance is introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be simultaneously achieved by solving a maximum flow problem based on an arc-weighted graph representation. Representing the segmentation problem in an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects, which admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of the bladder and prostate. The result was quite encouraging in spite of the low saliency of the bladder and prostate in CT images.
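
The globally optimal surfaces are recovered from a minimum s-t cut, which by max-flow/min-cut duality is found by computing a maximum flow. A generic Edmonds-Karp max-flow routine (a self-contained sketch on a tiny hypothetical graph, not the paper's arc-weighted graph construction for coupled surfaces) looks like:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximal
            return total, flow
        # Find the bottleneck capacity along the path, then augment.
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual edge allows undoing flow
            v = u
        total += bottleneck

# Tiny 4-node example: source 0, sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
value, _ = max_flow(cap, 0, 3)
print(value)  # 4
```

In the segmentation setting, the graph nodes encode surface positions, arc weights encode the edge and shape terms, and the cut separating source from sink defines all surfaces simultaneously.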

  13. Design and characterization of an optimized simultaneous color and near-infrared fluorescence rigid endoscopic imaging system

    NASA Astrophysics Data System (ADS)

    Venugopal, Vivek; Park, Minho; Ashitate, Yoshitomo; Neacsu, Florin; Kettenring, Frank; Frangioni, John V.; Gangadharan, Sidhu P.; Gioux, Sylvain

    2013-12-01

We report the design, characterization, and validation of an optimized simultaneous color and near-infrared (NIR) fluorescence rigid endoscopic imaging system for minimally invasive surgery. This system is optimized for illumination and collection of NIR wavelengths, allowing the simultaneous acquisition of both color and NIR fluorescence at frame rates higher than 6.8 fps with high sensitivity. The system employs a custom 10-mm diameter rigid endoscope optimized for NIR transmission. A dual-channel light source compatible with the constraints of an endoscope was built and includes a plasma source for white light illumination and NIR laser diodes for fluorescence excitation. A prism-based 2-CCD camera was customized for simultaneous color and NIR detection with a highly efficient filtration scheme for fluorescence imaging of both 700- and 800-nm emission dyes. The performance characterization studies indicate that the endoscope can efficiently detect fluorescence signal from both indocyanine green and methylene blue in dimethyl sulfoxide at concentrations of 100 to 185 nM, depending on the background optical properties. Finally, we validated this imaging system in vivo during a minimally invasive procedure for thoracic sentinel lymph node mapping in a porcine model.

  14. Fast approximation for joint optimization of segmentation, shape, and location priors, and its application in gallbladder segmentation.

    PubMed

    Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu

    2017-05-01

    This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.

  15. Simultaneous biosorption of selenium, arsenic and molybdenum with modified algal-based biochars.

    PubMed

    Johansson, Charlotte L; Paul, Nicholas A; de Nys, Rocky; Roberts, David A

    2016-01-01

Ash disposal waters from coal-fired power stations present a challenging water treatment scenario, as they contain high concentrations of the oxyanions Se, As and Mo, which are difficult to remove through conventional techniques. In an innovative process, macroalgae can be treated with Fe and processed through slow pyrolysis into Fe-biochar, which has a high affinity for oxyanions. However, the effect of production conditions on the efficacy of Fe-biochar is poorly understood. We produced Fe-biochar from two algal sources: "Gracilaria waste" (organic remnants after agar is extracted from cultivated Gracilaria) and the freshwater macroalga Oedogonium. Pyrolysis experiments tested the effects of the concentration of Fe(3+) in pre-treatment, and of pyrolysis temperature, on the efficacy of the Fe-biochar. The efficacy of Fe-biochar increased with increasing concentrations of Fe(3+) in the pre-treatment solutions, and decreased with increasing pyrolysis temperatures. The optimized Fe-biochar for each biomass was produced by treatment with a 12.5% w/v Fe(3+) solution, followed by slow pyrolysis at 300 °C. The Fe-biochar produced in this way had a higher biosorption capacity for As and Mo (62.5-80.7 and 67.4-78.5 mg g(-1), respectively) than for Se (14.9-38.8 mg g(-1)) in single-element mock effluents, and the Fe-biochar produced from Oedogonium had a higher capacity for all elements than that produced from Gracilaria waste. Regardless, the optimal Fe-biochars from both biomass sources were able to effectively treat Se, As and Mo simultaneously in an ash disposal effluent from a power station. The production of Fe-biochar from macroalgae is a promising technique for the treatment of complex effluents containing oxyanions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Simultaneous detection of six urinary pteridines and creatinine by high-performance liquid chromatography-tandem mass spectrometry for clinical breast cancer detection.

    PubMed

    Burton, Casey; Shi, Honglan; Ma, Yinfa

    2013-11-19

Recent preliminary studies have implicated urinary pteridines as candidate biomarkers in a growing number of malignancies, including breast cancer. While the development of capillary electrophoresis-laser induced fluorescence (CE-LIF), high performance liquid chromatography (HPLC), and liquid chromatography-mass spectrometry (LC-MS) pteridine urinalyses, among others, has helped to enable these findings, limitations including poor pteridine specificity, asynchronous or nonexistent renal dilution normalization, and a lack of information regarding adduct formation in mass spectrometry techniques utilizing electrospray ionization (ESI) have prevented application of these techniques in a larger clinical setting. In this study, a simple, rapid, specific, and sensitive high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method has been developed and optimized for simultaneous detection of six pteridines previously implicated in breast cancer and of creatinine as a renal dilution factor in urine. In addition, this study reports cationic adduct formation of urinary pteridines under ESI-positive ionization for the first time. This newly developed technique separates and detects the following six urinary pteridines: 6-biopterin, 6-hydroxymethylpterin, d-neopterin, pterin, isoxanthopterin, and xanthopterin, as well as creatinine. The method detection limit is between 0.025 and 0.5 μg/L for the pteridines and 0.15 μg/L for creatinine. The method was also validated by spiked recoveries (81-105%), reproducibility (RSD: 1-6%), and application to 25 real urine samples from breast cancer positive and negative subjects in a double-blind study. The proposed technique was finally compared directly with a previously reported CE-LIF technique, concluding that additional or alternative renal dilution factors are needed for proper investigation of urinary pteridines as breast cancer biomarkers.

  17. Surface NMR imaging with simultaneously energized transmission loops

    NASA Astrophysics Data System (ADS)

    Irons, T. P.; Kass, A.; Parsekian, A.

    2016-12-01

Surface nuclear magnetic resonance (sNMR) is a unique geophysical technique which allows for the direct detection of liquid-phase water. In saturated media the sNMR response also provides estimates of hydrologic properties including porosity and permeability. The most common survey deployment consists of a single coincident loop performing both transmission and reception. Because the sNMR method is relatively slow, tomography using coincident loops is time-intensive. Surveys using multiple receiver loops (but a single transmitter) provide additional sensitivity; however, they still require iterating transmission over the loops, and do not decrease survey acquisition time. In medical rotating-frame imaging, arrays of transmitters are employed in order to decrease acquisition time whilst optimizing image resolving power, a concept which we extend to Earth's-field imaging. Using simultaneously energized transmission loops decreases survey time linearly with the number of channels. To demonstrate the efficacy and benefits of multiple transmission loops, we deployed simultaneous sNMR transmission arrays using minimally coupled loops and a specially modified instrument at the Red Buttes Hydrogeophysics Experiment Site, a well-characterized location near Laramie, Wyoming. The proposed survey proved capable of acquiring multiple-channel imaging data with noise levels comparable to figure-eight configurations. Finally, the channels can be combined after acquisition or inverted simultaneously to provide composite datasets and images. This capability leverages the improved near-surface resolving power of small loops but retains sensitivity to deep media through the use of synthetic aperture receivers. As such, simultaneously acquired loop arrays provide a great deal of flexibility.

  18. Simultaneous assay of multiple antibiotics in human plasma by LC-MS/MS: importance of optimizing formic acid concentration.

    PubMed

    Chen, Feng; Hu, Zhe-Yi; Laizure, S Casey; Hudson, Joanna Q

    2017-03-01

    Optimal dosing of antibiotics in critically ill patients is complicated by the development of resistant organisms requiring treatment with multiple antibiotics and alterations in systemic exposure due to diseases and extracorporeal drug removal. Developing guidelines for optimal antibiotic dosing is an important therapeutic goal requiring robust analytical methods to simultaneously measure multiple antibiotics. An LC-MS/MS assay using protein precipitation for cleanup followed by a 6-min gradient separation was developed to simultaneously determine five antibiotics in human plasma. The precision and accuracy were within the 15% acceptance range. The formic acid concentration was an important determinant of signal intensity, peak shape and matrix effects. The method was designed to be simple and successfully applied to a clinical pharmacokinetic study.

  19. Optimizing body contour in massive weight loss patients: the modified vertical abdominoplasty.

    PubMed

    Costa, Luiz Fernando da; Landecker, Alan; Manta, Anísio Marinho

    2004-12-01

    In morbid obesity, contour deformities of the abdomen are common after bariatric surgery and radical weight loss. Traditional abdominoplasty techniques often fail to maximally improve body contour in these cases because adjacent sites such as the hip rolls and flanks are not treated, leaving the patient with large lateral tissue redundancies and dog-ears. In an attempt to solve these challenging problems, the authors present the modified vertical abdominoplasty technique, a single-stage procedure that involves a combined vertical and transverse approach in which an "en bloc" resection of the redundant tissues is performed without undermining, drainage, or reinforcement of the abdominal wall. The latter is only carried out when diastasis and/or hernias are present, and Marlex mesh may be utilized when indicated. In patients with simultaneous large umbilical hernias and/or excessively long stalks, neoumbilicoplasty is recommended. A significant improvement of abdominal contour was obtained in the vast majority of patients because the resection design offers simultaneous treatment of both vertical and transverse tissue redundancies in the abdomen and neighboring regions, with more harmonic results when compared with purely vertical or transverse approaches. The modified vertical abdominoplasty technique is an easy, fast, and reliable alternative for treating these patients, with less intraoperative bleeding, reduced overall cost, and low morbidity rates. In selected cases, the technique is capable of offering excellent results in terms of contouring and maximizes the overall outcome of treatment protocols for these patients, who can then be integrated into normal life with heightened self-esteem, happiness, and productivity.

  20. Optimal simultaneous superpositioning of multiple structures with missing data

    PubMed Central

    Theobald, Douglas L.; Steindel, Phillip A.

    2012-01-01

    Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369
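
For context, the classical least-squares superposition of two complete point sets, the special case that THESEUS generalizes, can be computed with the SVD-based Kabsch procedure (this sketch handles neither missing points nor the paper's EM machinery, and uses synthetic coordinates):

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation aligning point set P onto Q in the least-squares sense."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)   # remove translation
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against improper reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Synthetic test: a rotated and translated copy of a random structure.
rng = np.random.default_rng(1)
Q = rng.standard_normal((10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([1.0, -2.0, 0.5])

R = kabsch(P, Q)
aligned = (P - P.mean(0)) @ R.T + Q.mean(0)
print(np.allclose(aligned, Q, atol=1e-8))  # True: superposition recovered
```

With missing points this closed-form solution no longer applies directly, which is exactly the gap the paper's expectation-maximization treatment fills.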

  1. Image information content and patient exposure.

    PubMed

    Motz, J W; Danos, M

    1978-01-01

Presently, patient exposure and x-ray tube kilovoltage are determined by image visibility requirements on x-ray film. With the employment of image-processing techniques, image visibility may be manipulated and the exposure may be determined only by the desired information content, i.e., by the required degree of tissue-density discrimination and spatial resolution. This work gives quantitative relationships between the image information content and the patient exposure, and gives estimates of the minimum exposures required for the detection of image signals associated with particular radiological exams. Also, for subject thicknesses larger than approximately 5 cm, the results show that the maximum information content may be obtained at a single kilovoltage and filtration with the simultaneous employment of image-enhancement and antiscatter techniques. This optimization may be used either to reduce the patient exposure or to increase the retrieved information.

  2. Neural network based online simultaneous policy update algorithm for solving the HJI equation in nonlinear H∞ control.

    PubMed

    Wu, Huai-Ning; Luo, Biao

    2012-12-01

    It is well known that the nonlinear H∞ state feedback control problem relies on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial differential equation that has proven to be impossible to solve analytically. In this paper, a neural network (NN)-based online simultaneous policy update algorithm (SPUA) is developed to solve the HJI equation, in which knowledge of internal system dynamics is not required. First, we propose an online SPUA which can be viewed as a reinforcement learning technique for two players to learn their optimal actions in an unknown environment. The proposed online SPUA updates control and disturbance policies simultaneously; thus, only one iterative loop is needed. Second, the convergence of the online SPUA is established by proving that it is mathematically equivalent to Newton's method for finding a fixed point in a Banach space. Third, we develop an actor-critic structure for the implementation of the online SPUA, in which only one critic NN is needed for approximating the cost function, and a least-square method is given for estimating the NN weight parameters. Finally, simulation studies are provided to demonstrate the effectiveness of the proposed algorithm.

  3. Simultaneous separation/enrichment and detection of trace ciprofloxacin and lomefloxacin in food samples using thermosensitive smart polymers aqueous two-phase flotation system combined with HPLC.

    PubMed

    Lu, Yang; Chen, Bo; Yu, Miao; Han, Juan; Wang, Yun; Tan, Zhenjiang; Yan, Yongsheng

    2016-11-01

The smart polymer aqueous two-phase flotation system (SPATPF) is a new separation and enrichment technology that integrates the advantages of three techniques: the aqueous two-phase system, smart polymers and solvent sublation. An ethylene oxide-propylene oxide copolymer (EOPO)-(NH4)2SO4 SPATPF was used as a pretreatment technique coupled with high-performance liquid chromatography to analyze trace ciprofloxacin and lomefloxacin in real food samples. The optimized experimental conditions were determined in a multi-factor experiment using response surface methodology. Under the optimized conditions, the flotation efficiencies of lomefloxacin and ciprofloxacin were 94.50% and 98.23%. The recycling experiments showed that the smart polymer EOPO could be reused repeatedly, which will reduce costs in future applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

    PubMed Central

    Gregor, Jens; Fessler, Jeffrey A.

    2015-01-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
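    As a concrete illustration of the shared preconditioned-gradient-descent structure, here is a minimal sketch of the classic SIRT iteration (row/column-sum scaling on an invented toy system, without the weighting, regularization, or near-optimal relaxation discussed in the paper):

```python
import numpy as np

def sirt(A, b, n_iter=500, relax=1.0):
    """Classic SIRT as preconditioned gradient descent:
    x <- x + relax * C A^T R (b - A x), where R and C hold the
    inverse row sums and column sums of |A| (diagonal scalings)."""
    row = np.abs(A).sum(axis=1)
    col = np.abs(A).sum(axis=0)
    R = np.where(row > 0, 1.0 / row, 0.0)
    C = np.where(col > 0, 1.0 / col, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relax * C * (A.T @ (R * (b - A @ x)))
    return x
```

    On a consistent system this iteration converges to the exact solution; the RWLS setting in the paper adds statistical weights and a regularizer on top of this skeleton.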

  5. Response surface optimization of substrates for thermophilic anaerobic codigestion of sewage sludge and food waste.

    PubMed

    Kim, Hyun-Woo; Shin, Hang-Sik; Han, Sun-Kee; Oh, Sae-Eun

    2007-03-01

    This study investigated the effects of food waste constituents on thermophilic (55 degrees C) anaerobic codigestion of sewage sludge and food waste using statistical techniques based on biochemical methane potential tests. Various combinations of grain, vegetable, and meat as cosubstrate were tested, and the data on methane potential (MP), methane production rate (MPR), and the first-order hydrolysis kinetic constant (kH) were collected for further analyses. Response surface methodology with a Box-Behnken design efficiently verified the effects of the three variables, and their interactions, on the responses. MP was mainly affected by grain, whereas MPR and kH were affected by both vegetable and meat. The estimated polynomial regression models properly explain the variability of the experimental data, with high adjusted R2 values of 0.727, 0.836, and 0.915, respectively. By applying a series of optimization techniques, it was possible to find proper criteria for the cosubstrate. The optimal cosubstrate region was suggested based on overlay contours of overall mean responses. With the desirability contour plots, it was found that the optimal cosubstrate conditions for the maximum MPR (56.6 mL of CH4/g of chemical oxygen demand [COD]/day) were 0.71 g of COD/L of grain, 0.18 g of COD/L of vegetable, and 0.38 g of COD/L of meat, by simultaneous consideration of MP, MPR, and kH. Within the range of each factor examined, the corresponding optimal ratio of sewage sludge to cosubstrate was 71:29 on a COD basis. Such analyses can yield practical operational strategies for enhanced thermophilic anaerobic codigestion of sewage sludge and food waste.
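    The response-surface workflow described above, fitting a full second-order polynomial model and then searching the fitted surface for an optimum, can be sketched as follows; the factor space and the response function are hypothetical stand-ins, not the study's MP/MPR data:

```python
import numpy as np

def quad_design_matrix(X):
    """Full second-order (quadratic) model in three coded factors."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# Hypothetical response over coded factor levels; a real study would use
# measured responses at the Box-Behnken design points instead.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(15, 3))
y = 5.0 - (X[:, 0] - 0.3)**2 - 2.0*(X[:, 1] + 0.2)**2 - X[:, 2]**2

# Fit the regression model by least squares.
beta, *_ = np.linalg.lstsq(quad_design_matrix(X), y, rcond=None)

# Search the fitted surface on a grid and report the maximizing settings.
g = np.linspace(-1.0, 1.0, 21)
G = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
best = G[np.argmax(quad_design_matrix(G) @ beta)]
```

    With noise-free quadratic data the fit is exact and the grid search recovers the planted optimum at coded levels (0.3, -0.2, 0.0).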

  6. Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.

    PubMed

    Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf

    2018-05-12

    We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry consisting of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of the quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated pacing locations, and was compared with the performance of Tikhonov zeroth-order regularization. Wavelet-domain solutions attained higher sparsity than time-domain solutions. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in activation times or localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application by allowing wavelet bases to be chosen that are optimized for specific clinical questions. Graphical Abstract The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed, and solving it requires additional constraints for regularization.
We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application by allowing wavelet bases to be chosen that are optimized for specific clinical questions.

  7. A Multi-Objective Optimization Technique to Model the Pareto Front of Organic Dielectric Polymers

    NASA Astrophysics Data System (ADS)

    Gubernatis, J. E.; Mannodi-Kanakkithodi, A.; Ramprasad, R.; Pilania, G.; Lookman, T.

    Multi-objective optimization is an area of decision making that is concerned with mathematical optimization problems involving more than one objective simultaneously. Here we describe two new Monte Carlo methods for this type of optimization in the context of their application to the problem of designing polymers with more desirable dielectric and optical properties. We present results of applying these Monte Carlo methods to a two-objective problem (maximizing the total static band dielectric constant and energy gap) and a three-objective problem (maximizing the ionic and electronic contributions to the static band dielectric constant and the energy gap) of a 6-block organic polymer. Our objective functions were constructed from high throughput DFT calculations of 4-block polymers, following the method of Sharma et al., Nature Communications 5, 4845 (2014) and Mannodi-Kanakkithodi et al., Scientific Reports, submitted. Our high throughput and Monte Carlo methods of analysis extend to general N-block organic polymers. This work was supported in part by the LDRD DR program of the Los Alamos National Laboratory and in part by a Multidisciplinary University Research Initiative (MURI) Grant from the Office of Naval Research.
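    Independent of the Monte Carlo machinery, the basic bookkeeping of multi-objective optimization, extracting the non-dominated (Pareto) set from a batch of candidate designs, can be sketched directly (the objective values below are invented):

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points for a maximization problem:
    a point is dominated if some other point is >= in every
    objective and strictly > in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i in range(len(pts)):
        dominated = False
        for j in range(len(pts)):
            if j != i and np.all(pts[j] >= pts[i]) and np.any(pts[j] > pts[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep
```

    For two objectives the retained points trace the trade-off curve between, e.g., dielectric constant and band gap.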

  8. Modelling and optimization of semi-solid processing of 7075 Al alloy

    NASA Astrophysics Data System (ADS)

    Binesh, B.; Aghaie-Khafri, M.

    2017-09-01

    The new modified strain-induced melt activation (SIMA) process presented by Binesh and Aghaie-Khafri was optimized using a response surface methodology to improve the thixotropic characteristics of semi-solid 7075 alloy. The responses, namely the average grain size and the shape factor, were considered as functions of three independent input variables: effective strain, isothermal holding temperature and time. Mathematical models for the responses were developed using the regression analysis technique, and the adequacy of the models was validated by the analysis of variance method. The calculated results correlated fairly well with the experiments. It was found that all the first- and second-order terms of the independent parameters and the interactive terms of the effective strain and holding time were statistically significant for the responses. In order to simultaneously optimize the responses, the desirable values for the effective strain, holding temperature and time were predicted to be 5.1, 609 °C and 14 min, respectively, when employing the desirability function approach. Based on the optimization results, a significant improvement in the average grain size and shape factor of the semi-solid slurry prepared by the new modified SIMA process was observed.
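    The desirability-function approach used for the simultaneous optimization can be sketched generically: each response is mapped onto [0, 1] and the geometric mean is maximized. The response limits below are illustrative, not the study's actual grain-size and shape-factor ranges:

```python
def desirability(y, lo, hi, goal="max"):
    """One-sided linear Derringer-Suich desirability.
    goal="max": 0 at/below lo, 1 at/above hi; goal="min" inverts."""
    d = min(max((y - lo) / (hi - lo), 0.0), 1.0)
    return d if goal == "max" else 1.0 - d

def overall_desirability(ds):
    """Geometric mean of individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses: small grain size (min) and high shape factor (max).
D = overall_desirability([
    desirability(40.0, 30.0, 80.0, goal="min"),   # grain size, um
    desirability(0.75, 0.4, 0.9, goal="max"),     # shape factor
])
```

    A response at its worst value (d = 0) zeroes the overall score, which is what forces the optimizer toward jointly acceptable settings.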

  9. Optimization of the pretreatment of wastewater from a slaughterhouse and packing plant through electrocoagulation in a batch reactor.

    PubMed

    Orssatto, Fábio; Ferreira Tavares, Maria Hermínia; Manente da Silva, Flávia; Eyng, Eduardo; Farias Biassi, Brendown; Fleck, Leandro

    2017-10-01

    The purpose of this study is to evaluate the removal of chemical oxygen demand (COD), turbidity and color of wastewater from a pig slaughterhouse and packing plant through the electrochemical technique and to optimize the ΔV (electric potential difference) and HRT (hydraulic retention time) variables in an electrocoagulation batch reactor using aluminum electrodes. The experimental design used was a rotatable central composite design. For turbidity, the removal efficiencies obtained varied from 92.85% to 99.28%; for color, they varied from 81.34% to 98.93%; and for COD, they varied from 58.61% to 81.01%. The best optimized treatment conditions were an HRT of 25 min and a ΔV of 25 V, which correspond to an electrical current of 1.08 A and a current density of 21.6 mA cm⁻². The aluminum residue varied from 15.254 to 54.291 mg L⁻¹ and the cost of the treatment was US$4.288 m⁻³. The novelty of the work was the simultaneous optimization of three response variables using the desirability function applied to the treatment of wastewater from slaughterhouses.

  10. Simultaneous determination of several phytohormones in natural coconut juice by hollow fiber-based liquid-liquid-liquid microextraction-high performance liquid chromatography.

    PubMed

    Wu, Yunli; Hu, Bin

    2009-11-06

    A simple, selective, sensitive and inexpensive method of hollow fiber-based liquid-liquid-liquid microextraction (HF-LLLME) combined with high performance liquid chromatography (HPLC)-ultraviolet (UV) detection was developed for the determination of four acidic phytohormones (salicylic acid (SA), indole-3-acetic acid (IAA), (+/-) abscisic acid (ABA) and (+/-) jasmonic acid (JA)) in natural coconut juice. To the best of our knowledge, this is the first report on the use of liquid phase microextraction (LPME) as a sample pretreatment technique for the simultaneous analysis of several phytohormones. Using phenetole filling the pores of the hollow fiber as the organic phase, 0.1 mol L⁻¹ NaOH solution in the lumen of the hollow fiber as the acceptor phase and 1 mol L⁻¹ HCl as the donor phase, simultaneous preconcentration of the four target phytohormones was realized. The acceptor phase was finally withdrawn into the microsyringe and directly injected into the HPLC for the separation and quantification of the target phytohormones. The factors affecting the extraction efficiency of the four phytohormones by HF-LLLME were optimized with an orthogonal design experiment, and the data were analyzed with Statistical Product and Service Solutions (SPSS) software. Under the optimized conditions, the enrichment factors for SA, IAA, ABA and JA were 243, 215, 52 and 48, with detection limits (S/N=3) of 4.6, 1.3 and 0.9 ng mL⁻¹ and 8.8 μg mL⁻¹, respectively. The relative standard deviations (RSDs, n=7) were 7.9%, 4.9% and 6.8% at the 50 ng mL⁻¹ level for SA, IAA and ABA, and 8.4% at 500 μg mL⁻¹ for JA. To evaluate the accuracy of the method, it was applied to the simultaneous analysis of the phytohormones in five natural coconut juice samples, and recoveries for spiked samples were in the range of 88.3-119.1%.

  11. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) was developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this purpose, an innovative genetic algorithm was developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this method does not reflect the true optimal solution. GARCO-PSU succeeds in solving LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)

  12. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults

    PubMed Central

    Tait, Jamie L.; Duckham, Rachel L.; Milte, Catherine M.; Main, Luana C.; Daly, Robin M.

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people. PMID:29163146

  13. Neuro-genetic multioptimization of the determination of polychlorinated biphenyl congeners in human milk by headspace solid phase microextraction coupled to gas chromatography with electron capture detection.

    PubMed

    Kowalski, Cláudia Hoffmann; da Silva, Gilmare Antônia; Poppi, Ronei Jesus; Godoy, Helena Teixeira; Augusto, Fabio

    2007-02-28

    Polychlorinated biphenyls (PCB) can contaminate breast milk, which is a serious issue for newborns due to their high vulnerability. Solid phase microextraction (SPME) can be a very convenient technique for their isolation and pre-concentration prior to chromatographic analysis. Here, a simultaneous multioptimization strategy based on a neuro-genetic approach was applied to a headspace SPME method for the determination of 12 PCB in human milk. Gas chromatography with electron capture detection (ECD) was adopted for the separation and detection of the analytes. Experiments according to a Doehlert design were carried out with varied extraction time and temperature, media ionic strength and concentration of methanol (co-solvent). To find the best model that simultaneously correlates all PCB peak areas with the SPME extraction conditions, a multivariate calibration method based on a Bayesian Neural Network (BNN) was applied. The net output from the neural network was used as input to a genetic algorithm (GA) optimization operation (the neuro-genetic approach). The GA indicated that the best overall SPME operational conditions were saturation of the media with NaCl, an extraction temperature of 95 degrees C, an extraction time of 60 min and the addition of 5% (v/v) methanol to the media. These optimized parameters resulted in decreased detection limits and increased sensitivity for all tested analytes, showing that the neuro-genetic approach can be a promising way to optimize SPME methods.
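    The neuro-genetic idea, a cheap surrogate standing in for experiments with a genetic algorithm searching its input space, can be sketched as below. The surrogate is a hypothetical analytic stand-in for the trained BNN (its optimum is placed at the conditions reported above), and the GA is a generic real-coded one, not the authors' implementation:

```python
import random

# Factor bounds: NaCl fraction, temperature (C), time (min), MeOH % (v/v).
BOUNDS = [(0.0, 1.0), (25.0, 95.0), (5.0, 60.0), (0.0, 10.0)]

def surrogate(x):
    """Hypothetical stand-in for the trained BNN response model;
    peaks at NaCl saturation, 95 C, 60 min, 5 % MeOH."""
    nacl, temp, time_, meoh = x
    return -((nacl - 1.0) ** 2 + ((temp - 95.0) / 50.0) ** 2
             + ((time_ - 60.0) / 30.0) ** 2 + ((meoh - 5.0) / 5.0) ** 2)

def ga(fitness, bounds, pop=40, gens=120, seed=1):
    """Generic real-coded GA: elitism, blend crossover, one-gene mutation."""
    rng = random.Random(seed)
    population = [[rng.uniform(a, b) for a, b in bounds] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 4]
        children = []
        while len(children) < pop - len(elite):
            p, q = rng.sample(elite, 2)
            child = [(a + b) / 2.0 for a, b in zip(p, q)]  # blend crossover
            i = rng.randrange(len(bounds))                 # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(max(child[i] + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
            children.append(child)
        population = elite + children
    return max(population, key=fitness)
```

    Because each fitness call is only a surrogate evaluation, the GA can afford thousands of evaluations that would be impractical as wet-lab experiments.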

  14. Reliable Transition State Searches Integrated with the Growing String Method.

    PubMed

    Zimmerman, Paul

    2013-07-09

    The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
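    The overlap constraint at the heart of the method, keeping the followed Hessian eigenmode aligned with the string tangent near the TS, reduces to a short computation; the sketch below uses an invented 3x3 Hessian rather than a molecular one:

```python
import numpy as np

def ts_mode_overlap(hessian, tangent):
    """Overlap between the lowest-eigenvalue mode of a symmetric
    Hessian (the mode maximized in eigenvector following) and the
    unit string tangent; values near 1 keep the search on track."""
    vals, vecs = np.linalg.eigh(hessian)   # eigenvalues ascending
    mode = vecs[:, 0]                      # followed negative-curvature mode
    t = tangent / np.linalg.norm(tangent)
    return float(abs(mode @ t))            # sign-free overlap in [0, 1]
```

    A search step would be rejected (or the constraint re-imposed) whenever this overlap drops below a chosen threshold.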

  15. Extracting archaeal populations from iron oxidizing systems

    NASA Astrophysics Data System (ADS)

    Whitmore, L. M.; Hutchison, J.; Chrisler, W.; Jay, Z.; Moran, J.; Inskeep, W.; Kreuzer, H.

    2013-12-01

    Unique environments in Yellowstone National Park offer exceptional conditions for studying microorganisms in extreme and constrained systems. However, samples from some extreme systems often contain inorganic components that pose complications during microbial and molecular analysis. Several archaeal species are found in acidic, geothermal ferric-oxyhydroxide mats; these species have been shown to adhere to mineral surfaces in flocculated colonies. For optimal microbial analysis (microscopy, flow cytometry, genomic extraction, proteomic analysis, stable isotope analysis, and others), improved techniques are needed to better facilitate cell detachment and separation from mineral surfaces. As a requirement, these techniques must preserve cell structure while simultaneously minimizing organic carryover to downstream analyses. Several methods have been developed for removing sediments from mixed prokaryotic populations, including ultra-centrifugation, Nycodenz gradients, sucrose cushions, and cell straining. In this study we conduct a comparative analysis of mechanisms used to detach archaeal cell populations from the mineral interface. Specifically, we evaluated mechanical and chemical approaches for cell separation and homogenization. Methods were compared using confocal microscopy, flow cytometry analyses, and real-time PCR detection. The methodology and approaches identified will be used to optimize biomass collection from environmental specimens or isolates grown with solid phases.

  16. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types, with both location and sensor type treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied to model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
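    The data-worth idea behind the design optimization can be sketched in its simplest form: a greedy selection (rather than the paper's genetic search) of the candidate measurements that most reduce the variance of a scalar prediction under a linear-Gaussian model. The observation model and prior below are invented:

```python
import numpy as np

def greedy_design(H, g, noise_var, P0, k):
    """Greedily pick k measurements (rows of H with noise variances
    noise_var) that most reduce the variance of the prediction
    g^T theta, given a prior parameter covariance P0."""
    P = P0.copy()
    chosen = []
    for _ in range(k):
        best_i, best_var, best_P = None, None, None
        for i in range(len(H)):
            if i in chosen:
                continue
            h = H[i]
            K = P @ h / (h @ P @ h + noise_var[i])   # Kalman gain
            P_new = P - np.outer(K, h @ P)           # posterior covariance
            var = g @ P_new @ g                      # predictive variance
            if best_var is None or var < best_var:
                best_i, best_var, best_P = i, var, P_new
        chosen.append(best_i)
        P = best_P
    return chosen, float(g @ P @ g)
```

    Because the covariance is updated after each pick, the greedy loop already accounts for interdependencies between previously selected and remaining candidate measurements.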

  17. Total energy expenditure in burned children using the doubly labeled water technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goran, M.I.; Peters, E.J.; Herndon, D.N.

    Total energy expenditure (TEE) was measured in 15 burned children with the doubly labeled water technique. Application of the technique in burned children required evaluation of potential errors resulting from nutritional intake altering background enrichments during studies and from the high rate of water turnover relative to CO2 production. Five studies were discarded because of these potential problems. TEE was 1.33 +/- 0.27 times the predicted basal energy expenditure (BEE), and in studies where resting energy expenditure (REE) was simultaneously measured, TEE was 1.18 +/- 0.17 times REE, which in turn was 1.16 +/- 0.10 times the predicted BEE. TEE was significantly correlated with measured REE (r2 = 0.92) but not with predicted BEE. These studies substantiate the advantage of measuring REE to predict TEE in severely burned patients as opposed to relying on standardized equations. Therefore we recommend that optimal nutritional support be achieved in convalescent burned children by multiplying REE by an activity factor of 1.2.

  18. Detection and Length Estimation of Linear Scratch on Solid Surfaces Using an Angle Constrained Ant Colony Technique

    NASA Astrophysics Data System (ADS)

    Pal, Siddharth; Basak, Aniruddha; Das, Swagatam

    In many manufacturing areas the detection of surface defects is one of the most important processes in quality control. Currently, in order to detect small scratches on solid surfaces, most industries working in materials manufacturing rely primarily on visual inspection. In this article we propose a hybrid computational intelligence technique to automatically detect a linear scratch on a solid surface and estimate its length (in pixel units) simultaneously. The approach is based on a swarm intelligence algorithm called Ant Colony Optimization (ACO) and image preprocessing with Wiener and Sobel filters as well as the Canny edge detector. The ACO algorithm is mostly used to compensate for the broken parts of the scratch. Our experimental results confirm that the proposed technique can be used for detecting scratches in noisy and degraded images, even when it is very difficult for conventional image processing to distinguish the scratch area from its background.
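    The ACO component can be sketched generically as pheromone-guided shortest-path search on a graph, a stand-in for hopping across the broken fragments of a scratch; the graph and the (textbook) pheromone rules below are not the article's exact variant:

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, n_iter=50, evap=0.5, seed=7):
    """Minimal ACO: ants walk src -> dst biased by pheromone / distance,
    pheromone evaporates, and the best-so-far tour is reinforced."""
    rng = random.Random(seed)
    tau = {(n, m): 1.0 for n in graph for m in graph[n]}  # pheromone per edge
    best_path, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            node, path, seen = src, [src], {src}
            while node != dst:
                nxt = [m for m in graph[node] if m not in seen]
                if not nxt:                  # dead end: abandon this ant
                    path = None
                    break
                weights = [tau[(node, m)] / graph[node][m] for m in nxt]
                node = rng.choices(nxt, weights=weights)[0]
                path.append(node)
                seen.add(node)
            if path is None:
                continue
            length = sum(graph[a][b] for a, b in zip(path, path[1:]))
            if length < best_len:
                best_path, best_len = path, length
        for e in tau:                        # evaporation
            tau[e] *= 1.0 - evap
        if best_path:                        # reinforce best-so-far tour
            for a, b in zip(best_path, best_path[1:]):
                tau[(a, b)] += 1.0 / best_len
    return best_path, best_len
```

    In the scratch setting, nodes would be detected edge fragments and edge costs the gaps between them, so the cheapest tour bridges the broken scratch.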

  19. Generalized ISAR--part II: interferometric techniques for three-dimensional location of scatterers.

    PubMed

    Given, James A; Schmidt, William R

    2005-11-01

    This paper is the second part of a study dedicated to optimizing diagnostic inverse synthetic aperture radar (ISAR) studies of large naval vessels. The method developed here provides accurate determination of the position of important radio-frequency scatterers by combining accurate knowledge of ship position and orientation with specialized signal processing. The method allows for the simultaneous presence of substantial Doppler returns from both change of roll angle and change of aspect angle by introducing generalized ISAR plots. The first paper provides two modes of interpreting ISAR plots, one valid when roll Doppler is dominant, the other valid when the aspect-angle Doppler is dominant. Here, we provide, for each type of ISAR plot technique, a corresponding interferometric ISAR (InSAR) technique. The former, aspect-angle dominated InSAR, is a generalization of standard InSAR; the latter, roll-angle dominated InSAR, appears to be new to this work. Both methods are shown to be efficient at identifying localized scatterers under simulation conditions.

  20. System-level power optimization for real-time distributed embedded systems

    NASA Astrophysics Data System (ADS)

    Luo, Jiong

    Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of the peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path-driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient-driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems.
As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.
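    The basic dynamic-voltage-scaling trade-off exploited throughout the thesis can be sketched in the simplest frame-based setting, with uniform slack allocation and normalized units (illustrative only):

```python
def scale_and_energy(workloads, deadline, f_max=1.0):
    """Uniform slack allocation for frame-based DVS: run all tasks at
    the lowest single frequency that still meets the deadline.  With
    supply voltage scaling roughly with frequency, dynamic power goes
    as f^3, so energy = power * time goes as f^2 * work."""
    total = sum(workloads)             # total work in normalized cycles
    f = min(f_max, total / deadline)   # lowest feasible frequency
    energy = f ** 2 * total            # relative dynamic energy
    return f, energy
```

    Halving the frequency to just meet an 8-unit deadline cuts the relative dynamic energy of 4 units of work from 4.0 to 1.0, which is the quadratic saving that makes slack allocation worthwhile.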

  1. Simultaneous measurement of polymerization stress and curing kinetics for photo-polymerized composites with high filler contents.

    PubMed

    Wang, Zhengzhi; Landis, Forrest A; Giuseppetti, Anthony A M; Lin-Gibson, Sheng; Chiang, Martin Y M

    2014-12-01

    Photopolymerized composites are used in a broad range of applications, with their performance largely directed by reaction kinetics and the contraction accompanying polymerization. The aim of the present study was to demonstrate an instrument capable of simultaneously collecting multiple kinetics parameters for a wide range of photopolymerizable systems: degree of conversion (DC), reaction exotherm, and polymerization stress (PS). Our system consisted of a cantilever beam-based instrument (tensometer) that has been optimized to capture the large range of stress generated by lightly-filled to highly-filled composites. The sample configuration allows the tensometer to be coupled to a fast near infrared (NIR) spectrometer collecting spectra in transmission mode. Using our instrument design, simultaneous measurements of PS and DC are performed, for the first time, on a commercial composite with ≈80% (by mass) silica particle fillers. The in situ NIR spectrometer collects more than 10 spectra per second, allowing for thorough characterization of reaction kinetics. With increased instrument sensitivity coupled with the ability to collect real-time reaction kinetics information, we show that the external constraint imposed by the cantilever beam during polymerization can affect the rate of cure and the final degree of polymerization. The present simultaneous measurement technique is expected to provide new insights into kinetics-property relationships for photopolymerized composites with high filler content, such as dental restorative composites. Published by Elsevier Ltd.

  2. Simultaneous Measurement of Polymerization Stress and Curing Kinetics for Photo-polymerized Composites with High Filler Contents

    PubMed Central

    Wang, Zhengzhi; Landis, Forrest A.; Giuseppetti, Anthony A.M.; Lin-Gibson, Sheng; Chiang, Martin Y.M.

    2015-01-01

    Objectives Photopolymerized composites are used in a broad range of applications, with their performance largely directed by reaction kinetics and the contraction accompanying polymerization. The aim of the present study was to demonstrate an instrument capable of simultaneously collecting multiple kinetics parameters for a wide range of photopolymerizable systems: degree of conversion (DC), reaction exotherm, and polymerization stress (PS). Methods Our system consisted of a cantilever beam-based instrument (tensometer) that has been optimized to capture the large range of stress generated by lightly-filled to highly-filled composites. The sample configuration allows the tensometer to be coupled to a fast near infrared (NIR) spectrometer collecting spectra in transmission mode. Results Using our instrument design, simultaneous measurements of PS and DC are performed, for the first time, on a commercial composite with ≈80% (by mass) silica particle fillers. The in situ NIR spectrometer collects more than 10 spectra per second, allowing for thorough characterization of reaction kinetics. With increased instrument sensitivity coupled with the ability to collect real-time reaction kinetics information, we show that the external constraint imposed by the cantilever beam during polymerization can affect the rate of cure and the final degree of polymerization. Significance The present simultaneous measurement technique is expected to provide new insights into kinetics-property relationships for photopolymerized composites with high filler content, such as dental restorative composites. PMID:25443160

  3. Real-time holographic deconvolution techniques for one-way image transmission through an aberrating medium: characterization, modeling, and measurements.

    PubMed

    Haji-Saeed, B; Sengupta, S K; Testorf, M; Goodhue, W; Khoury, J; Woods, C L; Kierstead, J

    2006-05-10

    We propose and demonstrate a new photorefractive real-time holographic deconvolution technique for adaptive one-way image transmission through aberrating media by means of four-wave mixing. In contrast with earlier methods, which typically required various codings of the exact phase or two-way image transmission for correcting phase distortion, our technique relies on one-way image transmission without the use of exact phase information. Our technique can simultaneously correct both amplitude and phase distortions. We include several forms of image degradation, various test cases, and experimental results. We characterize the performance as a function of the input beam ratios for four metrics: signal-to-noise ratio, normalized root-mean-square error, edge restoration, and peak-to-total energy ratio. In our characterization we use false-color graphic images to display the best beam-intensity ratio two-dimensional region(s) for each of these metrics. Test cases are simulated at the optimal values of the beam-intensity ratios. We demonstrate our results through both experiment and computer simulation.

  4. Evolutionary algorithm based optimization of hydraulic machines utilizing a state-of-the-art block coupled CFD solver and parametric geometry and mesh generation tools

    NASA Astrophysics Data System (ADS)

    Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.

    2014-03-01

    An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA(PCA)) that can be used in both single-objective (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of the Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependencies among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. Besides being robust and fast, this approach yields a substantial reduction in computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.
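
    The screening idea behind a metamodel-assisted evolutionary algorithm can be sketched in a few lines: offspring are over-produced, ranked by a cheap surrogate built from already-evaluated designs, and only the most promising reach the expensive evaluation, while a PCA of the elite set orients the mutation steps. The sketch below is a minimal illustration of that mechanism only, not the EASY software; the Rosenbrock function stands in for the CFD solver and all parameter values are illustrative.

```python
import numpy as np

def expensive_eval(x):
    # Stand-in for a CFD evaluation (assumption): Rosenbrock objective.
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def surrogate_screen(candidates, archive_x, archive_f, keep):
    # Cheap inverse-distance-weighted surrogate built from evaluated designs
    # screens out non-promising offspring before the expensive evaluation.
    preds = []
    for c in candidates:
        d = np.linalg.norm(archive_x - c, axis=1) + 1e-12
        w = 1.0 / d
        preds.append(np.dot(w, archive_f) / w.sum())
    order = np.argsort(preds)
    return [candidates[i] for i in order[:keep]]

def maea(dim=4, pop=12, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-2.0, 2.0, size=(pop, dim))
    F = np.array([expensive_eval(x) for x in X])
    for _ in range(gens):
        # PCA of the elite set identifies correlated directions in design
        # space; mutation steps are drawn along the principal axes.
        elite = X[np.argsort(F)][: pop // 2]
        evals, evecs = np.linalg.eigh(np.cov(elite.T))
        scales = np.sqrt(np.maximum(evals, 1e-8))
        offspring = []
        for _ in range(3 * pop):  # over-produce, then screen
            p = elite[rng.integers(len(elite))]
            offspring.append(p + 0.5 * (evecs @ (scales * rng.normal(size=dim))))
        chosen = np.array(surrogate_screen(offspring, X, F, keep=pop))
        Fc = np.array([expensive_eval(x) for x in chosen])
        X, F = np.vstack([X, chosen]), np.concatenate([F, Fc])
        idx = np.argsort(F)[:pop]
        X, F = X[idx], F[idx]
    return F.min()
```

    Only `pop` designs per generation are evaluated "expensively", even though three times as many offspring are generated, which is the cost-saving point of the metamodel.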

  5. Weighted least-square approach for simultaneous measurement of multiple reflective surfaces

    NASA Astrophysics Data System (ADS)

    Tang, Shouhong; Bills, Richard E.; Freischlad, Klaus

    2007-09-01

    Phase shifting interferometry (PSI) is a highly accurate method for measuring the nanometer-scale relative surface height of a semi-reflective test surface. PSI is effectively used in conjunction with Fizeau interferometers for optical testing, hard disk inspection, and semiconductor wafer flatness. However, commonly used PSI algorithms are unable to produce an accurate phase measurement if more than one reflective surface is present in the Fizeau interferometer test cavity. Examples of test parts that fall into this category include lithography mask blanks and their protective pellicles, and plane parallel optical beam splitters. The plane parallel surfaces of these parts generate multiple interferograms that are superimposed in the recording plane of the Fizeau interferometer. When wavelength shifting is used in PSI, the phase shifting speed of each interferogram is proportional to the optical path difference (OPD) between the two reflective surfaces, which allows each underlying interferogram to be differentiated from the others in an optimal manner. In this paper, we present a method for simultaneously measuring the multiple test surfaces of all underlying interferograms from these superimposed interferograms through the use of a weighted least-square fitting technique. The theoretical analysis of the weighted least-square technique and the measurement results are described.
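
    At a single pixel, the superimposed wavelength-shifted interferograms form a sum of cosines whose known, distinct phase-shift rates are set by each cavity's OPD, so the underlying phases separate by (weighted) linear least squares. The sketch below is our own illustration of that signal model, not the paper's implementation; the variable names and the optional per-frame weight vector are assumptions.

```python
import numpy as np

def recover_phases(frames, omegas, weights=None):
    """Separate superimposed interferograms at one pixel.

    frames: intensity samples I_t, t = 0..T-1, during the wavelength sweep
    omegas: known phase-shift rate (rad/frame) of each interferogram,
            proportional to that cavity's OPD
    """
    t = np.arange(len(frames))
    # Design matrix: DC term plus a cos/sin pair per underlying interferogram.
    cols = [np.ones_like(t, dtype=float)]
    for w in omegas:
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    W = np.ones(len(frames)) if weights is None else np.asarray(weights)
    # Weighted normal equations: (A^T W A) x = A^T W I
    Aw = A * W[:, None]
    x, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ np.asarray(frames), rcond=None)
    phases = []
    for k in range(len(omegas)):
        c, s = x[1 + 2 * k], x[2 + 2 * k]
        # a*cos(w t + phi) = a*cos(phi)*cos(w t) - a*sin(phi)*sin(w t)
        phases.append(np.arctan2(-s, c))
    return phases
```

    With distinct rates and enough frames the system is well conditioned, and each cavity's phase map is recovered independently of the others.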

  6. Multifrequency spectrum analysis using fully digital G Mode-Kelvin probe force microscopy.

    PubMed

    Collins, Liam; Belianinov, Alex; Somnath, Suhas; Rodriguez, Brian J; Balke, Nina; Kalinin, Sergei V; Jesse, Stephen

    2016-03-11

    Since its inception over two decades ago, Kelvin probe force microscopy (KPFM) has become the standard technique for characterizing electrostatic, electrochemical and electronic properties at the nanoscale. In this work, we present a purely digital, software-based approach to KPFM utilizing big data acquisition and analysis methods. General mode (G-Mode) KPFM works by capturing the entire photodetector data stream, typically at the sampling rate limit, followed by subsequent de-noising, analysis and compression of the cantilever response. We demonstrate that the G-Mode approach allows simultaneous multi-harmonic detection, combined with on-the-fly transfer function correction, which is required for quantitative CPD mapping. The KPFM approach outlined in this work significantly simplifies the technique by avoiding cumbersome instrumentation optimization steps (e.g., lock-in parameters, feedback gains, etc.), while also retaining the flexibility to be implemented on any atomic force microscopy platform. We demonstrate the added advantages of G-Mode KPFM by allowing simultaneous mapping of CPD and capacitance gradient (C') channels as well as increased flexibility in data exploration across frequency, time, space, and noise domains. G-Mode KPFM is particularly suitable for characterizing voltage sensitive materials or for operation in conductive electrolytes, and will be useful for probing electrodynamics in photovoltaics, liquids and ionic conductors.

  7. Simultaneous velocity and pressure quantification using pressure-sensitive flow tracers in air

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Peterson, Sean; Porfiri, Maurizio

    2017-11-01

    Particle-based measurement techniques for assessing the velocity field of a fluid have advanced rapidly over the past two decades. Full-field pressure measurement techniques have remained elusive, however. In this work, we aim to demonstrate the possibility of direct simultaneous planar velocity and pressure measurement of a high speed aerodynamic flow by employing novel pressure-sensitive tracer particles for particle image velocimetry (PIV). Specifically, the velocity and pressure variations of an airflow through a converging-diverging channel are studied. Polystyrene microparticles embedded with a pressure-sensitive phosphorescent dye, platinum octaethylporphyrin (PtOEP), are used as seeding particles. Due to the oxygen quenching effect, the emission lifetime of PtOEP is highly sensitive to the oxygen concentration, that is, the partial pressure of oxygen, in the air. Since the partial pressure of oxygen is linearly proportional to the air pressure, we can determine the air pressure through the phosphorescence emission lifetime of the dye. The velocity field is instead obtained using traditional PIV methods. The particles have a pressure resolution on the order of 1 kPa, which may be improved by optimizing the particle size and dye concentration to suit specific flow scenarios. This work was supported by the National Science Foundation under Grant Number CBET-1332204.
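
    The lifetime-to-pressure conversion that such tracers rely on is the Stern-Volmer quenching relation, tau0/tau = 1 + K_SV * pO2, with the oxygen partial pressure proportional to air pressure. A minimal sketch follows; the tau0 and K_SV values are illustrative placeholders, not calibrated PtOEP constants.

```python
O2_FRACTION = 0.209  # mole fraction of oxygen in dry air

def lifetime_from_pressure(p_air, tau0=90e-6, k_sv=4e-5):
    """Forward Stern-Volmer model: tau = tau0 / (1 + K_SV * pO2),
    with pO2 = O2_FRACTION * p_air.
    p_air in Pa, tau0 in s, k_sv in 1/Pa (illustrative values)."""
    return tau0 / (1.0 + k_sv * O2_FRACTION * p_air)

def pressure_from_lifetime(tau, tau0=90e-6, k_sv=4e-5):
    """Invert the Stern-Volmer relation to read air pressure (Pa)
    from a measured phosphorescence lifetime tau (s)."""
    return (tau0 / tau - 1.0) / (k_sv * O2_FRACTION)
```

    In practice tau0 and K_SV would come from a calibration of the actual dye-particle system; the inversion itself is a one-line algebraic step.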

  8. Multifrequency spectrum analysis using fully digital G Mode-Kelvin probe force microscopy

    DOE PAGES

    Collins, Liam F.; Jesse, Stephen; Belianinov, Alex; ...

    2016-02-11

    Since its inception over two decades ago, Kelvin probe force microscopy (KPFM) has become the standard technique for characterizing electrostatic, electrochemical and electronic properties at the nanoscale. In this work, we present a purely digital, software-based approach to KPFM utilizing big data acquisition and analysis methods. General Mode (G-Mode) KPFM works by capturing the entire photodetector data stream, typically at the sampling rate limit, followed by subsequent de-noising, analysis and compression of the cantilever response. We demonstrate that the G-Mode approach allows simultaneous multi-harmonic detection, combined with on-the-fly transfer function correction required for quantitative CPD mapping. The KPFM approach outlined in this work significantly simplifies the technique by avoiding cumbersome instrumentation optimization steps (e.g., lock-in parameters, feedback gains, etc.), while also retaining the flexibility to be implemented on any atomic force microscopy platform. We demonstrate the added advantages of G-Mode KPFM by allowing simultaneous mapping of CPD and capacitance gradient (C') channels as well as increased flexibility in data exploration across frequency, time, space, and noise domains. As a result, G-Mode KPFM is particularly suitable for characterizing voltage sensitive materials or for operation in conductive electrolytes, and will be useful for probing electrodynamics in photovoltaics, liquids and ionic conductors.

  9. Esophageal cancer dose escalation using a simultaneous integrated boost technique.

    PubMed

    Welsh, James; Palmer, Matthew B; Ajani, Jaffer A; Liao, Zhongxing; Swisher, Steven G; Hofstetter, Wayne L; Allen, Pamela K; Settle, Steven H; Gomez, Daniel; Likhacheva, Anna; Cox, James D; Komaki, Ritsuko

    2012-01-01

    We previously showed that 75% of radiation therapy (RT) failures in patients with unresectable esophageal cancer are in the gross tumor volume (GTV). We performed a planning study to evaluate whether a simultaneous integrated boost (SIB) technique could selectively deliver a boost dose of radiation to the GTV in patients with esophageal cancer. Treatment plans were generated using four different approaches (two-dimensional conformal radiotherapy [2D-CRT] to 50.4 Gy, 2D-CRT to 64.8 Gy, intensity-modulated RT [IMRT] to 50.4 Gy, and SIB-IMRT to 64.8 Gy) and optimized for 10 patients with distal esophageal cancer. All plans were constructed to deliver the target dose in 28 fractions using heterogeneity corrections. Isodose distributions were evaluated for target coverage and normal tissue exposure. The 50.4 Gy IMRT plan was associated with significant reductions in mean cardiac, pulmonary, and hepatic doses relative to the 50.4 Gy 2D-CRT plan. The 64.8 Gy SIB-IMRT plan produced a 28% increase in GTV dose with normal tissue doses comparable to those of the 50.4 Gy IMRT plan; compared with the 50.4 Gy 2D-CRT plan, the 64.8 Gy SIB-IMRT produced significant dose reductions to all critical structures (heart, lung, liver, and spinal cord). The use of SIB-IMRT allowed us to selectively increase the dose to the GTV, the area at highest risk of failure, while simultaneously reducing the dose to the normal heart, lung, and liver. Clinical implications warrant systematic evaluation. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Esophageal Cancer Dose Escalation using a Simultaneous Integrated Boost Technique

    PubMed Central

    Welsh, James; Palmer, Matthew B.; Ajani, Jaffer A.; Liao, Zhongxing; Swisher, Steven G.; Hofstetter, Wayne L.; Allen, Pamela K.; Settle, Steven H.; Gomez, Daniel; Likhacheva, Anna; Cox, James D.; Komaki, Ritsuko

    2014-01-01

    Purpose We previously showed that 75% of radiation therapy (RT) failures in patients with unresectable esophageal cancer are in the gross tumor volume (GTV). We performed a planning study to evaluate if a simultaneous integrated boost (SIB) technique could selectively deliver a boost dose of radiation to the GTV in patients with esophageal cancer. Methods and Materials Treatment plans were generated using four different approaches (two-dimensional conformal RT [2D-CRT] to 50.4 Gy or 64.8 Gy, intensity-modulated RT [IMRT] to 50.4 Gy, and SIB-IMRT to 64.8 Gy) and optimized for 10 patients with distal esophageal cancer. All plans were constructed to deliver the target dose in 28 fractions using heterogeneity corrections. Isodose distributions were evaluated for target coverage and normal tissue exposure. Results The 50.4-Gy IMRT plan was associated with significant reductions in mean cardiac, pulmonary, and hepatic doses relative to the 50.4-Gy 2D-CRT plan. The 64.8-Gy SIB-IMRT plan produced a 28% increase in GTV dose and the same normal tissue doses as the 50.4-Gy IMRT plan; compared with the 50.4-Gy 2D-CRT plan, the 64.8-Gy SIB-IMRT produced significant dose reductions to all critical structures (heart, lung, liver, and spinal cord). Conclusions The use of SIB-IMRT allowed us to selectively increase the dose to the GTV, the area at highest risk of failure, while simultaneously reducing the dose to the normal heart, lung, and liver. Clinical implications warrant systematic evaluation. PMID:21123005

  11. Enhancement of DRPE performance with a novel scheme based on new RAC: Principle, security analysis and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.

    2016-02-01

    The double random phase encryption (DRPE) method is a well-known all-optical architecture which has many advantages especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied simultaneously on the real and imaginary components of the DRPE output plane. The compression and encryption technique consists of an innovative randomized arithmetic coder (RAC) that can well compress the DRPE output planes and at the same time enhance the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique is capable of processing video content and is compliant with modern video coding standards such as H264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: the cryptographic properties of DRPE have been enhanced while a compression rate of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.

  12. Multi-signal FIB/SEM tomography

    NASA Astrophysics Data System (ADS)

    Giannuzzi, Lucille A.

    2012-06-01

    Focused ion beam (FIB) milling coupled with scanning electron microscopy (SEM) on the same platform enables 3D microstructural analysis of structures using FIB for serial sectioning and SEM for imaging. Since FIB milling is a destructive technique, the acquisition of multiple signals from each slice is desirable. The feasibility of collecting both an in-lens backscattered electron (BSE) signal and an in-lens secondary electron (SE) signal simultaneously from a single scan of the electron beam from each FIB slice is demonstrated. The simultaneous acquisition of two different SE signals from two different detectors (in-lens vs. Everhart-Thornley (ET) detector) is also possible. Obtaining multiple signals from each FIB slice with one scan increases the acquisition throughput. In addition, optimization of microstructural and morphological information from the target is achieved using multi-signals. Examples of multi-signal FIB/SEM tomography from a dental implant will be provided where both material contrast from the bone/ceramic coating/Ti substrate phases and porosity in the ceramic coating will be characterized.

  13. Microcalorimeter X-ray Detectors for Solar Physics

    NASA Astrophysics Data System (ADS)

    Deiker, S.; Boerner, P.; Martinez-Galarce, D.; Metcalf, T.; Rausch, A.; Shing, L.; Stern, R.; Irwin, K.; William, D.; Reintsema, C.; Ullom, J.; Cabrera, B.; Lehman, S.; Brink, P.

    2005-05-01

    Cryogenic X-ray microcalorimeters provide high spectral resolution over a large bandwidth. They have achieved < 3 eV resolution at 5.9 keV, and can produce this performance simultaneously from 0.25 to 10 keV. Although they operate at low (< 0.1 K) temperatures, such temperatures are now easily produced. Microcalorimeters cooled by adiabatic demagnetization refrigerators have already flown on sounding rocket flights to study the soft X-ray background of the interstellar medium, and will soon be launched on the ASTRO-E II satellite. Microcalorimeters based on superconducting transition edge sensors are multiplexable and may be fabricated using standard photolithographic techniques. This makes large arrays of microcalorimeters feasible. Each pixel of such an array detects the arrival time of each photon to within < 0.01 ms. Such an instrument would offer simultaneous spatial, temporal and energy resolution, bringing a wealth of new information about solar processes. Current design and performance of microcalorimeters will be presented. Future improvements required to optimize microcalorimeters for solar physics applications will also be discussed.

  14. Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.

    2014-01-01

    Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.

  15. Preservation of three-dimensional spatial structure in the gut microbiome.

    PubMed

    Hasegawa, Yuko; Mark Welch, Jessica L; Rossetti, Blair J; Borisy, Gary G

    2017-01-01

    Preservation of three-dimensional structure in the gut is necessary in order to analyze the spatial organization of the gut microbiota and gut luminal contents. In this study, we evaluated preparation methods for mouse gut with the goal of preserving micron-scale spatial structure while performing fluorescence imaging assays. Our evaluation of embedding methods showed that commonly used media such as Tissue-Tek Optimal Cutting Temperature (OCT) compound, paraffin, and polyester waxes resulted in redistribution of luminal contents. By contrast, a hydrophilic methacrylate resin, Technovit H8100, preserved three-dimensional organization. Our mouse intestinal preparation protocol optimized using the Technovit H8100 embedding method was compatible with microbial fluorescence in situ hybridization (FISH) and other labeling techniques, including immunostaining and staining with both wheat germ agglutinin (WGA) and 4', 6-diamidino-2-phenylindole (DAPI). Mucus could be visualized whether the sample was fixed with paraformaldehyde (PFA) or with Carnoy's fixative. The protocol optimized in this study enabled simultaneous visualization of micron-scale spatial patterns formed by microbial cells in the mouse intestines along with biogeographical landmarks such as host-derived mucus and food particles.

  16. An adjoint method for gradient-based optimization of stellarator coil shapes

    NASA Astrophysics Data System (ADS)

    Paul, E. J.; Landreman, M.; Bader, A.; Dorland, W.

    2018-07-01

    We present a method for stellarator coil design via gradient-based optimization of the coil-winding surface. The REGCOIL (Landreman 2017 Nucl. Fusion 57 046003) approach is used to obtain the coil shapes on the winding surface using a continuous current potential. We apply the adjoint method to calculate derivatives of the objective function, allowing for efficient computation of analytic gradients while eliminating the numerical noise of approximate derivatives. We are able to improve engineering properties of the coils by targeting the root-mean-squared current density in the objective function. We obtain winding surfaces for W7-X and HSX which simultaneously decrease the normal magnetic field on the plasma surface and increase the surface-averaged distance between the coils and the plasma in comparison with the actual winding surfaces. The coils computed on the optimized surfaces feature a smaller toroidal extent and curvature and increased inter-coil spacing. A technique for computation of the local sensitivity of figures of merit to normal displacements of the winding surface is presented, with potential applications for understanding engineering tolerances.
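
    The payoff of the adjoint method is that a single extra linear solve yields the gradient of the objective with respect to every design parameter at once. The sketch below is a generic discrete-adjoint toy, not the REGCOIL formulation: for a state equation A u = b(p) and objective J(u), it solves the adjoint system A^T lam = (dJ/du)^T and assembles dJ/dp = (db/dp)^T lam.

```python
import numpy as np

def adjoint_gradient(A, b_of_p, dJ_du, p, db_dp):
    """Discrete adjoint gradient for the linear state equation A u = b(p)
    and objective J(u(p)):
        forward solve:  A u = b(p)
        adjoint solve:  A^T lam = (dJ/du)^T
        gradient:       dJ/dp = (db/dp)^T lam
    One adjoint solve gives the full gradient, independent of the number
    of parameters, and with none of the noise of finite differences."""
    u = np.linalg.solve(A, b_of_p(p))
    lam = np.linalg.solve(A.T, dJ_du(u))
    return db_dp(p).T @ lam
```

    For a quadratic test objective the adjoint gradient can be checked term by term against central finite differences, which is the standard verification for adjoint implementations.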

  17. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions, rather than optimal ones, are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide tools for both multi-objective design optimization and design optimization under uncertainty that are computationally inexpensive compared to the state of the practice. In terms of computational cost, those tools are nearly on the same order of magnitude as that of a standard single-objective deterministic Genetic Algorithm.
The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessarily high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.

  18. Combining Simultaneous with Temporal Masking

    ERIC Educational Resources Information Center

    Hermens, Frouke; Herzog, Michael H.; Francis, Gregory

    2009-01-01

    Simultaneous and temporal masking are two frequently used techniques in psychology and vision science. Although there are many studies and theories related to each masking technique, there are no systematic investigations of their mutual relationship, even though both techniques are often applied together. Here, the authors show that temporal…

  19. Simultaneous optimization of the cavity heat load and trip rates in linacs using a genetic algorithm

    DOE PAGES

    Terzić, Balša; Hofler, Alicia S.; Reeves, Cody J.; ...

    2014-10-15

    In this paper, a genetic algorithm-based optimization is used to simultaneously minimize two competing objectives guiding the operation of the Jefferson Lab's Continuous Electron Beam Accelerator Facility linacs: cavity heat load and radio frequency cavity trip rates. The results represent a significant improvement to the standard linac energy management tool and thereby could lead to a more efficient Continuous Electron Beam Accelerator Facility configuration. This study also serves as a proof of principle of how a genetic algorithm can be used for optimizing other linac-based machines.
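
    Minimizing two competing objectives with a genetic algorithm typically means evolving a population toward the Pareto front rather than a single optimum. The sketch below shows the mechanics with a simple non-dominated-sorting GA; the paper's heat-load and trip-rate models are not reproduced here, so Schaffer's classic f1/f2 pair stands in for them, and all operator settings are illustrative.

```python
import numpy as np

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def bi_objective_ga(f1, f2, lo, hi, pop=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        # Gaussian-mutation offspring; parents and children compete together.
        kids = X + rng.normal(scale=0.1 * (hi - lo), size=pop)
        allx = np.clip(np.concatenate([X, kids]), lo, hi)
        objs = [(f1(x), f2(x)) for x in allx]
        front_idx = [i for i, o in enumerate(objs)
                     if not any(dominates(objs[j], o)
                                for j in range(len(objs)) if j != i)]
        keep = list(front_idx)
        while len(keep) < pop:          # pad with random survivors for diversity
            keep.append(int(rng.integers(len(allx))))
        X = allx[np.array(keep[:pop])]
    return X
```

    The returned population approximates the trade-off curve, from which an operator can pick a compromise between the two objectives.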

  20. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions, which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. From our experimental observations, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms which include simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
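
    Cimmino's algorithm treats the planning constraints as a system of linear inequalities A x <= b and, at each step, averages the projections of the current iterate onto every violated half-space; because the projections are computed independently, the iteration parallelizes naturally. A minimal sketch of the basic (linear) iteration, with our own choice of weights and stopping rule:

```python
import numpy as np

def cimmino(A, b, x0, weights=None, relax=1.0, iters=500):
    """Cimmino's simultaneous projection method for the feasibility
    problem A x <= b. Each iterate moves by a weighted average of its
    projections onto the violated half-spaces."""
    m, n = A.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights)
    x = np.asarray(x0, dtype=float)
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(iters):
        residual = A @ x - b                  # > 0 where a constraint is violated
        if np.all(residual <= 1e-10):
            break
        viol = np.maximum(residual, 0.0)
        # Projection onto half-space i moves x by -(viol_i / ||a_i||^2) a_i;
        # the update is the weighted sum of all these moves at once.
        x = x - relax * ((w * viol / row_norm2) @ A)
    return x
```

    On a simple box-constraint example the iterate converges geometrically to a feasible point; in the IMRT setting the rows of A encode beamlet-to-voxel dose contributions and b the dose bounds.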

  1. Enhancement of simultaneous gold and copper extraction from computer printed circuit boards using Bacillus megaterium.

    PubMed

    Arshadi, M; Mousavi, S M

    2015-01-01

    In this research, simultaneous gold and copper recovery from computer printed circuit boards (CPCBs) was evaluated using a central composite design of response surface methodology (CCD-RSM). To maximize simultaneous metal extraction from CPCB waste, four factors affecting bioleaching were selected for optimization. A pure culture of Bacillus megaterium, a cyanogenic bacterium, was used to produce cyanide as a leaching agent. An initial pH of 10, pulp density of 2 g/l, particle size of mesh #100 and glycine concentration of 0.5 g/l were obtained as the optimal conditions, under which about 36.81% of the gold and 13.26% of the copper were extracted simultaneously. To reduce the interference of copper in the leaching solution, a pretreatment strategy was examined: copper in the CPCB powder was first completely extracted using Acidithiobacillus ferrooxidans, and the residual sediment was then subjected to further experiments for gold recovery by B. megaterium. Using the pretreated sample under optimal conditions, 63.8% of the gold was extracted. Copyright © 2014 Elsevier Ltd. All rights reserved.
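
    For k factors, a central composite design combines 2^k factorial corner points, 2k axial (star) points at distance alpha, and replicated center points, which together support fitting the quadratic response surface used in RSM. The sketch below generates a generic coded-unit CCD point set; it illustrates the design structure, not the paper's actual experimental matrix.

```python
from itertools import product
import numpy as np

def central_composite_design(k, alpha=None, n_center=4):
    """Coded-unit CCD: 2^k factorial corners at +/-1, 2k axial points at
    +/-alpha, and n_center replicated center points. alpha defaults to the
    rotatable choice (2^k)^(1/4)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha      # low star point on factor i
        axial[2 * i + 1, i] = alpha   # high star point on factor i
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])
```

    For the four bioleaching factors here, this gives 16 + 8 + 4 = 28 runs in coded units, which are then mapped to the physical factor ranges (pH, pulp density, particle size, glycine concentration).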

  2. Optimal Price Decision Problem for Simultaneous Multi-article Auction and Its Optimal Price Searching Method by Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Masuda, Kazuaki; Aiyoshi, Eitaro

    We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. The auction problem, originally formulated as a combinatorial problem, determines both whether each seller should sell his/her article and which article(s) each buyer should buy, so that the total utility of buyers and sellers is maximized. By duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. Since the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method that solves it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
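
    The continuous search over the multipliers can be carried out by a plain global-best PSO: each particle is a candidate price vector, attracted toward its personal best and the swarm's best evaluation of the dual function. The sketch below is a generic PSO, not the paper's implementation; a shifted sphere stands in for the actual dual function, and all coefficients are conventional illustrative values.

```python
import numpy as np

def pso(f, dim, lo, hi, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm minimization of f over [lo, hi]^dim.
    In the auction setting, f would be the Lagrangian dual evaluated at a
    candidate vector of transaction prices."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_particles, dim))
    V = np.zeros_like(X)
    P = X.copy()                                  # personal best positions
    Pf = np.array([f(x) for x in X])
    g = P[np.argmin(Pf)].copy()                   # global best position
    gf = float(Pf.min())
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # Inertia plus cognitive (personal) and social (global) attraction.
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        fx = np.array([f(x) for x in X])
        better = fx < Pf
        P[better], Pf[better] = X[better], fx[better]
        if Pf.min() < gf:
            gf = float(Pf.min())
            g = P[np.argmin(Pf)].copy()
    return g, gf
```

    The dual function of an auction is typically piecewise linear and non-smooth in the prices, which is exactly why a derivative-free global search such as PSO is a reasonable fit.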

  3. An integrated optimum design approach for high speed prop rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Mccarthy, Thomas R.

    1995-01-01

    The objective is to develop an optimization procedure for high-speed and civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1 and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem. Hover, cruise, and take-off are the three flight conditions simultaneously maximized. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance in high speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.

  4. TH-EF-BRB-10: Dosimetric Validation of a Trajectory Based Cranial SRS Treatment Technique On a Varian TrueBeam Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, B; Vancouver Cancer Centre, Vancouver, BC; Gete, E

    2016-06-15

    Purpose: This work investigates the dosimetric accuracy of a trajectory based delivery technique in which an optimized radiation beam is delivered along a couch-gantry trajectory that is formed by simultaneous rotation of the linac gantry and the treatment couch. Methods: Nine trajectory based cranial SRS treatment plans were created using in-house optimization software. The plans were calculated for delivery on the TrueBeam STx linac with a 6 MV photon beam. Dose optimization was performed along a user-defined trajectory using MLC modulation, dose rate modulation and jaw tracking. The pre-defined trajectory chosen for this study is formed by a couch rotation through its full range of 180 degrees while the gantry makes four partial arc sweeps of 170 degrees each. For final dose calculation, the trajectory based plans were exported to the Varian Eclipse Treatment Planning System. The plans were calculated on a homogeneous cube phantom measuring 18.2×18.2×18.2 cm3 with the analytical anisotropic algorithm (AAA) using a 1 mm3 calculation voxel. The plans were delivered on the TrueBeam linac via the developer's mode. Point dose measurements were performed on 9 patients with the IBA CC01 mini-chamber with a sensitive volume of 0.01 cc. Gafchromic film measurements along the sagittal and coronal planes were performed on three of the 9 treatment plans. Point dose values were compared with ion chamber measurements. Gamma analysis comparing film measurement and AAA calculations was performed using FilmQA Pro. Results: The AAA calculations and measurements were in good agreement. The point dose difference between AAA and ion chamber measurements was within 2.2%. Gamma analysis test pass rates (2%, 2 mm passing criteria) for the Gafchromic film measurements were >95%. Conclusion: We have successfully tested TrueBeam's ability to deliver accurate trajectory based treatments involving simultaneous gantry and couch rotation with MLC and dose rate modulation along the trajectory.

  5. Simultaneous minimization of leaf travel distance and tongue-and-groove effect for segmental intensity-modulated radiation therapy.

    PubMed

    Dai, Jianrong; Que, William

    2004-12-07

    This paper introduces a method to simultaneously minimize the leaf travel distance and the tongue-and-groove effect for IMRT leaf sequences to be delivered in segmental mode. The basic idea is to add a large enough number of openings, through cutting or splitting existing openings, for those leaf pairs with openings fewer than the number of segments, so that all leaf pairs have the same number of openings. The cutting positions are optimally determined with a simulated annealing technique called adaptive simulated annealing. The optimization goal is set to minimize the weighted summation of the leaf travel distance and the tongue-and-groove effect. Its performance was evaluated with 19 beams from three clinical cases: one brain, one head-and-neck and one prostate case. The results show that it can reduce the leaf travel distance and/or the tongue-and-groove effect; the reduction of the leaf travel distance reaches its maximum of about 50% when minimized alone, and the reduction of the tongue-and-groove effect reaches its maximum of about 70% when minimized alone. The maximum reduction in the leaf travel distance translates to a 1 to 2 min reduction in treatment delivery time per fraction, depending on leaf speed. If the method is implemented clinically, it could result in significant savings in treatment delivery time, and also in a significant reduction in the wear-and-tear of MLC mechanics.
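
    The weighted-sum simulated annealing step described above can be sketched generically. This is a minimal sketch, not the paper's implementation: the two quadratic functions are stand-ins for the leaf-travel and tongue-and-groove terms, and the weights, cooling schedule and four-variable search space are illustrative assumptions.

```python
import math
import random

random.seed(7)

# Stand-in component objectives (the paper's are the leaf travel distance
# and a tongue-and-groove measure; quadratics keep the sketch self-contained).
def travel(x):  return sum((xi - 3.0) ** 2 for xi in x)
def tongue(x):  return sum((xi - 5.0) ** 2 for xi in x)

W_TRAVEL, W_TG = 0.5, 0.5
def objective(x):  return W_TRAVEL * travel(x) + W_TG * tongue(x)

def anneal(x, t0=10.0, cooling=0.95, steps_per_t=50, t_min=1e-3):
    best, best_f = list(x), objective(x)
    cur, cur_f, t = list(x), best_f, t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = [xi + random.gauss(0.0, t ** 0.5) for xi in cur]  # step shrinks with T
            cand_f = objective(cand)
            # Metropolis rule: always take improvements, sometimes take worse moves
            if cand_f < cur_f or random.random() < math.exp((cur_f - cand_f) / t):
                cur, cur_f = cand, cand_f
                if cur_f < best_f:
                    best, best_f = list(cur), cur_f
        t *= cooling  # geometric cooling schedule
    return best, best_f

start = [0.0] * 4                # e.g. four cut positions, one per leaf pair
best_x, best_f = anneal(start)
```

    With equal weights the per-variable optimum sits at 4.0; changing W_TRAVEL and W_TG trades one term against the other, which is the same knob the paper turns when minimizing either term alone.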

  6. SETI - A preliminary search for narrowband signals at microwave frequencies

    NASA Technical Reports Server (NTRS)

    Cuzzi, J. N.; Clark, T. A.; Tarter, J. C.; Black, D. C.

    1977-01-01

    In the search for intelligent signals of extraterrestrial origin, certain forms of signals merit immediate and special attention. Extremely narrowband signals of spectral width similar to our own television transmissions are most favored energetically and least likely to be confused with natural celestial emission. A search of selected stars has been initiated using observational and data processing techniques optimized for the detection of such signals. These techniques allow simultaneous observation of 10^5 to 10^6 channels within the observed spectral range. About two hundred nearby (within 80 LY) solar-type stars have been observed at frequencies near the main microwave transitions of the hydroxyl radical. In addition, several molecular (hydroxyl) masers and other non-thermal sources have been observed in this way in order to uncover any possible fine spectral structure of natural origin and to investigate the potential of such an instrument for radioastronomy.

  7. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  8. On-line concentration and determination of all-trans- and 13-cis- retinoic acids in rabbit serum by application of sweeping technique in micellar electrokinetic chromatography.

    PubMed

    Zhao, Yongxi; Kong, Yu; Wang, Bo; Wu, Yayan; Wu, Hong

    2007-03-30

    A simple and rapid micellar electrokinetic chromatography (MEKC) method with UV detection was developed for the simultaneous separation and determination of all-trans- and 13-cis-retinoic acids in rabbit serum using an on-line sweeping concentration technique. The serum sample was simply deproteinized and centrifuged. Various parameters affecting sample enrichment and separation were systematically investigated. Under optimal conditions, the analytes could be well separated within 17 min, and the relative standard deviations (RSD) of migration times and peak areas were less than 3.4%. Compared with the conventional MEKC injection method, 18- and 19-fold improvements in sensitivity were achieved for the two analytes, respectively. The proposed method has been successfully applied to the determination of all-trans- and 13-cis-retinoic acids in serum samples from rabbits and could be feasible for further pharmacokinetic studies of all-trans-retinoic acid.

  9. Multimodal optical setup based on spectrometer and cameras combination for biological tissue characterization with spatially modulated illumination

    NASA Astrophysics Data System (ADS)

    Baruch, Daniel; Abookasis, David

    2017-04-01

    The application of optical techniques as tools for biomedical research has generated substantial interest for the ability of such methodologies to simultaneously measure biochemical and morphological parameters of tissue. Ongoing optimization of optical techniques may introduce such tools as alternatives or complements to conventional methodologies. The common approach shared by current optical techniques lies in the independent acquisition of tissue's optical properties (i.e., absorption and reduced scattering coefficients) from reflected or transmitted light. Such optical parameters, in turn, provide detailed information regarding both the concentrations of clinically relevant chromophores and macroscopic structural variations in tissue. We couple a noncontact optical setup with a simple analysis algorithm to obtain the absorption and scattering coefficients of biological samples under test. Technically, a portable picoprojector projects serial sinusoidal patterns at low and high spatial frequencies, while a single spectrometer and two independent CCD cameras, each fitted with a different bandpass filter at a nonisosbestic or isosbestic wavelength, simultaneously acquire the reflected diffuse light. The two detection channels complement each other, together acquiring the optical properties of tissue at high spectral and spatial resolution. Experiments were performed on both tissue-mimicking phantoms and the hands of healthy human volunteers to quantify their optical properties as a proof of concept for the present technique. In a separate experiment, we derived the optical properties of the hand skin from the measured diffuse reflectance, based on a recently developed camera model. Additionally, oxygen saturation levels of tissue measured by the system were found to agree well with reference values.
Taken together, the present results demonstrate the potential of this integrated setup for diagnostic and research applications.

  10. An efficiency study of the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw

    1995-01-01

    The efficiency of the Simultaneous Analysis and Design (SAND) approach in the minimum weight optimization of structural systems subject to strength and displacement constraints as well as size side constraints is investigated. SAND allows the optimization to take place in one single operation, as opposed to the more traditional, sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization, retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present a possible cure for any inherent deficiencies.
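
    The SAND idea can be illustrated on a one-bar toy problem; this is a hedged sketch, not the paper's formulation. The cross-sectional area A and the displacement u are optimized together, with equilibrium K(A)u = f enforced as a penalized equality constraint instead of being solved in an inner analysis step as NAND would do. The load, modulus and stress limit are made-up values, and the brute-force grid scan stands in for a real optimizer.

```python
# Toy SAND demo: single bar of length L under axial load P. Minimize weight
# rho*A*L while treating BOTH the area A and the displacement u as design
# variables; equilibrium (E*A/L * u = P) enters as a penalty term.
P, L, E, rho = 1000.0, 1.0, 1.0e7, 1.0
SIGMA_MAX = 1000.0          # stress limit -> analytic optimum A* = P/SIGMA_MAX = 1.0

def cost(A, u, mu=1.0):
    weight = rho * A * L
    residual = E * A / L * u - P          # equilibrium residual K(A)*u - f
    penalty = mu * residual ** 2
    sigma = P / A                         # bar stress
    if sigma > SIGMA_MAX:                 # strength constraint as a large penalty
        penalty += 1e6 * (sigma - SIGMA_MAX) ** 2
    return weight + penalty

# One single optimization over the joint (A, u) space -- a grid scan here,
# so the sketch stays dependency-free.
best = min((cost(i / 100, j / 1_000_000), i / 100, j / 1_000_000)
           for i in range(50, 201) for j in range(20, 201))
_, best_A, best_u = best
print(best_A, best_u)   # optimum sits at A = 1.0 with u = P*L/(E*A) = 1e-4
```

    The analysis (finding u) and the design (finding A) happen in the same search, which is exactly what lets SAND avoid factoring the stiffness matrix in an inner loop.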

  11. Multigrid techniques for nonlinear eigenvalue problems: Solutions of a nonlinear Schroedinger eigenvalue problem in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1994-01-01

    This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EP) and emphasizes an MG algorithm for a nonlinear Schrodinger EP. The algorithm overcomes the difficulties typical of such problems by combining the following techniques: an MG projection coupled with backrotations for separation of solutions and treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques for treatment of the nonlinearity; and an MG simultaneous treatment of the eigenvectors at the same time as the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few or one MG simultaneous iteration and keep the solutions in the right neighborhood, where the algorithm converges fast.

  12. Simultaneous sampling technique for two spectral sources

    NASA Technical Reports Server (NTRS)

    Jarrett, Olin, Jr.

    1987-01-01

    A technique is described that uses a bundle of fiber optics to simultaneously sample a dye laser and a spectral lamp. By the use of a real-time display with this technique, the two signals can be superimposed, and the effect of any spectral adjustments can be immediately assessed. In NASA's CARS system used for combustion diagnostics, the dye laser mixes with a simultaneously pulsed Nd:YAG laser at 532 nm to probe the vibrational levels of nitrogen. An illustration of the oscilloscope display of the system is presented.

  13. Multi-objective shape optimization of runner blade for Kaplan turbine

    NASA Astrophysics Data System (ADS)

    Semenova, A.; Chirkov, D.; Lyutov, A.; Chemy, S.; Skorospelov, V.; Pylev, I.

    2014-03-01

    Automatic runner shape optimization based on extensive CFD analysis has proved to be a useful design tool in hydraulic turbomachinery. Previously the authors developed an efficient method for Francis runner optimization, which was successfully applied to the design of several runners with different specific speeds. In the present work this method is extended to the task of Kaplan runner optimization. Despite their relatively simpler blade shape, Kaplan turbines have several features that complicate the optimization problem. First, Kaplan turbines normally operate in a wide range of discharges, so CFD analysis of each variant of the runner should be carried out for several operation points. Next, due to the high specific speed, draft tube losses have a great impact on the overall turbine efficiency and thus should be accurately evaluated. Then, the flow in the blade tip and hub clearances significantly affects the velocity profile behind the runner and the draft tube behavior. All these features are accounted for in the present optimization technique. Parameterization of the runner blade surface using 24 geometrical parameters is described in detail. For each variant of runner geometry, steady state three-dimensional turbulent flow computations are carried out in a domain including the wicket gate, runner, draft tube, and blade tip and hub clearances. The objectives are maximization of efficiency at the best-efficiency and high-discharge operation points, with simultaneous minimization of the cavitation area on the suction side of the blade. A multiobjective genetic algorithm is used for the solution of the optimization problem, requiring the analysis of several thousand runner variants. The method is applied to optimization of the runner shape for several Kaplan turbines with different heads.

  14. Process-time Optimization of Vacuum Degassing Using a Genetic Alloy Design Approach

    PubMed Central

    Dilner, David; Lu, Qi; Mao, Huahai; Xu, Wei; van der Zwaag, Sybrand; Selleby, Malin

    2014-01-01

    This paper demonstrates the use of a new model consisting of a genetic algorithm in combination with thermodynamic calculations and analytical process models to minimize the processing time during a vacuum degassing treatment of liquid steel. The model sets multiple simultaneous targets for final S, N, O, Si and Al levels and uses the total slag mass, the slag composition, the steel composition and the start temperature as optimization variables. The predicted optimal conditions agree well with industrial practice. For those conditions leading to the shortest process time the target compositions for S, N and O are reached almost simultaneously. PMID:28788286
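
    The genetic-algorithm loop at the heart of such a model can be sketched as below. The quadratic `process_time` surrogate and its two normalized controls are hypothetical stand-ins for the paper's coupled thermodynamic and process models, chosen only to keep the sketch self-contained.

```python
import random

random.seed(3)

# Hypothetical surrogate for degassing process time as a function of two
# normalized controls (e.g. slag mass, start temperature); a quadratic bowl
# with its minimum (10.0) at (0.6, 0.3) stands in for the real process model.
def process_time(x):
    slag, temp = x
    return 10.0 + 40.0 * (slag - 0.6) ** 2 + 25.0 * (temp - 0.3) ** 2

def evolve(pop_size=40, gens=60, mut=0.1):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=process_time)            # rank by fitness (lower is better)
        elite = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]      # blend crossover
            child = [min(1.0, max(0.0, c + random.gauss(0, mut)))  # mutate, clip
                     for c in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=process_time)

best = evolve()
print(process_time(best))   # approaches the 10.0 floor at (0.6, 0.3)
```

    The paper's version evaluates each candidate with thermodynamic calculations and sets simultaneous composition targets; only the selection/crossover/mutation skeleton is shown here.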

  15. Developing single-laser sources for multimodal coherent anti-Stokes Raman scattering microscopy

    NASA Astrophysics Data System (ADS)

    Pegoraro, Adrian Frank

    Coherent anti-Stokes Raman scattering (CARS) microscopy has developed rapidly and is opening the door to new types of experiments. This work describes the development of new laser sources for CARS microscopy and their use for different applications. It is specifically focused on multimodal nonlinear optical microscopy—the simultaneous combination of different imaging techniques. This allows us to address a diverse range of applications, such as the study of biomaterials, fluid inclusions, atherosclerosis, hepatitis C infection in cells, and ice formation in cells. For these applications new laser sources are developed that allow for practical multimodal imaging. For example, it is shown that using a single Ti:sapphire oscillator with a photonic crystal fiber, it is possible to develop a versatile multimodal imaging system using optimally chirped laser pulses. This system can perform simultaneous two photon excited fluorescence, second harmonic generation, and CARS microscopy. The versatility of the system is further demonstrated by showing that it is possible to probe different Raman modes using CARS microscopy simply by changing a time delay between the excitation beams. Using optimally chirped pulses also enables further simplification of the laser system required by using a single fiber laser combined with nonlinear optical fibers to perform effective multimodal imaging. While these sources are useful for practical multimodal imaging, it is believed that for further improvements in CARS microscopy sensitivity, new excitation schemes are necessary. This has led to the design of a new, high power, extended cavity oscillator that should be capable of implementing new excitation schemes for CARS microscopy as well as other techniques. Our interest in multimodal imaging has led us to other areas of research as well. 
For example, a fiber-coupling scheme for signal collection in the forward direction is demonstrated that allows for fluorescence lifetime imaging without significant temporal distortion. Also highlighted is an imaging artifact that is unique to CARS microscopy that can alter image interpretation, especially when using multimodal imaging. By combining expertise in nonlinear optics, laser development, fiber optics, and microscopy, we have developed systems and techniques that will be of benefit for multimodal CARS microscopy.

  16. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The excimer laser drive-in and the high-temperature, short-time diffusion experiments were both more successful than diffusion at standard temperatures and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  17. Simultaneous optimization of photons and electrons for mixed beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Mueller, S.; Fix, M. K.; Joosten, A.; Henzen, D.; Frei, D.; Volken, W.; Kueng, R.; Aebersold, D. M.; Stampanoni, M. F. M.; Manser, P.

    2017-07-01

    The aim of this work is to develop and investigate an inverse treatment planning process (TPP) for mixed beam radiotherapy (MBRT) capable of performing simultaneous optimization of photon and electron apertures. A simulated annealing based direct aperture optimization (DAO) is implemented to perform simultaneous optimization of photon and electron apertures, both shaped with the photon multileaf collimator (pMLC). Validated beam models are used as input for Monte Carlo dose calculations. Consideration of photon pMLC transmission during DAO and a weight re-optimization of the apertures after deliverable dose calculation are utilized to efficiently reduce the differences between optimized and deliverable dose distributions. The TPP for MBRT is evaluated for an academic situation with a superficial PTV and an enlarged PTV at depth, a left chest wall case including the internal mammary chain, and a squamous cell carcinoma case. Deliverable dose distributions of MBRT plans are compared to those of modulated electron radiotherapy (MERT), photon IMRT and, if available, to those of clinical VMAT plans. The generated MBRT plans dosimetrically outperform the MERT, photon IMRT and VMAT plans for all investigated situations. For the clinical cases of the left chest wall and the squamous cell carcinoma, the MBRT plans cover the PTV similarly or more homogeneously than the VMAT plans, while OARs are spared considerably better, with average reductions of the mean dose to parallel OARs and D 2% to serial OARs by 54% and 26%, respectively. Moreover, the low dose bath, expressed as V 10% to normal tissue, is substantially reduced by up to 45% compared to the VMAT plans. A TPP for MBRT including simultaneous optimization is successfully implemented, and the dosimetric superiority of MBRT plans over MERT, photon IMRT and VMAT plans is demonstrated for academic and clinical situations, including superficial targets with and without a deep-seated part.

  18. Simultaneous optimization of photons and electrons for mixed beam radiotherapy.

    PubMed

    Mueller, S; Fix, M K; Joosten, A; Henzen, D; Frei, D; Volken, W; Kueng, R; Aebersold, D M; Stampanoni, M F M; Manser, P

    2017-06-26

    The aim of this work is to develop and investigate an inverse treatment planning process (TPP) for mixed beam radiotherapy (MBRT) capable of performing simultaneous optimization of photon and electron apertures. A simulated annealing based direct aperture optimization (DAO) is implemented to perform simultaneous optimization of photon and electron apertures, both shaped with the photon multileaf collimator (pMLC). Validated beam models are used as input for Monte Carlo dose calculations. Consideration of photon pMLC transmission during DAO and a weight re-optimization of the apertures after deliverable dose calculation are utilized to efficiently reduce the differences between optimized and deliverable dose distributions. The TPP for MBRT is evaluated for an academic situation with a superficial PTV and an enlarged PTV at depth, a left chest wall case including the internal mammary chain, and a squamous cell carcinoma case. Deliverable dose distributions of MBRT plans are compared to those of modulated electron radiotherapy (MERT), photon IMRT and, if available, to those of clinical VMAT plans. The generated MBRT plans dosimetrically outperform the MERT, photon IMRT and VMAT plans for all investigated situations. For the clinical cases of the left chest wall and the squamous cell carcinoma, the MBRT plans cover the PTV similarly or more homogeneously than the VMAT plans, while OARs are spared considerably better, with average reductions of the mean dose to parallel OARs and D 2% to serial OARs by 54% and 26%, respectively. Moreover, the low dose bath, expressed as V 10% to normal tissue, is substantially reduced by up to 45% compared to the VMAT plans. A TPP for MBRT including simultaneous optimization is successfully implemented, and the dosimetric superiority of MBRT plans over MERT, photon IMRT and VMAT plans is demonstrated for academic and clinical situations, including superficial targets with and without a deep-seated part.

  19. Simultaneous dispersive liquid-liquid microextraction derivatisation and gas chromatography mass spectrometry analysis of subcritical water extracts of sweet and sour cherry stems.

    PubMed

    Švarc-Gajić, Jaroslava; Clavijo, Sabrina; Suárez, Ruth; Cvetanović, Aleksandra; Cerdà, Víctor

    2018-03-01

    Cherry stems have been used in traditional medicine mostly for the treatment of urinary tract infections. Extraction with subcritical water differs substantially from conventional extraction techniques in its selectivity, efficiency and other aspects. The complexity of plant subcritical water extracts is due to the ability of subcritical water to extract chemical classes of different physico-chemical properties and polarities in a single run. In this paper, dispersive liquid-liquid microextraction (DLLME) with simultaneous derivatisation was optimised for the analysis of complex subcritical water extracts of cherry stems to allow simple and rapid preparation prior to gas chromatography-mass spectrometry (GC-MS). After defining optimal extracting and dispersive solvents, the optimised method was used for the identification of compounds belonging to different chemical classes in a single analytical run. The developed sample preparation protocol enabled simultaneous extraction and derivatisation, as well as convenient coupling with GC-MS analysis, reducing the analysis time and number of steps. The applied analytical protocol allowed simple and rapid chemical screening of subcritical water extracts and was used for the comparison of subcritical water extracts of sweet and sour cherry stems. Graphical abstract: DLLME-GC-MS analysis of cherry stem extracts obtained by subcritical water.

  20. Optimal stocking of species by diameter class for even-aged mid-to-late rotation Appalachian hardwoods

    Treesearch

    Joseph B. Roise; Joosang Chung; Chris B. LeDoux

    1988-01-01

    Nonlinear programming (NP) is applied to the problem of finding optimal thinning and harvest regimes simultaneously with species mix and diameter class distribution. Optimal results for given cases are reported. Results of the NP optimization are compared with prescriptions developed by Appalachian hardwood silviculturists.

  1. Media milling process optimization for manufacture of drug nanoparticles using design of experiments (DOE).

    PubMed

    Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj

    2015-01-01

    Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. The responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the software Design-Expert® to evaluate both the repeatability and reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). A desirability function was used to optimize the response variables, and the observed responses were in agreement with the predicted values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
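
    A face-centered CCD for k factors has a simple structure: 2^k factorial corner runs at coded levels ±1, 2k face-center runs, and one or more center runs; for k = 3 that gives 15 design points. A dependency-free sketch of the coded design matrix follows (mapping the coded levels onto the actual motor-speed, pump-speed and bead-volume ranges is a separate, instrument-specific step):

```python
from itertools import product

def face_centered_ccd(k, center_reps=1):
    """Coded (-1, 0, +1) design points for a k-factor face-centered CCD."""
    corners = [list(p) for p in product((-1, 1), repeat=k)]   # 2^k factorial runs
    axial = []
    for axis in range(k):                                     # 2k face-center runs
        for level in (-1, 1):
            pt = [0] * k
            pt[axis] = level
            axial.append(pt)
    centers = [[0] * k for _ in range(center_reps)]           # replicated center runs
    return corners + axial + centers

design = face_centered_ccd(3)
print(len(design))   # 8 + 6 + 1 = 15 runs
```

    Each run is then executed at the corresponding factor settings, and a quadratic response-surface model is fitted to the measured milling time, particle size and yield.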

  2. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.

  3. ERTS direct readout ground station study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A system configuration which provides for a wide variety of user requirements is described. Two distinct user types are considered and optimized configurations are provided. Independent satellite transmission systems allow simultaneous signal transmission to Regional Collection Centers via a high data rate channel and to local users who require near real time consumption of lower rate data. In order to maximize the ultimate utility of this study effort, a parametric system description is given such that in essence a shopping list is provided. To achieve these results, it was necessary to consider all technical disciplines associated with high resolution satellite imaging systems including signal processing, modulation and coding, recording, and display techniques. A total systems study was performed.

  4. A high performance hardware implementation image encryption with AES algorithm

    NASA Astrophysics Data System (ADS)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed, high-throughput algorithm for encrypting images. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline technique, a control unit based on logic gates, an optimal design of the multiplier blocks in the MixColumns phase, and simultaneous production of keys and rounds. Such a procedure makes AES suitable for fast image encryption. A 128-bit AES was implemented on an Altera FPGA, achieving a throughput of 6 Gbps at 471 MHz. The encryption time for a 32×32 test image is 1.15 ms.

  5. Preliminary experimental results from a MARS Micro-CT system.

    PubMed

    He, Peng; Yu, Hengyong; Thayer, Patrick; Jin, Xin; Xu, Qiong; Bennett, James; Tappenden, Rachael; Wei, Biao; Goldstein, Aaron; Renaud, Peter; Butler, Anthony; Butler, Phillip; Wang, Ge

    2012-01-01

    The Medipix All Resolution System (MARS) is a commercial spectral/multi-energy micro-CT scanner designed and assembled by MARS Bioimaging Ltd. in New Zealand. This system utilizes the state-of-the-art Medipix photon-counting, energy-discriminating detector technology developed by a collaboration at the European Organization for Nuclear Research (CERN). In this paper, we report our preliminary experimental results using this system, including geometrical alignment, photon energy characterization, protocol optimization, and spectral image reconstruction. We produced our scan datasets with a multi-material phantom, and then applied the ordered subset-simultaneous algebraic reconstruction technique (OS-SART) to reconstruct images in different energy ranges and principal component analysis (PCA) to evaluate spectral deviation among the energy ranges.
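
    The OS-SART update can be sketched on a tiny consistent system Ax = b; the 4×2 matrix, the two row subsets and the relaxation factor here are arbitrary illustrative choices, with a CT system matrix and projection data taking their place in practice.

```python
# Pure-Python sketch of OS-SART: cycle through ordered subsets of projection
# rows, back-projecting row-normalized residuals with SART column weighting.
A = [[1.0, 1.0], [1.0, 2.0], [2.0, 1.0], [1.0, 3.0]]
x_true = [1.0, 2.0]
b = [sum(a * t for a, t in zip(row, x_true)) for row in A]   # consistent data

subsets = [[0, 2], [1, 3]]     # ordered subsets of row (projection) indices
x = [0.0, 0.0]
lam = 1.0                      # relaxation factor, 0 < lam < 2

for _ in range(300):
    for sub in subsets:
        # column sums over the current subset (SART normalization)
        col = [sum(abs(A[i][j]) for i in sub) for j in range(len(x))]
        update = [0.0] * len(x)
        for i in sub:
            row_sum = sum(abs(a) for a in A[i])
            resid = b[i] - sum(a * xj for a, xj in zip(A[i], x))
            for j, a in enumerate(A[i]):
                update[j] += a * resid / row_sum
        x = [xj + lam * u / c for xj, u, c in zip(x, update, col)]

print(x)  # approaches [1.0, 2.0]
```

    Using a few subsets per sweep, as here, is what distinguishes OS-SART from plain SART and gives its faster early convergence on real CT data.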

  6. Least-Squares Self-Calibration of Imaging Array Data

    NASA Technical Reports Server (NTRS)

    Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.

    2004-01-01

    When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
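
    The core idea, fitting sky intensities and per-pixel gains/offsets to the same dithered data, can be sketched with alternating least squares on a toy model. The two-pixel geometry, dither pattern, and the choice to anchor pixel 0 (to remove the global gain/offset degeneracy) are illustrative assumptions, not the NICMOS/IRAC pipeline:

    ```python
    # Toy self-calibration by alternating least squares: solve simultaneously
    # for sky intensities and per-pixel gains/offsets from dithered frames.
    TRUE_SKY = [1.0, 2.0, 3.0]
    TRUE_G, TRUE_O = [1.0, 0.5], [0.0, -0.2]   # pixel 0 anchors the calibration
    DITHERS = [0, 1, 2]   # pixel p in frame with dither k sees sky index (p + k) % 3

    # simulated dithered frames: data[f][p]
    data = [[TRUE_G[p] * TRUE_SKY[(p + k) % 3] + TRUE_O[p] for p in range(2)]
            for k in DITHERS]

    g, o, sky = [1.0, 1.0], [0.0, 0.0], [0.0, 0.0, 0.0]
    for _ in range(500):
        # (1) fixed gains/offsets -> least-squares sky at each position
        for j in range(3):
            num = den = 0.0
            for f, k in enumerate(DITHERS):
                for p in range(2):
                    if (p + k) % 3 == j:
                        num += g[p] * (data[f][p] - o[p])
                        den += g[p] ** 2
            sky[j] = num / den
        # (2) fixed sky -> per-pixel linear fit d = g*s + o (pixel 0 stays anchored)
        for p in range(1, 2):
            s = [sky[(p + k) % 3] for k in DITHERS]
            d = [data[f][p] for f in range(3)]
            ms, md = sum(s) / 3, sum(d) / 3
            var = sum((si - ms) ** 2 for si in s)
            cov = sum((si - ms) * (di - md) for si, di in zip(s, d))
            g[p] = cov / var
            o[p] = md - g[p] * ms

    print([round(v, 3) for v in sky], [round(v, 3) for v in g])
    ```

    Because each pixel visits every sky position, the anchored solution is identifiable and the iteration recovers the true sky, gain, and offset, with formal errors obtainable from the final least-squares normal matrix.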

  7. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients to further improve system performance. First, we develop a parametric OFDM-STAP measurement model that accounts for the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of the output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the performance improvement achieved by the adaptive waveform design.

  8. Spatiotemporal radiotherapy planning using a global optimization approach

    NASA Astrophysics Data System (ADS)

    Adibi, Ali; Salari, Ehsan

    2018-02-01

    This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
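
    The BED concept underlying the comparison above can be illustrated for uniform (time-invariant) fractionation schedules using the standard linear-quadratic expression BED = n·d·(1 + d/(α/β)); EUBED extends this to spatiotemporally heterogeneous dose distributions. The schedules and α/β values below are textbook illustrations, not the paper's cases:

    ```python
    # Biologically effective dose (BED) for a uniform fractionation schedule,
    # BED = n * d * (1 + d / alpha_beta); standard linear-quadratic model.
    def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
        return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

    # Two schedules delivering the same 60 Gy physical dose:
    print(bed(30, 2.0, 10.0))  # 72.0  (tumor, alpha/beta = 10 Gy)
    print(bed(20, 3.0, 10.0))  # 78.0  -> hypofractionation raises tumor BED
    print(bed(30, 2.0, 3.0))   # 100.0 (late-responding tissue, alpha/beta = 3 Gy)
    print(bed(20, 3.0, 3.0))   # 120.0 -> but raises normal-tissue BED even more
    ```

    The example shows why altering the dose distribution over sessions, rather than only the fraction size, is attractive: it offers a way to raise target BED without the normal-tissue penalty visible here.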

  9. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1999-01-01

Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a high computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on the algebraic expressions used by designers a generation ago, have limited applicability to general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes that calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on the simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  10. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1998-01-01

Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a high computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on the algebraic expressions used by designers a generation ago, have limited applicability to general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes that calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on the simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  11. Tailoring vibration mode shapes using topology optimization and functionally graded material concepts

    NASA Astrophysics Data System (ADS)

    Montealegre Rubio, Wilfredo; Paulino, Glaucio H.; Nelli Silva, Emilio Carlos

    2011-02-01

Tailoring specified vibration modes is a requirement for designing piezoelectric devices aimed at dynamic-type applications. A technique for designing the shape of specified vibration modes is the topology optimization method (TOM), which finds an optimum material distribution inside a design domain to obtain a structure that vibrates according to specified eigenfrequencies and eigenmodes. Nevertheless, when the TOM is applied to dynamic problems, the well-known grayscale or intermediate-material problem arises, which can invalidate the post-processing of the optimal result. Thus, a more natural way of solving dynamic problems using TOM is to allow intermediate material values. This idea leads to the functionally graded material (FGM) concept. In fact, FGMs are materials whose properties and microstructure continuously change along a specific direction. Therefore, in this paper, an approach is presented for tailoring user-defined vibration modes, by applying the TOM and FGM concepts to design functionally graded piezoelectric transducers (FGPT) and non-piezoelectric structures (functionally graded structures—FGS) in order to achieve maximum and/or minimum vibration amplitudes at certain points of the structure, by simultaneously finding the topology and material gradation function. The optimization problem is solved by using sequential linear programming. Two-dimensional results are presented to illustrate the method.

  12. Simultaneous spectrophotometric determination of crystal violet and malachite green in water samples using partial least squares regression and central composite design after preconcentration by dispersive solid-phase extraction.

    PubMed

    Razi-Asrami, Mahboobeh; Ghasemi, Jahan B; Amiri, Nayereh; Sadeghi, Seyed Jamal

    2017-04-01

In this paper, a simple, fast, and inexpensive method is introduced for the simultaneous spectrophotometric determination of crystal violet (CV) and malachite green (MG) contents in aquatic samples using partial least squares regression (PLS) as a multivariate calibration technique after preconcentration by graphene oxide (GO). The method is based on the sorption and desorption of the analytes onto GO and direct determination by ultraviolet-visible spectrophotometry. GO was synthesized according to the Hummers method and characterized by FT-IR, SEM, and XRD. The factors affecting extraction efficiency, namely pH, extraction time, and the amount of adsorbent, were optimized using central composite design; their optimum values were 6, 15 min, and 12 mg, respectively. The maximum capacity of GO for the adsorption of CV and MG was 63.17 and 77.02 mg/g, respectively. The preconcentration factors and extraction recoveries were 19.6 and 98% for CV and 20 and 100% for MG, respectively. The LODs were 0.009 and 0.015 μg/mL, and the linear dynamic ranges 0.03-0.3 and 0.05-0.5 μg/mL, for CV and MG, respectively. The intra-day and inter-day relative standard deviations at a concentration level of 50 ng/mL were 1.99 and 0.58 for CV and 1.69 and 3.13 for MG, respectively. Finally, the proposed DSPE/PLS method was successfully applied to the simultaneous determination of trace amounts of CV and MG in real water samples.
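
    The principle of resolving two overlapping dyes from one mixture spectrum can be sketched with classical least squares under Beer-Lambert additivity. Note the deliberate simplification: this is ordinary least squares on known pure-component spectra, not the paper's PLS model, and the "spectra" are invented numbers, not CV/MG data:

    ```python
    # Simultaneous two-component determination from overlapping spectra via
    # classical least squares (a simpler stand-in for PLS, for illustration).
    # Absorbance at 4 wavelengths per unit concentration of each dye:
    K = [[0.9, 0.1],   # rows: wavelengths, cols: [CV, MG] (hypothetical values)
         [0.7, 0.3],
         [0.3, 0.7],
         [0.1, 0.9]]
    c_true = [0.20, 0.35]
    a = [sum(K[i][j] * c_true[j] for j in range(2)) for i in range(4)]  # mixture spectrum

    # Solve the 2x2 normal equations K^T K c = K^T a:
    KtK = [[sum(K[i][r] * K[i][s] for i in range(4)) for s in range(2)] for r in range(2)]
    Kta = [sum(K[i][r] * a[i] for i in range(4)) for r in range(2)]
    det = KtK[0][0] * KtK[1][1] - KtK[0][1] * KtK[1][0]
    c = [(KtK[1][1] * Kta[0] - KtK[0][1] * Kta[1]) / det,
         (KtK[0][0] * Kta[1] - KtK[0][1] * Kta[0]) / det]
    print([round(v, 3) for v in c])  # recovers [0.2, 0.35]
    ```

    PLS improves on this by building latent variables that tolerate unknown interferents and collinearity, which is why it is preferred for real water samples.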

  13. Simultaneous detection of genetically modified organisms by multiplex ligation-dependent genome amplification and capillary gel electrophoresis with laser-induced fluorescence.

    PubMed

    García-Cañas, Virginia; Mondello, Monica; Cifuentes, Alejandro

    2010-07-01

In this work, an innovative method for simultaneously analyzing multiple genetically modified organisms is described. The developed method combines multiplex ligation-dependent genome amplification (MLGA) with CGE and LIF detection using bare fused-silica capillaries. The MLGA process is based on oligonucleotide constructs, formed by a universal sequence (vector) and long specific oligonucleotides (selectors), that facilitate the circularization of specific DNA target regions. Subsequently, the circularized target sequences are simultaneously amplified with the same pair of primers and analyzed by CGE-LIF using a bare fused-silica capillary and a run electrolyte containing 2-hydroxyethyl cellulose, which acts as both sieving matrix and dynamic capillary coating. CGE-LIF is shown to be very useful and informative for optimizing MLGA parameters such as annealing temperature, number of ligation cycles, and selector probe concentration. We demonstrate the specificity of the method in detecting the presence of transgenic DNA in certified reference and raw commercial samples. The method is sensitive and allows the simultaneous detection in a single run of percentages of transgenic maize as low as 1% of GA21, 1% of MON863, and 1% of MON810 in maize samples, with signal-to-noise ratios for the corresponding DNA peaks of 15, 12, and 26, respectively. These results demonstrate, to our knowledge for the first time, the great potential of MLGA techniques for genetically modified organism analysis.

  14. Multidisciplinary aerospace design optimization: Survey of recent developments

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Haftka, Raphael T.

    1995-01-01

The increasing complexity of engineering systems has sparked increasing interest in multidisciplinary optimization (MDO). This paper presents a survey of recent publications in the aerospace field, where interest in MDO has been particularly intense. The two main challenges of MDO are computational expense and organizational complexity. Accordingly, the survey focuses on the various ways researchers deal with these challenges. The survey is organized by a breakdown of MDO into its conceptual components and includes sections on Mathematical Modeling, Design-Oriented Analysis, Approximation Concepts, Optimization Procedures, System Sensitivity, and Human Interface. With the authors' main expertise being in the structures area, the bulk of the references focus on the interaction of the structures discipline with other disciplines. In particular, two sections at the end focus on two such interactions that have recently been pursued with particular vigor: Simultaneous Optimization of Structures and Aerodynamics, and Simultaneous Optimization of Structures Combined with Active Control.

  15. From the physics of interacting polymers to optimizing routes on the London Underground

    PubMed Central

    Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael

    2013-01-01

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198
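
    The gap between individually optimized and jointly optimized routes can be illustrated on a two-flow toy network with a congestion-type edge cost (length × load²), echoing the paper's polymer-style interaction penalty. The network, costs, and numbers are invented for illustration:

    ```python
    # Why jointly optimized routes beat individually optimized ones under
    # congestion: toy network, edge cost = length * load**2 (illustrative).
    from itertools import product

    # Each of two flows chooses between a short shared edge and a private detour.
    paths = {
        "flow1": {"shared": [("shared", 1.0)], "detour": [("d1", 1.5)]},
        "flow2": {"shared": [("shared", 1.0)], "detour": [("d2", 1.5)]},
    }

    def total_cost(choice1, choice2):
        load, length = {}, {}
        for edge, l in paths["flow1"][choice1] + paths["flow2"][choice2]:
            load[edge] = load.get(edge, 0) + 1
            length[edge] = l
        return sum(length[e] * load[e] ** 2 for e in load)

    # Individually, each flow prefers the shorter shared edge (1.0 < 1.5):
    selfish = total_cost("shared", "shared")
    # Joint optimization considers all path choices simultaneously:
    joint = min(total_cost(c1, c2) for c1, c2 in product(["shared", "detour"], repeat=2))
    print(selfish, joint)  # 4.0 2.5 -> coordination avoids congestion
    ```

    The paper's message-passing algorithm achieves this kind of coordination scalably, without the exhaustive enumeration used in this two-flow sketch.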

  16. From the physics of interacting polymers to optimizing routes on the London Underground.

    PubMed

    Yeung, Chi Ho; Saad, David; Wong, K Y Michael

    2013-08-20

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise.

  17. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE PAGES

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    2018-01-28

This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach to a strategy using simultaneous price signals for energy, power, and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system in which the prices of both energy and ramping are significant. Optimal control functions are obtained in both the time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North American Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
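
    The flavor of the sub-hourly trajectory problem can be sketched as follows: when every feasible trajectory delivers the same energy per hour, the energy cost is fixed and only the ramping cost differentiates dispatch choices. The block targets, quadratic ramp penalty, and projected-gradient solver below are illustrative assumptions, not the paper's exact formulation:

    ```python
    # Sketch of sub-hourly dispatch when ramping is priced: minimize a quadratic
    # ramping cost subject to meeting each hour's average power. Energy cost is
    # identical for all feasible trajectories, so ramping alone matters here.
    TARGETS = [100.0, 200.0]      # hourly average MW (made-up)
    N = 4                         # 15-min intervals per hour

    def ramp_cost(e):
        return sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))

    def project(e):
        # enforce each hour's average by shifting the whole block
        out = e[:]
        for h, tgt in enumerate(TARGETS):
            block = out[h * N:(h + 1) * N]
            shift = tgt - sum(block) / N
            for t in range(N):
                out[h * N + t] += shift
        return out

    e = project([100.0] * N + [200.0] * N)    # start from the naive step dispatch
    naive = ramp_cost(e)
    for _ in range(2000):                     # projected gradient descent
        grad = [0.0] * len(e)
        for t in range(1, len(e)):
            d = e[t] - e[t - 1]
            grad[t] += 2 * d
            grad[t - 1] -= 2 * d
        e = project([e[t] - 0.05 * grad[t] for t in range(len(e))])
    print(round(naive, 1), round(ramp_cost(e), 1))  # smoothing cuts the ramping cost
    ```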

  18. Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.

    2001-01-01

The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis, and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing with single-discipline analysis, the method, as implemented here, may not show a significant reduction in computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.

  19. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach to a strategy using simultaneous price signals for energy, power, and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system in which the prices of both energy and ramping are significant. Optimal control functions are obtained in both the time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North American Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.

  20. Design and optimization of a volume-phase holographic grating for simultaneous use with red, green, and blue light using unpolarized light.

    PubMed

    Mahamat, Adoum H; Narducci, Frank A; Schwiegerling, James

    2016-03-01

    Volume-phase holographic (VPH) gratings have been designed for use in many areas of science and technology, such as optical communication, optical imaging, and astronomy. In this paper, the design of a volume-phase holographic grating, simultaneously optimized to operate in the red, green, and blue wavelengths, is presented along with a study of its fabrication tolerances. The grating is optimized to produce 98% efficiency at λ=532  nm and at least 75% efficiency in the region between 400 and 700 nm, when the incident light is unpolarized. The optimization is done for recording in dichromated gelatin with a thickness of 12 μm, an average refractive index of 1.5, and a refractive index modulation of 0.022.
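
    The abstract's parameters can be explored with Kogelnik's coupled-wave estimate for an unslanted volume transmission grating at Bragg incidence, η = sin²(π·Δn·d / (λ·cosθ_B)). The sketch below assumes near-normal Bragg angle (cosθ ≈ 1) and a single polarization, whereas the actual design optimizes over unpolarized light, so the numbers are indicative only:

    ```python
    # Kogelnik first-order diffraction efficiency for an unslanted volume
    # transmission grating at Bragg incidence, using the abstract's recording
    # parameters (d = 12 um, dn = 0.022). cos(theta) ~ 1 is a simplification.
    import math

    def kogelnik_efficiency(wavelength_um, thickness_um=12.0, dn=0.022, cos_theta=1.0):
        return math.sin(math.pi * dn * thickness_um / (wavelength_um * cos_theta)) ** 2

    for lam in (0.45, 0.532, 0.65):
        print(f"{lam * 1000:.0f} nm: {kogelnik_efficiency(lam):.2f}")
    ```

    With these parameters the estimate stays high across the visible band, qualitatively consistent with the reported 98% peak near 532 nm and at least 75% from 400 to 700 nm.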

  1. Truss topology optimization with simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Sankaranarayanan, S.; Haftka, Raphael T.; Kapania, Rakesh K.

    1992-01-01

    Strategies for topology optimization of trusses for minimum weight subject to stress and displacement constraints by Simultaneous Analysis and Design (SAND) are considered. The ground structure approach is used. A penalty function formulation of SAND is compared with an augmented Lagrangian formulation. The efficiency of SAND in handling combinations of general constraints is tested. A strategy for obtaining an optimal topology by minimizing the compliance of the truss is compared with a direct weight minimization solution to satisfy stress and displacement constraints. It is shown that for some problems, starting from the ground structure and using SAND is better than starting from a minimum compliance topology design and optimizing only the cross sections for minimum weight under stress and displacement constraints. A member elimination strategy to save CPU time is discussed.

  2. Disparities in urban/rural environmental quality

    EPA Science Inventory

    Individuals experience simultaneous exposure to many pollutants and social factors, which cluster to affect human health outcomes. Because the optimal approach to combining these factors is unknown, we developed a method to model simultaneous exposure using criteria air pollutant...

  3. Traveling-Wave Tube Cold-Test Circuit Optimization Using CST MICROWAVE STUDIO

    NASA Technical Reports Server (NTRS)

    Chevalier, Christine T.; Kory, Carol L.; Wilson, Jeffrey D.; Wintucky, Edwin G.; Dayton, James A., Jr.

    2003-01-01

    The internal optimizer of CST MICROWAVE STUDIO (MWS) was used along with an application-specific Visual Basic for Applications (VBA) script to develop a method to optimize traveling-wave tube (TWT) cold-test circuit performance. The optimization procedure allows simultaneous optimization of circuit specifications including on-axis interaction impedance, bandwidth or geometric limitations. The application of Microwave Studio to TWT cold-test circuit optimization is described.

  4. Aerodynamic optimization by simultaneously updating flow variables and design parameters with application to advanced propeller designs

    NASA Technical Reports Server (NTRS)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.
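
    The simultaneous-update idea can be sketched on a scalar model problem: instead of fully converging the nonlinear "flow" solve for every design iterate, advance the flow variable and the design parameter together, one sweep each per iteration. The model equation (u = x + 0.2·sin u) and objective ((u − 1)²) are stand-ins, not the propeller/Euler problem:

    ```python
    # Simultaneous update of a flow variable u and design parameter x.
    # Flow equation: u = x + 0.2*sin(u); objective: (u - 1)^2; illustrative only.
    import math

    x, u = 0.0, 0.0
    for _ in range(500):
        u = x + 0.2 * math.sin(u)                  # one fixed-point sweep of the flow solve
        du_dx = 1.0 / (1.0 - 0.2 * math.cos(u))    # implicit sensitivity of the state
        x -= 0.1 * 2 * (u - 1.0) * du_dx           # one gradient step on the design

    print(round(u, 4), round(x, 4))  # u -> 1.0, x -> 1 - 0.2*sin(1) ~ 0.8317
    ```

    Both iterates converge together, without ever solving the flow equation to full convergence at a fixed design, which is the source of the scheme's efficiency.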

  5. Development of a PC interface board for true color control using an Ar Kr white-light laser

    NASA Astrophysics Data System (ADS)

    Shin, Yongjin; Park, Sohee; Kim, Youngseop; Lee, Jangwoen

    2006-06-01

    For the optimal laser display, it is crucial to select and control color signals of proper wavelengths in order to construct a wide range of laser display colors. In traditional laser display schemes, color control has been achieved through the mechanical manipulation of red, green, and blue (RGB) laser beam intensities using color filters. To maximize the effect of a laser display and its color contents, it is desirable to generate laser beams with wide selection of wavelengths. We present an innovative laser display control technique, which generates six channel laser wavelengths from a white-light laser using a RF-controlled polychromatic acousto optical modulator (PCAOM). This technique enables us not only to control the intensity of individual channels, but also to achieve true color signals for the laser beam display including RGB, yellow, cyan, and violet (YCV), and other intermediate colors. For the optimal control of the PCAOM and galvano-mirror, we designed and fabricated a PC interface board. Using this PC control, we separated the white-light from an Ar-Kr mixed gas laser into various wavelengths and reconstructed them into different color schemes. Also we demonstrated the effective control and simultaneous display of reconstructed true color laser beams on a flat screen.

  6. Adaptive Batch Mode Active Learning.

    PubMed

    Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman

    2015-08-01

    Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
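
    The batch-selection principle, scoring candidates jointly on uncertainty and diversity so that near-duplicates are not selected together, can be sketched with a greedy marginal-gain rule. The scoring function, weights, and data are invented; the paper's frameworks optimize batch size and selection jointly via gradient-descent and submodular techniques rather than this fixed heuristic:

    ```python
    # Greedy batch-mode selection sketch: uncertainty + capped diversity bonus.
    points = {           # id: (uncertainty score, 1-D feature used for distances)
        "a": (0.90, 0.0),
        "b": (0.88, 0.1),   # nearly a duplicate of "a"
        "c": (0.70, 5.0),
        "d": (0.40, 9.0),
    }

    def marginal_gain(cand, batch, w=0.5, cap=2.0):
        unc, x = points[cand]
        diversity = min((abs(x - points[b][1]) for b in batch), default=cap)
        return unc + w * min(diversity, cap)

    batch = []
    for _ in range(2):          # greedily select a batch of 2
        best = max((p for p in points if p not in batch),
                   key=lambda p: marginal_gain(p, batch))
        batch.append(best)
    print(batch)  # ['a', 'c']: the near-duplicate 'b' loses to the more diverse 'c'
    ```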

  7. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.

    PubMed

    Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun

    2018-06-01

Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by the inevitable saccades and the exposure time required to maintain a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We present one of the first approaches aimed specifically at deblurring sequential MSI images; it is distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, accounting for the different wavelengths used to capture the different images in the sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
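
    Mutual information is a natural similarity measure here because neighboring MSI frames share structure even though their absolute gray levels differ across wavelengths. A minimal histogram-based sketch on toy four-pixel "images" (not the paper's estimator):

    ```python
    # Mutual information between two small images via joint histograms; high MI
    # indicates shared structure even when absolute intensities differ.
    import math
    from collections import Counter

    def mutual_information(img1, img2):
        pairs = list(zip(img1, img2))
        n = len(pairs)
        pj = Counter(pairs)                 # joint distribution of intensity pairs
        p1, p2 = Counter(img1), Counter(img2)
        return sum((c / n) * math.log((c / n) / ((p1[a] / n) * (p2[b] / n)))
                   for (a, b), c in pj.items())

    a = [0, 0, 1, 1]
    b = [5, 5, 9, 9]       # same structure as a, different gray levels
    c = [0, 1, 0, 1]       # unrelated structure
    print(round(mutual_information(a, b), 3), round(mutual_information(a, c), 3))
    ```

    The first pair scores log 2 (fully dependent) despite having no gray levels in common, while the structurally unrelated pair scores zero, which is exactly the wavelength-invariance property the deblurring objective exploits.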

  8. Performance of coincidence-based PSD on LiF/ZnS Detectors for Multiplicity Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Sean M.; Stave, Sean C.; Lintereur, Azaree

Mass accountancy measurement is a nuclear nonproliferation application that utilizes coincidence and multiplicity counters to verify special nuclear material declarations. With a well-designed and efficient detector system, several relevant parameters of the material can be verified simultaneously. 6LiF/ZnS scintillating sheets may be used for this purpose because systems designed with this material combine high efficiency with short die-away times, but they involve choices of detector geometry and exact material composition (e.g., the addition of Ni quenching) that must be optimized for the application. Multiplicity counting for verification of declared nuclear fuel mass involves neutron detection in conditions where several neutrons arrive in a short time window, with confounding gamma rays. This paper considers coincidence-based pulse-shape discrimination (PSD) techniques developed to work under conditions of high pileup, and the performance of these algorithms with different detection materials. Simulated and real data from modern LiF/ZnS scintillator systems are evaluated with these techniques, and the relationships between performance under pileup and material characteristics (e.g., neutron peak width and total light collection efficiency) are determined, to allow for an optimal choice of detector and material.

  9. The Taguchi Method Application to Improve the Quality of a Sustainable Process

    NASA Astrophysics Data System (ADS)

    Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.

    2018-06-01

The Taguchi method has long been used to improve the quality of the processes and products under analysis. This research addresses an unusual situation, namely the modeling of certain technical parameters in a process intended to be sustainable, improving process quality and ensuring quality through an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of the interaction between the principles of agricultural sustainability and the application of the Taguchi method. The experimental method used in this practical study combines engineering techniques with experimental statistical modeling to achieve rapid improvement of quality costs, in effect seeking optimization of existing processes and their main technical parameters. The paper is a purely technical study that promotes a technical experiment using the Taguchi method, considered effective because it allows rapid achievement of 70 to 90% of the desired optimization of the technical parameters. The missing 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered most influential. Applying the Taguchi method allowed the simultaneous study, in the same experiment, of the influence factors considered most important in different combinations and, at the same time, the determination of each factor's contribution.
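
    The mechanics of a Taguchi analysis, an orthogonal array of runs, a signal-to-noise ratio per run, and main effects per factor, can be sketched on an L4(2³) array. The response values below are invented for illustration and the larger-the-better S/N form is one of several standard choices:

    ```python
    # Taguchi-style analysis sketch: L4(2^3) orthogonal array, larger-the-better
    # signal-to-noise ratios, and main-effect estimates per factor.
    import math

    L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # factor levels (A, B, C)
    y = [(20.0, 22.0), (24.0, 25.0), (31.0, 30.0), (36.0, 35.0)]  # replicate yields (made-up)

    def sn_larger_better(values):
        # larger-the-better S/N ratio: -10 * log10(mean(1 / y^2))
        return -10 * math.log10(sum(1 / v ** 2 for v in values) / len(values))

    sn = [sn_larger_better(v) for v in y]

    effects = {}
    for f, name in enumerate("ABC"):
        lvl = {0: [], 1: []}
        for run, levels in enumerate(L4):
            lvl[levels[f]].append(sn[run])
        effects[name] = sum(lvl[1]) / 2 - sum(lvl[0]) / 2   # main effect on S/N
        print(name, round(effects[name], 2))
    ```

    With only four runs, the array separates the contribution of each of the three factors, which is how the method studies the most important influence factors simultaneously in one experiment.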

  10. Defect Engineering in SrI2:Eu2+ Single Crystal Scintillators

    DOE PAGES

    Wu, Yuntao; Boatner, Lynn A.; Lindsey, Adam C.; ...

    2015-06-23

    Eu2+-activated strontium iodide is an excellent single crystal scintillator used for gamma-ray detection, and significant effort is currently focused on the development of large-scale crystal growth techniques. A new approach of molten-salt pumping, or so-called melt aging, was recently applied to optimize the crystal quality and scintillation performance. Nevertheless, a detailed understanding of the underlying mechanism of this technique is still lacking. The main purpose of this paper is to conduct an in-depth study of the interplay between microstructure, trap centers and scintillation efficiency after melt aging treatment. Three SrI2:2 mol% Eu2+ single crystals with 16 mm diameter were grown using the Bridgman method under identical growth conditions with the exception of the melt aging time (0, 24 and 72 hours). Using energy-dispersive X-ray spectroscopy, it is found that the matrix composition of the finished crystal after melt aging treatment approaches the stoichiometric composition. The mechanism responsible for the formation of secondary phase inclusions in melt-aged SrI2:Eu2+ is discussed. Simultaneous improvement in light yield, energy resolution, scintillation decay-time and afterglow is achieved in melt-aged SrI2:Eu2+. The correlation between performance improvement and defect structure is addressed. The results of this paper lead to a better understanding of the effects of defect engineering in the control and optimization of metal halide scintillators using the melt aging technique.

  11. 1H MAS NMR (magic-angle spinning nuclear magnetic resonance) techniques for the quantitative determination of hydrogen types in solid catalysts and supports.

    PubMed

    Kennedy, Gordon J; Afeworki, Mobae; Calabro, David C; Chase, Clarence E; Smiley, Randolph J

    2004-06-01

    Distinct hydrogen species are present in important inorganic solids such as zeolites, silicoaluminophosphates (SAPOs), mesoporous materials, amorphous silicas, and aluminas. These H species include hydrogens associated with acidic sites such as Al(OH)Si, non-framework aluminum sites, silanols, and surface functionalities. Direct and quantitative methodology to identify, measure, and monitor these hydrogen species is key to monitoring catalyst activity, optimizing synthesis conditions, tracking post-synthesis structural modifications, and preparing novel catalytic materials. Many workers have developed techniques to address these issues, including 1H MAS NMR (magic-angle spinning nuclear magnetic resonance). 1H MAS NMR offers many potential advantages over other techniques, but care is needed in recognizing experimental limitations and in developing sample handling and NMR methodology to obtain quantitatively reliable data. A simplified approach is described that permits vacuum dehydration of multiple samples simultaneously and directly in the MAS rotor without the need for epoxy, flame sealing, or extensive glovebox use. We have found that careful optimization of important NMR conditions, such as magnetic field homogeneity and the magic-angle setting, is necessary to acquire quantitative, high-resolution spectra that accurately measure the concentrations of the different hydrogen species present. Details of this 1H MAS NMR methodology, with representative applications to zeolites, SAPOs, M41S, and silicas as a function of synthesis conditions and post-synthesis treatments (i.e., steaming, thermal dehydroxylation, and functionalization), are presented.

  12. Grating-based tomography applications in biomedical engineering

    NASA Astrophysics Data System (ADS)

    Schulz, Georg; Thalmann, Peter; Khimchenko, Anna; Müller, Bert

    2017-10-01

    For the investigation of soft tissues, or of tissues consisting of soft and hard components, on the microscopic level, hard X-ray phase tomography has become one of the most suitable imaging techniques. Among phase contrast methods, grating interferometry has the advantages of higher sensitivity than inline methods and of quantitative results. One disadvantage of the conventional double-grating setup (XDGI) compared to inline methods is the limited spatial resolution. This limitation can be overcome by removing the analyser grating, resulting in a single-grating setup (XSGI). In order to verify the performance of XSGI concerning contrast and spatial resolution, a quantitative comparison of XSGI and XDGI tomograms of a human nerve was performed. Both techniques provide sufficient contrast to allow for the distinction of tissue types. The spatial resolution of the two-fold binned XSGI data set is improved by a factor of two in comparison to XDGI, which underlines its performance in tomography of soft tissues. Another application of grating-based X-ray phase tomography is the simultaneous visualization of the soft and hard tissues of a plaque-containing coronary artery. The simultaneous visualization of both tissues is important for the segmentation of the lumen. The segmented data can be used for flow simulations to obtain information about the three-dimensional wall shear stress distribution needed for the optimization of mechano-sensitive nanocontainers used for drug delivery.

  13. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    NASA Astrophysics Data System (ADS)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, a technique known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scans are unavoidably distorted to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects, such as the walking pedestrians common in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
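
    The map learning strategy described above (raising or lowering the probability of each grid cell after each scan match) is commonly implemented as a clamped log-odds update. The sketch below is a minimal, hypothetical version of that idea; the increment values and clamping bounds are illustrative assumptions, not parameters from the paper.

```python
import math

L_OCC, L_FREE = 0.85, -0.4   # hypothetical log-odds increments
L_MIN, L_MAX = -4.0, 4.0     # clamping keeps cells able to recover

def update_cell(logodds, hit):
    # Raise the log-odds of a cell a beam endpoint hit, lower the
    # log-odds of cells the beam passed through; clamping bounds how
    # long a transient object (e.g. a pedestrian) can dominate a cell.
    l = logodds + (L_OCC if hit else L_FREE)
    return max(L_MIN, min(L_MAX, l))

def prob(logodds):
    # Convert log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

l = 0.0                      # unknown cell: p = 0.5
for _ in range(3):           # a pedestrian briefly occupies the cell
    l = update_cell(l, True)
for _ in range(10):          # later scans see through the cell again
    l = update_cell(l, False)
print("occupancy probability:", round(prob(l), 3))
```

    Working in log-odds makes each per-scan update a single addition, which is why occupancy-grid implementations favor it over multiplying probabilities.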

  14. A comprehensive formulation for volumetric modulated arc therapy planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dan; Lyu, Qihui; Ruan, Dan

    2016-07-15

    Purpose: Volumetric modulated arc therapy (VMAT) is a widely employed radiation therapy technique, showing comparable dosimetry to static beam intensity modulated radiation therapy (IMRT) with reduced monitor units and treatment time. However, current VMAT optimization relies on various greedy heuristics for an empirical solution, which jeopardizes plan consistency and quality. The authors introduce a novel direct aperture optimization method for VMAT to overcome these limitations. Methods: The comprehensive VMAT (comVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term to penalize the difference between the optimized dose and the prescribed dose, as well as an anisotropic total variation term to promote piecewise continuity in the fluence maps, preparing it for direct aperture optimization. A level set function was used to describe the aperture shapes, and the difference between aperture shapes at adjacent angles was penalized to control MLC motion range. A proximal-class optimization solver was adopted to solve the large scale optimization problem, and an alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc comVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme case, a lung (LNG) case, and two head and neck cases, one with three PTVs (H&N-3PTV) and one with four PTVs (H&N-4PTV), to test the efficacy. The plans were compared against the clinical VMAT (clnVMAT) plans utilizing two overlapping coplanar arcs for treatment. Results: The optimization of the comVMAT plans converged within 600 iterations of the block minimization algorithm. comVMAT plans were able to consistently reduce the dose to all organs-at-risk (OARs) compared to the clnVMAT plans. On average, comVMAT plans reduced the max and mean OAR dose by 6.59% and 7.45%, respectively, of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N-3PTV case. PTV coverages measured by D95, D98, and D99 were within 0.25% of the prescription dose. By comprehensively optimizing all beams, the comVMAT optimizer gained the freedom to allow selected beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel nongreedy VMAT approach simultaneously optimizes all beams in an arc and then directly generates deliverable apertures. The single arc VMAT approach thus fully utilizes the digital Linac's capability in dose rate and gantry rotation speed modulation. In practice, the new single-arc VMAT algorithm generates plans superior to existing VMAT algorithms utilizing two arcs.

  15. Beam orientation optimization for intensity-modulated radiation therapy using mixed integer programming

    NASA Astrophysics Data System (ADS)

    Yang, Ruijie; Dai, Jianrong; Yang, Yong; Hu, Yimin

    2006-08-01

    The purpose of this study is to extend an algorithm proposed for beam orientation optimization in classical conformal radiotherapy to intensity-modulated radiation therapy (IMRT) and to evaluate the algorithm's performance in IMRT scenarios. In addition, the effect of the candidate pool of beam orientations, in terms of beam orientation resolution and starting orientation, on the optimized beam configuration, plan quality and optimization time is also explored. The algorithm is based on the technique of mixed integer linear programming, in which binary and positive float variables are employed to represent candidates for beam orientation and beamlet weights in beam intensity maps. Both beam orientations and beam intensity maps are simultaneously optimized in the algorithm with a deterministic method. Several different clinical cases were used to test the algorithm, and the results show that both target coverage and critical structure sparing were significantly improved for the plans with optimized beam orientations compared to those with equi-spaced beam orientations. The calculation time was less than an hour for the cases with 36 binary variables on a PC with a Pentium IV 2.66 GHz processor. It is also found that decreasing the beam orientation resolution to 10° greatly reduced the size of the candidate pool of beam orientations without significant influence on the optimized beam configuration and plan quality, while the choice of starting orientation had a large influence. Our study demonstrates that the algorithm can be applied to IMRT scenarios, and that better beam orientation configurations can be obtained with it. Furthermore, optimization efficiency can be greatly increased through proper selection of the beam orientation resolution and starting beam orientation while preserving the quality of the optimized beam configurations and plans.
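
    The structure of such a mixed integer formulation (binary variables selecting beam orientations, continuous variables for beamlet weights) can be illustrated on a toy problem. In place of a MILP solver, the sketch below enumerates the binary selections exhaustively and fits the continuous weights by least squares for each selection; the dose-deposition matrix, prescription values, and nonnegativity handling are all made-up simplifications.

```python
import itertools
import numpy as np

# Toy problem: 6 candidate orientations, at most 3 may be switched on
# (the binary part); per-beam weights are continuous (the float part).
rng = np.random.default_rng(0)
D = rng.uniform(0.1, 1.0, size=(5, 6))      # made-up dose-deposition matrix
d_rx = np.array([1.0, 1.0, 1.0, 0.2, 0.2])  # prescription: target + OAR voxels

def score(selection):
    # For a fixed set of "on" beams, fit the beam weights by least
    # squares and return the squared dose misfit (a crude stand-in
    # for the solver's inner continuous optimization).
    A = D[:, list(selection)]
    w, *_ = np.linalg.lstsq(A, d_rx, rcond=None)
    w = np.clip(w, 0.0, None)               # crude nonnegativity
    return float(np.sum((A @ w - d_rx) ** 2)), w

# Exhaustive search over the binary variables, in place of a MILP solver.
best = min(
    (score(s) + (s,) for k in (1, 2, 3)
     for s in itertools.combinations(range(6), k)),
    key=lambda t: t[0],
)
print("chosen orientations:", best[2], "misfit:", round(best[0], 4))
```

    Exhaustive enumeration only works for tiny candidate pools; the point of the MILP formulation in the paper is precisely that a deterministic solver handles both variable types jointly at realistic problem sizes.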

  16. Enzymatic electrochemical detection coupled to multivariate calibration for the determination of phenolic compounds in environmental samples.

    PubMed

    Hernandez, Silvia R; Kergaravat, Silvina V; Pividori, Maria Isabel

    2013-03-15

    An approach based on the electrochemical detection of the horseradish peroxidase enzymatic reaction by means of square wave voltammetry was developed for the determination of phenolic compounds in environmental samples. First, a systematic optimization procedure for three factors involved in the enzymatic reaction was carried out using response surface methodology through a central composite design. Second, the enzymatic electrochemical detection coupled with a multivariate calibration method based on the partial least-squares technique was optimized for the determination of a mixture of five phenolic compounds, i.e. phenol, p-aminophenol, p-chlorophenol, hydroquinone and pyrocatechol. The calibration and validation sets were built and assessed. In the calibration model, the LODs for the phenolic compounds ranged from 0.6 to 1.4 × 10⁻⁶ mol L⁻¹. Recoveries for prediction samples were higher than 85%. These compounds were analyzed simultaneously in spiked samples and in water samples collected close to tanneries and landfills. Published by Elsevier B.V.

  17. Digital signal processing the Tevatron BPM signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cancelo, G.; James, E.; Wolbers, S.

    2005-05-01

    The Beam Position Monitor (TeV BPM) readout system at Fermilab's Tevatron has been updated and is currently being commissioned. The new BPMs use new analog and digital hardware to achieve better beam position measurement resolution. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton measurements. The signals provided by the two ends of the BPM pickups are processed by analog band-pass filters and sampled by 14-bit ADCs at 74.3 MHz. A crucial part of this work has been the design of digital filters that process the signal. This paper describes the digital processing and estimation techniques used to optimize the beam position measurement. The BPM electronics must operate in narrow-band and wide-band modes to enable measurements of closed-orbit and turn-by-turn positions. The filtering and timing conditions of the signals are tuned accordingly for the operational modes. The analysis and the optimized result for each mode are presented.

  18. Concurrent design of composite materials and structures considering thermal conductivity constraints

    NASA Astrophysics Data System (ADS)

    Jia, J.; Cheng, W.; Long, K.

    2017-08-01

    This article introduces thermal conductivity constraints into concurrent design. The influence of thermal conductivity on macrostructure and orthotropic composite material is extensively investigated using the minimum mean compliance as the objective function. To simultaneously control the amounts of different phase materials, a given mass fraction is applied in the optimization algorithm. Two phase materials are assumed to compete with each other to be distributed during the process of maximizing stiffness and thermal conductivity when the mass fraction constraint is small, where phase 1 has superior stiffness and thermal conductivity whereas phase 2 has a superior ratio of stiffness to density. The effective properties of the material microstructure are computed by a numerical homogenization technique, in which the effective elasticity matrix is applied to macrostructural analyses and the effective thermal conductivity matrix is applied to the thermal conductivity constraint. To validate the effectiveness of the proposed optimization algorithm, several three-dimensional illustrative examples are provided and the features under different boundary conditions are analysed.

  19. PET AND SPECT STUDIES IN CHILDREN WITH HEMISPHERIC LOW-GRADE GLIOMAS

    PubMed Central

    Juhász, Csaba; Bosnyák, Edit

    2016-01-01

    Molecular imaging is playing an increasing role in the pre-treatment evaluation of low-grade gliomas. While glucose positron emission tomography (PET) can be helpful to differentiate low-grade from high-grade tumors, PET imaging with amino acid radiotracers has several advantages, such as better differentiation between tumors and non-tumorous lesions, optimized biopsy targeting and improved detection of tumor recurrence. This review provides a brief overview of single photon emission computed tomography (SPECT) studies followed by a more detailed review of clinical applications of glucose and amino acid PET imaging in low-grade hemispheric gliomas. We discuss key differences in the performance of the most commonly utilized PET radiotracers and highlight the advantage of PET/MRI fusion to obtain optimal information about tumor extent, heterogeneity and metabolism. Recent data also suggest that simultaneous acquisition of PET/MR images and the combination of advanced MRI techniques with quantitative PET can further improve the pre- and post-treatment evaluation of pediatric brain tumors. PMID:27659825

  20. Optimized RNA ISH, RNA FISH and protein-RNA double labeling (IF/FISH) in Drosophila ovaries

    PubMed Central

    Zimmerman, Sandra G; Peters, Nathaniel C; Altaras, Ariel E; Berg, Celeste A

    2014-01-01

    In situ hybridization (ISH) is a powerful technique for detecting nucleic acids in cells and tissues. Here we describe three ISH procedures that are optimized for Drosophila ovaries: whole-mount, digoxigenin-labeled RNA ISH; RNA fluorescent ISH (FISH); and protein immunofluorescence (IF)–RNA FISH double labeling (IF/FISH). Each procedure balances conflicting requirements for permeabilization, fixation and preservation of antigenicity to detect RNA and protein expression with high resolution and sensitivity. The ISH protocol uses alkaline phosphatase–conjugated digoxigenin antibodies followed by a color reaction, whereas FISH detection involves tyramide signal amplification (TSA). To simultaneously preserve antigens for protein detection and enable RNA probe penetration for IF/FISH, we perform IF before FISH and use xylenes and detergents to permeabilize the tissue rather than proteinase K, which can damage the antigens. ISH and FISH take 3 d to perform, whereas IF/FISH takes 5 d. Probe generation takes 1 or 2 d to perform. PMID:24113787

  1. Voltage Controlled Hot Carrier Injection Enables Ohmic Contacts Using Au Island Metal Films on Ge.

    PubMed

    Ganti, Srinivas; King, Peter J; Arac, Erhan; Dawson, Karl; Heikkilä, Mikko J; Quilter, John H; Murdoch, Billy; Cumpson, Peter; O'Neill, Anthony

    2017-08-23

    We introduce a new approach to creating low-resistance metal-semiconductor ohmic contacts, illustrated using high-conductivity Au island metal films (IMFs) on Ge, with hot carrier injection initiated at low applied voltage. The same metallization process simultaneously allows ohmic contact to n-Ge and p-Ge, because hot carriers circumvent the Schottky barrier formed at metal/n-Ge interfaces. A 2.5× improvement in contact resistivity is reported over previous techniques for achieving ohmic contact to both n- and p-type semiconductors. Ohmic contacts at 4.2 K confirm nonequilibrium current transport. Self-assembled Au IMFs are strongly orientated to Ge by annealing near the Au/Ge eutectic temperature. Au IMF nanostructures form provided the Au layer is below a critical thickness. We anticipate that optimized IMF contacts may have applicability to many material systems. Optimizing this new paradigm for metal-semiconductor contacts offers the prospect of improved nanoelectronic systems and the study of voltage-controlled hot holes and electrons.

  2. Improved performance of nanoscale junctionless tunnel field-effect transistor based on gate engineering approach

    NASA Astrophysics Data System (ADS)

    Molaei Imen Abadi, Rouzbeh; Sedigh Ziabari, Seyed Ali

    2016-11-01

    This paper presents a first qualitative study of the performance characteristics of the dual-work-function gate junctionless TFET (DWG-JLTFET) on the basis of energy band profile modulation. A dual-work-function gate technique is used in a JLTFET in order to create a downward band bending on the source side, similar to a PNPN structure. Compared with the single-work-function gate junctionless TFET (SWG-JLTFET), numerical simulation results demonstrate that the DWG-JLTFET simultaneously optimizes the ON-state current, the OFF-state leakage current, and the threshold voltage, and also improves the average subthreshold slope. It is shown that if appropriate work functions are selected for the gate materials on the source side and the drain side, the JLTFET exhibits considerably improved performance. Furthermore, the design optimization of the tunnel gate length (LTun) for the proposed DWG-JLTFET is studied. All the simulations are done in Silvaco TCAD for a channel length of 20 nm using the nonlocal band-to-band tunneling (BTBT) model.

  3. Optimization of Exposure Time Division for Multi-object Photometry

    NASA Astrophysics Data System (ADS)

    Popowicz, Adam; Kurek, Aleksander R.

    2017-09-01

    Optical observations of wide fields of view entail the problem of selecting the best exposure time. As many objects are usually observed simultaneously, the quality of photometry of the brightest ones is always better than that of the dimmer ones, even though all of them are frequently equally interesting for astronomers. Thus, measuring all objects with the highest possible precision is desirable. In this paper, we present a new optimization algorithm, dedicated for the division of exposure time into sub-exposures, which enables photometry with a more balanced noise budget. The proposed technique increases the photometric precision of dimmer objects at the expense of the measurement fidelity of the brightest ones. We have tested the method on real observations using two telescope setups, demonstrating its usefulness and good consistency with theoretical expectations. The main application of our approach is a wide range of sky surveys, including ones performed by space telescopes. The method can be used to plan virtually any photometric observation of objects that show a wide range of magnitudes.
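
    The underlying trade-off (long exposures saturate bright objects, while splitting the time into many sub-exposures adds read noise that penalizes dim ones) can be sketched with a simple photon-noise model. The full-well capacity, read noise, and source rates below are hypothetical, and the equal-length split is a simplification of the paper's optimization.

```python
import math

FULL_WELL = 100_000.0   # e-, hypothetical detector full-well capacity
READ_NOISE = 5.0        # e- RMS per readout, hypothetical

def snr(rate, total_t, n_sub):
    # SNR of a source after co-adding n_sub equal sub-exposures;
    # every readout adds READ_NOISE**2 to the noise variance.
    signal = rate * total_t
    return signal / math.sqrt(signal + n_sub * READ_NOISE ** 2)

def min_subexposures(bright_rate, total_t):
    # Smallest equal split that keeps the brightest source below full well.
    return math.ceil(bright_rate * total_t / FULL_WELL)

total_t = 600.0                          # s of total exposure time
rates = {"bright": 2000.0, "dim": 5.0}   # e-/s, made-up source rates
n = min_subexposures(rates["bright"], total_t)
for name, r in rates.items():
    print(name, "n_sub =", n, "SNR =", round(snr(r, total_t, n), 1))
```

    Each extra readout leaves the bright object's SNR essentially unchanged but measurably degrades the dim one, which is why dividing the exposure time is itself worth optimizing rather than fixing by rule of thumb.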

  4. Coordinated Platoon Routing in a Metropolitan Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Jeffrey; Munson, Todd; Sokolov, Vadim

    2016-10-10

    Platooning vehicles (connected and automated vehicles traveling with small intervehicle distances) use less fuel because of reduced aerodynamic drag. Given a network defined by vertex and edge sets and a set of vehicles with origin/destination nodes/times, we model and solve the combinatorial optimization problem of coordinated routing of vehicles in a manner that routes them to their destination on time while using the least amount of fuel. Common approaches decompose the platoon coordination and vehicle routing into separate problems. Our model addresses both problems simultaneously to obtain the best solution. We use modern modeling techniques and constraints implied from analyzing the platoon routing problem to address larger numbers of vehicles and larger networks than previously considered. While the numerical method used is unable to certify optimality for candidate solutions to all networks and parameters considered, we obtain excellent solutions in approximately one minute for much larger networks and vehicle sets than previously considered in the literature.

  5. PET and SPECT studies in children with hemispheric low-grade gliomas.

    PubMed

    Juhász, Csaba; Bosnyák, Edit

    2016-10-01

    Molecular imaging is playing an increasing role in the pretreatment evaluation of low-grade gliomas. While glucose positron emission tomography (PET) can be helpful to differentiate low-grade from high-grade tumors, PET imaging with amino acid radiotracers has several advantages, such as better differentiation between tumors and non-tumorous lesions, optimized biopsy targeting, and improved detection of tumor recurrence. This review provides a brief overview of single-photon emission computed tomography (SPECT) studies followed by a more detailed review of the clinical applications of glucose and amino acid PET imaging in low-grade hemispheric gliomas. We discuss key differences in the performance of the most commonly utilized PET radiotracers and highlight the advantage of PET/MRI fusion to obtain optimal information about tumor extent, heterogeneity, and metabolism. Recent data also suggest that simultaneous acquisition of PET/MR images and the combination of advanced MRI techniques with quantitative PET can further improve the pretreatment and post-treatment evaluation of pediatric brain tumors.

  6. Ensemble of hybrid genetic algorithm for two-dimensional phase unwrapping

    NASA Astrophysics Data System (ADS)

    Balakrishnan, D.; Quan, C.; Tay, C. J.

    2013-06-01

    Phase unwrapping is the final and trickiest step in any phase retrieval technique. Phase unwrapping by artificial-intelligence methods (optimization algorithms) such as the hybrid genetic algorithm, reverse simulated annealing, particle swarm optimization, and minimum cost matching has shown better results than conventional phase unwrapping methods. In this paper, an ensemble of hybrid genetic algorithms with parallel populations is proposed to solve the branch-cut phase unwrapping problem. In a single-population hybrid genetic algorithm, the selection, cross-over and mutation operators are applied to obtain a new population in every generation, and the choice of parameters and operators affects the algorithm's performance. The ensemble of hybrid genetic algorithms makes it possible to use different parameter sets and operator choices simultaneously: each population uses its own set of parameters, and the offspring of each population compete against the offspring of all other populations. The effectiveness of the proposed algorithm is demonstrated by phase unwrapping examples, and the advantages of the proposed method are discussed.
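
    The ensemble idea (parallel populations, each evolving with its own parameter set, whose offspring compete in a shared selection pool) can be sketched on a toy objective. The bit-string fitness below is a stand-in for the branch-cut cost, and the population sizes, mutation rates, and truncation-selection scheme are illustrative assumptions, not the paper's operators.

```python
import random

random.seed(1)
GENES, POP, N_GEN = 20, 10, 40

def fitness(bits):
    # Toy objective (count of 1-bits) standing in for the branch-cut cost.
    return sum(bits)

def offspring(pop, p_mut):
    # One-point crossover of two random parents, then bit-flip mutation.
    a, b = random.sample(pop, 2)
    cut = random.randrange(1, GENES)
    child = a[:cut] + b[cut:]
    return [g ^ 1 if random.random() < p_mut else g for g in child]

# Parallel populations, each evolving with its own mutation rate.
mut_rates = [0.01, 0.05, 0.2]
pops = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
        for _ in mut_rates]

for _ in range(N_GEN):
    # Offspring of every population compete in one shared pool,
    # together with their parents (elitism).
    pool = [offspring(pop, pm) for pop, pm in zip(pops, mut_rates)
            for _ in range(POP)]
    pool += [ind for pop in pops for ind in pop]
    pool.sort(key=fitness, reverse=True)
    # Truncation selection: redistribute survivors to the populations.
    pops = [pool[i * POP:(i + 1) * POP] for i in range(len(mut_rates))]

best = max(fitness(ind) for pop in pops for ind in pop)
print("best fitness:", best, "of", GENES)
```

    Because the shared pool mixes offspring bred under different parameter sets, the ensemble does not depend on any single parameter choice being right in advance.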

  7. Technological advances in radiotherapy of rectal cancer: opportunities and challenges.

    PubMed

    Appelt, Ane L; Sebag-Montefiore, David

    2016-07-01

    This review summarizes the available evidence for the use of modern radiotherapy techniques for chemoradiotherapy for rectal cancer, with specific focus on intensity-modulated radiotherapy (IMRT) and volumetric arc therapy (VMAT) techniques. The dosimetric benefits of IMRT and VMAT are well established, but prospective clinical studies are limited, with phase I-II studies only. Recent years have seen the publication of a few larger prospective patient series as well as some retrospective cohorts, several of which include much needed late toxicity data. Overall results are encouraging, as toxicity levels - although varying across reports - appear lower than for 3D conformal radiotherapy. Innovative treatment techniques and strategies which may be facilitated by the use of IMRT/VMAT include simultaneously integrated tumour boost, adaptive treatment, selective sparing of specific organs to enable chemotherapy escalation, and nonsurgical management. Few prospective studies of IMRT and VMAT exist, which causes uncertainty not just in regards to the clinical benefit of these technologies but also in the optimal use. The priority for future research should be subgroups of patients who might receive relatively greater benefit from innovative treatment techniques, such as patients receiving chemoradiotherapy with definitive intent and patients treated with dose escalation.

  8. Construction and Potential Applications of Biosensors for Proteins in Clinical Laboratory Diagnosis

    PubMed Central

    Liu, Xuan

    2017-01-01

    Biosensors for proteins have shown attractive advantages compared to traditional techniques in clinical laboratory diagnosis. By virtue of modern fabrication modes and detection techniques, various immunosensing platforms have been reported on the basis of the specific recognition between antigen-antibody pairs. Profiting from the development of nanotechnology and molecular biology, diverse fabrication and signal amplification strategies have been designed for the detection of protein antigens, leading to great achievements in fast, quantitative, and simultaneous testing with extremely high sensitivity and specificity. Besides antigens, the determination of antibodies is also of great significance for clinical laboratory diagnosis. In this review, we categorize recent immunosensors for proteins by detection technique. The basic concepts of the detection techniques, sensing mechanisms, and relevant signal amplification strategies are introduced. Since antibodies and antigens hold equal positions in immunosensing, all biosensing strategies for antigens can be extended to antibodies under appropriate optimizations. Biosensors for antibodies are summarized, focusing on potential applications in clinical laboratory diagnosis, such as a series of biomarkers for infectious and autoimmune diseases and the evaluation of vaccine immunity. The excellent performance of these biosensors offers promise for future antibody-detection-based disease serodiagnosis. PMID:29207528

  9. A technique to calibrate spatial light modulator for varying phase response over its spatial regions

    NASA Astrophysics Data System (ADS)

    Gupta, Deepak K.; Tata, B. V. R.; Ravindran, T. R.

    2018-05-01

    Holographic Optical Tweezers (HOTs) employ the techniques of beam shaping and holography in an optical manipulation system to create a multitude of focal spots for simultaneous trapping and manipulation of sub-microscopic particles. The beam shaping is accomplished by the use of a phase-only liquid crystal spatial light modulator (SLM). The efficiency and uniformity of the generated traps depend greatly on the phase response behavior of the SLM. In addition, SLMs are found to show a different phase response over different spatial regions, due to their non-flat structure, and the phase response also varies across spatial regions because of non-uniform illumination (the Gaussian profile of the incident laser). There are various techniques to calibrate for the varying phase response by characterizing the phase modulation at various sub-sections. We present a simple and fast technique to calibrate an SLM suffering from a spatially varying phase response. We divide the SLM into many sub-sections and optimize the brightness and gamma of each sub-section for maximum diffraction efficiency. This correction is incorporated in the Weighted Gerchberg-Saxton (WGS) algorithm for the generation of holograms.

  10. Body composition analysis techniques in adult and pediatric patients: how reliable are they? How useful are they clinically?

    PubMed

    Woodrow, Graham

    2007-06-01

    Complex abnormalities of body composition occur in peritoneal dialysis (PD). These abnormalities reflect changes in hydration, nutrition, and body fat, and they are of major clinical significance. Clinical assessment of these body compartments is insensitive and inaccurate. Frequently, simultaneous changes of hydration, wasting, and body fat content can occur, confounding clinical assessment of each component. Body composition can be described by models of varying complexity that use one or more measurement techniques. "Gold standard" methods provide accurate and precise data, but are not practical for routine clinical use. Dual energy X-ray absorptiometry allows for measurement of regional as well as whole-body composition, which can provide further information of clinical relevance. Simpler techniques such as anthropometry and bioelectrical impedance analysis are suited to routine use in clinic or at the bedside, but may be less accurate. Body composition methodology sometimes makes assumptions regarding relationships between components, particularly in regard to hydration, which may be invalid in pathologic states. Uncritical application of these methods to the PD patient may result in erroneous interpretation of results. Understanding the foundations and limitations of body composition techniques allows for optimal application in clinical practice.

  11. Construction and Potential Applications of Biosensors for Proteins in Clinical Laboratory Diagnosis.

    PubMed

    Liu, Xuan; Jiang, Hui

    2017-12-04

    Biosensors for proteins have shown attractive advantages compared to traditional techniques in clinical laboratory diagnosis. By virtue of modern fabrication modes and detection techniques, various immunosensing platforms have been reported on the basis of the specific recognition between antigen-antibody pairs. Profiting from the development of nanotechnology and molecular biology, diverse fabrication and signal amplification strategies have been designed for the detection of protein antigens, which has led to great achievements in fast, quantitative, and simultaneous testing with extremely high sensitivity and specificity. Besides antigens, the determination of antibodies also possesses great significance for clinical laboratory diagnosis. In this review, we categorize recent immunosensors for proteins by detection technique. The basic concepts of the detection techniques, sensing mechanisms, and relevant signal amplification strategies are introduced. Since antibodies and antigens occupy equal positions in immunosensing, all biosensing strategies for antigens can be extended to antibodies under appropriate optimization. Biosensors for antibodies are summarized with a focus on potential applications in clinical laboratory diagnosis, such as a series of biomarkers for infectious and autoimmune diseases and the evaluation of vaccine immunity. The excellent performance of these biosensors provides prospective room for future antibody-detection-based disease serodiagnosis.

  12. Research on the decision-making model of land-use spatial optimization

    NASA Astrophysics Data System (ADS)

    He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu

    2009-10-01

    Using the optimized landscape pattern and land-use structure as constraints on the cellular automata (CA) simulation, a decision-making model for land-use spatial optimization is established by coupling the landscape pattern model with cellular automata, realizing quantitative and spatial land-use optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.
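
    The coupling idea — a CA whose local transition rule is constrained by class quantities from a structure optimization — can be illustrated with a toy sketch. The 3x3-majority transition rule, grid size, and target counts below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def constrained_ca_step(grid, targets, rng):
    """One CA iteration: each randomly visited cell converts to the class
    most common in its 3x3 neighbourhood (a stand-in transition rule), but
    a conversion is allowed only while that class is still below its target
    cell count from the land-use structure optimization."""
    ny, nx = grid.shape
    counts = np.bincount(grid.ravel(), minlength=len(targets))
    for idx in rng.permutation(ny * nx):
        y, x = divmod(idx, nx)
        neigh = grid[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        best = np.bincount(neigh.ravel(), minlength=len(targets)).argmax()
        cur = grid[y, x]
        if best != cur and counts[best] < targets[best]:
            counts[cur] -= 1       # quantity bookkeeping enforces the
            counts[best] += 1      # structure-optimization constraint
            grid[y, x] = best
    return grid

rng = np.random.default_rng(1)
grid = rng.integers(0, 3, size=(20, 20))   # three land-use classes
targets = np.array([150, 150, 100])        # quantity constraints (cells)
for _ in range(5):
    grid = constrained_ca_step(grid, targets, rng)
final_counts = np.bincount(grid.ravel(), minlength=3)
```

    The spatial pattern emerges from the neighbourhood rule, while the quantity constraint keeps class totals moving toward the separately optimized land-use structure.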

  13. Application of HFCT and UHF Sensors in On-Line Partial Discharge Measurements for Insulation Diagnosis of High Voltage Equipment

    PubMed Central

    Álvarez, Fernando; Garnacho, Fernando; Ortego, Javier; Sánchez-Urán, Miguel Ángel

    2015-01-01

    Partial discharge (PD) measurements provide valuable information for assessing the condition of high voltage (HV) insulation systems, contributing to their quality assurance. Different PD measuring techniques, specially designed for on-line measurements, have been developed in recent years. Non-conventional PD methods operating in high frequency bands are usually used when these types of tests are carried out. In PD measurements, the signal acquisition, the subsequent signal processing, and the capability to obtain an accurate diagnosis are conditioned by the selection of a suitable detection technique and by the implementation of effective signal processing tools. This paper proposes an optimized electromagnetic detection method based on the combined use of wideband PD sensors for measurements performed in the HF and UHF frequency ranges, together with the implementation of powerful processing tools. The effectiveness of the proposed measuring techniques is demonstrated through an example in which several PD sources are measured simultaneously in an HV installation consisting of a cable system connected by a plug-in terminal to a gas insulated substation (GIS) compartment. PMID:25815452

  14. Extraction of bioactive carbohydrates from artichoke (Cynara scolymus L.) external bracts using microwave assisted extraction and pressurized liquid extraction.

    PubMed

    Ruiz-Aceituno, Laura; García-Sarrió, M Jesús; Alonso-Rodriguez, Belén; Ramos, Lourdes; Sanz, M Luz

    2016-04-01

    Microwave assisted extraction (MAE) and pressurized liquid extraction (PLE) methods using water as solvent have been optimized by means of Box-Behnken and 3² composite experimental designs, respectively, for the effective extraction of bioactive carbohydrates (inositols and inulin) from artichoke (Cynara scolymus L.) external bracts. MAE at 60 °C for 3 min on 0.3 g of sample allowed the extraction of slightly higher concentrations of inositol than PLE at 75 °C for 26.7 min (11.6 mg/g dry sample vs. 7.6 mg/g dry sample). On the contrary, under these conditions, higher concentrations of inulin were extracted with the latter technique (185.4 mg/g vs. 96.4 mg/g dry sample), considering two successive extraction cycles for both techniques. Both methodologies can be considered appropriate for the simultaneous extraction of these bioactive carbohydrates from this particular industrial by-product. To the best of our knowledge, this is the first time that these techniques have been applied for this purpose. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Beam position monitor engineering

    NASA Astrophysics Data System (ADS)

    Smith, Stephen R.

    1997-01-01

    The design of beam position monitors often involves challenging system design choices. Position transducers must be robust, accurate, and generate adequate position signal without unduly disturbing the beam. Electronics must be reliable and affordable, usually while meeting tough requirements on precision, accuracy, and dynamic range. These requirements may be difficult to achieve simultaneously, leading the designer into interesting opportunities for optimization or compromise. Some useful techniques and tools are shown. Both finite element analysis and analytic techniques will be used to investigate quasi-static aspects of electromagnetic fields such as the impedance of and the coupling of beam to striplines or buttons. Finite-element tools will be used to understand dynamic aspects of the electromagnetic fields of beams, such as wake fields and transmission-line and cavity effects in vacuum-to-air feedthroughs. Mathematical modeling of electrical signals through a processing chain will be demonstrated, in particular to illuminate areas where neither a pure time-domain nor a pure frequency-domain analysis is obviously advantageous. Emphasis will be on calculational techniques, in particular on using both time domain and frequency domain approaches to the applicable parts of interesting problems.

  16. Assessing environmental quality: the implications for social justice

    EPA Science Inventory

    Individuals experience simultaneous exposure to pollutants and social factors, which cluster to affect human health outcomes. The optimal approach to combining these factors is unknown, therefore we developed a method to model simultaneous exposure using criteria air pollutants, ...

  17. [Simultaneous desulfurization and denitrification by TiO2/ACF under different irradiation].

    PubMed

    Han, Jing; Zhao, Yi

    2009-04-15

    The supported TiO2 photocatalysts were prepared in the laboratory, and experiments on simultaneous desulfurization and denitrification were carried out in a self-designed photocatalysis reactor. The optimal experimental conditions were determined, and the efficiencies of simultaneous desulfurization and denitrification under two different light sources were compared. The results show that the oxygen content of the flue gas, the reaction temperature, the flue gas humidity, and the irradiation intensity are the most essential factors in photocatalysis. For TiO2/ACF, removal efficiencies of 99.7% for SO2 and 64.3% for NO are obtained under the optimal experimental conditions with UV irradiation, and 97.5% for SO2 and 49.6% for NO with visible light irradiation. The results of five parallel experiments indicate that the standard deviation S of the parallel data is small. A mechanism for the removal of SO2 and NO under the two light sources is proposed based on ion chromatography analysis of the absorption liquid.

  18. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
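
    The surrogate-plus-genetic-algorithm pattern described above can be sketched compactly. The two functions standing in for the trained ANN surrogates of CE-QUAL-W2 (power and dissolved oxygen as functions of hourly release fractions) are hypothetical smooth stand-ins, and the DO constraint is enforced with a simple penalty; none of the coefficients come from the study.

```python
import numpy as np

# Hypothetical stand-ins for the trained ANN surrogates: power output and
# downstream dissolved oxygen as smooth functions of 24 hourly release
# fractions q in [0, 1] (illustrative, not the paper's emulator).
def power_surrogate(q):
    return float(np.sum(q) - 0.2 * np.sum(q ** 2))

def do_surrogate(q):          # DO drops as more water passes the turbines
    return float(8.0 - 4.0 * np.mean(q))

def fitness(q, do_limit=5.0, penalty=50.0):
    """Maximize surrogate power subject to a DO constraint via a penalty."""
    viol = max(0.0, do_limit - do_surrogate(q))
    return power_surrogate(q) - penalty * viol

def genetic_algorithm(n_gen=60, pop_size=40, n_var=24, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, (pop_size, n_var))
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]    # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_var) < 0.5                 # uniform crossover
            child = np.where(mask, a, b)
            child = np.clip(child + rng.normal(0, 0.05, n_var), 0, 1)  # mutation
            kids.append(child)
        pop = np.vstack([parents, np.array(kids)])
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()]

best = genetic_algorithm()    # 24-hour release schedule near the DO boundary
```

    The penalty drives the schedule toward the DO constraint boundary, where generation is maximal while compliance is maintained — the same trade-off the study explores at 5 and 6 mg/L limits.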

  19. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  20. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective technique for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key issue in such research. However, previous studies have generally been based on a stand-alone surrogate model and have rarely attempted to improve the approximation accuracy of the surrogate model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model in which the ES model was embedded was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
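
    The first ensemble pattern — weighting individual surrogates by their performance — can be sketched as follows. Inverse-RMSE weighting is used here as a simple stand-in for the set pair weights of the study; the RMSE values and predictions are hypothetical.

```python
import numpy as np

def ensemble_weights(errors):
    """Weights for an ensemble surrogate: inverse-RMSE weights normalized
    to sum to 1 (a simple stand-in for set-pair-analysis weights)."""
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

def ensemble_predict(preds, weights):
    """Combine individual surrogate predictions with the weights.
    `preds` has one row per candidate strategy, one column per surrogate."""
    return np.asarray(preds, dtype=float) @ weights

# Hypothetical validation RMSEs of the three surrogates on testing samples
rmse = [0.8, 0.5, 0.2]          # RBFANN, SVR, Kriging
w = ensemble_weights(rmse)      # the best model (Kriging) gets the most weight
# Predictions of the three surrogates for two candidate remediation strategies
preds = [[0.61, 0.58, 0.55],    # strategy 1
         [0.42, 0.40, 0.38]]    # strategy 2
combined = ensemble_predict(preds, w)
```

    Because the weights are convex, each combined prediction stays within the range spanned by the individual surrogates, while the most accurate member dominates.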

  1. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  2. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the optimal parameters for their turning operations. The new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while being the fastest of the techniques compared.
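
    The optimization stage described above can be sketched with a minimal particle swarm optimizer. The surface-roughness model below is an illustrative stand-in (its form and coefficients are assumptions, not the paper's ELM model), minimized over cutting speed and feed rate.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100, seed=0):
    """Minimal particle swarm optimizer: each velocity is pulled toward the
    particle's personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Hypothetical surface-roughness model Ra(v, f): roughness grows with feed
# and falls with cutting speed (illustrative exponents, not measured data).
def roughness(p):
    v, f = p                       # cutting speed (m/min), feed (mm/rev)
    return 5.0 * f ** 1.2 / v ** 0.3

best, best_f = pso(roughness, bounds=[(100.0, 300.0), (0.05, 0.4)])
```

    For this monotone model the optimum sits at the corner of the feasible box (maximum speed, minimum feed), which the swarm locates quickly; in practice the objective would be the manufacturer's preferred performance function evaluated through the trained model.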

  3. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization, performance is calculated using the ray-tracing tool SolCal, and heliostat costs are calculated with a detailed cost function. A genetic algorithm changes heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts, and a cost-performance ratio is calculated for each configuration. Based on that ratio, the best geometry and field layout are selected in each optimization step. This step is repeated until no significant improvement in the results is observed.

  4. Simultaneous acquisition of perfusion image and dynamic MR angiography using time‐encoded pseudo‐continuous ASL

    PubMed Central

    Helle, Michael; Koken, Peter; Van Cauteren, Marc; van Osch, Matthias J. P.

    2017-01-01

    Purpose Both dynamic magnetic resonance angiography (4D‐MRA) and perfusion imaging can be acquired by using arterial spin labeling (ASL). While 4D‐MRA highlights large vessel pathology, such as stenosis or collateral blood flow patterns, perfusion imaging provides information on the microvascular status. Therefore, a complete picture of the cerebral hemodynamic condition could be obtained by combining the two techniques. Here, we propose a novel technique for simultaneous acquisition of 4D‐MRA and perfusion imaging using time‐encoded pseudo‐continuous arterial spin labeling. Methods The time‐encoded pseudo‐continuous arterial spin labeling module consisted of a first subbolus that was optimized for perfusion imaging by using a labeling duration of 1800 ms, whereas the other six subboli of 130 ms were used for encoding the passage of the labeled spins through the arterial system for 4D‐MRA acquisition. After the entire labeling module, a multishot 3D turbo‐field echo‐planar‐imaging readout was executed for the 4D‐MRA acquisition, immediately followed by a single‐shot, multislice echo‐planar‐imaging readout for perfusion imaging. The optimal excitation flip angle for the 3D turbo‐field echo‐planar‐imaging readout was investigated by evaluating the image quality of the 4D‐MRA and perfusion images as well as the accuracy of the estimated cerebral blood flow values. Results When using 36 excitation radiofrequency pulses with flip angles of 5 or 7.5°, the saturation effects of the 3D turbo‐field echo‐planar‐imaging readout on the perfusion images were relatively moderate and after correction, there were no statistically significant differences between the obtained cerebral blood flow values and those from traditional time‐encoded pseudo‐continuous arterial spin labeling. Conclusions This study demonstrated that simultaneous acquisition of 4D‐MRA and perfusion images can be achieved by using time‐encoded pseudo‐continuous arterial spin labeling. 
Magn Reson Med 79:2676–2684, 2018. PMID:28913838

  5. Study on the Simultaneously Quantitative Detection for β-Lactoglobulin and Lactoferrin of Cow Milk by Using Protein Chip Technique.

    PubMed

    Yin, Ji Yong; Huo, Jun Sheng; Ma, Xin Xin; Sun, Jing; Huang, Jian

    2017-12-01

    To develop a protein chip method that can quantitatively detect β-Lactoglobulin (β-L) and Lactoferrin (Lf) simultaneously. A protein chip printer was used to print both anti-β-L and anti-Lf antibodies on each block of the protein chip. An improved sandwich detection method was then applied, in which the two detection antibodies for the two antigens were mixed and added to the block together. The detection conditions for the simultaneous quantitative measurement of β-L and Lf with the protein chip were optimized and evaluated. Under these conditions, standard curves for the two proteins were established simultaneously on one protein chip. Finally, the new detection method was evaluated in terms of precision and accuracy. By comparison experiments, mouse monoclonal antibodies against the two antigens were chosen as the printing probes. The concentrations of the β-L and Lf probes were both 0.5 mg/mL, and the titers of the detection antibodies for both β-L and Lf were 1:2,000. Intra- and inter-assay variability was between 4.88% and 38.33% for all tests. The regression coefficients of the protein chip compared with ELISA for β-L and Lf were 0.734 or better, and both regression coefficients were statistically significant (r = 0.734, t = 2.644, P = 0.038; and r = 0.774, t = 2.998, P = 0.024). A protein chip method for the simultaneous quantitative detection of β-L and Lf has thus been established, and the method is worthy of further application. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.

  6. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
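
    The idea of one joint objective — a strong penalty on negative intensities plus a weaker regularization pulling presumed zero-baseline regions to zero — can be illustrated on synthetic data. The single Lorentzian line, the constant baseline model, the penalty weights, and the grid search below are all simplifying assumptions; the paper uses a full Pareto formulation with a modified Whittaker smoother for the baseline.

```python
import numpy as np

# Synthetic 1D spectrum: one Lorentzian line with a known zero-order phase
# error and a constant baseline offset (illustrative stand-in data).
x = np.linspace(-1.0, 1.0, 512)
absorptive = 1.0 / (1.0 + (x / 0.02) ** 2)
dispersive = (x / 0.02) / (1.0 + (x / 0.02) ** 2)
true_phi, true_base = 0.6, 0.1
measured = (absorptive + 1j * dispersive) * np.exp(1j * true_phi) + true_base

def objective(phi, base):
    """Joint criterion: a strong penalty on negative intensities plus a
    weak regularization pulling the presumed zero-baseline regions (the
    outer tails of the spectrum) toward zero."""
    corr = np.real((measured - base) * np.exp(-1j * phi))
    neg = corr[corr < 0.0]
    tails = np.concatenate([corr[:100], corr[-100:]])
    return 100.0 * np.sum(neg ** 2) + np.sum(tails ** 2)

# Simultaneous (joint) search over phase and baseline, rather than two
# consecutive correction steps.
phis = np.linspace(0.0, np.pi / 2, 91)
bases = np.linspace(0.0, 0.2, 41)
score, best_phi, best_base = min(
    (objective(p, b), p, b) for p in phis for b in bases)
```

    Because the two parameters are searched jointly, a baseline error cannot be silently compensated by a phase error (and vice versa), which is the advantage the abstract claims over consecutive correction steps.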

  7. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
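
    The constrained choice of average transit time can be sketched with only the standard library. Transit time is modelled as a normal distribution truncated to a window around its mean, and the cheapest (largest) mean that still meets the punctuality target is selected; the deadline, spread, truncation window, and the assumption that a faster service costs more are all illustrative, not the paper's calibrated cost-benefit functions.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def punctuality(mu, sigma, deadline, lo, hi):
    """P(arrival <= deadline) for a transit time modelled as a normal
    distribution truncated to [lo, hi]."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    z = min(max((deadline - mu) / sigma, a), b)
    return (norm_cdf(z) - norm_cdf(a)) / (norm_cdf(b) - norm_cdf(a))

def optimal_mean_transit(deadline=60.0, sigma=5.0, level=0.95,
                         lo_off=-15.0, hi_off=15.0):
    """Largest mean transit time (in minutes) whose punctuality still meets
    the target level. Speeding a service up is assumed to cost more, so the
    largest feasible mean minimizes operational cost."""
    candidates = [40.0 + 0.1 * k for k in range(201)]    # 40 .. 60 minutes
    feasible = [mu for mu in candidates
                if punctuality(mu, sigma, deadline,
                               mu + lo_off, mu + hi_off) >= level]
    return max(feasible)

mu_star = optimal_mean_transit()   # slowest schedule meeting 95% punctuality
```

    The trade-off is explicit: tightening the punctuality level forces a smaller mean transit time and hence, under the assumed cost model, a higher operational cost.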

  8. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    PubMed

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for the simultaneous optimization of the operational parameters of both the MAES and MAE processes. MAES, which had not previously been applied to this group of contaminants in aquaculture samples, was shown to be superior to MAE in terms of extraction efficiency, extraction time, and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g⁻¹ (except for BB-15, which was 1.43 ng g⁻¹). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of an appropriate certified reference material (CRM), WMF-01.
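
    The desirability-function approach to simultaneous optimization mentioned above combines several responses into one score. The sketch below uses Derringer-type individual desirabilities and their geometric mean; the response ranges and the two example operating conditions are illustrative assumptions, not the paper's data.

```python
import numpy as np

def d_max(y, lo, hi, s=1.0):
    """Larger-the-better desirability: 0 at/below lo, 1 at/above hi."""
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s)

def d_min(y, lo, hi, s=1.0):
    """Smaller-the-better desirability: 1 at/below lo, 0 at/above hi."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s)

def overall_desirability(recovery, lipid):
    """Geometric mean of the individual desirabilities, as in Derringer-type
    simultaneous optimization of several responses. The response ranges
    below are illustrative, not the study's values."""
    d1 = d_max(recovery, lo=60.0, hi=100.0)   # analyte recovery (%)
    d2 = d_min(lipid, lo=0.5, hi=20.0)        # co-extracted lipid (%)
    return (d1 * d2) ** 0.5

# Two hypothetical extraction conditions: high recovery with little lipid
# co-extraction vs. slightly lower recovery with heavy lipid co-extraction.
D_a = overall_desirability(recovery=95.0, lipid=0.7)
D_b = overall_desirability(recovery=90.0, lipid=18.0)
```

    The geometric mean makes the score collapse toward zero if any single response is unacceptable, which is why a condition with heavy lipid co-extraction scores poorly even at good recovery.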

  9. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager perfectly suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage designed to reduce and optimize the processing. The optics can be customized for the visible, near-infrared (NIR), or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of a 360° vision system to enhance on-board collision avoidance systems, intelligent cruise control, and parking assistance. 360° panoramic vision systems might enable safer highways and a significant reduction in casualties.

  10. Development and Testing of Control Laws for the Active Aeroelastic Wing Program

    NASA Technical Reports Server (NTRS)

    Dibley, Ryan P.; Allen, Michael J.; Clarke, Robert; Gera, Joseph; Hodgkinson, John

    2005-01-01

    The Active Aeroelastic Wing research program was a joint program between the U.S. Air Force Research Laboratory and NASA established to investigate the characteristics of an aeroelastic wing and the technique of using wing twist for roll control. The flight test program employed the use of an F/A-18 aircraft modified by reducing the wing torsional stiffness and adding a custom research flight control system. The research flight control system was optimized to maximize roll rate using only wing surfaces to twist the wing while simultaneously maintaining design load limits, stability margins, and handling qualities. NASA Dryden Flight Research Center developed control laws using the software design tool called CONDUIT, which employs a multi-objective function optimization to tune selected control system design parameters. Modifications were made to the Active Aeroelastic Wing implementation in this new software design tool to incorporate the NASA Dryden Flight Research Center nonlinear F/A-18 simulation for time history analysis. This paper describes the design process, including how the control law requirements were incorporated into constraints for the optimization of this specific software design tool. Predicted performance is also compared to results from flight.

  11. The impact of the condenser on cytogenetic image quality in digital microscope system.

    PubMed

    Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong

    2013-01-01

    Optimizing operational parameters of the digital microscope system is an important technique to acquire high quality cytogenetic images and facilitate the process of karyotyping so that the efficiency and accuracy of diagnosis can be improved. This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Both theoretical analysis and experimental validation, through objectively evaluating a resolution test chart and subjectively observing large numbers of specimens, were conducted. The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%-70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed to achieve high-throughput continuous image scanning. Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice.

  12. A novel method for biomaterial scaffold internal architecture design to match bone elastic properties with desired porosity.

    PubMed

    Lin, Cheng Yu; Kikuchi, Noboru; Hollister, Scott J

    2004-05-01

    An often-proposed tissue engineering design hypothesis is that the scaffold should provide a biomimetic mechanical environment for initial function and appropriate remodeling of regenerating tissue while concurrently providing sufficient porosity for cell migration and cell/gene delivery. To provide a systematic study of this hypothesis, the ability to precisely design and manufacture biomaterial scaffolds is needed. Traditional methods for scaffold design and fabrication cannot provide the control over scaffold architecture design to achieve specified properties within fixed limits on porosity. The purpose of this paper was to develop a general design optimization scheme for 3D internal scaffold architecture to match desired elastic properties and porosity simultaneously, by introducing the homogenization-based topology optimization algorithm (also known as general layout optimization). With an initial target for bone tissue engineering, we demonstrate that the method can produce highly porous structures that match human trabecular bone anisotropic stiffness using accepted biomaterials. In addition, we show that anisotropic bone stiffness may be matched with scaffolds of widely different porosity. Finally, we also demonstrate that prototypes of the designed structures can be fabricated using solid free-form fabrication (SFF) techniques.

  13. On the Optimization of Aerospace Plane Ascent Trajectory

    NASA Astrophysics Data System (ADS)

    Al-Garni, Ahmed; Kassem, Ayman Hamdy

    A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectories (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis of the hybrid technique compares it with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution-time performance, while particle swarm optimization showed better convergence. The hybrid technique, benefiting from both, showed robust performance that balances convergence behavior and execution time.
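    The abstract does not give the hybridization details; as a rough illustration only, the following sketch alternates a PSO velocity update with a GA-style crossover/mutation step on the population's personal bests, minimizing a toy sphere function. All parameter names and values here are assumptions, not the authors' settings.

    ```python
    import random

    def sphere(x):
        # Toy objective: global minimum 0 at the origin
        return sum(xi * xi for xi in x)

    def hybrid_ga_pso(f, dim=2, pop_size=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                      mut_rate=0.1, seed=0):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        vel = [[0.0] * dim for _ in range(pop_size)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=f)[:]
        for _ in range(iters):
            # PSO step: move each particle toward its personal and the global best
            for i in range(pop_size):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
            # GA step: arithmetic crossover of personal bests, then mutation;
            # a child replaces its parent only if it improves the objective
            for i in range(pop_size):
                mate = pbest[rng.randrange(pop_size)]
                child = [(a + b) / 2 for a, b in zip(pbest[i], mate)]
                if rng.random() < mut_rate:
                    child[rng.randrange(dim)] += rng.gauss(0, 0.5)
                if f(child) < f(pbest[i]):
                    pbest[i] = child
            gbest = min(pbest, key=f)[:]
        return gbest, f(gbest)

    best, val = hybrid_ga_pso(sphere)
    print(val)  # close to 0
    ```

    The elitist replacement in the GA step keeps the hybrid monotone, so it cannot converge more slowly than the PSO update alone on this toy problem.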

  14. Development of a chromatographic method with multi-criteria decision making design for simultaneous determination of nifedipine and atenolol in content uniformity testing.

    PubMed

    Ahmed, Sameh; Alqurshi, Abdulmalik; Mohamed, Abdel-Maaboud Ismail

    2018-07-01

    A new robust and reliable high-performance liquid chromatography (HPLC) method with a multi-criteria decision making (MCDM) approach was developed to allow simultaneous quantification of atenolol (ATN) and nifedipine (NFD) in content uniformity testing. Felodipine (FLD) was used as an internal standard (I.S.) in this study. A novel combination of a new interactive response optimizer and an HPLC method was proposed for multiple response optimization of target responses. The interactive response optimizer was used as a decision and prediction tool for the optimal settings of target responses, according to specified criteria, based on Derringer's desirability. Four independent variables were considered in this study: acetonitrile percentage, buffer pH, buffer concentration, and column temperature. Eight responses were optimized: retention times of ATN, NFD, and FLD; resolutions between ATN/NFD and NFD/FLD; and plate numbers for ATN, NFD, and FLD. Multiple regression analysis was applied to screen the influence of the most significant variables in the regression models. The experimental design was set to give minimum retention times and maximum resolution and plate numbers. The interactive response optimizer allowed prediction of optimum conditions according to these criteria with a good composite desirability value of 0.98156. The developed method was validated according to the International Conference on Harmonisation (ICH) guidelines with the aid of the experimental design. The developed MCDM-HPLC method showed superior robustness and resolution in a short analysis time, allowing successful simultaneous content uniformity testing of ATN and NFD in marketed capsules. The current work presents an interactive response optimizer as an efficient platform to optimize, predict responses, and validate HPLC methodology with a tolerable design space for assays in quality control laboratories. Copyright © 2018 Elsevier B.V. All rights reserved.
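    Derringer's desirability, on which the optimizer above is based, maps each response onto [0, 1] (larger-is-better for resolution and plate number, smaller-is-better for retention time) and combines the individual desirabilities through a geometric mean. A minimal sketch; the bounds and exponents are illustrative assumptions, not the paper's actual settings:

    ```python
    def desirability_max(y, low, high, r=1.0):
        """Larger-is-better desirability (e.g. resolution, plate number)."""
        if y <= low:
            return 0.0
        if y >= high:
            return 1.0
        return ((y - low) / (high - low)) ** r

    def desirability_min(y, low, high, r=1.0):
        """Smaller-is-better desirability (e.g. retention time)."""
        if y <= low:
            return 1.0
        if y >= high:
            return 0.0
        return ((high - y) / (high - low)) ** r

    def composite_desirability(ds):
        """Geometric mean of the individual desirabilities; any zero
        desirability drives the composite to zero."""
        prod = 1.0
        for d in ds:
            prod *= d
        return prod ** (1.0 / len(ds))

    # Hypothetical responses: retention time 2 min in [1, 5], resolution 2.5 in [1.5, 3]
    d = composite_desirability([desirability_min(2.0, 1.0, 5.0),
                                desirability_max(2.5, 1.5, 3.0)])
    ```

    The geometric mean is what makes the criterion multi-criteria: a design that fails any single response (desirability 0) is rejected outright, however good the others are.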

  15. Team Training (Training at Own Facility) versus Individual Surgeon's Training (Training at Trainer's Facility) When Implementing a New Surgical Technique: Example from the ONSTEP Inguinal Hernia Repair

    PubMed Central

    Laursen, Jannie

    2014-01-01

    Background. When implementing a new surgical technique, the best method for didactic learning has not been settled. There are basically two scenarios: the trainee goes to the teacher's clinic and learns the new technique hands-on, or the teacher goes to the trainee's clinic and performs the teaching there. Methods. An informal literature review was conducted to provide a basis for discussing pros and cons. We also wanted to discuss how many surgeons can be trained in a day and the importance of the demand for a new surgical procedure to ensure a high adoption rate, and finally to apply these issues to a discussion of barriers to adoption of the new ONSTEP technique for inguinal hernia repair after initial training. Results and Conclusions. The optimal training method would include moving the teacher to the trainee's department to obtain team-training effects simultaneously with surgical technical training of the trainee surgeon. The training should also include a theoretical presentation and discussion along with the practical training. Importantly, the training visit should probably be followed by a scheduled visit to clear up misunderstandings and fine-tune the technique after an initial self-learning period. PMID:25506078

  16. Experimental technique for simultaneous measurement of absorption-, emission cross-sections, and background loss coefficient in doped optical fibers

    NASA Astrophysics Data System (ADS)

    Karimi, M.; Seraji, F. E.

    2010-01-01

    We report a simple new technique for the simultaneous measurement of absorption and emission cross-sections, the background loss coefficient, and the dopant density of doped optical fibers with low dopant concentration. Using our proposed technique, the experimental characterization of a sample Ge-Er-doped optical fiber is presented, and the results are analyzed and compared with other reports. This technique is suitable for production lines of doped optical fibers.

  17. MULTI-OBJECTIVE OPTIMAL DESIGN OF GROUNDWATER REMEDIATION SYSTEMS: APPLICATION OF THE NICHED PARETO GENETIC ALGORITHM (NPGA). (R826614)

    EPA Science Inventory

    A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...

  18. Green Pharmaceutical Analysis of Drugs Coformulated with Highly Different Concentrations Using Spiking and Manipulation of Their Ratio Spectra.

    PubMed

    Ayoub, Bassam M

    2017-07-01

    Introducing green analysis to pharmaceutical products is considered a significant approach to preserving the environment. The present method can be an environmentally friendly alternative to the existing methods, accompanied by a validated automated procedure for the analysis of a drug with the lowest possible number of samples. Different simple spectrophotometric methods were developed for the simultaneous determination of empagliflozin (EG) and metformin (MT) by manipulating their ratio spectra, and were applied to a recently approved pharmaceutical combination, Synjardy tablets. A spiking technique was used to increase the concentration of EG in samples prepared from the tablets to allow the simultaneous determination of EG with MT without prior separation. Validation parameters according to International Conference on Harmonisation guidelines were acceptable over a concentration range of 2-12 μg/mL for both drugs using derivative ratio and ratio subtraction coupled with extended ratio subtraction. The optimized methods were compared using one-way analysis of variance and proved to be suitable as ecofriendly approaches for industrial QC laboratories.

  19. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu

    2014-01-15

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers.

  20. Optimization of Ocean Color Algorithms: Application to Satellite Data Merging

    NASA Technical Reports Server (NTRS)

    Maritorena, Stephane; Siegel, David A.; Morel, Andre

    2004-01-01

    The objective of the program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project. As part of the SIMBIOS Program, we have developed a merging method for ocean color data. In contrast to other methods, our approach does not combine end-products like the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances L((sub wN)(lambda)) from single or multiple sensors and uses them in the inversion of a semi-analytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) It works with single or multiple data sources regardless of their specific bands; 2) It exploits band redundancies and band differences; 3) It accounts for uncertainties in the L((sub wN)(lambda)) data; 4) It provides uncertainty estimates for the retrieved variables.

  1. Dimethyl carbonate-mediated lipid extraction and lipase-catalyzed in situ transesterification for simultaneous preparation of fatty acid methyl esters and glycerol carbonate from Chlorella sp. KR-1 biomass.

    PubMed

    Jo, Yoon Ju; Lee, Ok Kyung; Lee, Eun Yeol

    2014-04-01

    Fatty acid methyl esters (FAMEs) and glycerol carbonate were simultaneously prepared from Chlorella sp. KR-1 containing 40.9% (w/w) lipid using a reactive extraction method with dimethyl carbonate (DMC). DMC was used as the lipid extraction agent, the acyl acceptor for transesterification of the extracted triglycerides, the substrate for glycerol carbonate synthesis from glycerol, and the reaction medium for the solvent-free reaction system. For 1 g of biomass, 367.31 mg of FAMEs and 16.73 mg of glycerol carbonate were obtained under the optimized conditions: DMC to biomass ratio of 10:1 (v/w), water content of 0.5% (v/v), and Novozyme 435 to biomass ratio of 20% (w/w) at 70°C for 24 h. The amount of residual glycerol was only in the range of 1-2.5 mg. Compared to the conventional method, the cost of FAME production with the proposed technique could be reduced by combining lipid extraction with transesterification and omitting the extraction solvent recovery process. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Ultra-performance liquid chromatography tandem mass spectrometry for simultaneous determination of natural steroid hormones in sea lamprey (Petromyzon marinus) plasma and tissues.

    PubMed

    Wang, Huiyong; Bussy, Ugo; Chung-Davidson, Yu-Wen; Li, Weiming

    2016-01-15

    This study aims to provide a rapid, sensitive and precise UPLC-MS/MS method for targeted steroid quantitation in biological matrices. We developed and validated a UPLC-MS/MS method to simultaneously determine 16 steroids in plasma and tissue samples. The ionization sources Electrospray Ionization (ESI) and Atmospheric Pressure Chemical Ionization (APCI) were compared by testing their spectrometric performance under the same chromatographic conditions, and the ESI source was found to be up to five times more sensitive than the APCI. Different sample preparation techniques were investigated for optimal extraction of steroids from the biological matrices. The developed method exhibited excellent linearity for all analytes, with regression coefficients higher than 0.99 over broad concentration ranges. The limit of detection (LOD) was from 0.003 to 0.1 ng/mL. The method was validated according to FDA guidance and applied to determine steroids in sea lamprey plasma and tissues (fat and testes). Copyright © 2015. Published by Elsevier B.V.

  3. Live Speech Driven Head-and-Eye Motion Generators.

    PubMed

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient-descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, and a log-normal distribution is used to describe involuntary eye blinks. Several user studies were conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.

  4. Extraction and derivatization of polar herbicides for GC-MS analyses.

    PubMed

    Ranz, Andreas; Maier, Eveline; Motter, Herbert; Lankmayr, Ernst

    2008-09-01

    A sample preparation procedure including simultaneous microwave-assisted (MA) extraction and derivatization for the determination of chlorophenoxy acids in soil samples is presented. For selective and sensitive measurement, an analytical technique such as GC coupled with MS needs to be adopted. For GC analyses, chlorophenoxy acids have to be converted into more volatile and thermally stable derivatives. Derivatization by means of microwave radiation offers new alternatives in terms of shorter derivatization times and reduced susceptibility to the formation of artefacts. Extraction and derivatization into methyl esters (ME) were performed with sulphuric acid and methanol. Due to the novelty of simultaneous extraction and derivatization assisted by microwave radiation, careful investigation and optimization of the influential reaction parameters were necessary. It could be shown that the combination of sulphuric acid and methanol provides fast sample preparation, including an efficient clean-up procedure. The data obtained by the described method are in good agreement with those published for the reference material. Finally, compared to conventional heating and also to the standard procedure of the EPA, the sample preparation time could be considerably shortened.

  5. 3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock

    NASA Astrophysics Data System (ADS)

    Tang, X.; Rayudu, N. M.; Singh, G.

    2017-12-01

    Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanism of propagation and interaction between these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used for simulating the effect of vital parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity, and flow rate, on induced hydraulic fracture growth. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used for calculating multi-mode (mode I, II, III) stress intensity factors. The maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator

  6. Simultaneous multislice refocusing via time optimal control.

    PubMed

    Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf

    2018-02-09

    Joint design of minimum-duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level, and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near-global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent of number of slices (PINS) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum-duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or the echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.

  7. Simultaneous monitoring technique for ASE and MPI noises in distributed Raman Amplified Systems.

    PubMed

    Choi, H Y; Jun, S B; Shin, S K; Chung, Y C

    2007-07-09

    We develop a new technique for simultaneously monitoring the amplified spontaneous emission (ASE) and multi-path interference (MPI) noises in distributed Raman amplified (DRA) systems. This technique utilizes the fact that the degree of polarization (DOP) of the MPI noise is 1/9, while the ASE noise is unpolarized. The results show that the proposed technique can accurately monitor both of these noises regardless of the bit rates, modulation formats, and optical signal-to-noise ratio (OSNR) levels of the signals.
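    The monitoring principle reduces to simple power bookkeeping: if the MPI noise has DOP 1/9 and the ASE noise is unpolarized, only the MPI contributes polarized power, so the measured DOP of the combined noise fixes the MPI/ASE split. A minimal sketch under that assumption (the function name and arguments are illustrative, not from the paper):

    ```python
    def split_noise(total_noise_power, measured_dop, dop_mpi=1/9):
        """Split the total noise power into MPI and ASE parts from the
        measured DOP of the combined noise.

        Assumes MPI noise has DOP 1/9 and ASE noise has DOP 0, so the
        polarized power satisfies:
            measured_dop * P_total = dop_mpi * P_mpi
        """
        p_mpi = measured_dop * total_noise_power / dop_mpi
        p_ase = total_noise_power - p_mpi
        return p_mpi, p_ase

    # Hypothetical example: combined noise with DOP 1/18 is half MPI, half ASE
    p_mpi, p_ase = split_noise(1.0, 1 / 18)
    ```

    Note the measured DOP can range only from 0 (pure ASE) to 1/9 (pure MPI) under these assumptions; values outside that interval would indicate polarized signal leakage into the measurement.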

  8. Critical fiber length technique for composite manufacturing processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivley, G.N.; Vandiver, T.L.; Dougherty, N.S.

    1996-12-31

    An improved injection technique for composite structures has been cooperatively developed by the U.S. Army Missile Command (MICOM) and Rockwell International (RI). This process simultaneously injects chopped fiberglass fibers and an epoxy resin matrix into a mold. Four injection techniques: (1) "Little Willie" RTM system, (2) Pressure Vat system, (3) Pressure Vat system with vacuum assistance, and (4) Injection gun system, were investigated for use with a 304.8 mm x 304.8 mm x 5.08 mm (12 in x 12 in x 0.2 in) flat plaque mold. The driving factors in the process optimization included: fiber length, fiber weight, matrix viscosity, injection pressure, flow rate, and tool design. At fiber weights higher than 30 percent, the injection gun appears to have advantages over the other systems investigated. Results of an experimental investigation are reviewed in this paper. The investigation of injection techniques is the initial part of the research involved in a developing process, the "Critical Fiber Length Technique". This process will use the data collected in the injection experiment along with mechanical properties derived from coupon test data to be incorporated into a composite material design code. The "Critical Fiber Length Technique" is part of a Cooperative Research and Development Agreement (CRADA) established in 1994 between MICOM and RI.

  9. A control-theory model for human decision-making

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Tanner, R. B.

    1971-01-01

    A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.

  10. Preliminary design of the spatial filters used in the multipass amplification system of TIL

    NASA Astrophysics Data System (ADS)

    Zhu, Qihua; Zhang, Xiao Min; Jing, Feng

    1998-12-01

    The spatial filters used in the Technique Integration Line, which has a multi-pass amplifier, serve not only to suppress parasitic high-spatial-frequency modes but also to provide locations for inserting a light isolator and injecting the seed beam, and to relay-image the beam as it passes through the amplifiers several times. To fulfill these functions, the parameters of the spatial filters are optimized by calculation and analysis, with consideration given to avoiding the plasma blow-off effect and component damage from ghost-beam foci. The 'ghost beams' are calculated by ray tracing. Software was developed to evaluate the tolerances of the spatial filters and their components, and to align the whole system on a computer simultaneously.

  11. Wide-view charge exchange recombination spectroscopy diagnostic for Alcator C-Mod.

    PubMed

    Rowan, W L; Bespamyatnov, I O; Granetz, R S

    2008-10-01

    This diagnostic measures temperature, density, and rotation for the fully stripped boron ion between the pedestal top and the plasma core with resolution consistent with the profile gradients. The diagnostic neutral beam used for the measurements generates a 50 keV, 6 A hydrogen beam. The optical systems provide views in both poloidal and toroidal directions. The imaging spectrometer is optimized to simultaneously accept 45 views as input with minimum cross-talk. In situ calibration techniques are applied for spatial location, spectral intensity, and wavelength. In the analysis, measured spectra are fitted to a model constructed from a detailed description of the emission physics. Methods for removal of interfering spectra are included. Applications include impurity and thermal transport.

  12. Taking the Pulse of Plants

    NASA Astrophysics Data System (ADS)

    Jensen, Kaare H.; Beecher, Sierra; Holbrook, N. Michele; Knoblauch, Michael

    2014-11-01

    Many biological systems use complex networks of vascular conduits to distribute energy over great distances. Examples include sugar transport in the phloem tissue of vascular plants and cytoplasmic streaming in some slime molds. Detailed knowledge of transport patterns in these systems is important for our fundamental understanding of energy distribution during development and for engineering of more efficient crops. Current techniques for quantifying transport in these microfluidic systems, however, only allow for the determination of either the flow speed or the concentration of material. Here we demonstrate a new method, based on confocal microscopy, which allows us to simultaneously determine velocity and solute concentration by tracking the dispersion of a tracer dye. We attempt to rationalize the observed transport patterns through consideration of constrained optimization problems.

  13. Strategies and Challenges in Simultaneous Augmentation Mastopexy.

    PubMed

    Spring, Michelle A; Hartmann, Emily C; Stevens, W Grant

    2015-10-01

    Simultaneous breast augmentation and mastopexy is a common procedure often considered to be one of the most difficult cosmetic breast surgeries. One-stage augmentation mastopexy was initially described more than 50 years ago. The challenge lies in the fact that the surgery has multiple opposing goals: to increase the volume of the breast, enhance its shape, and simultaneously decrease the skin envelope. Successful outcomes in augmentation mastopexy can be expected with proper planning, technique, and patient education. This article focuses on common indications for simultaneous augmentation mastopexy, techniques for safe and effective combined procedures, challenges of the procedure, and potential complications. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Utilization of Supercapacitors in Adaptive Protection Applications for Resiliency against Communication Failures: A Size and Cost Optimization Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Hany F; El Hariri, Mohamad; Elsayed, Ahmed

    Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain stable system operation and also regulate the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter leads to a reduction of the supercapacitor fault-current contribution and an increase in that of other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the supercapacitor size and optimized controller parameters resulting from the proposed two-level optimization scheme fed sufficient fault current for different types of faults while minimizing the cost of the protection scheme.

  15. Optimal preconditioning of lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Izquierdo, Salvador; Fueyo, Norberto

    2009-09-01

    A preconditioning technique to accelerate the simulation of steady-state problems using the single-relaxation-time (SRT) lattice Boltzmann (LB) method was first proposed by Guo et al. [Z. Guo, T. Zhao, Y. Shi, Preconditioned lattice-Boltzmann method for steady flows, Phys. Rev. E 70 (2004) 066706]. The key idea in this preconditioner is to modify the equilibrium distribution function in such a way that, by means of a Chapman-Enskog expansion, a time-derivative preconditioner of the Navier-Stokes (NS) equations is obtained. In the present contribution, the optimal values for the free parameter γ of this preconditioner are searched for both numerically and theoretically; the latter with the aid of linear-stability analysis and the condition number of the system of NS equations. The influence of the collision operator, single- versus multiple-relaxation-time (MRT), is also studied. Three steady-state laminar test cases are used for validation, namely: the two-dimensional lid-driven cavity, a two-dimensional microchannel, and the three-dimensional backward-facing step. Finally, guidelines are suggested for an a priori definition of optimal preconditioning parameters as a function of the Reynolds and Mach numbers. The new optimally preconditioned MRT method derived is shown to improve, simultaneously, the rate of convergence, the stability, and the accuracy of lattice Boltzmann simulations, compared to the non-preconditioned methods and to the optimally preconditioned SRT one. Additionally, direct time-derivative preconditioning of the LB equation is also studied.
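    For reference, the Guo-type preconditioner modifies the equilibrium distribution by scaling its quadratic velocity terms with the free parameter γ (γ = 1 recovers the standard SRT equilibrium). A sketch for the standard D2Q9 lattice, shown here to illustrate the form of the modification rather than to reproduce the paper's implementation:

    ```python
    # D2Q9 lattice: rest velocity, 4 axis velocities, 4 diagonal velocities
    W = [4/9] + [1/9] * 4 + [1/36] * 4
    E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
         (1, 1), (-1, 1), (-1, -1), (1, -1)]
    CS2 = 1 / 3  # lattice speed of sound squared

    def feq_preconditioned(rho, u, gamma=1.0):
        """Preconditioned equilibrium distribution: gamma divides only the
        quadratic velocity terms, so mass and momentum moments are preserved
        for any gamma, while the effective acoustic speed is rescaled."""
        ux, uy = u
        usq = ux * ux + uy * uy
        out = []
        for w, (ex, ey) in zip(W, E):
            eu = ex * ux + ey * uy
            out.append(w * rho * (1 + eu / CS2
                                  + eu * eu / (2 * gamma * CS2 * CS2)
                                  - usq / (2 * gamma * CS2)))
        return out
    ```

    Because the quadratic terms sum to zero over the lattice, the zeroth moment (density) and first moment (momentum) of this equilibrium are independent of γ, which is what keeps the preconditioned scheme consistent with the steady-state NS equations.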

  16. Structural Optimization of a Force Balance Using a Computational Experiment Design

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2002-01-01

    This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, remain undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, providing a systematic foundation for advancements in structural design.
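    The workflow described here (a designed computational experiment feeding a response-surface fit, instead of one-factor-at-a-time sweeps) can be sketched with a hypothetical analytic response standing in for the finite element code; the design, model form, and variable ranges below are illustrative assumptions, not the paper's balance model.

```python
import numpy as np

# Hypothetical stand-in for the FEA-computed response of two design
# variables; in the paper each evaluation would be a finite element run.
def response(x, y):
    return (x - 0.3)**2 + 2.0 * (y + 0.1)**2 + 1.0

# 3x3 factorial design over the (coded) design space [-1, 1]^2
pts = np.array([(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
z = response(pts[:, 0], pts[:, 1])

# Least-squares fit of a full quadratic response surface:
# z ~ b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0]**2, pts[:, 1]**2, pts[:, 0] * pts[:, 1]])
b, *_ = np.linalg.lstsq(X, z, rcond=None)

# Stationary point of the fitted surface (solve grad = 0); because the
# surface model carries the x*y interaction term, variable interactions
# are visible here in a way one-factor-at-a-time studies cannot show.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])
```

    Since the stand-in response is itself quadratic, the fit recovers its minimizer exactly; with a real FEA response the surface is an approximation refined by the sequential design strategy the abstract mentions.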

  17. Designed Er(3+)-singly doped NaYF4 with double excitation bands for simultaneous deep macroscopic and microscopic upconverting bioimaging.

    PubMed

    Wen, Xuanyuan; Wang, Baoju; Wu, Ruitao; Li, Nana; He, Sailing; Zhan, Qiuqiang

    2016-06-01

    Simultaneous deep macroscopic imaging and microscopic imaging is in urgent demand, but is challenging to achieve experimentally due to the lack of proper fluorescent probes. Herein, we have designed and successfully synthesized simplex Er(3+)-doped upconversion nanoparticles (UCNPs) with double excitation bands for simultaneous deep macroscopic and microscopic imaging. The material structure and the excitation wavelength of the Er(3+)-singly doped UCNPs were further optimized to enhance the upconversion emission efficiency. After optimization, we found that NaYF4:30%Er(3+)@NaYF4:2%Er(3+) could simultaneously achieve efficient two-photon excitation (2PE) macroscopic tissue imaging and three-photon excitation (3PE) deep microscopic imaging when excited by 808 nm continuous wave (CW) and 1480 nm CW lasers, respectively. In vitro cell imaging and in vivo imaging have also been implemented to demonstrate the feasibility and potential of the proposed simplex Er(3+)-doped UCNPs as a bioprobe.

  18. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro

    2018-01-01

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the necessity of a technological tool that addresses this state-of-the-art limitation. As a novelty, this work presents a multi-agent architecture based on virtual organizations which allows the deployment of a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment in which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed. PMID:29301310

  19. Development of a Digital Microarray with Interferometric Reflectance Imaging

    NASA Astrophysics Data System (ADS)

    Sevenler, Derin

    This dissertation describes a new type of molecular assay for nucleic acids and proteins. We call this technique a digital microarray since it is conceptually similar to conventional fluorescence microarrays, yet it performs enumerative ('digital') counting of the number of captured molecules. Digital microarrays are approximately 10,000-fold more sensitive than fluorescence microarrays, yet maintain all of the strengths of the platform, including low cost and high multiplexing (i.e., many different tests on the same sample simultaneously). Digital microarrays use gold nanorods to label the captured target molecules. Each gold nanorod on the array is individually detected based on its light scattering, with an interferometric microscopy technique called SP-IRIS. Our optimized high-throughput version of SP-IRIS is able to scan a typical array of 500 spots in less than 10 minutes. Digital DNA microarrays may have utility in applications where sequencing is prohibitively expensive or slow. As an example, we describe a digital microarray assay for gene expression markers of bacterial drug resistance.

  20. The relative entropy is fundamental to adaptive resolution simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreis, Karsten; Graduate School Materials Science in Mainz, Staudingerweg 9, 55128 Mainz; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.

  1. Improvement of mechanical performance for vibratory microgyroscope based on sense mode closed-loop control

    NASA Astrophysics Data System (ADS)

    Xiao, Dingbang; Su, Jianbin; Chen, Zhihua; Hou, Zhanqiang; Wang, Xinghua; Wu, Xuezhong

    2013-04-01

    In order to improve its structural sensitivity, a vibratory microgyroscope is commonly sealed in high vacuum to increase the drive mode quality factor. The sense mode quality factor of the microgyroscope will also increase after vacuum sealing, which will lead to a long decay time of the free response and even self-oscillation of the sense mode. As a result, the mechanical performance of the microgyroscope will be seriously degraded. In order to solve this problem, a closed-loop control technique is presented to adjust and optimize the sense mode quality factor. A velocity feedback loop was designed to increase the electric damping of the sense mode vibration. A circuit was fabricated based on this technique, and experimental results indicate that the sense mode quality factor of the microgyroscope was adjusted from 8052 to 428. The decay time of the sense mode free response was shortened from 3 to 0.5 s, and the vibration-rejection ability of the microgyroscope was noticeably improved without sensitivity degradation.
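    The quality-factor trade reported here follows from a standard second-order model of the sense mode: velocity feedback adds electrical damping, lowering Q and, with it, the free-response decay time τ = 2Q/ωₙ. In the sketch below only the 8052 → 428 Q values come from the abstract; the modal mass and the 3 kHz resonance are hypothetical numbers for illustration.

```python
import math

# Sense mode modeled as a second-order resonator: m x'' + c x' + k x = F,
# with Q = sqrt(m k) / c and free-response decay time tau = 2 Q / wn.
m = 1e-9                                   # hypothetical modal mass, kg
k = m * (2 * math.pi * 3000.0)**2          # hypothetical 3 kHz sense mode
wn = math.sqrt(k / m)
c_open = math.sqrt(m * k) / 8052.0         # intrinsic damping for Q = 8052

# Velocity feedback F = -g x' adds directly to the damping coefficient.
# Choose the gain so the closed-loop Q hits the target value 428:
g = math.sqrt(m * k) / 428.0 - c_open
Q_closed = math.sqrt(m * k) / (c_open + g)

tau_open = 2 * 8052.0 / wn
tau_closed = 2 * Q_closed / wn             # decay time shrinks by the Q ratio
```

    The decay time scales linearly with Q, which is why cutting Q from 8052 to 428 shortens the free-response decay by roughly a factor of 19, consistent in trend with the reported 3 s to 0.5 s improvement.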

  2. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle.

    PubMed

    Barriuso, Alberto L; Villarrubia González, Gabriel; De Paz, Juan F; Lozano, Álvaro; Bajo, Javier

    2018-01-02

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the necessity of a technological tool that addresses this state-of-the-art limitation. As a novelty, this work presents a multi-agent architecture based on virtual organizations which allows the deployment of a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment in which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed.

  3. The relative entropy is fundamental to adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Potestio, Raffaello

    2016-07-01

    Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results not only shed light on the what and how of adaptive resolution techniques but will also help in setting up such simulations in an optimal manner.
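    For reference, the relative entropy this work builds on is usually written in Shell's formulation from the systematic-coarse-graining literature; the abstract does not restate the definition, so the form below is supplied here as a standard assumption:

```latex
S_{\mathrm{rel}}
  = \sum_{i} p_{\mathrm{AA}}(i)\,
    \ln\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\bigl(M(i)\bigr)}
  + \bigl\langle S_{\mathrm{map}} \bigr\rangle_{\mathrm{AA}},
```

    where \(p_{\mathrm{AA}}\) and \(p_{\mathrm{CG}}\) are the configurational probabilities of the atomistic and coarse-grained models, \(M\) maps each atomistic configuration \(i\) to its CG image, and \(\langle S_{\mathrm{map}}\rangle\) accounts for the degeneracy of that mapping. Minimizing \(S_{\mathrm{rel}}\) over the CG potential parameters gives the CG model closest, in this information-theoretic sense, to the atomistic reference, which is the sense in which relative-entropy-minimizing potentials can smooth the hybrid coupling.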

  4. Simultaneous double-rod rotation technique in posterior instrumentation surgery for correction of adolescent idiopathic scoliosis.

    PubMed

    Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Takahata, Masahiko; Sudo, Hideki; Hojo, Yoshihiro; Minami, Akio

    2010-03-01

    The authors present a new posterior correction technique consisting of simultaneous double-rod rotation using 2 contoured rods and polyaxial pedicle screws with or without Nesplon tapes. The purpose of this study is to introduce the basic principles and surgical procedures of this new posterior surgery for correction of adolescent idiopathic scoliosis. Through gradual rotation of the concave-side rod by 2 rod holders, the convex-side rod rotates simultaneously with the concave-side rod. This procedure does not involve any force pushing down on the spinal column around the apex. Since this procedure consists of upward pushing and lateral translation of the spinal column with simultaneous double-rod rotation maneuvers, it is simple and can restore thoracic kyphosis as well as achieve favorable scoliosis correction. This technique is applicable not only to a thoracic single curve but also to double major curves in cases of adolescent idiopathic scoliosis.

  5. Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan

    2017-09-01

    It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem which is solved by alternately implementing a hard thresholding operation and a singular value decomposition. Secondly, the noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitivity indexes, latent fault feature subspaces can be adaptively recognized and multiple faults diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates the physical prior with the data-driven strategy, so multiple fault features with similar oscillation morphology can be naturally and adaptively decoupled. Secondly, the tight frame dictionary learned directly from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal space description property is guaranteed, and thus the weak feature leakage problem of typical learning methods is avoided.
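    The first-stage alternation described above (hard thresholding for the sparse codes, one SVD per iteration for the frame update) can be sketched in its simplest square-orthogonal form, where the tight-frame condition reduces to W being orthogonal. The paper's learned frames are redundant filter banks, so this reduction, along with the threshold value and the random data, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(X, lam):
    # Keep entries with magnitude >= lam, zero the rest (nonconvex l0 step)
    return np.where(np.abs(X) >= lam, X, 0.0)

def learn_tight_frame(Y, lam=0.5, iters=30):
    """Alternating minimization of ||C - W Y||_F^2 + lam^2 ||C||_0 subject
    to W orthogonal: sparse coding by hard thresholding, frame update by
    the orthogonal Procrustes solution (an SVD), so W stays a tight frame."""
    W = np.eye(Y.shape[0])
    for _ in range(iters):
        C = hard_threshold(W @ Y, lam)       # sparse-coding step
        U, _, Vt = np.linalg.svd(C @ Y.T)    # Procrustes: argmin_W ||C - W Y||
        W = U @ Vt
    return W
```

    The Procrustes update guarantees W remains exactly orthogonal (`W.T @ W = I`) at every iteration; the redundant case used in the paper imposes the same tight-frame identity on a rectangular filter bank instead.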

  6. Comparison of Simultaneous PIV and Hydroxyl Tagging Velocimetry in Low Velocity Flows

    NASA Technical Reports Server (NTRS)

    Andre, Matthieu A.; Bardet, Philippe M.; Burns, Ross A.; Danehy, Paul M.

    2016-01-01

    Hydroxyl tagging velocimetry (HTV) is a molecular tagging velocimetry (MTV) technique that relies on the photodissociation of water vapor into OH radicals and their subsequent tracking using laser-induced fluorescence. At ambient temperature in air, the OH species lifetime is about 50 μs. The feasibility of using HTV for probing low-speed flows (a few m/s) is investigated by using an inert, heated gas as a means to increase the OH species lifetime. Unlike particle-based techniques, MTV does not suffer from tracer settling, which is particularly problematic at low speeds. Furthermore, the flow needs to be seeded with only a small mole fraction of water vapor, making it safer for both the user and facilities than other MTV techniques based on corrosive or toxic chemical tracers. HTV is demonstrated on a steam-seeded nitrogen jet at approximately 75 °C in the laminar (Umean=3.31 m/s, Re=1,540), transitional (Umean=4.48 m/s, Re=2,039), and turbulent (Umean=6.91 m/s, Re=3,016) regimes at atmospheric pressure. The measured velocity profiles are compared with particle image velocimetry (PIV) measurements performed simultaneously with a second imager. Seeding for the PIV is achieved by introducing micron-sized water droplets into the flow with the steam; the same laser sheet is used for PIV and HTV to guarantee spatial and temporal overlap of the data. Optimizing each of these methods, however, requires conflicting operating conditions: higher temperatures benefit the HTV signals but reduce the available seed density for the PIV through evaporation. Nevertheless, data are found to agree within 10% for the instantaneous velocity profiles and within 5% for the mean profiles and demonstrate the feasibility of HTV for low-speed flows at moderate to high temperatures.

  7. Novel single stripper with side-draw to remove ammonia and sour gas simultaneously for coal-gasification wastewater treatment and the industrial implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, D.C.; Yu, Z.J.; Chen, Y.

    2009-06-15

    A large amount of wastewater is produced in the Lurgi coal-gasification process, containing complex compounds such as carbon dioxide, ammonia and phenol, which cause a serious environmental problem. In this paper, a novel stripper operated at elevated pressure is designed to improve the pretreatment process. In this technology, two noticeable improvements were established. First, the carbon dioxide and ammonia are removed simultaneously in a single stripper where sour gas (mainly carbon dioxide) is removed from the tower top and the ammonia vapor is drawn from the side and recovered by partial condensation. Second, the ammonia is removed before the phenol recovery to reduce the pH value of the subsequent extraction units, so that the phenol removal performance of the extraction is greatly improved. To ensure the operational efficiency, some key operational parameters are analyzed and optimized through simulation. It is shown that when the top temperature is kept at 40 °C and the weight ratio of the side draw to the feed is above 9%, the elevated pressures can ensure the removal efficiency of NH3 and carbon dioxide, and the desired purified water is obtained as the bottom product of the unit. A real industrial application demonstrates the attractiveness of the new technique: it removes 99.9% of the CO2 and 99.6% of the ammonia, compared to known techniques which remove 66.5% and 94.4%, respectively. As a result, the pH value of the wastewater is reduced from above 9 to below 7. This ensures that the phenol removal ratio is above 93% in the following extraction units. The operating cost is lower than that of known techniques, and the operation is simplified.

  8. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially, whereby the parameter values are updated in between (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
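    The core of OED/PE, choosing experiments that maximize the information the data carry about the parameters, can be illustrated with a deliberately simple two-parameter model (not the four-parameter CTMI): pick sampling times that maximize the determinant of the Fisher information matrix (D-optimality). The model, nominal parameter values, and time grid are assumptions for the sketch.

```python
import numpy as np

# Toy two-parameter model y(t) = a * exp(-b t) with nominal parameters;
# OED needs such nominal values, which is why sequential designs update them.
a, b = 1.0, 0.5

def fim(times):
    # Fisher information J^T J built from the model sensitivities
    # dy/da = exp(-b t) and dy/db = -a t exp(-b t)
    J = np.array([[np.exp(-b * t), -a * t * np.exp(-b * t)] for t in times])
    return J.T @ J

# D-optimal two-point design: maximize det(FIM) over a grid of time pairs
grid = np.linspace(0.1, 10.0, 100)
t_opt = max(((t1, t2) for t1 in grid for t2 in grid if t1 < t2),
            key=lambda ts: np.linalg.det(fim(ts)))
```

    For this model the optimum pushes one sample to the earliest allowed time, where the amplitude a is most visible, and places the second near t₁ + 1/b, where the sensitivity to the rate b peaks, illustrating how the design concentrates measurements where they are most informative.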

  9. Combined evaluation of grazing incidence X-ray fluorescence and X-ray reflectivity data for improved profiling of ultra-shallow depth distributions

    PubMed Central

    Ingerle, D.; Meirer, F.; Pepponi, G.; Demenev, E.; Giubertoni, D.; Wobrauschek, P.; Streli, C.

    2014-01-01

    The continuous downscaling of the process size for semiconductor devices pushes the junction depths, and consequently the implantation depths, to the top few nanometers of the Si substrate. This motivates the need for sensitive methods capable of analyzing dopant distribution, total dose and possible impurities. X-ray techniques utilizing the external reflection of X-rays are very surface sensitive, hence providing a non-destructive tool for process analysis and control. X-ray reflectometry (XRR) is an established technique for the characterization of single- and multi-layered thin film structures with layer thicknesses in the nanometer range. XRR spectra are acquired by varying the incident angle in the grazing incidence regime while measuring the specularly reflected X-ray beam. The shape of the resulting angle-dependent curve is correlated with changes of the electron density in the sample, but does not provide direct information on the presence or distribution of chemical elements in the sample. Grazing incidence XRF (GIXRF) measures the X-ray fluorescence induced by an X-ray beam incident under grazing angles. The resulting angle-dependent intensity curves are correlated with the depth distribution and mass density of the elements in the sample. GIXRF provides information on contaminations, total implanted dose and, to some extent, on the depth of the dopant distribution, but is ambiguous with regard to the exact distribution function. Both techniques use similar measurement procedures and data evaluation strategies, i.e. optimization of a sample model by fitting measured and calculated angle curves. Moreover, the applied sample models can be derived from the same physical properties, such as atomic scattering/form factors and elemental concentrations; a simultaneous analysis is therefore a straightforward approach. This combined analysis in turn reduces the uncertainties of the individual techniques, allowing a determination of dose and depth profile of the implanted elements with a drastically increased confidence level. Silicon wafers implanted with arsenic at different implantation energies were measured by XRR and GIXRF using a combined, simultaneous measurement and data evaluation procedure. The data were processed using a self-developed software package (JGIXA), designed for simultaneous fitting of GIXRF and XRR data. The results were compared with depth profiles obtained by Secondary Ion Mass Spectrometry (SIMS). PMID:25202165
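    The payoff of the combined evaluation, one parameter set constrained by two measurements at once, can be sketched with toy stand-in models. The actual XRR and GIXRF forward models involve Fresnel coefficients and fluorescence yields; everything below is a hypothetical illustration of the shared-parameter joint fit.

```python
import numpy as np

rng = np.random.default_rng(3)
d_true = 5.0                           # shared physical parameter (e.g. depth)
angles = np.linspace(0.1, 1.0, 50)     # common grazing-angle grid

def xrr_model(d):      # stand-in for the reflectivity angle curve
    return np.cos(d * angles)

def gixrf_model(d):    # stand-in for the fluorescence angle curve
    return np.exp(-angles * d / 5.0)

xrr_data = xrr_model(d_true) + 0.01 * rng.standard_normal(angles.size)
gixrf_data = gixrf_model(d_true) + 0.01 * rng.standard_normal(angles.size)

def joint_chi2(d):
    # Simultaneous evaluation: one residual vector spanning both data sets,
    # so a single parameter must explain XRR and GIXRF at the same time.
    r = np.concatenate([xrr_model(d) - xrr_data, gixrf_model(d) - gixrf_data])
    return r @ r

grid = np.linspace(1.0, 10.0, 901)
d_fit = grid[np.argmin([joint_chi2(d) for d in grid])]
```

    Fitting both curves against one parameter is the same shared-model idea that JGIXA applies to full GIXRF and XRR physics; here the oscillatory curve pins the parameter down while the monotonic one removes ambiguity.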

  10. Two-step voltage dual electromembrane extraction: A new approach to simultaneous extraction of acidic and basic drugs.

    PubMed

    Asadi, Sakine; Nojavan, Saeed

    2016-06-07

    In the present work, acidic and basic drugs were simultaneously extracted by a novel method of high efficiency, herein referred to as two-step voltage dual electromembrane extraction (TSV-DEME). After optimizing effective parameters such as the composition of the organic liquid membrane, the pH values of the donor and acceptor solutions, and the voltage and duration of each step, the method had its figures of merit investigated in pure water, human plasma, wastewater, and breast milk samples. Simultaneous extraction of acidic and basic drugs was done by applying potentials of 150 V and 400 V for 6 min and 19 min as the first and second steps, respectively. The model compounds were extracted from 4 mL of sample solution (pH = 6) into 20 μL of each acceptor solution (32 mM NaOH for acidic drugs and 32 mM HCl for basic drugs). 1-Octanol was immobilized within the pores of a porous polypropylene hollow fiber as the supported liquid membrane (SLM) for acidic drugs, and 2-ethylhexanol as the SLM for basic drugs. The proposed TSV-DEME technique provided good linearity, with correlation coefficients ranging from 0.993 to 0.998 over a concentration range of 1-1000 ng mL(-1). The limits of detection of the drugs ranged from 0.3 to 1.5 ng mL(-1), while the corresponding repeatability ranged from 7.7 to 15.5% (n = 4). The proposed method was further compared to simple dual electromembrane extraction (DEME), indicating significantly higher recoveries for the TSV-DEME procedure (38.1-68%) than for the simple DEME procedure (17.7-46%). Finally, the optimized TSV-DEME was applied to extract and quantify the model compounds in breast milk, wastewater, and plasma samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Quinone-based stable isotope probing for assessment of 13C substrate-utilizing bacteria

    NASA Astrophysics Data System (ADS)

    Kunihiro, Tadao; Katayama, Arata; Demachi, Toyoko; Veuger, Bart; Boschker, Henricus T. S.; van Oevelen, Dick

    2015-04-01

    In this study, we attempted to establish a quinone-based stable isotope probing (SIP) technique to link substrate-utilizing bacterial groups to chemotaxonomic groups within the bacterial community. To identify metabolically active bacterial groups in various environments, SIP techniques combined with biomarkers have been widely utilized as an attractive method in environmental studies. Quantitative SIP approaches have the unique advantage of assessing substrate incorporation into bacteria. As the most common quantitative approach, SIP based on phospholipid-derived fatty acids (PLFA) has been applied to simultaneously assess the substrate-incorporation rate into bacteria and the microbial community structure. This approach is powerful for estimating the incorporation rate because of its high sensitivity, owing to detection by a gas chromatograph-combustion interface-isotope ratio mass spectrometer (GC-c-IRMS). However, its phylogenetic resolution is limited by the specificity of the compound-specific markers. We focused on respiratory quinones as biomarkers. Our previous study found a good correlation between concentrations of bacteria-specific PLFAs and quinones over several orders of magnitude in various marine sediments, and the quinone method resolves differences in bacterial community composition at a higher resolution (bacterial phylum level) than bacterial PLFA. Therefore, respiratory quinones are potentially good biomarkers for quantitative SIP approaches. The LC-APCI-MS method, a molecular-mass-based detection method for quinones, was developed and provides useful structural information for identifying quinone molecular species in environmental samples. LC-MS/MS on a hybrid triple quadrupole/linear ion trap, which enables simultaneous identification and quantification of compounds in a single analysis, can detect high-molecular-mass compounds together with their isotope ions. Use of LC-MS/MS allows us to develop quinone-SIP based on molecular mass differences due to 13C abundance in the quinone. In this study, we verified the carbon stable isotope composition of quinones against the bulk carbon stable isotope composition of bacterial cultures. The results indicated a good correlation between the two. However, our measurement conditions for the detection of quinone isotope ions incurred an underestimation of 13C abundance in the quinone. The quinone-SIP technique thus needs further optimization of the LC-MS/MS measurement conditions.

  12. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total make-span and the number of setups simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy that are representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC). The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, in contrast to using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of these five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing on a multiobjective fuzzy mixed production assembly line sequencing optimization problem. The simulation results highlight that the proposed novel optimization algorithm is more efficient than the standard genetic algorithm in the mixed assembly line sequencing model. PMID:24982962
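    The idea of dynamic parameter control can be reduced to a minimal sketch: a permutation GA for a toy sequence-dependent setup-cost problem whose mutation rate is adapted from population diversity each generation. The fuzzy controllers (FEEC/ALDFC) are replaced here by a simple crisp rule, and the instance data and all rates are invented for illustration.

```python
import random

random.seed(1)

# Toy mixed-model sequencing instance: setup[i][j] is the changeover cost
# incurred when model j follows model i on the line (invented numbers).
setup = [[0, 3, 5, 2, 6],
         [3, 0, 4, 7, 2],
         [5, 4, 0, 3, 4],
         [2, 7, 3, 0, 5],
         [6, 2, 4, 5, 0]]

def cost(seq):
    return sum(setup[i][j] for i, j in zip(seq, seq[1:]))

def mutate(seq, rate):
    s = list(seq)
    if random.random() < rate:                    # swap mutation
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
    return s

pop = [random.sample(range(5), 5) for _ in range(30)]
rate = 0.1
for gen in range(100):
    pop.sort(key=cost)
    # Dynamic parameter control (a crisp stand-in for the fuzzy
    # controllers): raise mutation when the population has converged,
    # lower it while it is still diverse.
    diversity = len({tuple(s) for s in pop}) / len(pop)
    rate = min(0.9, max(0.05, 0.9 * (1.0 - diversity)))
    elite = pop[:10]                              # elitist survival
    pop = elite + [mutate(random.choice(elite), rate) for _ in range(20)]

best = min(pop, key=cost)
```

    A fuzzy version would replace the one-line rate rule with membership functions over diversity (and other indicators such as stagnation), which is the role the paper's FEEC and ALDFC controllers play for all five GA parameters.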

  13. A novel rapid and reproducible flow cytometric method for optimization of transfection efficiency in cells

    PubMed Central

    Homann, Stefanie; Hofmann, Christian; Gorin, Aleksandr M.; Nguyen, Huy Cong Xuan; Huynh, Diana; Hamid, Phillip; Maithel, Neil; Yacoubian, Vahe; Mu, Wenli; Kossyvakis, Athanasios; Sen Roy, Shubhendu; Yang, Otto Orlean

    2017-01-01

    Transfection is one of the most frequently used techniques in molecular biology that is also applicable for gene therapy studies in humans. One of the biggest challenges to investigate the protein function and interaction in gene therapy studies is to have reliable monospecific detection reagents, particularly antibodies, for all human gene products. Thus, a reliable method that can optimize transfection efficiency based on not only expression of the target protein of interest but also the uptake of the nucleic acid plasmid, can be an important tool in molecular biology. Here, we present a simple, rapid and robust flow cytometric method that can be used as a tool to optimize transfection efficiency at the single cell level while overcoming limitations of prior established methods that quantify transfection efficiency. By using optimized ratios of transfection reagent and a nucleic acid (DNA or RNA) vector directly labeled with a fluorochrome, this method can be used as a tool to simultaneously quantify cellular toxicity of different transfection reagents, the amount of nucleic acid plasmid that cells have taken up during transfection as well as the amount of the encoded expressed protein. Finally, we demonstrate that this method is reproducible, can be standardized and can reliably and rapidly quantify transfection efficiency, reducing assay costs and increasing throughput while increasing data robustness. PMID:28863132

  14. Computational multiobjective topology optimization of silicon anode structures for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Mitchell, Sarah L.; Ortiz, Michael

    2016-09-01

    This study utilizes computational topology optimization methods for the systematic design of optimal multifunctional silicon anode structures for lithium-ion batteries. In order to develop next generation high performance lithium-ion batteries, key design challenges relating to the silicon anode structure must be addressed, namely the lithiation-induced mechanical degradation and the low intrinsic electrical conductivity of silicon. As such this work considers two design objectives, the first being minimum compliance under design dependent volume expansion, and the second maximum electrical conduction through the structure, both of which are subject to a constraint on material volume. Density-based topology optimization methods are employed in conjunction with regularization techniques, a continuation scheme, and mathematical programming methods. The objectives are first considered individually, during which the influence of the minimum structural feature size and prescribed volume fraction are investigated. The methodology is subsequently extended to a bi-objective formulation to simultaneously address both the structural and conduction design criteria. The weighted sum method is used to derive the Pareto fronts, which demonstrate a clear trade-off between the competing design objectives. A rigid frame structure was found to be an excellent compromise between the structural and conduction design criteria, providing both the required structural rigidity and direct conduction pathways. The developments and results presented in this work provide a foundation for the informed design and development of silicon anode structures for high performance lithium-ion batteries.
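
    The weighted sum scalarization used above to derive the Pareto fronts can be sketched on a toy bi-objective problem (the objectives f1(x) = x^2 and f2(x) = (x-1)^2 are illustrative stand-ins, chosen so the scalarized minimizer has a closed form):

```python
def pareto_front_weighted_sum(weights):
    """Weighted-sum scalarisation of f1(x)=x^2 and f2(x)=(x-1)^2:
    the minimiser of w*f1 + (1-w)*f2 is x* = 1 - w in closed form
    (set the derivative 2*w*x + 2*(1-w)*(x-1) to zero)."""
    front = []
    for w in weights:
        x = 1.0 - w                              # scalarised minimiser
        front.append((x ** 2, (x - 1.0) ** 2))   # (f1, f2) Pareto point
    return front

# Sweep the weight to trace the trade-off curve between objectives.
front = pareto_front_weighted_sum([i / 10 for i in range(11)])
```

    Each weight yields one Pareto point; sweeping it exposes the trade-off (here f1 falls monotonically as f2 rises), exactly the structure the study reports between compliance and conduction.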

  15. A computational framework for simultaneous estimation of muscle and joint contact forces and body motion using optimization and surrogate modeling.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2018-04-01

    Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.
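
    The surrogate-modeling idea, replacing a non-smooth, expensive joint-contact evaluation with a smooth, cheap approximation for use inside gradient-based optimization, can be sketched as follows (the contact law and units are hypothetical stand-ins, not the authors' model):

```python
import numpy as np

def expensive_contact_force(depth_mm):
    """Stand-in for a costly elastic joint-contact analysis:
    no force before contact, Hertz-like stiffening after it
    (non-smooth at zero penetration)."""
    return np.where(depth_mm > 0, 50.0 * depth_mm ** 1.5, 0.0)

# Sample the expensive model once, offline.
d = np.linspace(0.0, 1.0, 50)          # penetration depth, mm
f = expensive_contact_force(d)

# Fit a smooth cubic surrogate that a gradient-based optimizer
# can evaluate cheaply and differentiate everywhere.
coeffs = np.polyfit(d, f, deg=3)
surrogate = np.poly1d(coeffs)

max_err = float(np.max(np.abs(surrogate(d) - f)))
```

    Inside the optimization loop only the polynomial is evaluated, which is both fast and smooth; the expensive static analysis is consulted only when building (or refreshing) the surrogate.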

  16. Optimization design of strong and tough nacreous nanocomposites through tuning characteristic lengths

    NASA Astrophysics Data System (ADS)

    Ni, Yong; Song, Zhaoqiang; Jiang, Hongyuan; Yu, Shu-Hong; He, Linghui

    2015-08-01

    How nacreous nanocomposites with optimal combinations of stiffness, strength and toughness depend on constituent property and microstructure parameters is studied using a nonlinear shear-lag model. We show that the interfacial elasto-plasticity and the overlapping length between bricks, which depend on the brick size and brick staggering mode, significantly affect the nonuniformity of the shear stress, the stress-transfer efficiency and thus the failure path. There are two characteristic lengths at which the strength and toughness are optimized, respectively. Simultaneous optimization of the strength and toughness is achieved by matching these lengths as closely as possible in the nacreous nanocomposite with a regularly staggered brick-and-mortar (BM) structure, where simultaneous uniform failures of the brick and interface occur. In the randomly staggered BM structure, as the overlapping length is distributed, the nacreous nanocomposite turns the simultaneous uniform failure into progressive interface or brick failure with a moderate decrease of the strength and toughness. Specifically, there is a parametric range in which the strength and toughness are insensitive to the brick staggering randomness. The obtained results propose a parametric selection guideline based on length matching for the rational design of nacreous nanocomposites. Such a guideline explains why nacre is strong and tough while most artificial nacreous nanocomposites are not.

  17. Simultaneous Scheduling of Jobs, AGVs and Tools Considering Tool Transfer Times in Multi Machine FMS By SOS Algorithm

    NASA Astrophysics Data System (ADS)

    Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.

    2017-08-01

    This article addresses the simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools between machines, to generate optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). The performance of an FMS is expected to improve through effective utilization of its resources, by proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a proven and potent alternative for solving optimization problems such as scheduling. The proposed SOS algorithm is first tested on 22 job sets, with makespan as the objective, for scheduling of machines and tools where machines are allowed to share tools without considering transfer times of jobs and tools, and the results are compared with those of existing methods; the results show that SOS outperforms them. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools, to determine the optimal sequences that minimize makespan.

  18. Simultaneous recovery of Ni and Cu from computer-printed circuit boards using bioleaching: statistical evaluation and optimization.

    PubMed

    Arshadi, M; Mousavi, S M

    2014-12-01

    Computer printed circuit boards (CPCBs) have a rich metal content and are produced in high volume, making them an important component of electronic waste. The present study used a pure culture of Acidithiobacillus ferrooxidans to leach Cu and Ni from CPCBs waste. The adaptation phase began at 1 g/l CPCBs powder with 10% inoculation, and a final pulp density of 20 g/l was reached after about 80 d. Four effective factors, including initial pH, particle size, pulp density, and initial Fe(3+) concentration, were optimized to achieve maximum simultaneous recovery of Cu and Ni. Their interactions were also identified using central composite design in response surface methodology. The suggested optimal conditions were initial pH 3, initial Fe(3+) 8.4 g/l, pulp density 20 g/l and particle size 95 μm. Nearly 100% of Cu and Ni were simultaneously recovered under optimum conditions. Finally, bacterial growth characteristics versus time at optimum conditions were plotted. Copyright © 2014 Elsevier Ltd. All rights reserved.
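
    The response-surface step used above, fitting a second-order model to designed-experiment data and locating its stationary point, can be sketched for a single factor (the pH/recovery numbers below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical single-factor slice of a response surface: recovery (%)
# versus initial pH, peaking inside the design range.
ph = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
recovery = np.array([62.0, 85.0, 97.0, 88.0, 60.0])

# Second-order model y = a*x**2 + b*x + c fitted by least squares,
# the workhorse of response surface methodology.
a, b, c = np.polyfit(ph, recovery, deg=2)

# Stationary point of the fitted parabola: the predicted optimal pH.
ph_opt = -b / (2.0 * a)
```

    A central composite design extends the same idea to several factors at once, adding cross terms whose fitted coefficients quantify the factor interactions the study reports.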

  19. Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics.

    PubMed

    Fu, Yue; Chai, Tianyou

    2016-12-01

    Regarding two-player zero-sum games of continuous-time nonlinear systems with completely unknown dynamics, this paper presents an online adaptive algorithm for learning the Nash equilibrium solution, i.e., the optimal policy pair. First, for known systems, the simultaneous policy updating algorithm (SPUA) is reviewed. A new analytical method to prove the convergence is presented. Then, based on the SPUA, without using a priori knowledge of any system dynamics, an online algorithm is proposed to simultaneously learn in real time either the minimal nonnegative solution of the Hamilton-Jacobi-Isaacs (HJI) equation or the generalized algebraic Riccati equation for linear systems as a special case, along with the optimal policy pair. The approximate solution to the HJI equation and the admissible policy pair is reexpressed by the approximation theorem. The unknown constants or weights of each are identified simultaneously by resorting to the recursive least square method. The convergence of the online algorithm to the optimal solutions is provided. A practical online algorithm is also developed. Simulation results illustrate the effectiveness of the proposed method.

  20. Multiplex RT-PCR and indirect immunofluorescence assays for detection and subtyping of human influenza virus in Tunisia.

    PubMed

    Ben M'hadheb, Manel; Harrabi, Myriam; Souii, Amira; Jrad-Battikh, Nadia; Gharbi, Jawhar

    2015-03-01

    Influenza viruses are negative-stranded, segmented RNA viruses belonging to the Orthomyxoviridae family. They are classified into three types: A, B, and C. Type A influenza viruses are further classified into subtypes according to the antigenic characters of the surface glycoproteins hemagglutinin (H) and neuraminidase (N). The aim of the present study is to develop a fast and reliable multiplex RT-PCR technique for simultaneously detecting the A/H1N1 and A/H3N2 subtypes of influenza virus. Our study included 398 patients (mean age 30.33 ± 19.92 years) with flu or flu-like syndromes who consulted physicians affiliated with collaborating teams. A multiplex RT-PCR detecting A/H1N1 and A/H3N2 influenza viruses and an examination by indirect immunofluorescence (IFI) were performed. Under the optimized conditions, we diagnosed a viral infection by IFI in 90 patients (22.6 %): 85 cases of influenza type A, four cases of influenza type B, and only one case of coinfection with types A and B. An evaluation of the technique was performed on 19 clinical specimens positive in IFI, and we detected eight cases of A/H3N2, five cases of A/H1N1, one case of influenza virus type A that was neither H1N1 nor H3N2, and five negative cases. Multiplex RT-PCR is thus a sensitive technique allowing effective and fast diagnosis of respiratory infections caused by influenza viruses, although its optimization is often hampered by sensitivity problems.

  1. Active edge control in the precessions polishing process for manufacturing large mirror segments

    NASA Astrophysics Data System (ADS)

    Li, Hongyu; Zhang, Wei; Walker, David; Yu, Gouyo

    2014-09-01

    The segmentation of the primary mirror is the only promising solution for building the next generation of ground telescopes. However, manufacturing segmented mirrors presents its own challenges. Edge mis-figure impacts directly on the telescope's scientific output, and the 'edge effect' largely determines the achievable polishing precision. Therefore, edge control is regarded as one of the most difficult technical issues in segment production that needs to be addressed urgently. This paper reports an active edge control technique for mirror segment fabrication using the Precessions polishing technique. The strategy is to select a large polishing spot on the bulk area for fast polishing, while a small spot is used for edge figuring. This is performed by tool lift and by optimizing the dwell time to compensate for non-uniform material removal at the edge zone, which requires accurate and stable edge tool influence functions. To obtain the full tool influence function at the edge, we demonstrated in previous work a novel hybrid measurement method that uses both simultaneous phase interferometry and profilometry. In this paper, the edge effect under bonnet-tool polishing is investigated. The pressure distribution is analyzed by means of finite element analysis (FEA), and the shape of the edge tool influence functions is predicted according to the Preston equation. With this information, the multiple process parameters at the edge zone are optimized. This is demonstrated on a 200 mm cross-corners hexagonal part, with a resulting PV of less than 200 nm over the entire surface.
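
    The Preston relation invoked above states that local removal depth scales with pressure, relative speed, and dwell time. A minimal sketch with a hypothetical near-Gaussian bonnet pressure footprint (all numbers illustrative):

```python
import numpy as np

def preston_removal(pressure, velocity, dwell_time, k_p=1e-13):
    """Preston's law: removal depth = k_p * P * V * t at each point."""
    return k_p * pressure * velocity * dwell_time

# Hypothetical bonnet footprint: near-Gaussian pressure over the
# contact spot, uniform surface speed, unit dwell time.
r = np.linspace(-5e-3, 5e-3, 101)              # radial coordinate, m
pressure = 2.0e4 * np.exp(-(r / 2e-3) ** 2)    # Pa
tif = preston_removal(pressure, velocity=0.5, dwell_time=1.0)

# Peak removal at the spot centre; a dwell-time map rescales this
# profile point by point, which is the lever used for edge control.
peak_removal = float(tif.max())                # metres
```

    At the part edge the pressure distribution changes (as the FEA analysis quantifies), so the tool influence function deviates from this interior profile and the dwell map must be re-optimized there.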

  2. Optimization of a dual-energy contrast-enhanced technique for a photon-counting digital breast tomosynthesis system: I. A theoretical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carton, Ann-Katherine; Ullberg, Christer; Lindman, Karin

    2010-11-15

    Purpose: Dual-energy (DE) iodine contrast-enhanced x-ray imaging of the breast has been shown to identify cancers that would otherwise be mammographically occult. In this article, theoretical modeling was performed to obtain optimally enhanced iodine images for a photon-counting digital breast tomosynthesis (DBT) system using a DE acquisition technique. Methods: In the system examined, the breast is scanned with a multislit prepatient collimator aligned with a multidetector camera. Each detector collects a projection image at a unique angle during the scan. Low-energy (LE) and high-energy (HE) projection images are acquired simultaneously in a single scan by covering alternate collimator slits with Sn and Cu filters, respectively. Sn filters ranging from 0.08 to 0.22 mm thickness and Cu filters from 0.11 to 0.27 mm thickness were investigated. A tube voltage of 49 kV was selected. Tomographic images, hereafter referred to as DBT images, were reconstructed using a shift-and-add algorithm. Iodine-enhanced DBT images were acquired by performing a weighted logarithmic subtraction of the HE and LE DBT images. The DE technique was evaluated for 20-80 mm thick breasts. Weighting factors, w_t, that optimally cancel breast tissue were computed. Signal-difference-to-noise ratios (SDNRs) between iodine-enhanced and nonenhanced breast tissue, normalized to the square root of the mean glandular dose (MGD), were computed as a function of the fraction of the MGD allocated to the HE images. Peak SDNR/√MGD values and optimal dose allocations were identified. SDNR/√MGD and dose allocations were computed for several practically feasible system configurations (i.e., determined by the number of collimator slits covered by Sn and Cu). A practical system configuration and Sn-Cu filter pair that account for the trade-off between SDNR, tube output, and MGD were selected.
Results: w_t depends on the Sn-Cu filter combination used, as well as on the breast thickness; to optimally cancel breast tissue of 0% to 50% glandularity, w_t values were found to range from 0.46 to 0.72 for all breast thicknesses and Sn-Cu filter pairs studied. The optimal w_t values needed to cancel all possible breast tissue glandularities vary by less than 1% for 20 mm thick breasts and 18% for 80 mm breasts. The system configuration in which one collimator slit covered by Sn is alternated with two collimator slits covered by Cu delivers SDNR/√MGD nearest to the peak value. A reasonable compromise is a 0.16 mm Sn-0.23 mm Cu filter pair, resulting in SDNR values between 1.64 and 0.61 and MGD between 0.70 and 0.53 mGy for 20-80 mm thick breasts at the maximum tube current. Conclusions: A DE acquisition technique for a photon-counting DBT imaging system has been developed and optimized.
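
    The weighted logarithmic subtraction described above can be illustrated with a minimal sketch (hypothetical transmission values and a uniform background; the weighting factor is chosen here so the background cancels exactly, which is the cancellation criterion the study optimizes):

```python
import numpy as np

def de_iodine_image(high, low, w_t):
    """Weighted logarithmic subtraction of the HE and LE images:
    DE = ln(HE) - w_t * ln(LE), cancelling non-enhanced tissue."""
    return np.log(high) - w_t * np.log(low)

# Hypothetical transmission images: uniform background tissue plus an
# iodine insert that attenuates the LE beam more than the HE beam.
low = np.full((4, 4), 0.30)
high = np.full((4, 4), 0.60)
low[1:3, 1:3] *= 0.70       # iodine reduces LE transmission strongly
high[1:3, 1:3] *= 0.90      # and HE transmission weakly

# Weighting factor chosen so the uniform background cancels exactly.
w_t = np.log(0.60) / np.log(0.30)
de = de_iodine_image(high, low, w_t)
```

    The background pixels vanish in the DE image while the iodine region remains positive; in practice tissue is not uniform, which is why the optimal w_t varies with filter pair and breast thickness.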

  3. Determination of ultratrace elements in natural waters by solid-phase extraction and atomic spectrometry methods.

    PubMed

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Frache, Roberto

    2003-01-01

    A study was carried out on the preconcentration of ultratrace amounts of cadmium, lead, manganese, copper and iron from high-salinity aqueous samples and determination by atomic spectrometry methods. Sample volume, amount of resin, loading flow rate, and elution volume were optimized in order to obtain the simultaneous preconcentration of all the analytes. Quantitative recoveries were obtained by using 200 mg of iminodiacetic resin with a loading flow rate of 2 mL min(-1), elution volume of 3 mL and sample volume of 50-450 mL. Only copper in seawater samples was not completely retained by the resin (60-70% recovery), due to unfavorable competition of iminodiacetic-active groups with organically bound metal. To quantify the metals in the eluates, two atomic spectrometry techniques were compared: electrothermal atomization atomic absorption spectrometry (ETAAS) and inductively coupled plasma-optical emission spectrometry (ICP-OES) with simultaneous CCD detection system. Both techniques are suitable for sample analysis with detection limits of 1.0, 4.7, 3.3, 6.8, and 53 ng L(-1) using ETAAS and 12, 122, 3.4, 17, and 21 ng L(-1) using ICP-OES for Cd, Pb, Mn, Cu, and Fe, respectively. Relative standard deviations of the procedures ranged from 1.7 to 14% at the sub-microg L(-1) concentration level. The accuracy of both methods was verified by analyzing various certified reference materials (river water, estuarine water, coastal and off-shore seawater).

  4. Encoding probabilistic brain atlases using Bayesian inference.

    PubMed

    Van Leemput, Koen

    2009-06-01

    This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.

  5. A high efficiency gene disruption strategy using a positive-negative split selection marker and electroporation for Fusarium oxysporum.

    PubMed

    Liang, Liqin; Li, Jianqiang; Cheng, Lin; Ling, Jian; Luo, Zhongqin; Bai, Miao; Xie, Bingyan

    2014-11-01

    The Fusarium oxysporum species complex consists of fungal pathogens that cause serious vascular wilt disease on more than 100 cultivated species throughout the world. Gene function analysis is rapidly becoming more and more important as the whole-genome sequences of various F. oxysporum strains are being completed. Gene-disruption techniques are a common molecular tool for studying gene function, yet are often a limiting step in gene function identification. In this study we have developed a high-efficiency F. oxysporum gene-disruption strategy based on split-marker homologous recombination cassettes with dual selection and electroporation transformation. The method was efficiently used to delete three RNA-dependent RNA polymerase (RdRP) genes. The gene-disruption cassettes of three genes can be constructed simultaneously within a short time using this technique. The optimal conditions for electroporation are 10 μF capacitance, 300 Ω resistance, and 4 kV/cm field strength, with 1 μg of DNA (gene-disruption cassettes). Under these optimal conditions, we were able to obtain 95 transformants per μg DNA. After positive-negative selection, the transformants were efficiently screened by PCR; screening efficiency averaged 85%: 90% (RdRP1), 85% (RdRP2) and 77% (RdRP3). This gene-disruption strategy should pave the way for high-throughput genetic analysis in F. oxysporum. Copyright © 2014 Elsevier GmbH. All rights reserved.

  6. Physical-level synthesis for digital lab-on-a-chip considering variation, contamination, and defect.

    PubMed

    Liao, Chen; Hu, Shiyan

    2014-03-01

    Microfluidic lab-on-a-chips have been widely utilized in biochemical analysis and human health studies due to high detection accuracy, high timing efficiency, and low cost. The increasing design complexity of lab-on-a-chips necessitates the computer-aided design (CAD) methodology in contrast to the classical manual design methodology. A key part in lab-on-a-chip CAD is physical-level synthesis. It includes the lab-on-a-chip placement and routing, where placement is to determine the physical location and the starting time of each operation and routing is to transport each droplet from the source to the destination. In the lab-on-a-chip design, variation, contamination, and defect need to be considered. This work designs a physical-level synthesis flow which simultaneously considers variation, contamination, and defect of the lab-on-a-chip design. It proposes a maze routing based, variation, contamination, and defect aware droplet routing technique, which is seamlessly integrated into an existing placement technique. The proposed technique improves the placement solution for routing and achieves the placement and routing co-optimization to handle variation, contamination, and defect. The simulation results demonstrate that our technique does not use any defective/contaminated grids, while the technique without considering contamination and defect uses 17.0% of the defective/contaminated grids on average. In addition, our routing variation aware technique significantly improves the average routing yield by 51.2% with only 3.5% increase in completion time compared to a routing variation unaware technique.
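
    The maze-routing core of such a droplet router can be sketched with classic breadth-first search on a toy electrode grid (the layout and blocked cells below are hypothetical, and real routers also handle timing and droplet-interference constraints):

```python
from collections import deque

def maze_route(grid, src, dst):
    """BFS maze routing on a 2-D grid: returns a shortest droplet path
    from src to dst that never enters a blocked (defective or
    contaminated) cell, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}
    queue = deque([src])
    while queue:
        cell = queue.popleft()
        if cell == dst:                      # backtrack to build the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None

# 0 = usable electrode, 1 = defective/contaminated (never entered).
chip = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
path = maze_route(chip, (0, 0), (2, 3))
```

    Marking defective or contaminated grids as blocked before the search is how a router guarantees they are never used, the property the simulation results above report.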

  7. Simultaneous determination of macronutrients, micronutrients and trace elements in mineral fertilizers by inductively coupled plasma optical emission spectrometry

    NASA Astrophysics Data System (ADS)

    de Oliveira Souza, Sidnei; da Costa, Silvânio Silvério Lopes; Santos, Dayane Melo; dos Santos Pinto, Jéssica; Garcia, Carlos Alexandre Borges; Alves, José do Patrocínio Hora; Araujo, Rennan Geovanny Oliveira

    2014-06-01

    An analytical method for simultaneous determination of macronutrients (Ca, Mg, Na and P), micronutrients (Cu, Fe, Mn and Zn) and trace elements (Al, As, Cd, Pb and V) in mineral fertilizers was optimized. Two-level full factorial design was applied to evaluate the optimal proportions of reagents used in the sample digestion on hot plate. A Doehlert design for two variables was used to evaluate the operating conditions of the inductively coupled plasma optical emission spectrometer in order to accomplish the simultaneous determination of the analyte concentrations. The limits of quantification (LOQs) ranged from 2.0 mg kg(-1) for Mn to 77.3 mg kg(-1) for P. The accuracy and precision of the proposed method were evaluated by analysis of standard reference materials (SRMs) of Western phosphate rock (NIST 694), Florida phosphate rock (NIST 120C) and Trace elements in multi-nutrient fertilizer (NIST 695), considered to be adequate for simultaneous determination. Twenty-one samples of mineral fertilizers collected in Sergipe State, Brazil, were analyzed. For all samples, the As, Ca, Cd and Pb concentrations were below the LOQ values of the analytical method. For As, Cd and Pb the obtained LOQ values were below the maximum limit allowed by the Brazilian Ministry of Agriculture, Livestock and Food Supply (Ministério da Agricultura, Pecuária e Abastecimento - MAPA). The optimized method presented good accuracy and was effectively applied to quantitative simultaneous determination of the analytes in mineral fertilizers by inductively coupled plasma optical emission spectrometry (ICP OES).

  8. Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques

    PubMed Central

    Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. 
The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695

  9. Large-scale bi-level strain design approaches and mixed-integer programming solution techniques.

    PubMed

    Kim, Joonhoon; Reed, Jennifer L; Maravelias, Christos T

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. 
The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering.

  10. Optimizing Equivalence-Based Instruction: Effects of Training Protocols on Equivalence Class Formation

    ERIC Educational Resources Information Center

    Fienup, Daniel M.; Wright, Nicole A.; Fields, Lanny

    2015-01-01

    Two experiments evaluated the effects of the simple-to-complex and simultaneous training protocols on the formation of academically relevant equivalence classes. The simple-to-complex protocol intersperses derived relations probes with training baseline relations. The simultaneous protocol conducts all training trials and test trials in separate…

  11. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach considerably simplifies and accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is also indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
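    The core GA-MLR idea, evolving only the nonlinear decay rates and solving for the amplitudes exactly at each fitness evaluation, can be sketched for a biexponential decay. This is a minimal illustration, not the author's implementation: the model y(t) = a1·exp(-k1·t) + a2·exp(-k2·t), the population size, and the blend-and-mutate scheme are all assumptions.

    ```python
    import math, random

    def linear_fit(t, y, k1, k2):
        """MLR step: best amplitudes (a1, a2) for fixed decay rates, via the 2x2 normal equations."""
        b1 = [math.exp(-k1 * ti) for ti in t]
        b2 = [math.exp(-k2 * ti) for ti in t]
        s11 = sum(u * u for u in b1)
        s22 = sum(v * v for v in b2)
        s12 = sum(u * v for u, v in zip(b1, b2))
        r1 = sum(u * yi for u, yi in zip(b1, y))
        r2 = sum(v * yi for v, yi in zip(b2, y))
        det = s11 * s22 - s12 * s12
        return (r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det

    def chi2(t, y, k1, k2):
        """Fitness of a chromosome (k1, k2): residual sum of squares after the MLR step."""
        a1, a2 = linear_fit(t, y, k1, k2)
        return sum((a1 * math.exp(-k1 * ti) + a2 * math.exp(-k2 * ti) - yi) ** 2
                   for ti, yi in zip(t, y))

    def ga_mlr(t, y, pop=24, gens=30, seed=1):
        """GA over the nonlinear decay rates only; the amplitudes are never GA genes."""
        rng = random.Random(seed)
        popn = [(rng.uniform(0.01, 5.0), rng.uniform(0.01, 5.0)) for _ in range(pop)]
        for _ in range(gens):
            popn.sort(key=lambda g: chi2(t, y, *g))  # elitism: the best rates always survive
            elite = popn[:pop // 4]
            children = []
            while len(elite) + len(children) < pop:
                (k1a, k2a), (k1b, k2b) = rng.sample(elite, 2)
                children.append((0.5 * (k1a + k1b) * math.exp(rng.gauss(0, 0.1)),
                                 0.5 * (k2a + k2b) * math.exp(rng.gauss(0, 0.1))))
            popn = elite + children
        best = min(popn, key=lambda g: chi2(t, y, *g))
        return best, linear_fit(t, y, *best)
    ```

    Because the amplitudes are recovered in closed form at every evaluation, the GA chromosome holds only (k1, k2), which is exactly what keeps the search space small in the GA-MLR scheme.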

  12. Overview: MURI Center on spectroscopic and time domain detection of trace explosives in condensed and vapor phases

    NASA Astrophysics Data System (ADS)

    Spicer, James B.; Dagdigian, Paul; Osiander, Robert; Miragliotta, Joseph A.; Zhang, Xi-Cheng; Kersting, Roland; Crosley, David R.; Hanson, Ronald K.; Jeffries, Jay

    2003-09-01

The research center established by the Army Research Office under the Multidisciplinary University Research Initiative program pursues a multidisciplinary approach to investigate and advance the use of complementary analytical techniques for sensing of explosives and/or explosive-related compounds as they occur in the environment. The techniques being investigated include Terahertz (THz) imaging and spectroscopy, Laser-Induced Breakdown Spectroscopy (LIBS), Cavity Ring Down Spectroscopy (CRDS) and Resonance Enhanced Multiphoton Ionization (REMPI). This suite of techniques encompasses a diversity of sensing approaches that can be applied to the detection of explosives in condensed phases, such as adsorbed species in soil, or can be used for vapor-phase detection above the source. Some techniques allow for remote detection, while others have highly specific and sensitive analysis capabilities. This program is addressing a range of fundamental, technical issues associated with trace detection of explosive-related compounds using these techniques. For example, while both LIBS and THz can be used to carry out remote analysis of condensed-phase analytes at distances in excess of several meters, the sensitivities of these techniques to surface-adsorbed explosive-related compounds are not currently known. In current implementations, both CRDS and REMPI require sample collection techniques that have not been optimized for environmental applications. Early program elements will pursue the fundamental advances required for these techniques, including signature identification for explosive-related compounds/interferents and trace analyte extraction. Later program tasks will explore simultaneous application of two or more techniques to assess the benefits of sensor fusion.

  13. Spectral-decomposition techniques for the identification of radon anomalies temporally associated with earthquakes occurring in the UK in 2002 and 2008.

    NASA Astrophysics Data System (ADS)

    Crockett, R. G. M.; Gillmore, G. K.

    2009-04-01

During the second half of 2002, the University of Northampton Radon Research Group operated two continuous hourly-sampling radon detectors 2.25 km apart in Northampton, in the (English) East Midlands. This period included the Dudley earthquake (22/09/2002), which was widely noticed by members of the public in the Northampton area. At various periods during 2008, the Group operated another pair of continuous hourly-sampling radon detectors a similar distance apart in Northampton. One such period included the Market Rasen earthquake (27/02/2008), which was also widely noticed by members of the public in the Northampton area. During each period of monitoring, two time-series of radon readings were obtained, one from each detector. These have been analysed for evidence of simultaneous similar anomalies, the premise being that large disturbances occurring at large distances (relative to the detector separation) should produce simultaneous similar anomalies, whereas simultaneous anomalies occurring by chance will be dissimilar. As previously reported, cross-correlating the two 2002 time-series over periods of 1-30 days duration, rolled forward through the time-series at one-hour intervals, produced two periods of significant correlation, i.e. two periods of simultaneous similar behaviour in the radon concentrations. One of these periods corresponded in time to the Dudley earthquake; the other corresponded in time to a smaller earthquake which occurred in the English Channel (26/08/2002). We here report the subsequent investigation of the 2002 and 2008 time-series using spectral-decomposition techniques. These techniques have revealed additional simultaneous similar behaviour in the two radon concentrations, not revealed by the rolling correlation on the raw data. These episodes correspond in time to the Manchester earthquake swarm of October 2002 and the Market Rasen earthquake of February 2008.
The spectral-decomposition techniques effectively 'de-noise' the data and also remove lower-frequency variations (e.g. tidal variations), revealing the simultaneous similarities. Whilst this is very much work in progress, such techniques raise the possibility that simultaneous real-time monitoring of radon levels, for short-term simultaneous anomalies, at several locations in earthquake-prone areas might provide the core of an earthquake prediction method. Keywords: radon; earthquakes; time series; cross-correlation; spectral decomposition; real-time simultaneous monitoring.
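    The rolling-correlation step described above (a fixed-length window correlating the two detectors' series, advanced one sample at a time) can be sketched as follows. The window length and the synthetic series are illustrative assumptions, not the Group's actual data or parameters.

    ```python
    import math

    def pearson(x, y):
        """Pearson correlation coefficient of two equal-length windows."""
        n = len(x)
        mx = sum(x) / n
        my = sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / math.sqrt(vx * vy)

    def rolling_correlation(series_a, series_b, window):
        """Correlate two co-located radon series over a window rolled forward one sample at a time."""
        return [pearson(series_a[i:i + window], series_b[i:i + window])
                for i in range(len(series_a) - window + 1)]
    ```

    A shared anomaly (a distant disturbance affecting both detectors at once) pushes the windowed correlation toward 1, while chance coincidences between otherwise independent signals do not.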

  14. A novel method for simultaneous measurement of doped optical fiber parameters

    NASA Astrophysics Data System (ADS)

    Karimi, M.; Seraji, F. E.

    2010-05-01

A technique for the simultaneous measurement of doped optical fiber (DOF) parameters is a valuable scheme for the DOF production industry. In this paper, we introduce a novel technique to simultaneously characterize the main parameters of a DOF, such as the absorption and emission cross-sections (ACS, ECS), the background loss coefficient (BLC), and low dopant concentrations, using the gain equation of DOFs. We used this new method to determine the ACS, ECS and BLC in a standard sample of Al-P-Erbium doped optical fiber. The results have been analyzed and compared with other reports.

  15. Simultaneous fingerprint and high-wavenumber fiber-optic Raman endoscopy for in vivo diagnosis of laryngeal cancer

    NASA Astrophysics Data System (ADS)

    Lin, Kan; Zheng, Wei; Wang, Jianfeng; Lim, Chwee Ming; Huang, Zhiwei

    2016-02-01

    We report a unique simultaneous fingerprint (FP) and high-wavenumber (HW) fiber-optic confocal Raman spectroscopy for in vivo diagnosis of laryngeal cancer in the head and neck under wide-field endoscopic imaging. The simultaneous FP and HW Raman endoscopy technique was performed on 21 patients and differentiated laryngeal carcinoma from normal tissues with both sensitivity and specificity of ~85%. This study shows the great potential of the FP/HW Raman endoscopic technique developed for in vivo diagnosis of laryngeal carcinoma during routine endoscopic examination.

  16. Simultaneous immersion Mirau interferometry.

    PubMed

    Lyulko, Oleksandra V; Randers-Pehrson, Gerhard; Brenner, David J

    2013-05-01

A novel technique for label-free imaging of live biological cells in aqueous medium that is insensitive to ambient vibrations is presented. This technique is a spin-off from previously developed immersion Mirau interferometry. Both approaches utilize a modified Mirau interferometric attachment for a microscope objective that can be used both in air and in immersion mode, when the device is submerged in cell medium and has its internal space filled with liquid. Because immersion Mirau interferometry involves capturing a series of images sequentially, the resulting images are potentially distorted by ambient vibrations. To overcome this serial-acquisition limitation, simultaneous immersion Mirau interferometry incorporates polarizing elements into the optics to allow the simultaneous acquisition of two interferograms. The system design and production are described, and images produced with the developed techniques are presented.

  17. A Learning Framework for Control-Oriented Modeling of Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubio-Herrero, Javier; Chandan, Vikas; Siegel, Charles M.

Buildings consume a significant amount of energy worldwide. Several building optimization and control use cases require models of energy consumption which are control oriented, have high predictive capability, impose minimal data pre-processing requirements, and can be adapted continuously to account for changing conditions as new data becomes available. The data-driven modeling techniques investigated so far, while promising in the context of buildings, have been unable to satisfy all of these requirements simultaneously. In this context, deep learning techniques such as Recurrent Neural Networks (RNNs) hold promise, empowered by advanced computational capabilities and big data opportunities. In this paper, we propose a deep learning based methodology for the development of control oriented models for building energy management and test it on data from a real building. Results show that the proposed methodology significantly outperforms other data-driven modeling techniques. We perform a detailed analysis of the proposed methodology along dimensions such as topology, sensitivity, and downsampling. Lastly, we conclude by envisioning a building analytics suite empowered by the proposed deep learning framework that can drive several use cases related to building energy management.

  18. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE PAGES

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; ...

    2016-07-08

Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) the distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  19. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy.

    PubMed

    Tremsin, Anton S; Gao, Yan; Dial, Laura C; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) the distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  20. Passive monitoring for near surface void detection using traffic as a seismic source

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Kuzma, H. A.; Rector, J.; Nazari, S.

    2009-12-01

In this poster we present preliminary results based on our several field experiments in which we study seismic detection of voids using a passive array of surface geophones. The source of seismic excitation is vehicle traffic on nearby roads, which we model as a continuous line source of seismic energy. Our passive seismic technique is based on cross-correlation of surface wave fields and studying the resulting power spectra, looking for "shadows" caused by the scattering effect of a void. High-frequency noise masks this effect in the time domain, so it is difficult to see on conventional traces. Our technique does not rely on phase distortions caused by small voids because they are generally too small to measure. Unlike traditional impulsive seismic sources, which generate highly coherent broadband signals, perfect for resolving phase but too weak for resolving amplitude, vehicle traffic affords a high-power signal in a frequency range that is optimal for finding shallow structures. Our technique results in clear detections of an abandoned railroad tunnel and a septic tank. The ultimate goal of this project is to develop a technology for the simultaneous imaging of shallow underground structures and traffic monitoring near these structures.

  1. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with 100 μm resolution) the distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  2. Introduction of pre-etch deposition techniques in EUV patterning

    NASA Astrophysics Data System (ADS)

    Xiang, Xun; Beique, Genevieve; Sun, Lei; Labonte, Andre; Labelle, Catherine; Nagabhirava, Bhaskar; Friddle, Phil; Schmitz, Stefan; Goss, Michael; Metzler, Dominik; Arnold, John

    2018-04-01

The thin nature of EUV (Extreme Ultraviolet) resist has posed significant challenges for etch processes. In particular, EUV patterning combined with conventional etch approaches suffers from loss of pattern fidelity in the form of line breaks. A typical conventional etch approach prevents the etch process from having sufficient resist margin to control the trench CD (Critical Dimension), minimize the LWR (Line Width Roughness) and LER (Line Edge Roughness), and reduce the T2T (Tip-to-Tip). Pre-etch deposition increases the resist budget by adding additional material to the resist layer, thus enabling the etch process to explore a wider set of process parameters to achieve better pattern fidelity. Preliminary tests with pre-etch deposition resulted in blocked isolated trenches. In order to mitigate these effects, a cyclic deposition and etch technique is proposed. With optimization of the deposition and etch cycle time as well as the total number of cycles, it is possible to open the underlying layers with a beneficial over etch and simultaneously keep the isolated trenches open. This study compares the impact of no pre-etch deposition, one-time deposition, and cyclic deposition/etch techniques on four aspects: resist budget, isolated trench open, LWR/LER and T2T.

  3. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.

Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) the distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  4. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    PubMed Central

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) the distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components. PMID:27877885

  5. Orthogonality-breaking sensing model based on the instantaneous Stokes vector and the Mueller calculus

    NASA Astrophysics Data System (ADS)

    Ortega-Quijano, Noé; Fade, Julien; Roche, Muriel; Parnet, François; Alouini, Mehdi

    2016-04-01

    Polarimetric sensing by orthogonality breaking has been recently proposed as an alternative technique for performing direct and fast polarimetric measurements using a specific dual-frequency dual-polarization (DFDP) source. Based on the instantaneous Stokes-Mueller formalism to describe the high-frequency evolution of the DFDP beam intensity, we thoroughly analyze the interaction of such a beam with birefringent, dichroic and depolarizing samples. This allows us to confirm that orthogonality breaking is produced by the sample diattenuation, whereas this technique is immune to both birefringence and diagonal depolarization. We further analyze the robustness of this technique when polarimetric sensing is performed through a birefringent waveguide, and the optimal DFDP source configuration for fiber-based endoscopic measurements is subsequently identified. Finally, we consider a stochastic depolarization model based on an ensemble of random linear diattenuators, which makes it possible to understand the progressive vanishing of the detected orthogonality breaking signal as the spatial heterogeneity of the sample increases, thus confirming the insensitivity of this method to diagonal depolarization. The fact that the orthogonality breaking signal is exclusively due to the sample dichroism is an advantageous feature for the precise decoupled characterization of such an anisotropic parameter in samples showing several simultaneous effects.
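    The abstract's central claim, that the intensity-domain signal responds to diattenuation but is immune to birefringence, follows directly from the standard Mueller matrices of these elements acting on a Stokes vector. A minimal sketch using the textbook matrices (not the authors' code):

    ```python
    import math

    def apply(M, S):
        """Mueller calculus: output Stokes vector S' = M S."""
        return [sum(M[i][j] * S[j] for j in range(4)) for i in range(4)]

    def retarder(delta):
        """Linear retarder (fast axis at 0 deg) with retardance delta: pure birefringence."""
        c, s = math.cos(delta), math.sin(delta)
        return [[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, c, s],
                [0, 0, -s, c]]

    def diattenuator(q, r):
        """Linear diattenuator with intensity transmittances q, r along x and y."""
        g = math.sqrt(q * r)
        return [[(q + r) / 2, (q - r) / 2, 0, 0],
                [(q - r) / 2, (q + r) / 2, 0, 0],
                [0, 0, g, 0],
                [0, 0, 0, g]]
    ```

    The retarder's first row is (1, 0, 0, 0), so the transmitted intensity S0 is unchanged for any input polarization, whereas a diattenuator transmits the two orthogonal linear states with different intensities; that imbalance between the orthogonal states of the DFDP beam is the orthogonality-breaking signal.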

  6. Discrimination methods for biological contaminants in fresh-cut lettuce based on VNIR and NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Mo, Changyeun; Kim, Giyoung; Kim, Moon S.; Lim, Jongguk; Lee, Seung Hyun; Lee, Hong-Seok; Cho, Byoung-Kwan

    2017-09-01

    The rapid detection of biological contaminants such as worms in fresh-cut vegetables is necessary to improve the efficiency of visual inspections carried out by workers. Multispectral imaging algorithms were developed using visible-near-infrared (VNIR) and near-infrared (NIR) hyperspectral imaging (HSI) techniques to detect worms in fresh-cut lettuce. The optimal wavebands that can detect worms in fresh-cut lettuce were investigated for each type of HSI using one-way ANOVA. Worm-detection imaging algorithms for VNIR and NIR imaging exhibited prediction accuracies of 97.00% (RI547/945) and 100.0% (RI1064/1176, SI1064-1176, RSI-I(1064-1173)/1064, and RSI-II(1064-1176)/(1064+1176)), respectively. The two HSI techniques revealed that spectral images with a pixel size of 1 × 1 mm or 2 × 2 mm had the best classification accuracy for worms. The results demonstrate that hyperspectral reflectance imaging techniques have the potential to detect worms in fresh-cut lettuce. Future research relating to this work will focus on a real-time sorting system for lettuce that can simultaneously detect various defects such as browning, worms, and slugs.
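    The waveband-selection step, ranking candidate wavebands by a one-way ANOVA F statistic between the pixel classes (worm vs. lettuce), can be sketched as follows. The class data and waveband labels below are invented for illustration; the paper's discriminant values are not given in the abstract.

    ```python
    def f_statistic(groups):
        """One-way ANOVA F statistic for k groups of pixel values at one waveband."""
        k = len(groups)
        n = sum(len(g) for g in groups)
        grand = sum(sum(g) for g in groups) / n
        ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
        ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    def best_waveband(classes, waveband_names):
        """Rank wavebands by how well they separate the classes (e.g. worm vs. lettuce pixels)."""
        scores = {}
        for wi, name in enumerate(waveband_names):
            scores[name] = f_statistic([[spectrum[wi] for spectrum in cls] for cls in classes])
        return max(scores, key=scores.get), scores
    ```

    The waveband with the largest F separates the class means most strongly relative to the within-class scatter, which is how a band ratio such as RI547/945 would be singled out.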

  7. Simultaneous determination of the impurity and radial tensile strength of reduced glutathione tablets by a high selective NIR-PLS method.

    PubMed

    Li, Juan; Jiang, Yue; Fan, Qi; Chen, Yang; Wu, Ruanqi

    2014-05-05

This paper establishes a high-throughput and highly selective method to determine the impurity oxidized glutathione (GSSG) and the radial tensile strength (RTS) of reduced glutathione (GSH) tablets based on near infrared (NIR) spectroscopy and partial least squares (PLS). In order to build and evaluate the calibration models, the NIR diffuse reflectance spectra (DRS) and transmittance spectra (TS) of 330 GSH tablets were accurately measured using optimized parameter values. For analyzing the GSSG or RTS of GSH tablets, the NIR-DRS or NIR-TS were selected, subdivided reasonably into calibration and prediction sets, and processed appropriately with chemometric techniques. After selecting spectral sub-ranges and neglecting spectrum outliers, the PLS calibration models were built and the factor numbers were optimized. Then, the PLS models were evaluated by the root mean square errors of calibration (RMSEC), cross-validation (RMSECV) and prediction (RMSEP), and by the correlation coefficients of calibration (R(c)) and prediction (R(p)). The results indicate that the proposed models perform well. It is thus clear that NIR-PLS can simultaneously, selectively, nondestructively and rapidly analyze the GSSG and RTS of GSH tablets, even though the content of the GSSG impurity was quite low while that of the GSH active pharmaceutical ingredient (API) was quite high. This strategy can be an important complement to the common NIR methods used in the on-line analysis of API in pharmaceutical preparations, and this work expands NIR applications to high-throughput and highly selective analysis.
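    The figures of merit quoted above (RMSEC, RMSECV, RMSEP and the correlation coefficients Rc, Rp) all reduce to two simple formulas applied to different sample sets (calibration, cross-validation folds, or prediction set). A sketch:

    ```python
    import math

    def rmse(actual, predicted):
        """Root mean square error; becomes RMSEC/RMSECV/RMSEP depending on which set is used."""
        return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

    def corrcoef(actual, predicted):
        """Correlation coefficient R between reference values and NIR-predicted values."""
        n = len(actual)
        ma = sum(actual) / n
        mp = sum(predicted) / n
        cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
        va = sum((a - ma) ** 2 for a in actual)
        vp = sum((p - mp) ** 2 for p in predicted)
        return cov / math.sqrt(va * vp)
    ```

    Computing rmse on the calibration set gives RMSEC, on held-out cross-validation predictions RMSECV, and on the independent prediction set RMSEP; corrcoef on the same pairs gives Rc and Rp.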

  8. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
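    The z-plane parameterization idea, choosing pole and zero locations directly so that stability (|pole| < 1) and minimum phase hold by construction, can be illustrated with a single second-order (biquad) section. The specific pole/zero placements in the usage below are invented for illustration, not the paper's filter design.

    ```python
    import cmath

    def biquad_from_roots(pole, zero, dc_gain=1.0):
        """Second-order recursive section built from a complex-conjugate pole pair and zero pair.
        Restricting |pole| < 1 guarantees a stable filter by construction."""
        b = [1.0, -2.0 * zero.real, abs(zero) ** 2]   # numerator: (1 - z0/z)(1 - conj(z0)/z)
        a = [1.0, -2.0 * pole.real, abs(pole) ** 2]   # denominator from the pole pair
        k = dc_gain * sum(a) / sum(b)                 # normalize the gain at z = 1 (DC)
        return [k * c for c in b], a

    def mag_response(b, a, omega):
        """|H(e^{j*omega})| at normalized frequency omega (radians/sample)."""
        z = cmath.exp(1j * omega)
        num = sum(c * z ** (-i) for i, c in enumerate(b))
        den = sum(c * z ** (-i) for i, c in enumerate(a))
        return abs(num / den)
    ```

    For example, `biquad_from_roots(0.8 * cmath.exp(0.3j), cmath.exp(1.5j))` places the zero pair on the unit circle at a hypothetical parasitic-mode frequency, driving the response there to zero while the DC gain stays normalized, which is the attenuation-shaping goal the abstract describes.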

  9. Improvement of band gap profile in Cu(InGa)Se{sub 2} solar cells through rapid thermal annealing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, D.S.; College of Mathematics and Physics, Shanghai University of Electric Power, Shanghai, 200090; Yang, J.

Highlights: • Proper RTA treatment can effectively optimize the band gap profile to a more desirable level. • Inter-diffusion of atoms accounts for the improvement of the graded band gap profile. • The variation of the band gap profile created an absolute gain in efficiency of 1.22%. - Abstract: In this paper, the effect of rapid thermal annealing on non-optimal double-graded band gap profiles was investigated using X-ray photoelectron spectroscopy and capacitance–voltage measurement techniques. Experimental results revealed that proper rapid thermal annealing treatment can effectively improve the band gap profile to a more optimal level. The annealing treatment could not only reduce the values of the front band gap and the minimum band gap, but also shift the position of the minimum band gap toward the front electrode and into the space charge region. In addition, the thickness of the Cu(InGa)Se{sub 2} thin film decreased by 25 nm after rapid thermal annealing treatment. All of these modifications were attributed to the inter-diffusion of atoms during the thermal treatment process. Simultaneously, the variation of the band gap profile created an absolute gain in efficiency of 1.22%, in short-circuit current density of 2.16 mA/cm{sup 2}, and in fill factor of 3.57%.

  10. Real-time CO2 sensor for the optimal control of electronic EGR system

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-jung; Choi, Byungchul; Choi, Inchul

    2013-12-01

In modern diesel engines, EGR (Exhaust Gas Recirculation) is an important technique for reducing nitrogen oxide (NOx) emissions. This paper describes the development and experimental results of a fiber-optic sensor using absorption at a 2.7 μm wavelength to quantify, in real time, the CO2 concentration, which is the primary variable of the EGR rate (CO2 in the exhaust gas versus CO2 in the intake gas, %). A real-time laser absorption method was developed using a DFB (distributed feedback) diode laser and waveguide to enable the optimal design and control of the electronic EGR system required to meet the 'Euro-6' and 'Tier 4 Final' NOx emission regulations. While EGR is effective in reducing NOx significantly, the amounts of HC and CO in the exhaust gas increase if the EGR rate is not controlled according to driving conditions. It is therefore important to recirculate an appropriate amount of exhaust gas under operating conditions that generate high volumes of NOx. In this study, we evaluated the basic characteristics and functions of our optical sensor in order to find the optimal design conditions. We demonstrated the CO2 measurement speed, accuracy and linearity in bench-scale experiments under conditions similar to a real engine.
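    The sensing chain described above, absorbance at 2.7 μm converted to a CO2 concentration via the Beer-Lambert law, then intake and exhaust CO2 concentrations combined into an EGR rate, can be sketched as follows. The absorption coefficient, path length, and the intake/exhaust ratio convention are assumptions for illustration, not the instrument's calibrated values.

    ```python
    import math

    def co2_from_absorption(i_transmitted, i_incident, alpha, path_length):
        """CO2 concentration from the Beer-Lambert law I = I0 * exp(-alpha * C * L).
        alpha (absorption coefficient near 2.7 um) and L are instrument constants."""
        return math.log(i_incident / i_transmitted) / (alpha * path_length)

    def egr_rate(co2_intake, co2_exhaust):
        """EGR rate (%), using the common intake-to-exhaust CO2 ratio convention."""
        return 100.0 * co2_intake / co2_exhaust
    ```

    With one optical cell on the intake and one on the exhaust, the two measured concentrations give the EGR rate directly, which is what allows closed-loop control of the EGR valve.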

  11. The Impact of the Condenser on Cytogenetic Image Quality in Digital Microscope System

    PubMed Central

    Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong

    2013-01-01

Background: Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high-quality cytogenetic images and facilitating karyotyping, so that the efficiency and accuracy of diagnosis can be improved. Objective: This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Methods: Both theoretical analysis and experimental validations, through objectively evaluating a resolution test chart and subjectively observing a large number of specimens, were conducted. Results: The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%–70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and more diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed for high-throughput continuous image scanning. Conclusions: Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice. PMID:23676284

  12. Profiling the Fatty Acids Content of Ornamental Camellia Seeds Cultivated in Galicia by an Optimized Matrix Solid-Phase Dispersion Extraction

    PubMed Central

    Garcia-Jares, Carmen; Sanchez-Nande, Marta; Lamas, Juan Pablo; Lores, Marta

    2017-01-01

    Camellia (a genus of flowering plants in the family Theaceae) is one of the main crops in Asia, where tea and oil from its leaves and seeds have been used for thousands of years. The plant is excellently adapted to the climate and soil of Galicia (northwestern Spain) and northern Portugal, where it is grown not only as an ornamental plant but is also being evaluated as a source of bioactive compounds. In this work, the main fatty acids were extracted from the seeds of four Camellia varieties (sasanqua, reticulata, japonica, and sinensis) by means of matrix solid-phase dispersion (MSPD) and analyzed by gas chromatography (GC) with MS detection of the corresponding methyl esters. MSPD constitutes an efficient and greener alternative to conventional extraction techniques, especially when combined with green solvents such as limonene. The optimization of the MSPD extraction procedure was conducted using a multivariate approach based on experimental design strategies, which enabled the simultaneous evaluation of the factors influencing the extraction efficiency as well as the interactions between factors. The optimized method was applied to characterize the fatty acid profiles of the seeds of the four Camellia varieties, allowing their fatty acid compositions to be compared. PMID:29039745
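
    Multivariate experimental designs of the kind used above evaluate all factors simultaneously instead of one at a time. A minimal sketch of a two-level full factorial design with main-effect estimation; the factor names and responses are made up for illustration, and the authors' actual design is not reproduced here:

```python
from itertools import product
from statistics import mean

def full_factorial(levels):
    """All runs of a 2^k design; `levels` maps factor name -> (low, high)."""
    names = sorted(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

def main_effect(runs, responses, factor):
    """Average response at the factor's high level minus at its low level."""
    lo, hi = sorted({r[factor] for r in runs})
    hi_mean = mean(y for r, y in zip(runs, responses) if r[factor] == hi)
    lo_mean = mean(y for r, y in zip(runs, responses) if r[factor] == lo)
    return hi_mean - lo_mean
```

    A full 2^k design needs 2^k runs; fractional designs (as in record 13 below on HPLC optimization) trade runs for confounded interactions.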

  13. Intelligent reservoir operation system based on evolving artificial neural networks

    NASA Astrophysics Data System (ADS)

    Chaves, Paulo; Chang, Fi-John

    2008-06-01

    We propose a novel intelligent reservoir operation system based on an evolving artificial neural network (ANN). Evolving means that the parameters of the ANN model are identified by a genetic algorithm (GA), an evolutionary optimization technique; the ANN model thereby comes to represent the operational strategies of reservoir operation. The main advantages of the Evolving ANN Intelligent System (ENNIS) are as follows: (i) only a small number of parameters must be optimized, even for long optimization horizons; (ii) multiple decision variables are easy to handle; and (iii) the operation model combines straightforwardly with other prediction models. The developed intelligent system was applied to the operation of the Shihmen Reservoir in northern Taiwan to investigate its applicability and practicability. The proposed method was first applied to a simple formulation of the Shihmen Reservoir operation, with a single objective and a single decision variable, and its results were compared to those obtained by dynamic programming. The constructed network proved to be a good operational strategy. The method was then extended and applied to the reservoir with multiple (five) decision variables. The results demonstrated that the evolving neural networks improved the operation performance of the reservoir compared to its current operational strategy. The system successfully handled various decision variables simultaneously and provided reasonable and suitable decisions.
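
    The core idea above — identifying ANN parameters with a GA rather than gradient training, so the network encodes an operating rule — can be sketched compactly. The network size, data, and GA settings below are toy placeholders, not the ENNIS formulation; elitism guarantees that the best fitness never worsens across generations:

```python
import math
import random

random.seed(1)

H = 4                                 # hidden neurons
N_PARAMS = 2 * H + H + H + 1          # input weights, biases, output weights, output bias

def ann(p, storage, inflow):
    """One-hidden-layer network mapping (storage, inflow) -> release."""
    out = p[-1]
    for j in range(H):
        h = math.tanh(p[2 * j] * storage + p[2 * j + 1] * inflow + p[2 * H + j])
        out += p[3 * H + j] * h
    return out

# Hypothetical operating rule to recover: release = 0.5*storage + 0.3*inflow.
DATA = [((s, q), 0.5 * s + 0.3 * q) for s in (0.2, 0.5, 0.8) for q in (0.1, 0.4, 0.7)]

def mse(p):
    return sum((ann(p, s, q) - y) ** 2 for (s, q), y in DATA) / len(DATA)

def evolve(pop_size=30, generations=40):
    """GA identifying the ANN parameters; returns best individual and history."""
    pop = [[random.uniform(-1, 1) for _ in range(N_PARAMS)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=mse)
        history.append(mse(pop[0]))
        elites = pop[:pop_size // 2]               # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(elites):
            a, b = random.sample(elites, 2)
            cut = random.randrange(1, N_PARAMS)
            child = a[:cut] + b[cut:]              # one-point crossover
            child[random.randrange(N_PARAMS)] += random.gauss(0, 0.2)  # mutation
            children.append(child)
        pop = elites + children
    pop.sort(key=mse)
    history.append(mse(pop[0]))
    return pop[0], history
```

    In the real system the fitness would be a reservoir-simulation objective rather than a fit to a known rule, but the GA loop is unchanged.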

  14. Coherent anti-Stokes Raman scattering microscopy: overcoming technical barriers for clinical translation

    PubMed Central

    Tu, Haohua; Boppart, Stephen A.

    2015-01-01

    Clinical translation of coherent anti-Stokes Raman scattering (CARS) microscopy is of great interest because of the advantages of noninvasive label-free imaging, high sensitivity, and chemical specificity. For this to happen, the technical barriers that must be overcome have to be identified, and we review them here. Prior investigations have developed advanced techniques (features), each of which can effectively overcome one particular technical barrier. However, the implementation of one or a few of these advanced features in previous attempts at clinical translation has often introduced more tradeoffs than benefits. In this review, we outline a strategy that would integrate multiple advanced features to overcome all the technical barriers simultaneously, effectively reduce tradeoffs, and synergistically optimize CARS microscopy for clinical translation. The operation of the envisioned system incorporates coherent Raman micro-spectroscopy for identifying vibrational biomolecular markers of disease, and single-frequency (or hyperspectral) Raman imaging of these specific biomarkers for real-time in vivo diagnostics and monitoring. An optimal scheme of clinical CARS micro-spectroscopy for thin ex vivo tissues is also discussed. PMID:23674234

  15. Hardware-in-the-Loop Rendezvous Tests of a Novel Actuators Command Concept

    NASA Astrophysics Data System (ADS)

    Gomes dos Santos, Willer; Marconi Rocco, Evandro; Boge, Toralf; Benninghoff, Heike; Rems, Florian

    2016-12-01

    Integration, test, and validation results, in a real-time environment, of a novel concept for spacecraft control are presented in this paper. The proposed method simultaneously commands a group of actuators, optimizing a given set of objective functions with a multiobjective optimization technique. Since close-proximity maneuvers play an important role in orbital servicing missions, the entire GNC system has been integrated and tested in a hardware-in-the-loop (HIL) rendezvous and docking simulator known as the European Proximity Operations Simulator (EPOS). During the test campaign at the EPOS facility, a visual camera was used to provide the measurements needed to calculate the relative position with respect to the target satellite during closed-loop simulations. In addition, two different spacecraft control configurations are considered in this paper: a thruster reaction control system and a mixed-actuator mode that includes thrusters, reaction wheels, and magnetic torque rods. At EPOS, the HIL closed-loop tests demonstrated that a safe and stable rendezvous approach can be achieved with the proposed GNC loop.

  16. Development and validation of a universal interface for compound-specific stable isotope analysis of chlorine (37Cl/35Cl) by GC-high-temperature conversion (HTC)-MS/IRMS.

    PubMed

    Renpenning, Julian; Hitzfeld, Kristina L; Gilevska, Tetyana; Nijenhuis, Ivonne; Gehre, Matthias; Richnow, Hans-Hermann

    2015-03-03

    Universal application of compound-specific isotope analysis of chlorine has thus far been limited by the availability of suitable analysis techniques. In this study, gas chromatography in combination with a high-temperature conversion interface (GC-HTC), converting organic chlorine in the presence of H2 to gaseous HCl, was coupled to a dual-detection system combining an ion-trap mass spectrometer (MS) and an isotope-ratio mass spectrometer (IRMS). The combined MS/IRMS detection enabled detailed characterization, optimization, and online monitoring of the high-temperature conversion process via the ion-trap MS, as well as simultaneous chlorine isotope analysis by the IRMS. Using GC-HTC-MS/IRMS, chlorine isotope analysis at optimized conversion conditions yielded very accurate isotope values (δ(37)Cl(SMOC)) for measured reference materials of known isotope composition, including chlorinated ethylenes, chloromethane, hexachlorocyclohexane, and trichloroacetic acid methyl ester. The respective detection limits were determined to be <15 nmol Cl on column, with an achieved precision of <0.3‰.

  17. Compositional Models of Glass/Melt Properties and their Use for Glass Formulation

    DOE PAGES

    Vienna, John D. (Richland, Washington, USA)

    2014-12-18

    Nuclear waste glasses must simultaneously meet a number of criteria related to their processability, product quality, and cost factors. The properties that must be controlled in glass formulation and waste vitrification plant operation tend to vary smoothly with composition, allowing glass property-composition models to be developed and used. Models have been fit to the key glass properties. The properties are transformed so that simple functions of composition (e.g., linear, polynomial, or component ratios) can be used as model forms. The model forms are fit to experimental data designed statistically to efficiently cover the composition space of interest. Examples of these models are found in the literature. The glass property-composition models, their uncertainty definitions, property constraints, and optimality criteria are combined to formulate optimal glass compositions, to control composition in vitrification plants, and to qualify waste glasses for disposal. An overview of current glass property-composition modeling techniques is summarized in this paper, along with an example of how those models are applied to glass formulation and product qualification at the planned Hanford high-level waste vitrification plant.

  18. Optimization-Based Approach for Joint X-Ray Fluorescence and Transmission Tomographic Inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Zichao; Leyffer, Sven; Wild, Stefan M.

    2016-01-01

    Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used to reveal the internal elemental composition of a sample. Conventional X-ray transmission tomography, on the other hand, can be used to reconstruct the spatial distribution of the absorption coefficient inside a sample. In this work, we integrate the X-ray fluorescence and X-ray transmission data modalities and formulate a nonlinear optimization-based approach for reconstructing the elemental composition of a given object. This model provides a simultaneous reconstruction of both the quantitative spatial distribution of all elements and the absorption effect in the sample. Mathematically speaking, we show that, compared with single-modality inversion (i.e., X-ray transmission or fluorescence alone), the joint inversion yields a better-posed problem, which implies a better recovery. The challenges in X-ray fluorescence tomography arising mainly from self-absorption effects in the sample are therefore partially mitigated. The use of this technique is demonstrated on the reconstruction of several synthetic samples.

  19. Multi-provider architecture for cloud outsourcing of medical imaging repositories.

    PubMed

    Godinho, Tiago Marques; Bastião Silva, Luís A; Costa, Carlos; Oliveira, José Luís

    2014-01-01

    Over the last few years, the extended usage of medical imaging procedures has drawn the medical community's attention to the optimization of its workflows. More recently, the federation of multiple institutions into seamless distribution networks has brought hope of higher-quality healthcare services along with more efficient resource management. As a result, medical institutions are constantly looking for the best infrastructure on which to deploy their imaging archives. In this scenario, public cloud infrastructures arise as major candidates, as they offer elastic storage space and optimal data availability in a pay-as-you-go model, without heavy maintenance costs or IT personnel requirements. However, standard methodologies still do not take full advantage of outsourced archives, mainly because their integration with other in-house solutions is troublesome. This document proposes a multi-provider architecture for integrating outsourced archives with in-house PACS resources, taking advantage of external providers to store medical imaging studies without disregarding security. It enables the retrieval of images from multiple archives simultaneously, improving performance and data availability and avoiding the vendor lock-in problem. Moreover, it enables load balancing and caching techniques.

  20. Thermo-optical Modelling of Laser Matter Interactions in Selective Laser Melting Processes.

    NASA Astrophysics Data System (ADS)

    Vinnakota, Raj; Genov, Dentcho

    Selective laser melting (SLM) is one of the most promising advanced manufacturing techniques, providing an ideal platform for manufacturing components with essentially no geometric constraints. Coupling the electromagnetic and thermodynamic processes involved in SLM into a comprehensive theoretical model is of great importance, since such a model can significantly improve printing processes by revealing the optimal parametric space of applied laser power, scan velocity, powder material, layer thickness, and porosity. Here, we present a self-consistent thermo-optical model that simultaneously solves Maxwell's equations and the heat-transfer equation, providing insight into the electromagnetic energy released in the powder bed and the concurrent thermodynamics of particle temperature rise and the onset of melting. The numerical calculations are compared with a developed analytical model of the SLM process, providing insight into the dynamics between laser-facilitated Joule heating and the radiation-mitigated rise in temperature. These results provide guidelines toward improved energy efficiency and optimization of SLM process scan rates. The current work is funded by the NSF EPSCoR CIMM project under award #OIA-1541079.

  1. High-throughput prediction of tablet weight and trimethoprim content of compound sulfamethoxazole tablets for controlling the uniformity of dosage units by NIR.

    PubMed

    Dong, Yanhong; Li, Juan; Zhong, Xiaoxiao; Cao, Liya; Luo, Yang; Fan, Qi

    2016-04-15

    This paper establishes a novel method to simultaneously predict the tablet weight (TW) and trimethoprim (TMP) content of compound sulfamethoxazole (SMZCO) tablets by near-infrared (NIR) spectroscopy with partial least squares (PLS) regression, for controlling the uniformity of dosage units (UODU). NIR spectra of 257 samples were measured using optimized parameter values and pretreated using optimized chemometric techniques. After outliers were excluded, two PLS models for predicting TW and TMP content were established using the selected spectral sub-ranges and the reference values. The TW model reaches a correlation coefficient of calibration (R(c)) of 0.9543, and the TMP content model an R(c) of 0.9205. The experimental results indicate that this strategy expands the application of NIR in controlling UODU, especially for high-throughput, rapid analysis of the TWs and contents of compound pharmaceutical tablets, and may be an important complement to common NIR on-line analytical methods for pharmaceutical tablets. Copyright © 2016 Elsevier B.V. All rights reserved.
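
    PLS regression of the kind used above projects high-dimensional spectra onto a few latent components that covary with the property of interest. A minimal NIPALS-style PLS1 sketch in NumPy, exercised on synthetic rank-2 "spectra"; this is an illustration of the technique, not the authors' calibration pipeline:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression fitted with the NIPALS algorithm (single response)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                      # weight: direction of X-y covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                         # scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        q = (yc @ t) / tt                  # y loading
        Xc = Xc - np.outer(t, p)           # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    coef = W @ np.linalg.solve(P.T @ W, Q)  # regression vector in X space
    return x_mean, y_mean, coef

def pls1_predict(model, X):
    x_mean, y_mean, coef = model
    return (np.asarray(X, float) - x_mean) @ coef + y_mean
```

    In practice the number of components is chosen by cross-validation, and separate models are fit per response (here, one for TW and one for TMP content).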

  2. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods of time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well suited to such band-limited signals because of its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multi-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multi-channel EEG adds high dimensionality to the frequency feature space, so feature selection is required to stabilize classifier performance. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) that solves these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes them with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.

  3. Chopped random-basis quantum optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone

    2011-08-15

    In this work, we describe in detail the chopped random-basis (CRAB) optimal control technique recently introduced to optimize time-dependent density-matrix renormalization-group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and show that, in the cases considered, we obtain results equivalent to those obtained via other optimal control methods while using fewer resources. We propose CRAB optimization as a general and versatile optimal control technique.
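
    The essence of CRAB is to expand the control field in a truncated ("chopped") basis with randomized frequencies and optimize the few expansion coefficients with a derivative-free direct search. A minimal sketch: the cost here is a simple pulse-matching functional rather than a quantum-dynamics fidelity, and compass search stands in for the authors' direct-search routine:

```python
import math
import random

random.seed(3)

T = 1.0                                # control duration
K = 3                                  # number of retained basis components
# Randomized frequencies around the principal harmonics, as in CRAB:
FREQS = [2 * math.pi * (k + 1) * (1 + random.uniform(-0.2, 0.2)) / T
         for k in range(K)]
GRID = [i * T / 99 for i in range(100)]

def pulse(coeffs, t):
    """Control field as a truncated randomized Fourier expansion."""
    return sum(a * math.sin(w * t) + b * math.cos(w * t)
               for (a, b), w in zip(zip(coeffs[::2], coeffs[1::2]), FREQS))

def target(t):
    """Hypothetical reference control the expansion should reproduce."""
    return math.sin(2 * math.pi * t / T)

def cost(coeffs):
    return sum((pulse(coeffs, t) - target(t)) ** 2 for t in GRID) / len(GRID)

def compass_search(n, iters=150, step=0.5):
    """Derivative-free direct search over the 2K expansion coefficients."""
    x, best = [0.0] * n, cost([0.0] * n)
    for _ in range(iters):
        improved = False
        for i in range(n):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= 0.5                # shrink the step when no move helps
    return x, best
```

    The point of the chopped basis is that only 2K coefficients are searched, regardless of how finely the control is discretized in time.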

  4. Strategies for Fermentation Medium Optimization: An In-Depth Review

    PubMed Central

    Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.

    2017-01-01

    Optimization of the production medium is required to maximize metabolite yield. This can be achieved using a wide range of techniques, from the classical “one-factor-at-a-time” approach to modern statistical and mathematical techniques such as artificial neural networks (ANN) and genetic algorithms (GA). Every technique comes with its own advantages and disadvantages, and some techniques are applied despite their drawbacks because they obtain the best results. Using various optimization techniques in combination can also provide the desired results. In this article, an attempt has been made to review the media optimization techniques currently applied during fermentation processes for metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been carried out, and a logical basis for selecting the design of a fermentation medium is given. Overall, this review provides a rationale for selecting a suitable optimization technique for media design in fermentation processes for metabolite production. PMID:28111566

  5. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    NASA Astrophysics Data System (ADS)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower-limb prostheses can improve the quality of life of amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations that track experimentally determined non-amputee walking kinematics, here we explicitly model the human-prosthesis interaction to predict the user's walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto-optimal solutions predict that increasing the prosthesis energy cost, decreasing the prosthesis mass, and allowing asymmetric gaits all decrease the human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost than a non-amputee, even when the non-amputee's ankle torques are assumed to be cost-free.
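
    The weighted-sum construction above traces a Pareto front by sweeping the weight between the two cost terms. A minimal sketch with made-up one-dimensional cost curves; the actual human metabolic and prosthesis models are far richer, but the sweep mechanics are the same:

```python
def human_cost(assist):
    """Hypothetical metabolic cost: falls as prosthesis assistance grows."""
    return (1.0 - assist) ** 2 + 0.2

def prosthesis_cost(assist):
    """Hypothetical prosthesis energy/mass cost: grows with assistance."""
    return 0.5 * assist ** 2

def pareto_sweep(weights, grid_n=1001):
    """Minimize w*human + (1-w)*prosthesis over a design grid for each weight."""
    designs = [i / (grid_n - 1) for i in range(grid_n)]
    front = []
    for w in weights:
        best = min(designs,
                   key=lambda a: w * human_cost(a) + (1 - w) * prosthesis_cost(a))
        front.append((best, human_cost(best), prosthesis_cost(best)))
    return front
```

    Each weight selects one Pareto-optimal design; sweeping the weight from 0 to 1 moves along the tradeoff curve between the two costs.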

  6. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques, and even this community acknowledges that multilevel optimization is ideally suited only to a rather limited set of problems. It has been warned that decomposition typically requires eliminating local variables in favor of global variables, and that this in turn ill-conditions the multilevel optimization by adding equality constraints. The purpose here is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  7. Topology optimized gold nanostrips for enhanced near-infrared photon upconversion

    NASA Astrophysics Data System (ADS)

    Vester-Petersen, Joakim; Christiansen, Rasmus E.; Julsgaard, Brian; Balling, Peter; Sigmund, Ole; Madsen, Søren P.

    2017-09-01

    This letter presents a topology optimization study of metal nanostructures optimized for electric-field enhancement in the infrared spectrum. Coupling such nanostructures with suitable ions allows an increased photon-upconversion yield, one application being increased solar-cell efficiency through exploitation of the long-wavelength part of the solar spectrum. In this work, topology optimization is used to design a periodic array of two-dimensional gold nanostrips for electric-field enhancement in a thin film doped with upconverting erbium ions. The infrared absorption band of erbium is exploited by simultaneously optimizing for two polarizations, up to three wavelengths, and three incident angles. Geometric robustness towards manufacturing variations is implemented by considering three different design realizations simultaneously in the optimization. The polarization-averaged field enhancement for each design is evaluated over an 80 nm wavelength range and a ±15-degree incident-angle span. The highest polarization-averaged field enhancement is 42.2, varying by at most 2% under ±5 nm near-uniform design perturbations at three different wavelengths (1480 nm, 1520 nm, and 1560 nm). The proposed method is generally applicable to many optical systems and is not limited to enhancing photon upconversion.

  8. Concurrent topology optimization for minimization of total mass considering load-carrying capabilities and thermal insulation simultaneously

    NASA Astrophysics Data System (ADS)

    Long, Kai; Wang, Xuan; Gu, Xianguang

    2017-09-01

    The present work introduces a novel concurrent optimization formulation that meets the requirements of lightweight design and various constraints simultaneously. Nodal displacement of the macrostructure and effective thermal conductivity of the microstructure are taken as the constraint functions, so that both load-carrying capability and thermal insulation are accounted for. The effective properties of the porous material, derived by numerical homogenization, are used for macrostructural analysis, while displacement vectors of the macrostructure from the original and adjoint load cases are used for sensitivity analysis of the microstructure. Design variables in the form of reciprocal functions of the relative densities are introduced and used to linearize the constraint functions. The objective function, total mass, is approximated by a second-order Taylor series expansion. The concurrent optimization problem is then solved with a sequential quadratic programming algorithm by splitting it into a series of sub-problems in quadratic-program form. Finally, several numerical examples are presented to validate the effectiveness of the proposed optimization method. The effects of initial designs, prescribed limits on nodal displacement, and effective thermal conductivity on the optimized designs are also investigated, and a number of optimized macrostructures and their corresponding microstructures are presented.

  9. Multiobjective Multifactorial Optimization in Evolutionary Multitasking.

    PubMed

    Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang; Tan, Kay Chen

    2016-05-03

    In recent decades, the field of multiobjective optimization has attracted considerable interest among evolutionary computation researchers. One of the main features that makes evolutionary methods particularly appealing for multiobjective problems is the implicit parallelism offered by a population, which enables simultaneous convergence toward the entire Pareto front. While a plethora of related algorithms have been proposed to date, a common attribute among them is that they focus on efficiently solving only a single optimization problem at a time. Despite the known power of implicit parallelism, seldom has an attempt been made to multitask, i.e., to solve multiple optimization problems simultaneously. It is contended that the notion of evolutionary multitasking opens the possibility of automated transfer of information across different optimization exercises that share underlying similarities, thereby facilitating improved convergence characteristics. In particular, the potential for automated transfer is deemed invaluable from the standpoint of engineering design exercises, where manual knowledge adaptation and reuse are routine. Accordingly, in this paper, we present a realization of the evolutionary multitasking paradigm within the domain of multiobjective optimization. The efficacy of the associated evolutionary algorithm is demonstrated on benchmark test functions as well as on a real-world manufacturing process design problem from the composites industry.

  10. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, a radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is limited; these deficiencies easily degrade learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF network with a new hybrid encoding: binary encoding for the number of hidden-layer neurons and real encoding for the connection weights. The number of hidden-layer neurons and the connection weights are thus optimized simultaneously. Because the connection-weight optimization is not complete, the least mean square (LMS) algorithm is then used for further learning, yielding the final algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency on complex problems and also improves recognition precision, which demonstrates that the new algorithm is valid.
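
    The hybrid encoding described above, binary bits for the hidden-layer structure plus real values for the weights, followed by LMS polishing, can be sketched as follows. The data set, candidate centers, and GA settings are toy placeholders rather than the paper's configuration:

```python
import math
import random

random.seed(2)

WIDTH = 0.3
CENTERS = [i / 4 for i in range(5)]            # candidate RBF centers on [0, 1]
DATA = [(x / 19, math.sin(2 * math.pi * x / 19)) for x in range(20)]

def rbf_out(mask, weights, x):
    """RBF network output using only the centers enabled by the binary mask."""
    return sum(w * math.exp(-((x - c) / WIDTH) ** 2)
               for m, c, w in zip(mask, CENTERS, weights) if m)

def mse(ind):
    mask, weights = ind
    return sum((rbf_out(mask, weights, x) - y) ** 2 for x, y in DATA) / len(DATA)

def lms_refine(ind, rate=0.05, epochs=300):
    """Least-mean-squares pass to polish the weights the GA found."""
    mask, weights = ind
    w = list(weights)
    for _ in range(epochs):
        for x, y in DATA:
            err = rbf_out(mask, w, x) - y
            for j, (m, c) in enumerate(zip(mask, CENTERS)):
                if m:
                    w[j] -= rate * err * math.exp(-((x - c) / WIDTH) ** 2)
    return mask, w

def ga(pop_size=20, gens=30):
    """GA with hybrid encoding: binary structure bits + real-valued weights."""
    def rand_ind():
        return ([random.randint(0, 1) for _ in CENTERS],
                [random.uniform(-1, 1) for _ in CENTERS])
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)
        elites = pop[:pop_size // 2]           # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(elites):
            (m1, w1), (m2, w2) = random.sample(elites, 2)
            cut = random.randrange(1, len(CENTERS))
            mask = m1[:cut] + m2[cut:]                      # binary crossover
            weights = [a if random.random() < 0.5 else b
                       for a, b in zip(w1, w2)]             # uniform real crossover
            j = random.randrange(len(CENTERS))
            weights[j] += random.gauss(0, 0.2)              # real mutation
            if random.random() < 0.1:
                mask[j] ^= 1                                # bit-flip mutation
            children.append((mask, weights))
        pop = elites + children
    return min(pop, key=mse)
```

    The GA settles the structure (which centers are active) while the LMS pass handles the fine weight tuning the GA leaves incomplete, mirroring the paper's two-stage design.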

  11. Simultaneous fingerprint and high-wavenumber fiber-optic Raman spectroscopy improves in vivo diagnosis of esophageal squamous cell carcinoma at endoscopy

    NASA Astrophysics Data System (ADS)

    Wang, Jianfeng; Lin, Kan; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Huang, Zhiwei

    2015-08-01

    This work aims to evaluate the clinical value of a fiber-optic Raman spectroscopy technique developed for in vivo diagnosis of esophageal squamous cell carcinoma (ESCC) during clinical endoscopy. We have developed a rapid fiber-optic Raman endoscopic system capable of simultaneously acquiring both fingerprint (FP) (800-1800 cm-1) and high-wavenumber (HW) (2800-3600 cm-1) Raman spectra from esophageal tissue in vivo. A total of 1172 in vivo FP/HW Raman spectra were acquired from 48 esophageal patients undergoing endoscopic examination. The total Raman dataset was split into two parts: 80% for training and 20% for testing. Partial least squares-discriminant analysis (PLS-DA) with leave-one-patient-out cross-validation (LOPCV) was applied to the training dataset to develop diagnostic algorithms for tissue classification. PLS-DA-LOPCV shows that simultaneous FP/HW Raman spectroscopy on the training dataset provides a diagnostic sensitivity of 97.0% and a specificity of 97.4% for ESCC classification. Further, the diagnostic algorithm applied to the independent testing dataset gives a predictive diagnostic sensitivity of 92.7% and a specificity of 93.6% for ESCC identification, superior to either the FP or the HW Raman technique alone. This work demonstrates that the simultaneous FP/HW fiber-optic Raman spectroscopy technique improves real-time in vivo diagnosis of esophageal neoplasia at endoscopy.

  12. A Novel Technique that Enables Efficient Conduct of Simultaneous Isomerization and Fermentation (SIF) of Xylose

    NASA Astrophysics Data System (ADS)

    Rao, Kripa; Chelikani, Silpa; Relue, Patricia; Varanasi, Sasidhar

    Of the sugars recovered from lignocellulose, D-glucose can be readily converted into ethanol by baker's or brewer's yeast (Saccharomyces cerevisiae). However, xylose that is obtained by the hydrolysis of the hemicellulosic portion is not fermentable by the same species of yeasts. Xylose fermentation by native yeasts can be achieved via isomerization of xylose to its ketose isomer, xylulose. Isomerization with exogenous xylose isomerase (XI) occurs optimally at a pH of 7-8, whereas subsequent fermentation of xylulose to ethanol occurs at a pH of 4-5. We present a novel scheme for efficient isomerization of xylose to xylulose at conditions suitable for the fermentation by using an immobilized enzyme system capable of sustaining two different pH microenvironments in a single vessel. The proof-of-concept of the two-enzyme pellet is presented, showing conversion of xylose to xylulose even when the immobilized enzyme pellets are suspended in a bulk solution whose pH is sub-optimal for XI activity. The co-immobilized enzyme pellets may prove extremely valuable in effectively conducting "simultaneous isomerization and fermentation" (SIF) of xylose. To help further shift the equilibrium in favor of xylulose formation, sodium tetraborate (borax) was added to the isomerization solution. Binding of tetrahydroxyborate ions to xylulose effectively reduces the concentration of xylulose and leads to increased xylose isomerization. The formation of tetrahydroxyborate ions and the enhancement in xylulose production resulting from the complexation was studied at two different bulk pH values. The addition of 0.05 M borax to the isomerization solution containing our co-immobilized enzyme pellets resulted in xylose to xylulose conversion as high as 86% under pH conditions that are suboptimal for XI activity. These initial findings, which can be optimized for industrial conditions, have significant potential for increasing the yield of ethanol from xylose in an SIF approach.

  13. Novel and sensitive reversed-phase high-pressure liquid chromatography method with electrochemical detection for the simultaneous and fast determination of eight biogenic amines and metabolites in human brain tissue.

    PubMed

    Van Dam, Debby; Vermeiren, Yannick; Aerts, Tony; De Deyn, Peter Paul

    2014-08-01

    A fast and simple RP-HPLC method with electrochemical detection (ECD) and ion-pair chromatography was developed, optimized, and validated for the simultaneous determination of eight different biogenic amines and metabolites in post-mortem human brain tissue in a single-run analytical approach. The compounds of interest are the indolamine serotonin (5-hydroxytryptamine, 5-HT), the catecholamines dopamine (DA) and (nor)epinephrine ((N)E), and their respective metabolites: 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA), 5-hydroxy-3-indoleacetic acid (5-HIAA), and 3-methoxy-4-hydroxyphenylglycol (MHPG). A two-level fractional factorial experimental design was applied to study the effect of five experimental factors (the ion-pair counter-ion concentration, the level of organic modifier, the pH of the mobile phase, the column temperature, and the detector voltage setting) on the chromatographic behaviour. The cross effects between the five quantitative factors and the capacity and separation factors of the analytes were then analysed using a standard least squares model. The optimized method was fully validated according to the requirements of the SFSTP (Société Française des Sciences et Techniques Pharmaceutiques). Our brain tissue sample preparation procedure is straightforward and relatively short, allowing samples to be loaded onto the HPLC system within approximately 4 h. Additionally, high sample throughput was achieved after optimization, with a total runtime of at most 40 min per sample. The conditions and settings of the HPLC system were found to be accurate, with high intra- and inter-assay repeatability, recovery, and accuracy. The robust analytical method yields very low detection limits and good separation for all eight biogenic amines and metabolites in this complex mixture of biological analytes. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Simultaneous off-axis multiplexed holography and regular fluorescence microscopy of biological cells.

    PubMed

    Nygate, Yoav N; Singh, Gyanendra; Barnea, Itay; Shaked, Natan T

    2018-06-01

    We present a new technique for obtaining simultaneous multimodal quantitative phase and fluorescence microscopy of biological cells, providing both quantitative phase imaging and molecular specificity using a single camera. Our system is based on an interferometric multiplexing module, externally positioned at the exit of an optical microscope. In contrast to previous approaches, the presented technique allows conventional fluorescence imaging, rather than interferometric off-axis fluorescence imaging. We demonstrate the presented technique for imaging fluorescent beads and live biological cells.

  15. Carbon Dioxide Separation Using Thermally Optimized Membranes

    NASA Astrophysics Data System (ADS)

    Young, J. S.; Jorgensen, B. S.; Espinoza, B. F.; Weimer, M. W.; Jarvinen, G. D.; Greenberg, A.; Khare, V.; Orme, C. J.; Wertsching, A. K.; Peterson, E. S.; Hopkins, S. D.; Acquaviva, J.

    2002-05-01

    The purpose of this project is to develop polymeric-metallic membranes for carbon dioxide separations that operate under a broad range of industrially relevant conditions not accessible with present membrane units. The last decade has witnessed a dramatic increase in the use of polymer membranes as an effective, economic and flexible tool for many commercial gas separations, including air separation, the recovery of hydrogen from nitrogen, carbon monoxide, and methane mixtures, and the removal of carbon dioxide from natural gas. In each of these applications, high fluxes and excellent selectivities have relied on glassy polymer membranes, which separate gases based on both size and solubility differences. To date, however, this technology has focused on optimizing materials for near-ambient conditions. The development of polymeric materials that achieve the important combination of high selectivity, high permeability, and mechanical stability at temperatures significantly above 25 °C and pressures above 10 bar has been largely ignored. Consequently, there is a compelling rationale for the exploration of a new realm of polymer membrane separations. Indeed, the development of high-temperature polymeric-metallic composite membranes for carbon dioxide separation at temperatures of 100-450 °C and pressures of 10-150 bar would provide a pivotal contribution with both economic and environmental benefits. Progress to date includes the first-ever fabrication of a polymeric-metallic membrane that is selective from room temperature to 370 °C. This achievement represents the highest demonstrated operating temperature at which a polymer-based membrane has successfully functioned. Additionally, we have generated the first polybenzimidazole silicate molecular composites. Finally, we have developed a technique that has enabled the first-ever simultaneous measurements of gas permeation and membrane compaction at elevated temperatures. This technique provides a unique approach to the optimization of long-term membrane performance under challenging operating conditions.
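
The separation principle the abstract attributes to glassy polymers is usually summarized by the solution-diffusion picture: permeability is the product of a diffusivity (size-sieving) term and a solubility term, and the ideal selectivity for a gas pair is the ratio of permeabilities. A minimal sketch with purely illustrative coefficients, not measured values from this project:

```python
# Solution-diffusion picture for glassy polymer membranes. The numbers
# below are hypothetical placeholders chosen only to make the arithmetic
# concrete; consistent units are assumed throughout.

def permeability(diffusivity, solubility):
    """P = D * S: transport is diffusion through the polymer matrix
    scaled by how much gas dissolves into it."""
    return diffusivity * solubility

def ideal_selectivity(p_fast, p_slow):
    """alpha = P_A / P_B from pure-gas permeability measurements."""
    return p_fast / p_slow

p_co2 = permeability(diffusivity=6.0e-8, solubility=2.0e3)  # hypothetical CO2
p_ch4 = permeability(diffusivity=1.5e-8, solubility=2.0e2)  # hypothetical CH4
print(ideal_selectivity(p_co2, p_ch4))  # CO2/CH4 ideal selectivity
```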

  16. Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.

    2017-12-01

    The General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliabilities. To evaluate the performances of GCMs, root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures. However, simultaneously achieving satisfactory values for all of these statistics cannot be guaranteed when using many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for Water-Energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principal Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed as a robust Pareto front with respect to different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projection (2006-2100). The results showed that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and any single model output during the historical simulation period. For the future projection, the proposed ensemble framework identified that the largest SAT change would occur in South Central China under the RCP 2.6 scenario, North Eastern China under the RCP 4.5 scenario, and North Western China under the RCP 8.5 scenario, while the smallest SAT change would occur in Inner Mongolia under the RCP 2.6 scenario and South Central China under both the RCP 4.5 and RCP 8.5 scenarios.
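
The trade-off idea behind the multi-criterion ensemble can be illustrated with a toy Pareto filter: candidate ensemble weightings are scored on several error statistics at once, and only the non-dominated ones are kept. This sketch is not the MOSPD algorithm itself; the observations, model outputs, and statistics below are synthetic stand-ins:

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def bias(pred, obs):
    return abs(sum(p - o for p, o in zip(pred, obs)) / len(obs))

# Synthetic "observed" SAT anomalies and two toy GCM output series.
obs = [0.1, 0.4, 0.2, 0.6]
models = [[0.0, 0.5, 0.1, 0.7], [0.3, 0.3, 0.4, 0.5]]

random.seed(0)
candidates = []
for _ in range(50):
    w = random.random()                       # weight on the first model
    ens = [w * a + (1 - w) * b for a, b in zip(*models)]
    candidates.append((rmse(ens, obs), bias(ens, obs)))

front = pareto_front(candidates)
print(len(front))   # number of non-dominated (RMSE, bias) trade-offs
```

A real multi-objective optimizer like MOSPD searches the weight space far more intelligently than this random sampling, but the output has the same shape: a front of solutions among which no single weighting is best on every statistic at once.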

  17. Simultaneous ion sputter polishing and deposition

    NASA Technical Reports Server (NTRS)

    Rutledge, S.; Banks, B.; Brdar, M.

    1981-01-01

    Results of experiments to study ion beam sputter polishing in conjunction with simultaneous deposition as a means of polishing copper surfaces are presented. Two types of simultaneous ion sputter polishing and deposition were used in these experiments. The first type utilized sputter polishing simultaneous with vapor deposition, and the second type utilized sputter polishing simultaneous with sputter deposition. The etch and deposition rates of both techniques were studied, as well as the surface morphology and surface roughness.

  18. Application of IFT and SPSA to servo system control.

    PubMed

    Rădac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M; Preitl, Stefan

    2011-12-01

    This paper treats the application of two data-based, model-free, gradient-based stochastic optimization techniques, i.e., iterative feedback tuning (IFT) and simultaneous perturbation stochastic approximation (SPSA), to servo system control. The representative case of controlled processes modeled by second-order systems with an integral component is discussed. New IFT and SPSA algorithms are suggested to tune the parameters of state feedback controllers with an integrator in the linear-quadratic-Gaussian (LQG) problem formulation. An implementation case study concerning the LQG-based design of an angular position controller for direct-current servo system laboratory equipment is included to highlight the pros and cons of IFT and SPSA from an application point of view. The comparison of the IFT and SPSA algorithms focuses on insights into their implementation.
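
Of the two methods, SPSA is the easier one to sketch: the gradient of a noisy performance index is approximated from only two evaluations per iteration, regardless of the number of tuned parameters, by perturbing all parameters simultaneously with a random sign vector. The gain schedule and the toy quadratic loss below are illustrative, not the paper's controller-tuning setup:

```python
import random

def spsa(loss, theta, iters=500, a=0.2, c=0.1, alpha=0.602, gamma=0.101):
    """Minimal SPSA: two loss evaluations per iteration, shared by every
    parameter component. Gain exponents follow common practice."""
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                # step-size schedule
        ck = c / k ** gamma                # perturbation-size schedule
        delta = [random.choice((-1, 1)) for _ in theta]   # Bernoulli +-1
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = loss(plus) - loss(minus)    # the only two evaluations
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Toy "controller tuning" objective with optimum at theta = (2, -1).
random.seed(1)
opt = spsa(lambda th: (th[0] - 2) ** 2 + (th[1] + 1) ** 2, [0.0, 0.0])
print(opt)   # approaches [2, -1]
```

The appeal in controller tuning is exactly this two-evaluations-per-step property: each evaluation of the loss is an experiment on the physical servo system, so the cost of an iteration does not grow with the number of controller parameters.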

  19. Optimization of the structural and control system for LSS with reduced-order model

    NASA Technical Reports Server (NTRS)

    Khot, N. S.

    1989-01-01

    The objective is the simultaneous design of the structural and control systems for space structures. The minimum weight of the structure is the objective function, and constraints are placed on the closed-loop distribution of the frequencies and the damping parameters. The control approach used is a linear quadratic regulator with constant feedback. A reduced-order control system is used. The effect of uncontrolled modes is taken into consideration by the model error sensitivity suppression (MESS) technique, which modifies the weighting parameters for the control forces. For illustration, an ACOSS-FOUR structure is designed for different numbers of controlled modes with specified values for the closed-loop damping parameters and frequencies. The dynamic responses of the optimum designs to an initial disturbance are compared.
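
The constant-feedback LQR idea is easiest to see in the scalar case: for the plant x' = a*x + b*u with cost integral of (q*x^2 + r*u^2), the algebraic Riccati equation 2*a*p - (b^2/r)*p^2 + q = 0 has a unique positive root p, and the constant gain u = -k*x with k = b*p/r stabilizes the plant. A minimal sketch with illustrative numbers, not the ACOSS-FOUR model:

```python
import math

def lqr_scalar(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation
    2*a*p - (b**2/r)*p**2 + q = 0 for its positive root, and return the
    constant feedback gain k = b*p/r."""
    beta = b * b / r
    p = (2 * a + math.sqrt(4 * a * a + 4 * beta * q)) / (2 * beta)
    k = b * p / r
    return k, p

# Illustrative unstable plant: a = 1, b = 1, with weights q = r = 1.
k, p = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
print(k)         # gain works out to 1 + sqrt(2)
print(1.0 - k)   # closed-loop pole a - b*k = -sqrt(2), i.e. stable
```

In the full multi-mode problem of the abstract, the scalar Riccati equation becomes a matrix equation and the gain a matrix, but the structure is the same: the structural parameters change a, b, q, r, which is why structure and controller can be optimized simultaneously.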

  20. Towards ubiquitous access of computer-assisted surgery systems.

    PubMed

    Liu, Hui; Lufei, Hanping; Shi, Weishong; Chaudhary, Vipin

    2006-01-01

    Traditional stand-alone computer-assisted surgery (CAS) systems impede ubiquitous and simultaneous access by multiple users. With advances in computing and networking technologies, ubiquitous access to CAS systems becomes possible and promising. Based on our preliminary work, CASMIL, a stand-alone CAS server developed at Wayne State University, we propose a novel mobile CAS system, UbiCAS, which allows surgeons to retrieve, review and interpret multimodal medical images, and to perform some critical neurosurgical procedures, on heterogeneous devices from anywhere at any time. Furthermore, various optimization techniques, including caching, prefetching, a pseudo-streaming model, and compression, are used to guarantee the QoS of the UbiCAS system. UbiCAS enables doctors at remote locations to participate actively in remote surgeries and to share patient information in real time before, during, and after the surgery.
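
Among the optimizations listed, caching is the simplest to sketch: a minimal least-recently-used (LRU) cache of the kind a mobile client might keep for recently viewed image slices. The key names are hypothetical and this is an illustration of the general technique, not the actual UbiCAS code:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used

cache = LRUCache(capacity=2)
cache.put("slice_001", b"...")   # hypothetical image-slice keys
cache.put("slice_002", b"...")
cache.get("slice_001")           # touch slice_001 so it stays resident
cache.put("slice_003", b"...")   # evicts slice_002, the LRU entry
print(cache.get("slice_002"))    # None: evicted
print(cache.get("slice_001") is not None)
```

Prefetching and compression compose naturally with such a cache: predicted-next slices are fetched into it ahead of time, and values are stored compressed to stretch the same capacity further.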
