Modeling of tool path for the CNC sheet cutting machines
NASA Astrophysics Data System (ADS)
Petunin, Aleksandr A.
2015-11-01
In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as a discrete optimization problem (a generalized travelling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP, we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
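To make the GTSP framing concrete, the sketch below treats each closed contour as a megalopolis (a cluster of candidate piercing points) and builds a tour with a nearest-neighbour greedy rule; the contour coordinates are assumed toy data, and this heuristic is only an illustration, not the dynamic-programming model of Prof. Chentsov cited above.

    # Illustrative greedy heuristic for a GTSP-style tool-path problem:
    # every contour (megalopolis) must be visited once, at one of its
    # candidate piercing points.  Toy data; not the DP model cited above.
    import math

    contours = {                      # contour id -> candidate piercing points (x, y)
        "A": [(0, 0), (0, 2), (2, 0)],
        "B": [(5, 1), (6, 2)],
        "C": [(1, 6), (3, 5), (2, 7)],
    }
    start = (0, -3)                   # tool home position

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    pos, remaining, tour, length = start, set(contours), [], 0.0
    while remaining:
        # pick the closest piercing point over all unvisited contours
        cid, point = min(
            ((c, p) for c in remaining for p in contours[c]),
            key=lambda cp: dist(pos, cp[1]),
        )
        length += dist(pos, point)
        tour.append((cid, point))
        remaining.remove(cid)
        pos = point

    print("visit order:", tour)
    print("idle travel length: %.2f" % length)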
Cost minimizing of cutting process for CNC thermal and water-jet machines
NASA Astrophysics Data System (ADS)
Tavaeva, Anastasia; Kurennov, Dmitry
2015-11-01
This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy of calculating the objective function parameters for the optimization problem is investigated. The paper shows that the working tool path speed is not a constant value: it depends on several parameters described in the paper. Relations between the working tool path speed and the number of NC program frames, the length of straight cuts, and the part configuration are presented. Based on the obtained results, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved using a mathematical model that takes into account the additional restrictions of thermal cutting (choice of piercing and output tool points, precedence conditions, thermal deformations). In the second part of the paper, non-standard cutting techniques are considered; these may reduce cutting cost and time compared with standard cutting techniques. The paper assesses the effectiveness of applying non-standard cutting techniques. Future research directions are indicated at the end of the paper.
Technical errors in planar bone scanning.
Naddaf, Sleiman Y; Collier, B David; Elgazzar, Abdelhamid H; Khalil, Magdy M
2004-09-01
Optimal technique for planar bone scanning improves image quality, which in turn improves diagnostic efficacy. Because planar bone scanning is one of the most frequently performed nuclear medicine examinations, maintaining high standards for this examination is a daily concern for most nuclear medicine departments. Although some problems such as patient motion are frequently encountered, the degraded images produced by many other deviations from optimal technique are rarely seen in clinical practice and therefore may be difficult to recognize. The objectives of this article are to list optimal techniques for 3-phase and whole-body bone scanning, to describe and illustrate a selection of deviations from these optimal techniques for planar bone scanning, and to explain how to minimize or avoid such technical errors.
Dual-energy KUB radiographic examination for the detection of renal calculus.
Yen, Peggy; Bailly, Greg; Pringle, Christopher; Barnes, David
2014-08-01
The dual-energy radiographic technique has proved to be clinically useful in the thorax. Herein, we attempt to apply this technique to the abdomen and pelvis in the context of renal colic. The visibility of renal calculi was assessed using various dual-energy peak-kilovoltage combination radiographs applied to standard phantoms. The technique requires a higher-than-acceptable radiation dose to optimize image quality, and the optimized diagnostic quality is inferior to that of the standard kidneys, ureters, and bladder (KUB) radiograph. The dual-energy radiographic technique could not better identify radiopaque renal calculi. Limiting technical considerations include increased subcutaneous and peritoneal adipose tissue and the limited contrast between soft tissue and underlying calculi. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Basheti, Iman A; Reddel, Helen K; Armour, Carol L; Bosnic-Anticevich, Sinthia Z
2005-05-01
Optimal effects of asthma medications are dependent on correct inhaler technique. In a telephone survey, 77/87 patients reported that their Turbuhaler technique had not been checked by a health care professional. In a subsequent pilot study, 26 patients were randomized to receive one of 3 Turbuhaler counseling techniques, administered in the community pharmacy. Turbuhaler technique was scored before and 2 weeks after counseling (optimal technique = score 9/9). At baseline, 0/26 patients had optimal technique. After 2 weeks, optimal technique was achieved by 0/7 patients receiving standard verbal counseling (A), 2/8 receiving verbal counseling augmented with emphasis on Turbuhaler position during priming (B), and 7/9 receiving augmented verbal counseling plus physical demonstration (C) (Fisher's exact test for A vs C, p = 0.006). Satisfactory technique (4 essential steps correct) also improved (A: 3/8 to 4/7; B: 2/9 to 5/8; and C: 1/9 to 9/9 patients) (A vs C, p = 0.1). Counseling in Turbuhaler use represents an important opportunity for community pharmacists to improve asthma management, but physical demonstration appears to be an important component of effective training for educating patients toward optimal Turbuhaler technique.
Integer Linear Programming in Computational Biology
NASA Astrophysics Data System (ADS)
Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut
Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
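As a flavour of how such ILP formulations look in practice, the sketch below encodes a tiny set-cover-style selection problem (a simplified stand-in for tasks such as probe selection) with the PuLP modelling library; the data, variable names, and the choice of PuLP are assumptions for illustration and are not taken from the review.

    # Minimal ILP sketch: choose the fewest probes covering all targets.
    # Toy data; requires the PuLP package (pip install pulp).
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

    covers = {                     # probe -> set of targets it detects
        "p1": {"t1", "t2"},
        "p2": {"t2", "t3"},
        "p3": {"t3", "t4"},
        "p4": {"t1", "t4"},
    }
    targets = set().union(*covers.values())

    prob = LpProblem("probe_selection", LpMinimize)
    x = {p: LpVariable(p, cat=LpBinary) for p in covers}       # 1 if probe selected
    prob += lpSum(x.values())                                   # minimise probe count
    for t in targets:                                           # every target covered
        prob += lpSum(x[p] for p in covers if t in covers[p]) >= 1

    prob.solve()
    print(LpStatus[prob.status], [p for p in covers if x[p].value() == 1])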
Optimal Use of TDOA Geo-Location Techniques Within the Mountainous Terrain of Turkey
2012-09-01
Contents and figure-list fragments: Cross-Correlation TDOA Estimation Technique; Standard Deviation; The Effect of Noise on Accuracy; The Effect of Noise to... Text fragment: ...finding techniques. In contrast, people have been using active location finding techniques, such as radar, for decades. When active location finding
Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control
NASA Astrophysics Data System (ADS)
Hu, Juju; Ke, Qiang; Ji, Yinghua
2018-02-01
Minimizing the control time of quantum systems has attracted decades of attention in control science, as it improves efficiency and suppresses environment-induced decoherence. Based on an analysis of the advantages and disadvantages of existing Lyapunov control, we use a bang-bang optimal control technique to investigate fast state control in a closed two-qubit quantum system and give three optimized control-field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared to the standard Lyapunov control or the standard bang-bang control method, the optimized control-field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.
Simulation and Modeling Capability for Standard Modular Hydropower Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Kevin M.; Smith, Brennan T.; Witt, Adam M.
Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.
Iterative metal artifact reduction: evaluation and optimization of technique.
Subhas, Naveen; Primak, Andrew N; Obuchowski, Nancy A; Gupta, Amit; Polster, Joshua M; Krauss, Andreas; Iannotti, Joseph P
2014-12-01
Iterative metal artifact reduction (IMAR) is a sinogram inpainting technique that incorporates high-frequency data from standard weighted filtered back projection (WFBP) reconstructions to reduce metal artifact on computed tomography (CT). This study was designed to compare the image quality of IMAR and WFBP in total shoulder arthroplasties (TSA); determine the optimal amount of WFBP high-frequency data needed for IMAR; and compare image quality of the standard 3D technique with that of a faster 2D technique. Eight patients with nine TSA underwent CT with standardized parameters: 140 kVp, 300 mAs, 0.6 mm collimation and slice thickness, and B30 kernel. WFBP, three 3D IMAR algorithms with different amounts of WFBP high-frequency data (IMARlo, lowest; IMARmod, moderate; IMARhi, highest), and one 2D IMAR algorithm were reconstructed. Differences in attenuation near hardware and away from hardware were measured and compared using repeated measures ANOVA. Five readers independently graded image quality; scores were compared using Friedman's test. Attenuation differences were smaller with all 3D IMAR techniques than with WFBP (p < 0.0063). With increasing high-frequency data, the attenuation difference increased slightly (differences not statistically significant). All readers ranked IMARmod and IMARhi more favorably than WFBP (p < 0.05), with IMARmod ranked highest for most structures. The attenuation difference was slightly higher with 2D than with 3D IMAR, with no significant reader preference for 3D over 2D. IMAR significantly decreases metal artifact compared to WFBP both objectively and subjectively in TSA. The incorporation of a moderate amount of WFBP high-frequency data and use of a 2D reconstruction technique optimize image quality and allow for relatively short reconstruction times.
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with a Gaussian distribution of the device parameters, and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage, respectively, while the reduction in normalized deviation of worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that the optimized cells are less sensitive to variability and are more reliable. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and the Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
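The Monte Carlo variability analysis described above can be sketched roughly as follows: device parameters are drawn from Gaussian distributions and a toy delay expression is evaluated over many sweeps to obtain the mean, standard deviation, and normalized deviation. The delay model and parameter spreads are assumptions, not the paper's FinFET device models.

    # Monte Carlo sketch of process-variation analysis for a logic cell.
    # Gaussian parameter spread and a toy delay model stand in for the
    # FinFET device models used in the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sweeps = 10000

    # nominal values and relative sigma (assumed for illustration)
    vth = rng.normal(0.30, 0.30 * 0.05, n_sweeps)   # threshold voltage [V]
    lg  = rng.normal(16e-9, 16e-9 * 0.03, n_sweeps) # gate length [m]
    vdd = 0.8

    # toy alpha-power-law style delay model (not the paper's model)
    delay = lg * vdd / np.maximum(vdd - vth, 1e-3) ** 1.3

    mean, std = delay.mean(), delay.std()
    print("mean delay      : %.3e" % mean)
    print("std deviation   : %.3e" % std)
    print("normalized dev. : %.2f %%" % (100 * std / mean))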
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
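For a small linear test problem, the difference between the two projections can be sketched directly: the Galerkin ROM projects the operator and is then time-discretized, while the discrete-optimal (least-squares Petrov-Galerkin) ROM minimizes the residual of the time-discrete equations at each step. The toy system below is an assumption and does not represent GNAT or the turbulent-flow cases in the report.

    # Galerkin vs. discrete-optimal (least-squares) projection on a toy
    # linear ODE dx/dt = A x, reduced with a random orthonormal basis V.
    import numpy as np

    rng = np.random.default_rng(1)
    n, k, dt, steps = 50, 5, 0.01, 200

    A = -np.diag(rng.uniform(0.5, 5.0, n))            # stable full-order operator
    x0 = rng.normal(size=n)
    V, _ = np.linalg.qr(rng.normal(size=(n, k)))      # reduced basis

    # Galerkin ROM: q_{n+1} solves (I - dt*V^T A V) q_{n+1} = q_n
    Ar = V.T @ A @ V
    q_g = V.T @ x0
    # LSPG ROM: q_{n+1} = argmin || (I - dt*A) V q - V q_n ||_2
    M = (np.eye(n) - dt * A) @ V
    q_l = V.T @ x0
    x = x0.copy()
    for _ in range(steps):
        x = np.linalg.solve(np.eye(n) - dt * A, x)    # full-order backward Euler
        q_g = np.linalg.solve(np.eye(k) - dt * Ar, q_g)
        q_l = np.linalg.lstsq(M, V @ q_l, rcond=None)[0]

    print("Galerkin error :", np.linalg.norm(V @ q_g - x))
    print("LSPG error     :", np.linalg.norm(V @ q_l - x))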
Low order H∞ optimal control for ACFA blended wing body aircraft
NASA Astrophysics Data System (ADS)
Haniš, T.; Kucera, V.; Hromčík, M.
2013-12-01
Advanced nonconvex nonsmooth optimization techniques for fixed-order H∞ robust control are proposed in this paper for the design of flight control systems (FCS) with a prescribed structure. Compared to classical techniques - tuning and successive closure of particular single-input single-output (SISO) loops such as dampers and attitude stabilizers - all loops are designed simultaneously by means of a fairly intuitive selection of weighting filters. In contrast to standard optimization techniques (H2, H∞ optimization), however, the resulting controller respects the prescribed structure in terms of engaged channels and orders (e.g., proportional (P), proportional-integral (PI), and proportional-integral-derivative (PID) controllers). In addition, robustness with regard to multimodel uncertainty is also addressed, which is of great importance for aerospace applications. In this way, robust controllers for various Mach numbers, altitudes, or mass cases can be obtained directly, based only on the mathematical models for the respective combinations of the flight parameters.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
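A minimal sketch of the spreadsheet-style decision analysis mentioned above: each candidate crop receives a weighted score across criteria such as oxygen output, edible biomass, volume, and power. The crops, criteria, and weights are illustrative assumptions, not data from the CELSS study.

    # Weighted decision-matrix sketch for crop selection (illustrative numbers).
    # Higher scores are better; volume and power are entered as penalties.
    criteria_weights = {"o2_output": 0.35, "edible_mass": 0.35,
                        "volume": -0.15, "power": -0.15}

    crops = {                       # crop -> normalized scores per criterion (0..1)
        "wheat":   {"o2_output": 0.9, "edible_mass": 0.8, "volume": 0.6, "power": 0.7},
        "potato":  {"o2_output": 0.7, "edible_mass": 0.9, "volume": 0.5, "power": 0.5},
        "lettuce": {"o2_output": 0.4, "edible_mass": 0.3, "volume": 0.2, "power": 0.3},
    }

    scores = {c: sum(criteria_weights[k] * v[k] for k in criteria_weights)
              for c, v in crops.items()}
    for crop, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print("%-8s %.3f" % (crop, s))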
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1992-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show, using statistically rigorous arguments, that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum-likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one that reveals the maximal eigenvalue of the square of the system observables. We also show that these conclusions do not change in the presence of technical noise.
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
Microspoiler Actuation for Guided Projectiles
2016-01-06
and be hardened to gun-launch. Several alternative designs will be explored using various actuation techniques, and downselection to an optimal design...aerodynamic optimization of the microspoiler mechanism, mechanical design/gun hardening, and parameter estimation from experimental data. These...performed using the aerodynamic parameters in Table 2. Projectile trajectories were simulated without gravity at zero gun elevation. The standard 30mm
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
76 FR 16728 - Announcement of the American Petroleum Institute's Standards Activities
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... voluntary standards for equipment, materials, operations, and processes for the petroleum and natural gas... Techniques for Designing and/or Optimizing Gas-lift Wells and Systems, 1st Ed. RP 13K, Chemical Analysis of... Q2, Quality Management Systems for Service Supply Organizations for the Petroleum and Natural Gas...
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that the standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems of discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as to other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
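The core big bang-big crunch loop is compact enough to sketch: a 'big crunch' collapses the population to its fitness-weighted centre of mass, and a 'big bang' scatters new candidates around that centre with a shrinking spread. The sphere test function and parameters below are assumptions for illustration, not the discrete truss sizing problem or the article's refinements.

    # Standard big bang-big crunch (BB-BC) sketch on a continuous test function.
    # Discrete sizing of steel trusses would map candidates to section tables;
    # here a simple sphere function stands in for the structural analysis.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, pop, iters = 5, 30, 200
    lo, hi = -10.0, 10.0

    def objective(x):                      # placeholder for truss weight + penalties
        return np.sum(x ** 2)

    X = rng.uniform(lo, hi, (pop, dim))    # initial big bang
    best, best_f = None, np.inf
    for k in range(1, iters + 1):
        f = np.array([objective(x) for x in X])
        if f.min() < best_f:
            best_f, best = f.min(), X[f.argmin()].copy()
        w = 1.0 / (f + 1e-12)              # fitness weights (minimization)
        centre = (w[:, None] * X).sum(axis=0) / w.sum()       # big crunch
        sigma = (hi - lo) / k              # shrinking spread
        X = centre + sigma * rng.standard_normal((pop, dim))  # next big bang
        X = np.clip(X, lo, hi)

    print("best objective: %.4g" % best_f)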
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
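The maximum-likelihood baseline in that comparison can be sketched in a few lines: each class is modelled as a multivariate Gaussian fitted to training pixels, and a pixel is assigned to the class with the highest log-likelihood. The synthetic band data below is an assumption standing in for Landsat training samples.

    # Gaussian maximum-likelihood classifier sketch for multispectral pixels.
    # Synthetic 4-band data stands in for Landsat training samples.
    import numpy as np

    rng = np.random.default_rng(0)
    bands, classes, n_train = 4, 3, 200

    # synthetic training pixels per class
    means = rng.uniform(20, 200, (classes, bands))
    train = {c: rng.normal(means[c], 8.0, (n_train, bands)) for c in range(classes)}

    # fit per-class Gaussian: mean vector and covariance matrix
    params = {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in train.items()}

    def classify(pixel):
        scores = []
        for c, (mu, cov) in params.items():
            d = pixel - mu
            logdet = np.linalg.slogdet(cov)[1]
            scores.append(-0.5 * (d @ np.linalg.solve(cov, d) + logdet))
        return int(np.argmax(scores))      # class with highest log-likelihood

    test = rng.normal(means[1], 8.0, (50, bands))     # pixels drawn from class 1
    acc = np.mean([classify(p) == 1 for p in test])
    print("accuracy on held-out class-1 pixels:", acc)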
Electroplating and stripping copper on molybdenum and niobium
NASA Technical Reports Server (NTRS)
Power, J. L.
1978-01-01
Molybdenum and niobium are often electroplated with copper that is subsequently stripped. Since standard plating techniques produce poor-quality coatings on these metals, general procedures have been optimized and specified to give good results.
Improved Propulsion Modeling for Low-Thrust Trajectory Optimization
NASA Technical Reports Server (NTRS)
Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.
2017-01-01
Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.
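One common way to keep a propulsion model smooth for gradient-based optimization is to replace hard clamps on the thruster operating envelope with smooth surrogates; the sketch below uses toy polynomial thrust and mass-flow curves with a softplus-based clamp. The coefficients and the smoothing choice are assumptions, not the model developed in the paper.

    # Sketch of a smooth (C^1) thruster model for gradient-based trajectory
    # optimization: polynomial thrust/mdot vs. input power, with a softplus
    # clamp instead of a hard min/max.  Coefficients are illustrative only.
    import numpy as np

    def softplus(x, k=50.0):
        return np.logaddexp(0.0, k * x) / k     # smooth approximation of max(x, 0)

    def smooth_clip(x, lo, hi):
        return lo + softplus(x - lo) - softplus(x - hi)

    def thruster(power_kw):
        """Return (thrust [N], mass flow [kg/s]) as smooth functions of power."""
        p = smooth_clip(power_kw, 0.6, 4.5)                 # operating envelope
        thrust = 1e-3 * (-5.0 + 60.0 * p - 2.5 * p**2)      # toy polynomial fit
        mdot   = 1e-6 * (1.0 + 6.0 * p)                     # toy mass-flow fit
        return thrust, mdot

    for p in (0.2, 1.0, 3.0, 6.0):       # below, inside, and above the envelope
        t, m = thruster(p)
        print("P=%.1f kW  thrust=%.4f N  mdot=%.2e kg/s" % (p, t, m))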
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4 degrees (standard deviation, 2.3 degrees ; range, 1 degrees -9 degrees ) versus 12 degrees (standard deviation, 5.5 degrees ; range, 5 degrees -24 degrees ) using freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3 degrees (standard deviation, 2.1 degrees ; range, 0 degrees -9 degrees ) versus 10.7 degrees (standard deviation, 4.9 degrees ; range, 2 degrees -17 degrees ) in freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6 degrees (standard deviation, 2.0 degrees ; range, 1 degrees -9 degrees ) versus 10.6 degrees (standard deviation, 4.4 degrees ; range, 3 degrees -17 degrees ) with freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using a freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The methodology proposed can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.
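The MART family referenced above shares a single multiplicative update rule; a tiny dense-matrix version is sketched below on a random weighting matrix, which is an assumption standing in for the large sparse camera-to-voxel weights of a real tomo-PIV setup.

    # Minimal MART (multiplicative algebraic reconstruction technique) sketch.
    # W maps voxel intensities to pixel intensities; real tomo-PIV uses large
    # sparse W built from camera calibrations -- this toy system is dense.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_vox, mu, iters = 60, 100, 0.5, 50

    W = rng.uniform(0.0, 1.0, (n_pix, n_vox))
    W[W < 0.7] = 0.0                          # sparsify: each pixel sees few voxels
    x_true = np.zeros(n_vox)
    x_true[rng.choice(n_vox, 10, replace=False)] = 1.0   # a few bright particles
    b = W @ x_true                            # "recorded" pixel intensities

    x = np.ones(n_vox)                        # uniform initial guess
    for _ in range(iters):
        for i in range(n_pix):                # one MART sweep over all pixels
            if b[i] == 0:
                x[W[i] > 0] = 0.0             # pixels seeing nothing zero their voxels
                continue
            proj = W[i] @ x
            if proj <= 0:
                continue
            x *= (b[i] / proj) ** (mu * W[i]) # multiplicative, entrywise update

    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print("relative reconstruction error: %.3f" % err)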
Peek, Mirjam Cl; Charalampoudis, Petros; Anninga, Bauke; Baker, Rose; Douek, Michael
2017-02-01
The combined technique (radioisotope and blue dye) is the gold standard for sentinel lymph node biopsy (SLNB) and there is wide variation in techniques and blue dyes used. We performed a systematic review and meta-analysis to assess the need for radioisotope and the optimal blue dye for SLNB. A total of 21 studies were included. The SLNB identification rates are high with all the commonly used blue dyes. Furthermore, methylene blue is superior to iso-sulfan blue and Patent Blue V with respect to false-negative rates. The combined technique remains the most accurate and effective technique for SLNB. In order to standardize the SLNB technique, comparative trials to determine the most effective blue dye and national guidelines are required.
Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.
Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena
2017-10-17
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines.
Standardized Method for High-throughput Sterilization of Arabidopsis Seeds
Calhoun, Chistopher S.; Grotewold, Erich; Brkljacic, Jelena
2017-01-01
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines. PMID:29155739
A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan
2009-01-01
Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criteria for a good path is determined by overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques have proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.
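A stripped-down sketch of the roadmap ingredient of that hybrid: sample nodes, connect near neighbours, and run Dijkstra with an edge cost that mixes path length and risk accumulated inside assumed circular risk zones. The 2D environment, risk model, and parameters are illustrative assumptions; the planner described above additionally handles 3D dynamics and limited controllability.

    # Probabilistic roadmap (PRM) sketch with a risk-weighted edge cost.
    # 2D toy environment with circular risk zones; no aircraft dynamics.
    import heapq
    import math
    import random

    random.seed(1)
    ZONES = [((40, 50), 15), ((70, 20), 10)]     # (centre, radius) risk discs
    START, GOAL, N, R = (5, 5), (95, 95), 250, 18

    def seg_risk(p, q, samples=20):
        """Approximate risk of a segment by sampling points along it."""
        risk = 0.0
        for i in range(samples + 1):
            t = i / samples
            x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
            for (cx, cy), rad in ZONES:
                if math.hypot(x - cx, y - cy) < rad:
                    risk += 1.0 / samples
        return risk

    nodes = [START, GOAL] + [(random.uniform(0, 100), random.uniform(0, 100))
                             for _ in range(N)]
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        for j, q in enumerate(nodes):
            if i < j and math.dist(p, q) < R:
                cost = math.dist(p, q) + 50.0 * seg_risk(p, q)   # length + risk
                edges[i].append((j, cost))
                edges[j].append((i, cost))

    # Dijkstra from START (node 0) to GOAL (node 1)
    dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))

    if 1 in prev:
        path, u = [], 1
        while u != 0:
            path.append(nodes[u])
            u = prev[u]
        print("risk-weighted cost: %.1f, waypoints: %d" % (dist[1], len(path) + 1))
    else:
        print("goal not connected; increase N or R")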
A genetic algorithm solution to the unit commitment problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.
1996-02-01
This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.
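A toy version of the varying-quality-function idea: each chromosome is a unit on/off decision vector, the fitness adds a demand-shortfall penalty whose weight grows with the generation count, and standard crossover and mutation operators evolve the population. The unit data and penalty schedule are assumptions, far simpler than the systems of up to 100 units treated in the paper.

    # Toy GA for a one-period unit commitment with a generation-dependent
    # (varying quality function) penalty on unmet demand.  Illustrative data.
    import random

    random.seed(0)
    COST = [10, 15, 22, 30, 34]       # running cost of each unit if committed
    CAP  = [50, 40, 35, 25, 20]       # capacity of each unit [MW]
    DEMAND, POP, GENS = 120, 30, 150

    def fitness(bits, gen):
        cost = sum(c for c, b in zip(COST, bits) if b)
        shortfall = max(0, DEMAND - sum(p for p, b in zip(CAP, bits) if b))
        penalty = (1 + 10 * gen / GENS) * shortfall      # weight grows with gen
        return cost + penalty                            # lower is better

    pop = [[random.randint(0, 1) for _ in COST] for _ in range(POP)]
    for gen in range(GENS):
        new = []
        for _ in range(POP):
            a = min(random.sample(pop, 3), key=lambda s: fitness(s, gen))
            b = min(random.sample(pop, 3), key=lambda s: fitness(s, gen))
            cut = random.randrange(1, len(COST))          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.15:                    # bit-flip mutation
                i = random.randrange(len(COST))
                child[i] ^= 1
            new.append(child)
        pop = new

    best = min(pop, key=lambda s: fitness(s, GENS))
    print("schedule:", best,
          "capacity:", sum(p for p, b in zip(CAP, best) if b),
          "cost:", sum(c for c, b in zip(COST, best) if b))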
Design of Quiet Rotorcraft Approach Trajectories
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Burley, Casey L.; Boyd, D. Douglas, Jr.; Marcolini, Michael A.
2009-01-01
An optimization procedure for identifying quiet rotorcraft approach trajectories is proposed and demonstrated. The procedure employs a multi-objective genetic algorithm in order to reduce noise and create approach paths that will be acceptable to pilots and passengers. The concept is demonstrated by application to two different helicopters. The optimized paths are compared with one another and to a standard 6-deg approach path. The two demonstration cases validate the optimization procedure but highlight the need for improved noise prediction techniques and for additional rotorcraft acoustic data sets.
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error/station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
Lallart, Mickaël; Garbuio, Lauric; Petit, Lionel; Richard, Claude; Guyomar, Daniel
2008-10-01
This paper presents a new technique for optimized energy harvesting using piezoelectric microgenerators, called double synchronized switch harvesting (DSSH). This technique consists of a nonlinear treatment of the output voltage of the piezoelectric element. It also integrates an intermediate switching stage that ensures optimal harvested power whatever the load connected to the microgenerator. Theoretical developments are presented considering either constant vibration magnitude, constant driving force, or independent extraction. Then experimental measurements are carried out to validate the theoretical predictions. This technique exhibits a constant output power for a wide range of loads connected to the microgenerator. In addition, the extracted power obtained using such a technique allows a gain of up to 500% in terms of maximal power output compared with the standard energy harvesting method. It is also shown that such a technique allows fine-tuning of the trade-off between vibration damping and energy harvesting.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
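The chance-constraint idea can be sketched with a toy terminating simulation: for each candidate resource level, replications estimate the probability that a turnaround-time requirement is met, and the cheapest level satisfying the constraint is selected. The queueing model, threshold, and replication count are assumptions, not the launch-vehicle model from the paper.

    # Chance-constrained selection of a resource level via a terminating
    # discrete-event-style simulation.  Toy single-queue model; illustrative only.
    import random
    import statistics

    random.seed(0)

    def simulate_day(crews):
        """Return makespan [h] for processing 20 jobs with a given crew count."""
        free_at = [0.0] * crews                    # next-free time per crew
        t = 0.0
        for _ in range(20):
            t += random.expovariate(1 / 0.4)       # job arrival gap ~0.4 h
            i = min(range(crews), key=lambda k: free_at[k])
            start = max(t, free_at[i])
            free_at[i] = start + random.gauss(1.5, 0.3)   # service time ~1.5 h
        return max(free_at)

    TARGET, ALPHA, REPS = 14.0, 0.90, 500          # require P(makespan <= 14 h) >= 0.90
    for crews in range(1, 8):
        runs = [simulate_day(crews) for _ in range(REPS)]
        p_ok = sum(m <= TARGET for m in runs) / REPS
        print("crews=%d  mean=%.1f h  P(<=%.0f h)=%.2f" %
              (crews, statistics.mean(runs), TARGET, p_ok))
        if p_ok >= ALPHA:
            print("smallest feasible crew count:", crews)
            break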
Optimal control of a harmonic oscillator: Economic interpretations
NASA Astrophysics Data System (ADS)
Janová, Jitka; Hampel, David
2013-10-01
Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criterion function and Lagrange multipliers in the profit maximization problem is well known. Using a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for a harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these in other problems.
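As a neighbouring worked example (not the paper's exact Phillips-model formulation), the sketch below solves a linear-quadratic regulator for a damped harmonic oscillator with SciPy's Riccati solver; the Riccati matrix also yields the costate, which carries the usual shadow-price interpretation. The oscillator parameters and weights are assumed.

    # LQR for a damped harmonic oscillator x'' + 2*zeta*w*x' + w^2*x = u.
    # The Riccati matrix P gives both the feedback law u = -K x and the
    # costate (shadow price) lambda = P x.  Parameters are illustrative.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    w, zeta = 1.0, 0.1
    A = np.array([[0.0, 1.0], [-w**2, -2.0 * zeta * w]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([1.0, 0.1])          # penalty on deviation and its rate
    R = np.array([[0.5]])            # penalty on the control effort

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain

    x = np.array([1.0, 0.0])         # initial deviation from the target path
    dt = 0.01
    for _ in range(2000):            # simulate the closed loop for 20 time units
        u = -K @ x
        x = x + dt * (A @ x + B @ u).ravel()

    print("feedback gain K:", np.round(K, 3))
    print("state after 20 units:", np.round(x, 4))
    print("costate (shadow price) at t=0:", np.round(P @ np.array([1.0, 0.0]), 3))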
Optimization of combined electron and photon beams for breast cancer
NASA Astrophysics Data System (ADS)
Xiong, W.; Li, J.; Chen, L.; Price, R. A.; Freedman, G.; Ding, M.; Qin, L.; Yang, J.; Ma, C.-M.
2004-05-01
Recently, intensity-modulated radiation therapy and modulated electron radiotherapy have gathered a growing interest for the treatment of breast and head and neck tumours. In this work, we carried out a study to combine electron and photon beams to achieve differential dose distributions for multiple target volumes simultaneously. A Monte Carlo based treatment planning system was investigated, which consists of a set of software tools to perform accurate dose calculation, treatment optimization, leaf sequencing and plan analysis. We compared breast treatment plans generated using this home-grown optimization and dose calculation software for different treatment techniques. Five different planning techniques have been developed for this study based on a standard photon beam whole breast treatment and an electron beam tumour bed cone down. Technique 1 includes two 6 MV tangential wedged photon beams followed by an anterior boost electron field. Technique 2 includes two 6 MV tangential intensity-modulated photon beams and the same boost electron field. Technique 3 optimizes two intensity-modulated photon beams based on a boost electron field. Technique 4 optimizes two intensity-modulated photon beams and the weight of the boost electron field. Technique 5 combines two intensity-modulated photon beams with an intensity-modulated electron field. Our results show that technique 2 can reduce hot spots both in the breast and the tumour bed compared to technique 1 (dose inhomogeneity is reduced from 34% to 28% for the target). Techniques 3, 4 and 5 can deliver a more homogeneous dose distribution to the target (with dose inhomogeneities for the target of 22%, 20% and 9%, respectively). In many cases techniques 3, 4 and 5 can reduce the dose to the lung and heart. It is concluded that combined photon and electron beam therapy may be advantageous for treating breast cancer compared to conventional treatment techniques using tangential wedged photon beams followed by a boost electron field.
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
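A compact sketch of that idea on a one-dimensional toy problem: a radial-basis surrogate is fitted to the evaluated points, and the next sample minimizes a merit function that subtracts a distance-to-data term from the surrogate prediction, so the search balances objective improvement against improving the approximation. The objective, basis, and weighting are assumptions; the paper's merit functions are defined more rigorously.

    # Surrogate-based search with a merit function that balances predicted
    # objective value against distance from previously sampled points.
    # Toy 1-D problem; the expensive objective is just a cheap stand-in here.
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_f(x):                      # stand-in for a costly simulation
        return (x - 2.3) ** 2 + 0.5 * np.sin(5 * x)

    X = list(rng.uniform(0, 5, 3))           # initial design points
    Y = [expensive_f(x) for x in X]
    grid = np.linspace(0, 5, 501)
    rho = 2.0                                 # weight on the exploration term

    for it in range(10):
        Xa, Ya = np.array(X), np.array(Y)
        # radial-basis surrogate: f_hat(x) = sum_i c_i * exp(-(x - X_i)^2)
        Phi = np.exp(-(Xa[:, None] - Xa[None, :]) ** 2)
        c = np.linalg.solve(Phi + 1e-8 * np.eye(len(Xa)), Ya)
        f_hat = np.exp(-(grid[:, None] - Xa[None, :]) ** 2) @ c
        dist = np.min(np.abs(grid[:, None] - Xa[None, :]), axis=1)
        merit = f_hat - rho * dist            # low prediction, far from samples
        x_next = grid[np.argmin(merit)]
        X.append(float(x_next))
        Y.append(expensive_f(x_next))

    best = int(np.argmin(Y))
    print("best point %.3f with value %.3f after %d evaluations"
          % (X[best], Y[best], len(X)))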
Data Mining of Macromolecular Structures.
van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P
2016-01-01
The use of macromolecular structures is widespread for a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques to these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years toward becoming the standard resource for macromolecular structures. However, the process of selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we will first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effect that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we will provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.
Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P
2017-01-01
The Nurse Rostering Problem is an NP-hard combinatorial optimization and scheduling problem in which a set of nurses is assigned to shifts per day, considering both hard and soft constraints. A novel metaheuristic technique is required for solving the Nurse Rostering Problem (NRP). This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective problem of optimizing scheduling problems. MODBCO integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
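For intuition about the local-search component, the sketch below runs SciPy's stock Nelder-Mead on a small continuous relaxation with an assumed soft-constraint cost; it is not the modified Nelder-Mead method or the INRC2010 model used in MODBCO.

    # Using SciPy's Nelder-Mead as a stand-in for the local-search phase of
    # MODBCO.  The continuous objective is an illustrative soft-constraint
    # cost (balanced workloads close to preferences), not the INRC2010 model.
    import numpy as np
    from scipy.optimize import minimize

    preferred = np.array([4.0, 5.0, 3.0, 6.0, 4.0])   # preferred shifts per nurse
    required_total = 23.0                             # shifts that must be covered

    def soft_cost(loads):
        coverage_gap = (loads.sum() - required_total) ** 2      # cover all shifts
        preference = np.sum((loads - preferred) ** 2)           # respect preferences
        fairness = np.var(loads)                                # balance workloads
        return 10 * coverage_gap + preference + 5 * fairness

    result = minimize(soft_cost, x0=np.full(5, 4.0), method="Nelder-Mead",
                      options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
    print("assigned loads:", np.round(result.x, 2))
    print("soft-constraint cost:", round(result.fun, 4))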
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem
Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.
2017-01-01
The Nurse Rostering Problem is an NP-hard combinatorial optimization and scheduling problem in which a set of nurses is assigned to shifts per day, considering both hard and soft constraints. A novel metaheuristic technique is required for solving the Nurse Rostering Problem (NRP). This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective problem of optimizing scheduling problems. MODBCO integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against each of the mentioned hazards. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
Applied Computational Electromagnetics Society Journal, Volume 9, Number 2
1994-07-01
Excerpt fragments from the issue's scope and masthead: input/output standardization; code or technique optimization and error minimization; innovations in solution technique or in data input/output; masthead fragments naming W. Perry Wheless and, from the Department of Electrical and Computer Engineering, University of Toronto (Toronto, Ontario, Canada), Adalbert Konrad and Paul P. Biringer.
Solutions for medical databases optimal exploitation.
Branescu, I; Purcarea, V L; Dobrescu, R
2014-03-15
The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, making this technique relevant to a much wider range of medical applications as logistical support for data warehousing. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible "multimodel" federated system for extending OLAP querying to external object databases.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Giudice, Valentina; Feng, Xingmin; Kajigaya, Sachiko; Young, Neal S.; Biancotto, Angélique
2017-01-01
Fluorescent cell barcoding (FCB) is a cell-based multiplexing technique for high-throughput flow cytometry. Barcoded samples can be stained and acquired collectively, minimizing staining variability and antibody consumption, and decreasing required sample volumes. Combined with functional measurements, FCB can be used for drug screening, signaling profiling, and cytokine detection, but technical issues are present. We optimized the FCB technique for routine utilization using DyLight 350, DyLight 800, Pacific Orange, and CBD500 for barcoding six, nine, or 36 human peripheral blood specimens. Working concentrations of FCB dyes ranging from 0 to 500 μg/ml were tested, and viability dye staining was optimized to increase robustness of data. A five-color staining with surface markers for Vβ usage analysis in CD4+ and CD8+ T cells was achieved in combination with nine sample barcoding. We provide improvements of the FCB technique that should be useful for multiplex drug screening and for lymphocyte characterization and perturbations in the diagnosis and during the course of disease. PMID:28692789
Image-guided optimization of the ECG trace in cardiac MRI.
Barnwell, James D; Klein, J Larry; Stallings, Cliff; Sturm, Amanda; Gillespie, Michael; Fine, Jason; Hyslop, W Brian
2012-03-01
Improper electrocardiogram (ECG) lead placement resulting in suboptimal gating may lead to reduced image quality in cardiac magnetic resonance imaging (CMR). A patient-specific systematic technique for rapid optimization of lead placement may improve CMR image quality. A rapid 3-dimensional image of the thorax was used to guide the realignment of ECG leads relative to the cardiac axis of the patient in forty consecutive adult patients. Using our novel approach and consensus reading of pre- and post-correction ECG traces, seventy-three percent of patients had a qualitative improvement in their ECG tracings, and no patient had a decrease in quality of their ECG tracing following the correction technique. Statistically significant improvement was observed independent of gender, body mass index, and cardiac rhythm. This technique provides an efficient option to improve the quality of the ECG tracing in patients who have a poor-quality ECG with standard techniques.
Endoscopic innovations to increase the adenoma detection rate during colonoscopy
Dik, Vincent K; Moons, Leon MG; Siersema, Peter D
2014-01-01
Up to a quarter of polyps and adenomas are missed during colonoscopy due to poor visualization behind folds and the inner curves of flexures, and the presence of flat lesions that are difficult to detect. These numbers may however be conservative because they mainly come from back-to-back studies performed with standard colonoscopes, which are unable to visualize the entire mucosal surface. In the past several years, new endoscopic techniques have been introduced to improve the detection of polyps and adenomas. The introduction of high definition colonoscopes and visual image enhancement technologies has been suggested to lead to better recognition of flat and small lesions, but the absolute increase in diagnostic yield seems limited. Cap-assisted colonoscopy and water-exchange colonoscopy are methods to facilitate cecal intubation and increase patient comfort, but show only a marginal or no benefit on polyp and adenoma detection. Retroflexion is routinely used in the rectum for the inspection of the dentate line, but withdrawal in retroflexion in the colon is in general not recommended due to the risk of perforation. In contrast, colonoscopy with the Third-Eye Retroscope® may result in considerably lower miss rates compared to standard colonoscopy, but this technique is not practical in case of polypectomy and is more time consuming. The recently introduced Full Spectrum Endoscopy™ colonoscopes maintain the technical capabilities of standard colonoscopes and provide a much wider view of 330 degrees, compared to 170 degrees with standard colonoscopes. Remarkably lower adenoma miss rates with this new technique were recently demonstrated in the first randomized study. Nonetheless, more studies are required to determine the exact additional diagnostic yield in clinical practice. Optimizing the efficacy of colorectal cancer screening and surveillance requires high definition colonoscopes with improved virtual chromoendoscopy technology that visualize the whole colon mucosa while maintaining optimal washing, suction and therapeutic capabilities, and keeping procedural time and patient discomfort as low as possible. PMID:24605019
Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł
2014-01-01
Aim: The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background: 3D-CRT is usually realized with forward planning based on a trial and error method. Several authors have published a few methods of beam weight optimization applicable to 3D-CRT. Still, none of these methods is in common use. Materials and methods: Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields and maximum dose to the femoral heads. The method was tested for 10 patients with prostate cancer, treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in the target volume, and minimum and maximum doses were analyzed. Results: The quality of plans obtained with the proposed optimization algorithm was comparable to that prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners for optimization of beam weights and wedge angles. Introducing a correction factor for the patient body outline for the dose gradient at the ICRU point improved dose distribution homogeneity. On average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose–volume histogram for the rectum was observed. Conclusions: Optimization greatly shortens planning time. The average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
NASA Astrophysics Data System (ADS)
Paek, Seung Weon; Kang, Jae Hyun; Ha, Naya; Kim, Byung-Moo; Jang, Dae-Hyun; Jeon, Junsu; Kim, DaeWook; Chung, Kun Young; Yu, Sung-eun; Park, Joo Hyun; Bae, SangMin; Song, DongSup; Noh, WooYoung; Kim, YoungDuck; Song, HyunSeok; Choi, HungBok; Kim, Kee Sup; Choi, Kyu-Myung; Choi, Woonhyuk; Jeon, JoongWon; Lee, JinWoo; Kim, Ki-Su; Park, SeongHo; Chung, No-Young; Lee, KangDuck; Hong, YoungKi; Kim, BongSeok
2012-03-01
A set of design for manufacturing (DFM) techniques has been developed and applied to 45nm, 32nm and 28nm logic process technologies. A novel technology combined a number of potentially conflicting DFM techniques into a comprehensive solution. These techniques work in three phases for design optimization and one phase for silicon diagnostics. In the DFM prevention phase, foundation IP such as standard cells, IO, and memory and the P&R tech file are optimized. In the DFM solution phase, which happens during the ECO step, auto fixing of process-weak patterns and advanced RC extraction are performed. In the DFM polishing phase, post-layout tuning is done to improve manufacturability. DFM analysis enables prioritization of random and systematic failures. The DFM technique presented in this paper has been silicon-proven with three successful tape-outs in Samsung 32nm processes; about 5% improvement in yield was achieved without any notable side effects. Visual inspection of silicon also confirmed the positive effect of the DFM techniques.
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
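A minimal sketch of a warping function of this kind (displacement vectors attenuated by a distance-weighted exponential kernel with a landmark-specific weighting factor) is given below. The kernel width, the normalization, and the toy landmarks are assumptions, and the evolution-strategy optimization of the weights is not reproduced.

```python
import numpy as np

def warp_points(points, landmarks_src, landmarks_dst, weights, sigma=10.0):
    """Displace each point by a weighted sum of landmark displacement vectors.

    Each landmark contributes its displacement (dst - src), attenuated by a
    distance-weighted exponential kernel scaled by a landmark-specific weighting
    factor. Kernel shape and normalization are illustrative assumptions.
    """
    disp = landmarks_dst - landmarks_src                   # (L, 3) displacement vectors
    warped = np.empty_like(points, dtype=float)
    for i, p in enumerate(points):
        d = np.linalg.norm(landmarks_src - p, axis=1)      # distances to all landmarks
        k = weights * np.exp(-d / sigma)                   # weighted exponential kernel
        k /= k.sum() + 1e-12                               # normalize contributions
        warped[i] = p + k @ disp                           # weighted sum of displacements
    return warped

# Toy usage: 4 landmarks in a 3-D volume, unit weights
src = np.array([[10, 10, 10], [40, 10, 10], [10, 40, 10], [10, 10, 40]], dtype=float)
dst = src + np.array([[2, 0, 0], [0, 3, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
moved = warp_points(np.array([[20.0, 20.0, 20.0]]), src, dst, weights=np.ones(4))
```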
Performance of Optimized Actuator and Sensor Arrays in an Active Noise Control System
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.; Lyle, K. H.; Cline, J. H.; Cabell, R. H.
1996-01-01
Experiments have been conducted in NASA Langley's Acoustics and Dynamics Laboratory to determine the effectiveness of optimized actuator/sensor architectures and controller algorithms for active control of harmonic interior noise. Tests were conducted in a large scale fuselage model - a composite cylinder which simulates a commuter class aircraft fuselage with three sections of trim panel and a floor. Using an optimization technique based on the component transfer functions, combinations of 4 out of 8 piezoceramic actuators and 8 out of 462 microphone locations were evaluated against predicted performance. A combinatorial optimization technique called tabu search was employed to select the optimum transducer arrays. Three test frequencies represent the cases of a strong acoustic and strong structural response, a weak acoustic and strong structural response and a strong acoustic and weak structural response. Noise reduction was obtained using a Time Averaged/Gradient Descent (TAGD) controller. Results indicate that the optimization technique successfully predicted best and worst case performance. An enhancement of the TAGD control algorithm was also evaluated. The principal components of the actuator/sensor transfer functions were used in the PC-TAGD controller. The principal components are shown to be independent of each other while providing control as effective as the standard TAGD.
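A bare-bones version of a tabu search for transducer selection might look like the sketch below. The swap neighbourhood, tabu tenure, and the toy score function are assumptions; the actual study scored candidate arrays using the measured component transfer functions.

```python
import random

def tabu_select(n_total, k, score, iters=30, tabu_len=15):
    """Choose k of n_total transducer locations by tabu search (illustrative only).

    `score(subset)` is assumed to return predicted performance (higher is better).
    Each move swaps one selected location for an unselected one; recently removed
    locations are tabu so the search does not immediately undo its moves.
    """
    current = random.sample(range(n_total), k)
    best, best_val = list(current), score(current)
    tabu = []
    for _ in range(iters):
        candidates = []
        for out_idx in current:                       # element leaving the set
            for in_idx in range(n_total):             # element entering the set
                if in_idx in current or in_idx in tabu:
                    continue
                trial = [x for x in current if x != out_idx] + [in_idx]
                candidates.append((score(trial), trial, out_idx))
        if not candidates:
            break
        val, trial, removed = max(candidates, key=lambda c: c[0])
        current = trial
        tabu = (tabu + [removed])[-tabu_len:]         # fixed-length tabu list
        if val > best_val:
            best, best_val = list(trial), val
    return best, best_val

# Toy example: pick 8 of 462 microphone indices, rewarding widely spread choices
best, val = tabu_select(462, 8, score=lambda s: min(abs(a - b) for a in s for b in s if a != b),
                        iters=20)
```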
Optimized protocol for combined PALM-dSTORM imaging.
Glushonkov, O; Réal, E; Boutant, E; Mély, Y; Didier, P
2018-06-08
Multi-colour super-resolution localization microscopy is an efficient technique to study a variety of intracellular processes, including protein-protein interactions. This technique requires specific labels that display transition between fluorescent and non-fluorescent states under given conditions. For the most commonly used label types, photoactivatable fluorescent proteins and organic fluorophores, these conditions are different, making experiments that combine both labels difficult. Here, we demonstrate that changing the standard imaging buffer of thiols/oxygen scavenging system, used for organic fluorophores, to the commercial mounting medium Vectashield increased the number of photons emitted by the fluorescent protein mEos2 and enhanced the photoconversion rate between its green and red forms. In addition, the photophysical properties of organic fluorophores remained unaltered with respect to the standard imaging buffer. The use of Vectashield together with our optimized protocol for correction of sample drift and chromatic aberrations enabled us to perform two-colour 3D super-resolution imaging of the nucleolus and resolve its three compartments.
Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, Alana; Gordon, Deborah; Moore, Roseanne
Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full bolus technique. The new technique is done with VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include: improved patient experience and safety, better dose conformity, better organ at risk sparing, decreased therapist time and reduction of therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to improve dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas in the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. This technique was found to be dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.
Fontana, Ariel R; Patil, Sangram H; Banerjee, Kaushik; Altamirano, Jorgelina C
2010-04-28
A fast and effective microextraction technique is proposed for preconcentration of 2,4,6-trichloroanisole (2,4,6-TCA) from wine samples prior to gas chromatography tandem mass spectrometry (GC-MS/MS) analysis. The proposed technique is based on ultrasonication (US) for favoring the emulsification phenomenon during the extraction stage. Several variables influencing the relative response of the target analyte were studied and optimized. Under optimal experimental conditions, 2,4,6-TCA was quantitatively extracted, achieving enhancement factors (EF) ≥ 400 and limits of detection (LODs) of 0.6-0.7 ng L-1 with relative standard deviations (RSDs) ≤ 11.3% when a 10 ng L-1 2,4,6-TCA standard-wine sample blend was analyzed. The calibration graphs for white and red wine were linear within the range of 5-1000 ng L-1, and estimation coefficients (r2) were ≥ 0.9995. Validation of the methodology was carried out by the standard addition method at two concentrations (10 and 50 ng L-1), achieving recoveries >80%, indicating satisfactory robustness of the method. The methodology was successfully applied for the determination of 2,4,6-TCA in different wine samples.
Variability-aware double-patterning layout optimization for analog circuits
NASA Astrophysics Data System (ADS)
Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Lee, Zhao Chuan; Tseng, I.-Lun; Ong, Jonathan Yoong Seang
2018-03-01
The semiconductor industry has adopted multi-patterning techniques to manage the delay in extreme ultraviolet lithography technology. During the design process of double-patterning lithography layout masks, two polygons are assigned to different masks if their spacing is less than the minimum printable spacing. With these additional design constraints, it is very difficult to find experienced layout-design engineers who have a good understanding of the circuit to manually optimize the mask layers in order to minimize color-induced circuit variations. In this work, we investigate the impact of double-patterning lithography on analog circuits and provide quantitative analysis for our designers to select the optimal mask to minimize the circuit's mismatch. To overcome the problem and improve the turn-around time, we propose a smart "anchoring" placement technique to optimize mask decomposition for analog circuits. We have developed a software prototype that is capable of providing anchoring markers in the layout, allowing industry-standard tools to perform the automated color decomposition process.
NASA Astrophysics Data System (ADS)
Ares, A.; Fernández, J. A.; Carballeira, A.; Aboal, J. R.
2014-09-01
The moss bag technique is a simple and economical environmental monitoring tool used to monitor air quality. However, routine use of the method is not possible because the protocols involved have not yet been standardized. Some of the most variable methodological aspects include (i) selection of moss species, (ii) ratio of moss weight to surface area of the bag, (iii) duration of exposure, and (iv) height of exposure. In the present study, the best option for each of these aspects was selected on the basis of the mean concentrations and data replicability of Cd, Cu, Hg, Pb and Zn measured during at least two exposure periods in environments affected by different degrees of contamination. The optimal choices for the studied aspects were the following: (i) Sphagnum denticulatum, (ii) 5.68 mg of moss tissue per cm2 of bag surface, (iii) 8 weeks of exposure, and (iv) 4 m height of exposure. Duration of exposure and height of exposure accounted for most of the variability in the data. The aim of this methodological study was to provide data to help establish a standardized protocol that will enable use of the moss bag technique by public authorities.
A modified form of conjugate gradient method for unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa
2016-06-01
Conjugate gradient (CG) methods have been recognized as an interesting technique to solve optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient for the given standard test problems, compared to other existing CG methods.
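For orientation, a generic nonlinear CG loop with a numerical line search is sketched below. The Fletcher-Reeves coefficient is only a stand-in: the paper's own beta formula (building on Rivaie et al. [7]) is not reproduced here, and the bounded line search is an assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar, rosen, rosen_der

def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-6):
    """Generic nonlinear conjugate gradient with a numerical 'exact' line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # exact line search: minimize f along the current search direction
        alpha = minimize_scalar(lambda a: f(x + a * d), bounds=(0.0, 10.0), method="bounded").x
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves (stand-in) coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Standard test problem: 4-D Rosenbrock function
x_star = nonlinear_cg(rosen, rosen_der, x0=np.zeros(4))
```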
Solutions for medical databases optimal exploitation
Branescu, I; Purcarea, VL; Dobrescu, R
2014-01-01
The paper discusses methods to apply OLAP techniques to multidimensional databases that leverage the existing, performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistic support for data warehousing techniques. The transformations have low computational complexity in practice and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
Coupled Tank Systems (CTS) are widely used in industrial applications, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank and pumped again to another tank. The level of liquid in each tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two Particle Swarm Optimization (PSO) methods are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and Particle Swarm Optimization with a Priority-based Fitness Scheme (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady state error (SSE) and overshoot (OS). It is demonstrated that implementation of PSO via the Priority-based Fitness Scheme (PFPSO) for this system is a promising technique to control the desired liquid level and improve system performance compared with standard PSO.
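The overall idea can be sketched as standard global-best PSO tuning PID gains against a simulated tank. In the sketch below, the single-tank plant model, the cost weights, and the PSO hyperparameters are assumptions; the priority-based fitness scheme of the paper is not reproduced.

```python
import numpy as np

def simulate_tank(kp, ki, kd, setpoint=0.5, dt=0.1, t_end=60.0):
    """Euler simulation of a single tank with PID-controlled inflow (illustrative plant)."""
    area, c_out = 1.0, 0.1
    h, integ, prev_err = 0.0, 0.0, setpoint
    levels = []
    for _ in range(int(t_end / dt)):
        err = setpoint - h
        integ += err * dt
        deriv = (err - prev_err) / dt
        q_in = max(0.0, kp * err + ki * integ + kd * deriv)    # pump cannot reverse
        h = max(0.0, h + dt * (q_in - c_out * np.sqrt(h)) / area)
        prev_err = err
        levels.append(h)
    return np.array(levels), setpoint

def fitness(gains):
    """Assumed weighted sum of overshoot, steady-state error and a settling proxy."""
    y, sp = simulate_tank(*gains)
    overshoot = max(0.0, y.max() - sp) / sp
    sse = abs(sp - y[-1]) / sp
    inside = np.abs(y - sp) < 0.02 * sp
    settling = int(np.argmax(inside)) if inside.any() else len(y)   # first sample in the 2% band
    return 2.0 * overshoot + 5.0 * sse + 0.01 * settling

def pso(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO; the priority-based fitness variant is not shown."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Tune (Kp, Ki, Kd) within assumed bounds
gains, best_cost = pso(fitness, bounds=np.array([[0.0, 20.0], [0.0, 5.0], [0.0, 5.0]]))
```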
SEEK: A FORTRAN optimization program using a feasible directions gradient search
NASA Technical Reports Server (NTRS)
Savage, M.
1995-01-01
This report describes the use of computer program 'SEEK' which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.
SCCT guidelines on radiation dose and dose-optimization strategies in cardiovascular CT
Halliburton, Sandra S.; Abbara, Suhny; Chen, Marcus Y.; Gentry, Ralph; Mahesh, Mahadevappa; Raff, Gilbert L.; Shaw, Leslee J.; Hausleiter, Jörg
2012-01-01
Over the last few years, computed tomography (CT) has developed into a standard clinical test for a variety of cardiovascular conditions. The emergence of cardiovascular CT during a period of dramatic increase in radiation exposure to the population from medical procedures and heightened concern about the subsequent potential cancer risk has led to intense scrutiny of the radiation burden of this new technique. This has hastened the development and implementation of dose reduction tools and prompted closer monitoring of patient dose. In an effort to aid the cardiovascular CT community in incorporating patient-centered radiation dose optimization and monitoring strategies into standard practice, the Society of Cardiovascular Computed Tomography has produced a guideline document to review available data and provide recommendations regarding interpretation of radiation dose indices and predictors of risk, appropriate use of scanner acquisition modes and settings, development of algorithms for dose optimization, and establishment of procedures for dose monitoring. PMID:21723512
Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; ...
2015-03-12
In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.
Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M
2017-04-01
Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image-guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device allowing visually directed TT placement compared to the standard of care in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). The position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging, with the operator blinded to results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TT were placed, 15 using the TTVT technique and 15 using the standard of care open technique. All of the TT placed using TTVT were without complication and in optimal position. Conversely, 27% of TT placed using the standard of care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as an open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
High speed civil transport: Sonic boom softening and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Cheung, Samson
1994-01-01
An improvement in sonic boom extrapolation techniques has been the desire of aerospace designers for years. This is because the linear acoustic theory developed in the 60's is incapable of predicting the nonlinear phenomenon of shock wave propagation. On the other hand, CFD techniques are too computationally expensive to employ on sonic boom problems. Therefore, this research focused on the development of a fast and accurate sonic boom extrapolation method that solves the Euler equations for axisymmetric flow. This new technique has brought the sonic boom extrapolation techniques up to the standards of the 90's. Parallel computing is a fast growing subject in the field of computer science because of its promising speed. A new optimizer (IIOWA) for the parallel computing environment has been developed and tested for aerodynamic drag minimization. This is a promising method for CFD optimization making use of the computational resources of workstations, which unlike supercomputers can spend most of their time idle. Finally, the OAW concept is attractive because of its overall theoretical performance. In order to fully understand the concept, a wind-tunnel model was built and is currently being tested at NASA Ames Research Center. The CFD calculations performed under this cooperative agreement helped to identify the problem of the flow separation, and also aided the design by optimizing the wing deflection for roll trim.
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver the required information that cannot be delivered by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, which integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factor. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
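A stripped-down version of the DCT-domain fusion step is shown below; the fixed weighting factor stands in for the PSO-optimized one, and the adaptive histogram equalization stage is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_dct(visible, infrared, weight=0.6):
    """Fuse two co-registered images by weighting their 2-D DCT coefficients.

    `weight` plays the role of the PSO-optimized weighting factor; here it is
    simply assumed.
    """
    dv = dctn(visible.astype(float), norm="ortho")
    di = dctn(infrared.astype(float), norm="ortho")
    fused_coeffs = weight * dv + (1.0 - weight) * di   # weighted coefficient fusion
    return idctn(fused_coeffs, norm="ortho")

# Toy example with random arrays standing in for co-registered images
visible = np.random.rand(128, 128)
infrared = np.random.rand(128, 128)
fused = fuse_dct(visible, infrared)
```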
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
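The rank-one update at the heart of this approach can be written in a few lines. The sketch below shows the generic Broyden secant update applied to a gradient estimate, not the full BFORM iteration; the example limit-state function is hypothetical.

```python
import numpy as np

def broyden_gradient_update(g_old, x_old, x_new, G_old, G_new):
    """Rank-one (Broyden) update of an estimated limit-state gradient.

    Treats the gradient as a 1 x n Jacobian of the limit-state function G and
    corrects it so the secant condition g_new @ (x_new - x_old) = G_new - G_old
    holds, avoiding a fresh finite-difference gradient at the new point.
    """
    dx = x_new - x_old
    return g_old + ((G_new - G_old - g_old @ dx) / (dx @ dx)) * dx

# Example with a hypothetical limit state G(x) = x1**2 + x2 - 1
g = np.array([2.0, 1.0])                       # gradient at x_old = [1, 0]
g_updated = broyden_gradient_update(g, np.array([1.0, 0.0]), np.array([1.2, 0.1]),
                                    G_old=0.0, G_new=1.2**2 + 0.1 - 1.0)
```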
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGarry, Conor K., E-mail: conor.mcgarry@belfasttrust.hscni.net; Bokrantz, Rasmus; RaySearch Laboratories, Stockholm
2014-10-01
Efficacy of inverse planning is becoming increasingly important for advanced radiotherapy techniques. This study’s aims were to validate multicriteria optimization (MCO) in RayStation (v2.4, RaySearch Laboratories, Sweden) against standard intensity-modulated radiation therapy (IMRT) optimization in Oncentra (v4.1, Nucletron BV, the Netherlands) and characterize dose differences due to conversion of navigated MCO plans into deliverable multileaf collimator apertures. Step-and-shoot IMRT plans were created for 10 patients with localized prostate cancer using both standard optimization and MCO. Acceptable standard IMRT plans with minimal average rectal dose were chosen for comparison with deliverable MCO plans. The trade-off was, for the MCO plans, managed through a user interface that permits continuous navigation between fluence-based plans. Navigated MCO plans were made deliverable at incremental steps along a trajectory between maximal target homogeneity and maximal rectal sparing. Dosimetric differences between navigated and deliverable MCO plans were also quantified. MCO plans, chosen as acceptable under navigated and deliverable conditions resulted in similar rectal sparing compared with standard optimization (33.7 ± 1.8 Gy vs 35.5 ± 4.2 Gy, p = 0.117). The dose differences between navigated and deliverable MCO plans increased as higher priority was placed on rectal avoidance. If the best possible deliverable MCO was chosen, a significant reduction in rectal dose was observed in comparison with standard optimization (30.6 ± 1.4 Gy vs 35.5 ± 4.2 Gy, p = 0.047). Improvements were, however, to some extent, at the expense of less conformal dose distributions, which resulted in significantly higher doses to the bladder for 2 of the 3 tolerance levels. In conclusion, similar IMRT plans can be created for patients with prostate cancer using MCO compared with standard optimization. Limitations exist within MCO regarding conversion of navigated plans to deliverable apertures, particularly for plans that emphasize avoidance of critical structures. Minimizing these differences would result in better quality treatments for patients with prostate cancer who were treated with radiotherapy using MCO plans.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, which is one of the most essential tasks of inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value inevitably caused by adding constraints, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including head-and-neck, prostate, lung and oropharyngeal cases, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.
Relaxation-optimized transfer of spin order in Ising spin chains
NASA Astrophysics Data System (ADS)
Stefanatos, Dionisis; Glaser, Steffen J.; Khaneja, Navin
2005-12-01
In this paper, we present relaxation optimized methods for the transfer of bilinear spin correlations along Ising spin chains. These relaxation optimized methods can be used as a building block for the transfer of polarization between distant spins on a spin chain, a problem that is ubiquitous in multidimensional nuclear magnetic resonance spectroscopy of proteins. Compared to standard techniques, significant reduction in relaxation losses is achieved by these optimized methods when transverse relaxation rates are much larger than the longitudinal relaxation rates and comparable to couplings between spins. We derive an upper bound on the efficiency of the transfer of the spin order along a chain of spins in the presence of relaxation and show that this bound can be approached by the relaxation optimized pulse sequences presented in the paper.
NASA Astrophysics Data System (ADS)
Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad
2015-11-01
In this paper a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at the optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are selected as root-mean-square (RMS) of sprung mass acceleration and expectation of generated power. The actual road roughness is considered as the stochastic excitation defined by ISO 8608:1995 standard road profiles and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in the real-time based on the optimization rule. A test-bed is utilized and the experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
Pohlheim, Hartmut
2006-01-01
Multidimensional scaling as a technique for the presentation of high-dimensional data with standard visualization techniques is presented. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
Price, Travis K.; Dune, Tanaka; Hilt, Evann E.; Thomas-White, Krystal J.; Kliethermes, Stephanie; Brincat, Cynthia; Brubaker, Linda; Wolfe, Alan J.
2016-01-01
Enhanced quantitative urine culture (EQUC) detects live microorganisms in the vast majority of urine specimens reported as “no growth” by the standard urine culture protocol. Here, we evaluated an expanded set of EQUC conditions (expanded-spectrum EQUC) to identify an optimal version that provides a more complete description of uropathogens in women experiencing urinary tract infection (UTI)-like symptoms. One hundred fifty adult urogynecology patient-participants were characterized using a self-completed validated UTI symptom assessment (UTISA) questionnaire and asked “Do you feel you have a UTI?” Women responding negatively were recruited into the no-UTI cohort, while women responding affirmatively were recruited into the UTI cohort; the latter cohort was reassessed with the UTISA questionnaire 3 to 7 days later. Baseline catheterized urine samples were plated using both standard urine culture and expanded-spectrum EQUC protocols: standard urine culture inoculated at 1 μl onto 2 agars incubated aerobically; expanded-spectrum EQUC inoculated at three different volumes of urine onto 7 combinations of agars and environments. Compared to expanded-spectrum EQUC, standard urine culture missed 67% of uropathogens overall and 50% in participants with severe urinary symptoms. Thirty-six percent of participants with missed uropathogens reported no symptom resolution after treatment by standard urine culture results. Optimal detection of uropathogens could be achieved using the following: 100 μl of urine plated onto blood (blood agar plate [BAP]), colistin-nalidixic acid (CNA), and MacConkey agars in 5% CO2 for 48 h. This streamlined EQUC protocol achieved 84% uropathogen detection relative to 33% detection by standard urine culture. The streamlined EQUC protocol improves detection of uropathogens that are likely relevant for symptomatic women, giving clinicians the opportunity to receive additional information not currently reported using standard urine culture techniques. PMID:26962083
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
Project research optimized the quantification technique for carbohydrates that also allows quantification of other non-polar molecular markers based on using an isotopically labeled internal standard (D-glucose-1,2,3,4,5,6,6-d7) to monitor extraction efficiency, extraction usi...
Malavera, Alejandra; Vasquez, Alejandra; Fregni, Felipe
2015-01-01
Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that has been extensively studied. While there have been initial positive results in some clinical trials, there is still variability in tDCS results. The aim of this article is to review and discuss patents assessing novel methods to optimize the use of tDCS. A systematic review was performed using Google patents database with tDCS as the main technique, with patents filling date between 2010 and 2015. Twenty-two patents met our inclusion criteria. These patents attempt to address current tDCS limitations. Only a few of them have been investigated in clinical trials (i.e., high-definition tDCS), and indeed most of them have not been tested before in human trials. Further clinical testing is required to assess which patents are more likely to optimize the effects of tDCS. We discuss the potential optimization of tDCS based on these patents and the current experience with standard tDCS.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
NASA Astrophysics Data System (ADS)
Budiyono, T.; Budi, W. S.; Hidayanto, E.
2016-03-01
Radiation therapy for brain malignancy is done by giving a dose of radiation to the whole volume of the brain (WBRT), followed by a boost to the primary tumor with more advanced techniques. Two external radiation fields are given from the right and left sides. Because of the shape of the head, there will be unavoidable hotspots with radiation doses greater than 107%. This study aims to optimize the planning of radiation therapy using a field-in-field multi-leaf collimator technique. A study of 15 WBRT cases with CT slices was done by adding several segments of radiation to each radiation field and delivering appropriate dose weighting using an Elekta Precise Plan R 2.15 TPS. Results showed that this optimization gives more homogeneous radiation in the CTV target volume, lower dose to healthy tissue, and reduced hotspots in the CTV target volume. Comparison of the field-in-field multi-segment MLC technique with the standard conventional technique for WBRT shows: higher average minimum dose (77.25% ± 0.47% vs 60% ± 3.35%); lower average maximum dose (110.27% ± 0.26% vs 114.53% ± 1.56%); lower hotspot volume (5.71% vs 27.43%); and lower dose to the eye lenses (right eye: 9.52% vs 18.20%; left eye: 8.60% vs 16.53%).
A novel approach for dimension reduction of microarray.
Aziz, Rabia; Verma, C K; Srivastava, Namita
2017-12-01
This paper proposes a new hybrid search technique, called ICA+ABC, for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, to select informative genes for a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, and a wrapper approach, to optimize the reduced feature vectors. This hybrid search technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results obtained from the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selection method with the NB classifier, the combination of ICA with popular filter techniques and with other similar bio-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), was also compared. The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared to other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
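A minimal sketch of the extraction-plus-classification part of such a pipeline is given below (FastICA followed by Naïve Bayes on synthetic data); the ABC wrapper search over the ICA components, the component count, and the toy data are all assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy "microarray": 100 samples with 2000 gene expression values and binary labels.
rng = np.random.default_rng(0)
X = rng.random((100, 2000))
y = rng.integers(0, 2, size=100)

# Extraction step (ICA) feeding a Naive Bayes classifier; the ABC wrapper that
# searches over subsets of the ICA components is omitted here.
model = make_pipeline(FastICA(n_components=20, random_state=0), GaussianNB())
scores = cross_val_score(model, X, y, cv=5)
```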
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
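For reference, a baseline DE/rand/1/bin loop of the kind being improved upon is sketched below on a cheap test function; the efficiency enhancements discussed in the paper and the Navier-Stokes objective are not included.

```python
import numpy as np

def de_rand_1_bin(cost, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Basic DE/rand/1/bin strategy (baseline sketch, not the improved variants)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    vals = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                    # ensure at least one parameter crosses
            trial = np.where(cross, mutant, pop[i])
            trial_val = cost(trial)
            if trial_val <= vals[i]:                           # greedy selection
                pop[i], vals[i] = trial, trial_val
    best = pop[vals.argmin()]
    return best, float(vals.min())

# Standard test problem: 5-D sphere function
best, f_best = de_rand_1_bin(lambda x: float(np.sum(x ** 2)),
                             bounds=np.tile([-5.0, 5.0], (5, 1)))
```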
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan
2017-09-01
Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error correcting. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
NASA Astrophysics Data System (ADS)
Ctvrtnickova, T.; Mateo, M. P.; Yañez, A.; Nicolas, G.
2011-04-01
The presented work brings results of Laser-Induced Breakdown Spectroscopy (LIBS) and Thermo-Mechanical Analysis (TMA) of coals and coal blends used in coal-fired power plants all over Spain. Several coal specimens, their blends and the corresponding laboratory ash were analyzed by the mentioned techniques, and the results were compared to standard laboratory methods. The indices of slagging, which predict the tendency of coal ash to deposit on the boiler walls, were determined by means of standard chemical analysis, LIBS and TMA. The optimal coal suitable to be blended with the problematic national lignite coal was suggested in order to diminish slagging problems. The techniques were evaluated based on the precision, acquisition time, and the extent and quality of information they could provide. Finally, the applicability of LIBS and TMA to the successful calculation of slagging indices is discussed, and their substitution for time-consuming and instrumentally difficult standard methods is considered.
Simple adaptation of the Bridgman high pressure technique for use with liquid media
NASA Astrophysics Data System (ADS)
Colombier, E.; Braithwaite, D.
2007-09-01
We present a simple novel technique to adapt a standard Bridgman cell for the use of a liquid pressure transmitting medium. The technique has been implemented in a compact cell, able to fit in a commercial Quantum Design PPMS system, and would also be easily adaptable to extreme conditions of very low temperatures or high magnetic fields. Several media have been tested and a mix of fluorinert FC84:FC87 has been shown to produce a considerable improvement over the pressure conditions in the standard steatite solid medium, while allowing a relatively easy setup procedure. For optimized hydrostatic conditions, the success rate is about 80% and the maximum pressure achieved so far is 7.1 GPa. Results are shown for the heavy fermion system YbAl3 and for NaV6O15, an insulator showing charge order.
Surface texture measurement for dental wear applications
NASA Astrophysics Data System (ADS)
Austin, R. S.; Mullen, F.; Bartlett, D. W.
2015-06-01
The application of surface topography measurement and characterization within dental materials science is highly active and rapidly developing, in line with many modern industries. Surface measurement and structuring is used extensively within oral and dental science to optimize the optical, tribological and biological performance of natural and biomimetic dental materials. Although there has historically been little standardization in the use and reporting of surface metrology instrumentation and software, the dental industry is beginning to adopt modern areal measurement and characterization techniques, especially as the dental industry is increasingly adopting digital impressioning techniques in order to leverage CAD/CAM technologies for the design and construction of dental restorations. As dental treatment becomes increasingly digitized and reliant on advanced technologies such as dental implants, wider adoption of standardized surface topography and characterization techniques will become ever more essential. The dental research community welcomes the advances that are being made in surface topography measurement science towards realizing this ultimate goal.
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is the variance in the protocols and practices in the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first was based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach was based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality to establish Image Quality Standardization. This paper discusses one of the most important aspects of image quality - color.
First uncertainty evaluation of the FoCS-2 primary frequency standard
NASA Astrophysics Data System (ADS)
Jallageas, A.; Devenoges, L.; Petersen, M.; Morel, J.; Bernier, L. G.; Schenker, D.; Thomann, P.; Südmeyer, T.
2018-06-01
We report the uncertainty evaluation of the Swiss continuous primary frequency standard FoCS-2 (Fontaine Continue Suisse). Unlike other primary frequency standards, which work with clouds of cold atoms, this fountain uses a continuous beam of cold caesium atoms, bringing a series of metrological advantages and requiring specific techniques for the evaluation of the uncertainty budget. Recent improvements of FoCS-2 have made possible the evaluation of the frequency shifts and of their uncertainties. When operating in an optimal regime, the lowest relative frequency instability is obtained. The relative standard uncertainty reported in this article is strongly dominated by the statistics of the frequency measurements.
Design optimization of steel frames using an enhanced firefly algorithm
NASA Astrophysics Data System (ADS)
Carbas, Serdar
2016-12-01
Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.
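To make the baseline concrete, the sketch below implements the standard firefly attractiveness/randomness update on a continuous test function in Python. The beta0, gamma, and alpha values are generic defaults; the paper's enhanced parameter expressions and the discrete selection of steel sections under LRFD-AISC are not reproduced, so this is only a reference point for the unmodified algorithm.

```python
# Standard (not enhanced) firefly algorithm update on a continuous test function.
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def firefly(f, dim=5, n=15, beta0=1.0, gamma=1.0, alpha=0.2, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))
    I = np.array([f(x) for x in X])                 # light intensity = objective (minimized)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                     # move firefly i towards brighter firefly j
                    r2 = np.sum((X[i] - X[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)          # attractiveness decays with distance
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    I[i] = f(X[i])
        alpha *= 0.98                               # gradually reduce the randomness term
    k = int(np.argmin(I))
    return X[k], I[k]

print(firefly(sphere))
```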
Kurihara, Miki; Ikeda, Koji; Izawa, Yoshinori; Deguchi, Yoshihiro; Tarui, Hitoshi
2003-10-20
A laser-induced breakdown spectroscopy (LIBS) technique has been applied for detection of unburned carbon in fly ash, and an automated LIBS unit has been developed and applied in a 1000-MW pulverized-coal-fired power plant for real-time measurement, specifically of unburned carbon in fly ash. Good agreement was found between measurement results from the LIBS method and those from the conventional method (Japanese Industrial Standard 8815), with a standard deviation of 0.27%. This result confirms that the measurement of unburned carbon in fly ash by use of LIBS is sufficiently accurate for boiler control. Measurements taken by this apparatus were also integrated into a boiler-control system with the objective of achieving optimal and stable combustion. By control of the rotating speed of a mill rotary separator relative to measured unburned-carbon content, it has been demonstrated that boiler control is possible in an optimized manner by use of the value of the unburned-carbon content of fly ash.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
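Because the kriging standard error depends only on the sample configuration and the covariance model, not on the observed values, the design indices above can be computed before any sampling takes place. The Python sketch below illustrates this with ordinary kriging and an assumed exponential covariance (the paper uses universal kriging, and the sill and range values here are arbitrary): it evaluates the average and maximum standard error over a grid for a candidate pattern of observation wells.

```python
# Kriging standard error as a sampling-design index (ordinary kriging stand-in).
import numpy as np

def kriging_std_errors(samples, grid, sill=1.0, rng_par=3.0):
    cov = lambda h: sill * np.exp(-h / rng_par)            # assumed exponential covariance
    d_ss = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    n = len(samples)
    K = np.ones((n + 1, n + 1)); K[:n, :n] = cov(d_ss); K[n, n] = 0.0   # OK system matrix
    errs = []
    for p in grid:
        k = np.ones(n + 1); k[:n] = cov(np.linalg.norm(samples - p, axis=1))
        sol = np.linalg.solve(K, k)                        # kriging weights + Lagrange multiplier
        var = sill - sol @ k                               # ordinary kriging variance
        errs.append(np.sqrt(max(var, 0.0)))
    return np.array(errs)

rng = np.random.default_rng(0)
samples = rng.uniform(0, 10, size=(12, 2))                 # candidate observation-well pattern
gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
se = kriging_std_errors(samples, grid)
print("average SE:", se.mean(), "max SE:", se.max())
```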
Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach
NASA Technical Reports Server (NTRS)
Das, Santanu; Oza, Nikunj C.
2011-01-01
In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machine (SVM) learning algorithm - to produce sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class Support Vector Machines algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
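The proposed sparse bi-criterion variant is not publicly available, but the benchmark it is compared against is the standard one-class nu-SVM. The sketch below, using scikit-learn on synthetic data, shows that baseline and reports its support-vector count, the quantity the sparse method aims to reduce; the data, nu, and kernel settings are illustrative only.

```python
# Benchmark one-class nu-SVM baseline (scikit-learn) on synthetic single-class data.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 10))             # nominal (single-class) training data
X_test = np.vstack([rng.normal(0.0, 1.0, size=(100, 10)),  # nominal test points
                    rng.normal(4.0, 1.0, size=(20, 10))])  # shifted points acting as anomalies

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
pred = clf.predict(X_test)                                  # +1 = inlier, -1 = outlier
print("support vectors:", len(clf.support_), "of", len(X_train))
print("flagged anomalies in test set:", int(np.sum(pred == -1)))
```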
NASA Astrophysics Data System (ADS)
Chanda, Sandip; De, Abhinandan
2016-12-01
This paper proposes a social welfare optimization technique, built on a state-space-based model and bifurcation analysis, that offers a substantial stability margin even in the most adverse states of power system networks. The restoration of the power market's dynamic price equilibrium is addressed by forming the Jacobian of the sensitivity matrix to regulate the state variables, so that the quality of the solution is maintained in the worst possible contingencies of the network, even with the inclusion of intermittent renewable energy sources. The model has been tested on the IEEE 30-bus system, with particle swarm optimization used to combine the proposed model and methodology.
Interface design for CMOS-integrated Electrochemical Impedance Spectroscopy (EIS) biosensors.
Manickam, Arun; Johnson, Christopher Andrew; Kavusi, Sam; Hassibi, Arjang
2012-10-29
Electrochemical Impedance Spectroscopy (EIS) is a powerful electrochemical technique to detect biomolecules. EIS has the potential of carrying out label-free and real-time detection, and in addition, can be easily implemented using electronic integrated circuits (ICs) that are built through standard semiconductor fabrication processes. This paper focuses on the various design and optimization aspects of EIS ICs, particularly the bio-to-semiconductor interface design. We discuss, in detail, considerations such as the choice of the electrode surface in view of IC manufacturing, surface linkers, and development of optimal bio-molecular detection protocols. We also report experimental results, using both macro- and micro-electrodes to demonstrate the design trade-offs and ultimately validate our optimization procedures.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
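For reference, the quantity maximized in such multilevel thresholding is Otsu's between-class variance evaluated for a candidate set of thresholds. The Python sketch below computes that objective on a synthetic trimodal histogram and, for the two-threshold case, finds the maximizer by brute force; the flower pollination algorithm itself and the medical images of the study are not reproduced.

```python
# Otsu's multi-threshold objective (between-class variance) with a brute-force
# two-threshold search on a synthetic histogram.
import numpy as np
from itertools import combinations

def otsu_objective(hist, thresholds):
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    edges = [0, *sorted(thresholds), len(hist)]
    mu_total = np.sum(p * levels)
    var_between = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = np.sum(p[lo:hi] * levels[lo:hi]) / w
            var_between += w * (mu - mu_total) ** 2       # weighted between-class variance
    return var_between

# synthetic trimodal 8-bit histogram standing in for a grayscale image
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(130, 12, 5000), rng.normal(200, 8, 5000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))

best = max(combinations(range(1, 256), 2), key=lambda t: otsu_objective(hist, t))
print("best two thresholds:", best)
```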
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Advanced techniques and technology for efficient data storage, access, and transfer
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Miller, Warner
1991-01-01
Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
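The preprocessing stage described above maps incoming samples into a "standard form" that the adaptive coder can handle. The Python sketch below illustrates the general idea with a previous-sample predictor and a zigzag-style mapping of signed residuals to small non-negative integers; it is a simplified illustration, not the actual NASA/VLSI mapping or the flight hardware algorithm.

```python
# Sketch of predictive preprocessing: previous-sample prediction plus a zigzag-style
# mapping of signed residuals to non-negative integers suitable for an adaptive coder.
import numpy as np

def preprocess(samples):
    samples = np.asarray(samples, dtype=np.int64)
    residuals = np.diff(samples, prepend=samples[0])        # predictor: previous sample
    # map 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ... so small errors get small codes
    mapped = np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)
    return mapped

signal = (100 + 10 * np.sin(np.linspace(0, 6.28, 50))).astype(int)   # smooth sampled data
print(preprocess(signal)[:10])
```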
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used on the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed on the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general tools for design and then used in the design of the LST Control System. The methods are entirely computerized, so that they can accommodate higher order LST models with both conceptual and numerical advantages over standard straightforward design techniques.
Interdisciplinary Distinguished Seminar Series
2014-08-29
The seminar series addresses estimation and optimization techniques, image and color standards, efficient programming methods, and efficient ASIC designs.
Blind retrospective motion correction of MR images.
Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard
2013-12-01
Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, which significantly reduces ghosting and blurring artifacts due to subject motion, was proposed. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are in the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
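The sharpness criterion driving the search is the entropy of the spatial gradient distribution: motion blurring spreads gradient energy over many pixels and raises the entropy, while a sharp image concentrates it. A minimal Python sketch of such a metric is shown below on a toy image; the motion model, k-space handling, and the gradient-based optimizer of the paper are not reproduced.

```python
# Sharpness metric: entropy of normalized spatial gradient magnitudes (lower = sharper).
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_entropy(image, eps=1e-12):
    gy, gx = np.gradient(image.astype(float))
    mag = np.sqrt(gx**2 + gy**2)
    p = mag / (mag.sum() + eps)                 # normalize gradient magnitudes to a distribution
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

sharp = np.zeros((64, 64)); sharp[16:48, 16:48] = 1.0      # toy sharp image
blurred = uniform_filter(sharp, size=7)                    # crude stand-in for motion blur
print(gradient_entropy(sharp), gradient_entropy(blurred))  # blurred image has higher entropy
```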
Niedermayr, Thomas R; Nguyen, Paul L; Murciano-Goroff, Yonina R; Kovtun, Konstantin A; Neubauer Sugar, Emily; Cail, Daniel W; O'Farrell, Desmond A; Hansen, Jorgen L; Cormack, Robert A; Buzurovic, Ivan; Wolfsberger, Luciant T; O'Leary, Michael P; Steele, Graeme S; Devlin, Philip M; Orio, Peter F
2014-01-01
We sought to determine whether placing empty catheters within the prostate and then inverse planning iodine-125 seed locations within those catheters (High Dose Rate-Emulating Low Dose Rate Prostate Brachytherapy [HELP] technique) would improve concordance between planned and achieved dosimetry compared with a standard intraoperative technique. We examined 30 consecutive low dose rate prostate cases performed by standard intraoperative technique of planning followed by needle placement/seed deposition and compared them to 30 consecutive low dose rate prostate cases performed by the HELP technique. The primary endpoint was concordance between planned percentage of the clinical target volume that receives at least 100% of the prescribed dose/dose that covers 90% of the volume of the clinical target volume (V100/D90) and the actual V100/D90 achieved at Postoperative Day 1. The HELP technique had superior concordance between the planned target dosimetry and what was actually achieved at Day 1 and Day 30. Specifically, target D90 at Day 1 was on average 33.7 Gy less than planned for the standard intraoperative technique but was only 10.5 Gy less than planned for the HELP technique (p < 0.001). Day 30 values were 16.6 Gy less vs. 2.2 Gy more than planned, respectively (p = 0.028). Day 1 target V100 was 6.3% less than planned with standard vs. 2.8% less for HELP (p < 0.001). There was no significant difference between the urethral and rectal concordance (all p > 0.05). Placing empty needles first and optimizing the plan to the known positions of the needles resulted in improved concordance between the planned and the achieved dosimetry to the target, possibly because of elimination of errors in needle placement. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Andriani, Dian; Wresta, Arini; Atmaja, Tinton Dwi; Saepudin, Aep
2014-02-01
Biogas from anaerobic digestion of organic materials is a renewable energy resource that consists mainly of CH4 and CO2. Trace components that are often present in biogas are water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. Considering that biogas is a clean and renewable form of energy that could well substitute for conventional energy sources (fossil fuels), the optimization of this type of energy becomes substantial. Various optimization techniques for the biogas production process have been developed, including pretreatment, biotechnological approaches, co-digestion, as well as the use of serial digesters. For some applications, a certain degree of biogas purity is needed. The presence of CO2 and other trace components in biogas could affect engine performance adversely. Reducing the CO2 content will significantly upgrade the quality of biogas and enhance the calorific value. Upgrading is generally performed in order to meet the standards for use as vehicle fuel or for injection into the natural gas grid. Different methods for biogas upgrading are used. They differ in functioning, the necessary quality conditions of the incoming gas, and their efficiency. Biogas can be purified from CO2 using pressure swing adsorption, membrane separation, or physical or chemical CO2 absorption. This paper reviews the various techniques which could be used to optimize biogas production as well as to upgrade biogas quality.
NASA Astrophysics Data System (ADS)
Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul
2018-01-01
We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr-1 (SVD-based inversion) to 17.5-17.7 Tg N yr-1 (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes. We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when aggregating to political or geographic regions, while also providing more temporal information than a standard 4D-Var inversion.
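The fast inversion in strategy (3) rests on a randomized singular value decomposition to reduce the dimension of the problem. The sketch below shows a generic Halko-style randomized SVD in Python on a synthetic matrix; the oversampling and power-iteration settings are illustrative, and nothing of the GEOS-Chem adjoint or the actual inversion operators is reproduced.

```python
# Generic randomized SVD (random projection + QR + small SVD) on a synthetic matrix.
import numpy as np

def randomized_svd(A, k, n_oversample=10, n_iter=2, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + n_oversample))   # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):                              # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                               # orthonormal basis for the range of A
    B = Q.T @ A                                          # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 50)) @ rng.standard_normal((50, 200))   # rank-50 test matrix
U, s, Vt = randomized_svd(A, k=10)
s_exact = np.linalg.svd(A, compute_uv=False)[:10]
print("max error on leading singular values:", np.max(np.abs(s - s_exact)))
```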
Molecular Diagnostic Testing for Aspergillus
Powers-Fletcher, Margaret V.
2016-01-01
The direct detection of Aspergillus nucleic acid in clinical specimens has the potential to improve the diagnosis of aspergillosis by offering more rapid and sensitive identification of invasive infections than is possible with traditional techniques, such as culture or histopathology. Molecular tests for Aspergillus have been limited historically by lack of standardization and variable sensitivities and specificities. Recent efforts have been directed at addressing these limitations and optimizing assay performance using a variety of specimen types. This review provides a summary of standardization efforts and outlines the complexities of molecular testing for Aspergillus in clinical mycology. PMID:27487954
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient based method and will converge to a solution nearby an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
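Because DDP converges to a solution near its initial guess, the stochastic outer loop matters. The Python sketch below shows the monotonic basin hopping pattern around a local gradient-based solver, with scipy's BFGS standing in for the HDDP inner solve and a multimodal test function standing in for the trajectory problem; step size and hop count are arbitrary illustrative choices.

```python
# Monotonic basin hopping: repeatedly perturb the incumbent, locally optimize, and
# accept only improvements.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def monotonic_basin_hopping(f, x0, n_hops=200, step=0.7, seed=0):
    rng = np.random.default_rng(seed)
    best = minimize(f, x0, method="BFGS")                 # initial local solve
    x_best, f_best = best.x, best.fun
    for _ in range(n_hops):
        trial = minimize(f, x_best + rng.normal(0, step, size=x_best.shape), method="BFGS")
        if trial.fun < f_best:                            # monotonic: keep only improvements
            x_best, f_best = trial.x, trial.fun
    return x_best, f_best

print(monotonic_basin_hopping(rastrigin, x0=np.full(4, 3.0)))
```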
An efficient auto TPT stitch guidance generation for optimized standard cell design
NASA Astrophysics Data System (ADS)
Samboju, Nagaraj C.; Choi, Soo-Han; Arikati, Srini; Cilingir, Erdem
2015-03-01
As the technology continues to shrink below 14nm, triple patterning lithography (TPT) is a worthwhile lithography methodology for printing dense layers such as Metal1. However, this increases the complexity of standard cell design, as it is very difficult to develop a TPT compliant layout without compromising on the area. Hence, it is important to have an accurate stitch generation methodology to meet the standard cell area requirement defined by the technology shrink factor. In this paper, we present an efficient auto TPT stitch guidance generation technique for optimized standard cell design. The basic idea is to first identify the conflicting polygons based on the Fix Guidance [1] solution developed by Synopsys. Fix Guidance is a reduced sub-graph containing a minimum set of edges along with the connecting polygons; by eliminating these edges in a design, 3-color conflicts can be resolved. Once the conflicting polygons are identified using this method, they are categorized into four types [2] (Type 1 to 4). The categorization is based on the number of interactions a polygon has with the coloring links and the triangle loops of the fix guidance. For each type, a criterion for the keep-out region is defined, based on which the final stitch guidance locations are generated. This technique provides various possible stitch locations to the user and helps the user to select the best stitch location considering both design flexibility (max. pin access/small area) and process preferences. Based on this technique, a standard cell library for place and route (P and R) can be developed with colorless data and a stitch marker defined by the designer using our proposed method. After P and R, the full chip (block) would contain the colorless data and standard cell stitch markers only. These stitch markers are considered as "must be stitch" candidates. Hence, during full-chip decomposition it is not required to generate and select stitch markers again for the complete data; therefore, the proposed method reduces the decomposition time significantly.
T700 power turbine rotor multiplane/multispeed balancing demonstration
NASA Technical Reports Server (NTRS)
Burgess, G.; Rio, R.
1979-01-01
Research was conducted to demonstrate the ability of influence coefficient based multispeed balancing to control rotor vibration through bending criticals. Rotor dynamic analyses were conducted of the General Electric T700 power turbine rotor. The information was used to generate expected rotor behavior for optimal considerations in designing a balance rig and a balance technique. The rotor was successfully balanced to 9500 rpm. Uncontrollable coupling behavior prevented observations through the 16,000 rpm service speed. The balance technique is practical and with additional refinement it can meet production standards.
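Influence-coefficient balancing reduces to a least-squares problem: baseline vibration phasors measured at several sensors and speeds are driven toward zero by correction weights in the balance planes, using an influence matrix obtained from trial-weight runs. The Python sketch below uses made-up complex-valued data purely to show the arithmetic; it is not the T700 rig procedure.

```python
# Least-squares influence-coefficient balancing with synthetic complex phasor data.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_planes = 6, 2                       # e.g. 3 sensors x 2 speeds, 2 balance planes
A = rng.normal(size=(n_meas, n_planes)) + 1j * rng.normal(size=(n_meas, n_planes))  # influence matrix
w_true = np.array([0.8 - 0.3j, -0.5 + 0.6j])              # unknown unbalance (phasors)
v0 = A @ w_true                                           # baseline vibration readings

w_corr = -np.linalg.lstsq(A, v0, rcond=None)[0]           # correction weights minimizing residual
residual = v0 + A @ w_corr
print("correction weights:", np.round(w_corr, 3))
print("residual vibration norm:", np.linalg.norm(residual))
```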
Investigation of optimization-based reconstruction with an image-total-variation constraint in PET
NASA Astrophysics Data System (ADS)
Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan
2016-08-01
Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years to be of potential utility for CT imaging applications. In this work, we investigate tailoring the optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was explored also with respect to different data conditions and activity uptakes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration design of practical usefulness in preclinical and clinical applications.
Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model
NASA Technical Reports Server (NTRS)
Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.
2010-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (v) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in the monitoring of CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identification of the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them from the point of view of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
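One concrete way to combine sparsity with the nonnegativity of release amounts mentioned above is a nonnegative LASSO solved by proximal gradient iterations, where the proximal step is simply a shifted clipping at zero. The Python sketch below demonstrates this on a toy source-receptor matrix; the matrix, regularization weight and data are invented for illustration and do not represent the ETEX setup or the authors' solvers.

```python
# Nonnegative sparse recovery via proximal gradient (nonnegative LASSO).
import numpy as np

def nonneg_lasso(A, y, lam=0.5, n_iter=5000):
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - (grad + lam) / L, 0.0)  # prox of lam*sum(x) + nonnegativity
    return x

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(40, 120)))            # toy source-receptor sensitivity matrix
x_true = np.zeros(120); x_true[[10, 57, 93]] = [5.0, 2.0, 7.0]   # a few active release times
y = A @ x_true + 0.01 * rng.normal(size=40)       # noisy observations

x_hat = nonneg_lasso(A, y)
print("indices of the three largest recovered sources:", np.sort(np.argsort(x_hat)[-3:]))
```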
[Development of a digital chest phantom for studies on energy subtraction techniques].
Hayashi, Norio; Taniguchi, Anna; Noto, Kimiya; Shimosegawa, Masayuki; Ogura, Toshihiro; Doi, Kunio
2014-03-01
Digital chest phantoms continue to play a significant role in optimizing imaging parameters for chest X-ray examinations. The purpose of this study was to develop a digital chest phantom for studies on energy subtraction techniques under ideal conditions without image noise. Computed tomography (CT) images from the LIDC (Lung Image Database Consortium) were employed to develop a digital chest phantom. The method consisted of the following four steps: 1) segmentation of the lung and bone regions on CT images; 2) creation of simulated nodules; 3) transformation to attenuation coefficient maps from the segmented images; and 4) projection from attenuation coefficient maps. To evaluate the usefulness of digital chest phantoms, we determined the contrast of the simulated nodules in projection images of the digital chest phantom using high and low X-ray energies, soft tissue images obtained by energy subtraction, and "gold standard" images of the soft tissues. Using our method, the lung and bone regions were segmented on the original CT images. The contrast of simulated nodules in soft tissue images obtained by energy subtraction closely matched that obtained using the gold standard images. We thus conclude that it is possible to carry out simulation studies based on energy subtraction techniques using the created digital chest phantoms. Our method is potentially useful for performing simulation studies for optimizing the imaging parameters in chest X-ray examinations.
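The energy subtraction step that such a phantom is meant to exercise can be written as a weighted log subtraction in which the weight is chosen to cancel the bone signal. The Python sketch below simulates a two-material projection with made-up attenuation coefficients and thicknesses (not taken from the paper or from the LIDC data) and verifies that the weighted subtraction removes the simulated rib while leaving a signal proportional to soft-tissue thickness, including a small nodule.

```python
# Weighted log-subtraction for dual-energy soft-tissue imaging on a toy two-material phantom.
import numpy as np

rng = np.random.default_rng(0)
t_soft = rng.uniform(5, 15, size=(64, 64))                 # soft-tissue thickness map
t_soft[45:50, 45:50] += 1.5                                # simulated soft-tissue nodule
t_bone = np.zeros((64, 64)); t_bone[20:40, 28:36] = 2.0    # simulated rib

mu = {"soft": (0.30, 0.20), "bone": (0.90, 0.40)}          # toy (low-kV, high-kV) attenuation values
I_low  = np.exp(-(mu["soft"][0] * t_soft + mu["bone"][0] * t_bone))
I_high = np.exp(-(mu["soft"][1] * t_soft + mu["bone"][1] * t_bone))

w = mu["bone"][0] / mu["bone"][1]                          # weight chosen to cancel the bone term
soft_img = np.log(I_low) - w * np.log(I_high)              # proportional to soft-tissue thickness only

# check: result matches the pure soft-tissue signal, so the rib has been suppressed
expected = (w * mu["soft"][1] - mu["soft"][0]) * t_soft
print("max bone residual:", np.max(np.abs(soft_img - expected)))
```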
Miège, C; Dugay, J; Hennion, M C
2003-05-02
There is a need for better characterization of sludges from wastewater treatment plants which are destined to be spread on agricultural land. Inorganic pollutants are regularly controlled, but organic pollutants have received little attention up to now. In this paper, we focus on the analysis of the 16 polycyclic aromatic hydrocarbons (PAHs) listed in the US Environmental Protection Agency (US EPA) priority list and more particularly of the six PAHs listed in the European Community list (fluoranthene, benzo[b and k]fluoranthene, benzo[a]pyrene, benzo[ghi]perylene, indeno[1,2,3-cd]pyrene). The analysis step consists of liquid chromatography with both fluorescence and UV detection as described in EPA Method 8310. As for the extraction step, several techniques such as supercritical fluid extraction, pressurized liquid extraction, focused microwave extraction in open vessels, Soxhlet and ultrasonic extraction are compared after optimization of the experimental conditions (solvent nature and quantity, temperature, pressure, duration, ...) and validation with certified sludges. When optimized, these five extraction techniques are equally efficient, with similar relative standard deviations. Whatever the extraction technique used, the whole analysis protocol permits quantification of PAHs in the range of 0.09 to 0.9 mg/kg of dried sludge.
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
Computer analysis of railcar vibrations
NASA Technical Reports Server (NTRS)
Vlaminck, R. R.
1975-01-01
Computer models and techniques for calculating railcar vibrations are discussed along with criteria for vehicle ride optimization. The effect on vibration of car body structural dynamics, suspension system parameters, vehicle geometry, and wheel and rail excitation are presented. Ride quality vibration data collected on the state-of-the-art car and standard light rail vehicle is compared to computer predictions. The results show that computer analysis of the vehicle can be performed for relatively low cost in short periods of time. The analysis permits optimization of the design as it progresses and minimizes the possibility of excessive vibration on production vehicles.
Multi-objective optimization for model predictive control.
Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry
2007-06-01
This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics have to outweigh those assigned for control objectives. Control variables (CV) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated Variables (MV) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.
Modeling Payload Stowage Impacts on Fire Risks On-Board the International Space Station
NASA Technical Reports Server (NTRS)
Anton, Kellie E.; Brown, Patrick F.
2010-01-01
The purpose of this presentation is to determine the risks of fire on-board the ISS due to non-standard stowage. ISS stowage is constantly being reexamined for optimality. Non-standard stowage involves stowing items outside of rack drawers, and fire risk is a key concern and is heavily mitigated. A Methodology is needed to account for fire risk due to non-standard stowage to capture the risk. The contents include: 1) Fire Risk Background; 2) General Assumptions; 3) Modeling Techniques; 4) Event Sequence Diagram (ESD); 5) Qualitative Fire Analysis; 6) Sample Qualitative Results for Fire Risk; 7) Qualitative Stowage Analysis; 8) Sample Qualitative Results for Non-Standard Stowage; and 9) Quantitative Analysis Basic Event Data.
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L/sub 1/ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.
Maani, Ehsan; Katsaggelos, Aggelos K
2009-09-01
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems
Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.
2016-01-01
Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
NASA Astrophysics Data System (ADS)
Alimorad D., H.; Fakharzadeh J., A.
2017-07-01
In this paper, a new approach is proposed for designing nearly-optimal three-dimensional symmetric shapes with a desired physical center of mass. The main goal is to find a shape whose image in the (r, θ)-plane is a region divided into a fixed part and a variable part. The nearly optimal shape is characterized in two stages. Firstly, for each given domain, the nearly optimal surface is determined by changing the problem into a measure-theoretical one, replacing this with an equivalent infinite-dimensional linear programming problem and applying approximation schemes; then, a suitable function that offers the optimal value of the objective function for any admissible given domain is defined. In the second stage, by applying a standard optimization method, the global minimizer surface and its related domain are obtained, whose smoothness is addressed by applying outlier detection and smooth fitting methods. Finally, numerical examples are presented and the results are compared to show the advantages of the proposed approach.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for the stabilization of offshore jacket platforms. The main objective of this study is to reduce the control consumption and to protect the actuator while satisfying the system performance requirements. First, we introduce a dynamic model of an offshore platform with low-order main modes based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex model directly, we use a relaxation method based on matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
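To illustrate the final step described above (solving the relaxed convex model with a standard solver), the following is a minimal sketch of a small LMI-constrained convex program in CVXPY, used here as an open-source stand-in for the MATLAB/CPLEX solvers mentioned; the plant matrices and the scalar bound are hypothetical placeholders, not the paper's controller synthesis conditions.

```python
# Sketch only: a small LMI-constrained convex program solved with CVXPY.
# A and B are a hypothetical stable plant, not the offshore platform model.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])

P = cp.Variable((2, 2), symmetric=True)   # Lyapunov-type matrix variable
gamma = cp.Variable(nonneg=True)          # proxy for the input-energy bound

constraints = [
    P >> 0.1 * np.eye(2),                       # P positive definite
    A.T @ P + P @ A << -0.1 * np.eye(2),        # Lyapunov LMI (stability)
    cp.trace(B.T @ P @ B) <= gamma,             # keep the input channel "small"
]
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve()
print("optimal gamma:", gamma.value)
```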
Topology optimization in acoustics and elasto-acoustics via a level-set method
NASA Astrophysics Data System (ADS)
Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.
2018-04-01
Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating boundary condition optimization problems. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
NASA Astrophysics Data System (ADS)
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of an imperfectly expanded jet produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of the full Navier-Stokes equations in cylindrical coordinates and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters to the shock cell patterns, the screech frequency, and the distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic programs, minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry-standard methods such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming are used for the quadratic optimization.
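The quadratic-program structure described above (a convex quadratic objective minimized over a polyhedron defined by affine constraints) can be sketched as follows; the matrices are random placeholders rather than the jet-noise model.

```python
# Sketch only: minimize a convex quadratic over a polyhedron (affine constraints).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 5, 8
M = rng.standard_normal((n, n))
P = M.T @ M + np.eye(n)             # symmetric positive definite Hessian
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))     # polyhedron A x <= b
b = rng.standard_normal(m) + 1.0

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
prob = cp.Problem(objective, [A @ x <= b])
prob.solve()
print("optimal value:", prob.value)
```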
Evaluation of digestion methods for analysis of trace metals in mammalian tissues and NIST 1577c.
Binder, Grace A; Metcalf, Rainer; Atlas, Zachary; Daniel, Kenyon G
2018-02-15
Digestion techniques for ICP analysis have been poorly studied for biological samples. This report describes an optimized method for analysis of trace metals that can be used across a variety of sample types. Digestion methods were tested and optimized with the analysis of trace metals in cancerous as compared to normal tissue as the end goal. Anthropological, forensic, oncological and environmental research groups can employ this method reasonably cheaply and safely whilst still being able to compare between laboratories. We examined combined HNO₃ and H₂O₂ digestion at 170 °C for human, porcine and bovine samples whether they are frozen, fresh or lyophilized powder. Little discrepancy is found between microwave digestion and PFA Teflon pressure vessels. The elements of interest (Cu, Zn, Fe and Ni) yielded consistently higher and more accurate values on standard reference material than samples heated to 75 °C or samples that utilized HNO₃ alone. Use of H₂SO₄ does not improve homogeneity of the sample and lowers precision during ICP analysis. High temperature digestions (>165 °C) using a combination of HNO₃ and H₂O₂ as outlined are proposed as a standard technique for all mammalian tissues, specifically, human tissues and yield greater than 300% higher values than samples digested at 75 °C regardless of the acid or acid combinations used. The proposed standardized technique is designed to accurately quantify potential discrepancies in metal loads between cancerous and healthy tissues and applies to numerous tissue studies requiring quick, effective and safe digestions. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M; Ramaseshan, R
2016-06-15
Purpose: In this project, we compared the conventional tangent pair technique to the IMRT technique by analyzing the dose distribution. We also investigated the effect of respiration on planning target volume (PTV) dose coverage in both techniques. Methods: In order to implement the IMRT technique, a template-based planning protocol, dose constraints, and treatment process were developed. Two open fields with optimized field weights were combined with two beamlet optimization fields in the IMRT plans. We compared the dose distribution between the standard tangential pair and IMRT. The improvement in dose distribution was measured by parameters such as the conformity index, homogeneity index, and coverage index. Another end point was whether the IMRT technique would reduce planning time for staff. The effect of the patient's respiration on the dose distribution was also estimated. Four-dimensional computed tomography (4DCT) for different phases of the breathing cycle was used to evaluate the effect of respiration on the IMRT-planned dose distribution. Results: We have accumulated 10 patients who underwent 4DCT and were planned with both techniques. Based on the preliminary analysis, the dose distribution of the IMRT technique was better than that of the conventional tangent pair technique. Furthermore, the effect of respiration in the IMRT plan was not significant, as evident from the 95% isodose line coverage of the PTV drawn on all phases of the 4DCT. Conclusion: Based on the 4DCT images, the breathing effect on the dose distribution was smaller than expected. We suspect there are two reasons. First, the PTV movement due to respiration was not significant, possibly because we used a tilted breast board to set up patients. Second, the open fields with optimized field weights in the IMRT technique might reduce the breathing effect on the dose distribution. Further investigation is necessary.
Saturation pulse design for quantitative myocardial T1 mapping.
Chow, Kelvin; Kellman, Peter; Spottiswoode, Bruce S; Nielles-Vallespin, Sonia; Arai, Andrew E; Salerno, Michael; Thompson, Richard B
2015-10-01
Quantitative saturation-recovery based T1 mapping sequences are less sensitive to systematic errors than the Modified Look-Locker Inversion recovery (MOLLI) technique but require high performance saturation pulses. We propose to optimize adiabatic and pulse train saturation pulses for quantitative T1 mapping to have <1 % absolute residual longitudinal magnetization (|MZ/M0|) over the ranges of B0 and B1 scale factor inhomogeneity found at 1.5 T and 3 T. Design parameters for an adiabatic BIR4-90 pulse were optimized for improved performance within 1.5 T B0 (±120 Hz) and B1 scale factor (0.7-1.0) ranges. Flip angles in hard pulse trains of 3-6 pulses were optimized for 1.5 T and 3 T, with consideration of T1 values, field inhomogeneities (B0 = ±240 Hz and B1 scale factor = 0.4-1.2 at 3 T), and maximum achievable B1 field strength. Residual MZ/M0 was simulated and measured experimentally for current standard and optimized saturation pulses in phantoms and in-vivo human studies. T1 maps were acquired at 3 T in human subjects and a swine using a SAturation recovery single-SHot Acquisition (SASHA) technique with a standard 90°-90°-90° and an optimized 6-pulse train. Measured residual MZ/M0 in phantoms had excellent agreement with simulations over a wide range of B0 and B1 scale factor. The optimized BIR4-90 reduced the maximum residual |MZ/M0| to <1 %, a 5.8× reduction compared to a reference BIR4-90. An optimized 3-pulse train achieved a maximum residual |MZ/M0| <1 % for the 1.5 T optimization range compared to 11.3 % for a standard 90°-90°-90° pulse train, while a 6-pulse train met this target for the wider 3 T ranges of B0 and B1 scale factor. The 6-pulse train demonstrated more uniform saturation across both the myocardium and the entire field of view than other saturation pulses in human studies. T1 maps were more spatially homogeneous with 6-pulse train SASHA than the reference 90°-90°-90° SASHA in both human and animal studies. Adiabatic and pulse train saturation pulses optimized for the different constraints found at 1.5 T and 3 T achieved <1 % residual |MZ/M0| in phantom experiments, enabling greater accuracy in quantitative saturation recovery T1 imaging.
Overlay metrology for double patterning processes
NASA Astrophysics Data System (ADS)
Leray, Philippe; Cheng, Shaunee; Laidler, David; Kandel, Daniel; Adel, Mike; Dinu, Berta; Polli, Marco; Vasconi, Mauro; Salski, Bartlomiej
2009-03-01
The double patterning (DPT) process is foreseen by the industry to be the main solution for the 32 nm technology node and even beyond. Meanwhile process compatibility has to be maintained and the performance of overlay metrology has to improve. To achieve this for Image Based Overlay (IBO), usually the optics of overlay tools are improved. It was also demonstrated that these requirements are achievable with a Diffraction Based Overlay (DBO) technique named SCOL™ [1]. In addition, we believe that overlay measurements with respect to a reference grid are required to achieve the required overlay control [2]. This induces at least a three-fold increase in the number of measurements (2 for double patterned layers to the reference grid and 1 between the double patterned layers). The requirements of process compatibility, enhanced performance and large number of measurements make the choice of overlay metrology for DPT very challenging. In this work we use different flavors of the standard overlay metrology technique (IBO) as well as the new technique (SCOL) to address these three requirements. The compatibility of the corresponding overlay targets with double patterning processes (Litho-Etch-Litho-Etch (LELE); Litho-Freeze-Litho-Etch (LFLE), Spacer defined) is tested. The process impact on different target types is discussed (CD bias LELE, Contrast for LFLE). We compare the standard imaging overlay metrology with non-standard imaging techniques dedicated to double patterning processes (multilayer imaging targets allowing one overlay target instead of three, very small imaging targets). In addition to standard designs already discussed [1], we investigate SCOL target designs specific to double patterning processes. The feedback to the scanner is determined using the different techniques. The final overlay results obtained are compared accordingly. We conclude with the pros and cons of each technique and suggest the optimal metrology strategy for overlay control in double patterning processes.
Optimization of Craniospinal Irradiation for Pediatric Medulloblastoma Using VMAT and IMRT.
Al-Wassia, Rolina K; Ghassal, Noor M; Naga, Adly; Awad, Nesreen A; Bahadur, Yasir A; Constantinescu, Camelia
2015-10-01
Intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc therapy (VMAT) provide highly conformal target radiation doses, but also expose large volumes of healthy tissue to low-dose radiation. With improving survival, more children with medulloblastoma (MB) are at risk of late adverse effects of radiotherapy, including secondary cancers. We evaluated the characteristics of IMRT and VMAT craniospinal irradiation treatment plans in children with standard-risk MB to compare radiation dose delivery to target organs and organs at risk (OAR). Each of 10 children with standard-risk MB underwent both IMRT and VMAT treatment planning. Dose calculations used inverse planning optimization with a craniospinal dose of 23.4 Gy followed by a posterior fossa boost to 55.8 Gy. Clinical and planning target volumes were demarcated on axial computed tomography images. Dose distributions to target organs and OAR for each planning technique were measured and compared with published dose-volume toxicity data for pediatric patients. All patients completed treatment planning for both techniques. Analyses and comparisons of dose distributions and dose-volume histograms for the planned target volumes, and dose delivery to the OAR for each technique demonstrated the following: (1) VMAT had a modest, but significantly better, planning target volume-dose coverage and homogeneity compared with IMRT; (2) there were different OAR dose-sparing profiles for IMRT versus VMAT; and (3) neither IMRT nor VMAT demonstrated dose reductions to the published pediatric dose limits for the eyes, the lens, the cochlea, the pituitary, and the brain. The use of both IMRT and VMAT provides good target tissue coverage and sparing of the adjacent tissue for MB. Both techniques resulted in OAR dose delivery within published pediatric dose guidelines, except those mentioned above. Pediatric patients with standard-risk MB remain at risk for late endocrinologic, sensory (auditory and visual), and brain functional impairments.
Beam shaping as an enabler for new applications
NASA Astrophysics Data System (ADS)
Guertler, Yvonne; Kahmann, Max; Havrilla, David
2017-02-01
For many years, laser beam shaping has enabled users to achieve optimized process results as well as manage challenging applications. The latest advancements in industrial lasers and processing optics have taken this a step further as users are able to adapt the beam shape to meet specific application requirements in a very flexible way. TRUMPF has developed a wide range of experience in creating beam profiles at the work piece for optimized material processing. This technology is based on the physical model of wave optics and can be used with ultra short pulse lasers as well as multi-kW cw lasers. Basically, the beam shape can be adapted in all three dimensions in space, which allows maximum flexibility. Besides adaption of intensity profile, even multi-spot geometries can be produced. This approach is very cost efficient, because a standard laser source and (in the case of cw lasers) a standard fiber can be used without any special modifications. Based on this innovative beam shaping technology, TRUMPF has developed new and optimized processes. Two of the most recent application developments using these techniques are cutting glass and synthetic sapphire with ultra-short pulse lasers and enhanced brazing of hot dip zinc coated steel for automotive applications. Both developments lead to more efficient and flexible production processes, enabled by laser technology and open the door to new opportunities. They also indicate the potential of beam shaping techniques since they can be applied to both single-mode laser sources (TOP Cleave) and multi-mode laser sources (brazing).
2014-01-01
Background Laparoscopic appendectomy (LA) has become one of the most common surgical procedures to date. To improve and standardize this technique further, cost-effective and reliable animal models are needed. Methods In a pilot study, 30 Wistar rats underwent laparoscopic caecum resection (as rats do not have an appendix vermiformis), to optimize the instrumental and surgical parameters. A subsequent test study was performed in another 30 rats to compare three different techniques for caecum resection and bowel closure. Results Bipolar coagulation led to an insufficiency of caecal stump closure in all operated rats (Group 1, n = 10). Endoloop ligation followed by bipolar coagulation and resection (Group 2, n = 10) or resection with a LigaSure™ device (Group 3, n = 10) resulted in sufficient caecal stump closure. Conclusions We developed a LA model enabling us to compare three different caecum resection techniques in rats. In conclusion, only endoloop closure followed by bipolar coagulation proved to be a secure and cost-effective surgical approach. PMID:24934381
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob A.
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, notably with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software [1]. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP) [2, 3], is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation by augmenting the HDDP algorithm for a wider search of the solution space.
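A minimal sketch of the monotonic basin hopping idea follows, with SciPy's BFGS local solver standing in for the HDDP inner loop and the multimodal Rastrigin function standing in for the low-thrust trajectory cost; the hop scale and iteration counts are illustrative assumptions.

```python
# Sketch only: monotonic basin hopping around a gradient-based local solver.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Multimodal stand-in for the trajectory cost."""
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def mbh(fun, x0, n_hops=200, hop_scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    best = minimize(fun, x0, method="BFGS")            # local solver (HDDP stand-in)
    for _ in range(n_hops):
        x_trial = best.x + hop_scale * rng.standard_normal(best.x.size)
        trial = minimize(fun, x_trial, method="BFGS")
        if trial.fun < best.fun:                        # monotonic acceptance rule
            best = trial
    return best

result = mbh(rastrigin, x0=np.full(4, 3.0))
print("best cost:", result.fun, "at", result.x)
```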
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, which is a leading cause of cost model brittleness or instability.
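A minimal sketch of the calibration idea discussed above is given below: fit a log-linear effort model and prune effort multipliers that do not reduce leave-one-out error. The project data are synthetic placeholders and the rejection rule is a simplified stand-in for the paper's method.

```python
# Sketch only: log-linear effort model with leave-one-out pruning of multipliers.
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 6
X = rng.standard_normal((n, k))                     # log-scaled effort multipliers
true_w = np.array([0.8, 0.0, 0.5, 0.0, 0.0, 0.3])   # only a few really matter
y = X @ true_w + 0.2 * rng.standard_normal(n)       # log(effort), synthetic

def loo_error(cols):
    """Leave-one-out squared error of a least-squares fit on selected columns."""
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        w, *_ = np.linalg.lstsq(X[mask][:, cols], y[mask], rcond=None)
        errs.append((X[i, cols] @ w - y[i]) ** 2)
    return float(np.mean(errs))

cols, improved = list(range(k)), True
while improved and len(cols) > 1:
    improved, base = False, loo_error(cols)
    for c in list(cols):
        trial = [j for j in cols if j != c]
        if loo_error(trial) < base:                 # rejection rule: drop the multiplier
            cols, base, improved = trial, loo_error(trial), True
print("retained multipliers:", cols)
```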
Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María
2017-12-15
Agave fructans are increasingly important in the food industry and nutrition sciences as a potential ingredient of functional food, so practical analysis tools to characterize them are needed. In view of the importance of molecular weight for the functional properties of agave fructans, the purpose of this study is to optimize a method to determine their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7 °C using tri-distilled water without salt, an adjusted pH of 5.4, and a flow rate of 0.36 mL/min. The exclusion range is from a degree of polymerization of 1 to 49 (180-7966 Da). The proposed method represents an accurate and fast alternative to standard methods involving multiple detection or hydrolysis of fructans. The industrial applications of this technique might be quality control, study of fractionation processes, and determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.
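The simplex optimization step can be sketched as follows with SciPy's Nelder-Mead routine; the response function is a made-up surrogate for column performance, not the authors' experimental response, and the starting point is arbitrary.

```python
# Sketch only: Nelder-Mead simplex search over chromatographic conditions.
import numpy as np
from scipy.optimize import minimize

def negative_response(params):
    """Made-up smooth surrogate peaking near 62 degC, pH 5.4, 0.36 mL/min."""
    temp, ph, flow = params
    return -np.exp(-((temp - 62.0) / 10.0) ** 2
                   - ((ph - 5.4) / 0.8) ** 2
                   - ((flow - 0.36) / 0.1) ** 2)

result = minimize(negative_response, x0=[50.0, 6.0, 0.5], method="Nelder-Mead")
print("optimal (temperature, pH, flow):", result.x)
```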
Oludemi, Taofiq; Barros, Lillian; Prieto, M A; Heleno, Sandrina A; Barreiro, Maria F; Ferreira, Isabel C F R
2018-01-24
The extraction of triterpenoids and phenolic compounds from Ganoderma lucidum was optimized by using the response surface methodology (RSM), with heat and ultrasound assisted extraction techniques (HAE and UAE). The obtained results were compared with those of the standard Soxhlet procedure. RSM was applied using a circumscribed central composite design with three variables (time, ethanol content, and temperature or ultrasonic power) and five levels. The conditions that maximize the responses (extraction yield, triterpenoids and total phenolics) were: 78.9 min, 90.0 °C and 62.5% ethanol for HAE, and 40 min, 100.0 W and 89.5% ethanol for UAE. The latter was the most effective, resulting in an extraction yield of 4.9 ± 0.6% comprising a content of 435.6 ± 21.1 mg g⁻¹ of triterpenes and 106.6 ± 16.2 mg g⁻¹ of total phenolics. The optimized extracts were fully characterized in terms of individual phenolic compounds and triterpenoids by HPLC-DAD-ESI/MS. The recovery of the above-mentioned bioactive compounds was markedly enhanced using the UAE technique.
Metal stack optimization for low-power and high-density for N7-N5
NASA Astrophysics Data System (ADS)
Raghavan, P.; Firouzi, F.; Matti, L.; Debacker, P.; Baert, R.; Sherazi, S. M. Y.; Trivkovic, D.; Gerousis, V.; Dusa, M.; Ryckaert, J.; Tokei, Z.; Verkest, D.; McIntyre, G.; Ronse, K.
2016-03-01
One of the key challenges in scaling logic down to N7 and N5 is the requirement of self-aligned multiple patterning for the metal stack. This comes with a large backend cost, and therefore a careful stack optimization is required. Various layers in the stack have different purposes, and therefore the choice of their pitch and the number of layers is critical. Furthermore, at the ultra-scaled dimensions of N7 or N5, the number of patterning options is also much larger, ranging from multiple LE and EUV to SADP/SAQP, and the right choice among these is needed. Patterning techniques that use a full grating of wires, such as SADP/SAQP, introduce a high level of metal dummies into the design. This implies a large capacitance penalty to the design, and therefore large performance and power penalties. This is often mitigated with extra masking strategies. This paper discusses a holistic view of metal stack optimization from the standard cell level all the way to routing and the corresponding trade-offs that exist in this space.
Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.
Chang, Joshua; Paydarfar, David
2014-12-01
Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
Applied Computational Electromagnetics Society Journal and Newsletter, Volume 14 No. 1
1999-03-01
code validation, performance analysis, and input/output standardization; code or technique optimization and error minimization; innovations in ...
Dark matter constraints from a joint analysis of dwarf Spheroidal galaxy observations with VERITAS
Archambault, S.; Archer, A.; Benbow, W.; ...
2017-04-05
We present constraints on the annihilation cross section of weakly interacting massive particle (WIMP) dark matter based on the joint statistical analysis of four dwarf galaxies with VERITAS. These results are derived from an optimized photon weighting statistical technique that improves on standard imaging atmospheric Cherenkov telescope (IACT) analyses by utilizing the spectral and spatial properties of individual photon events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akiyama, Kazunori; Fish, Vincent L.; Doeleman, Sheperd S.
We propose a new imaging technique for radio and optical/infrared interferometry. The proposed technique reconstructs the image from the visibility amplitude and closure phase, which are standard data products of short-millimeter very long baseline interferometers such as the Event Horizon Telescope (EHT) and optical/infrared interferometers, by utilizing two regularization functions: the ℓ1-norm and total variation (TV) of the brightness distribution. In the proposed method, optimal regularization parameters, which represent the sparseness and effective spatial resolution of the image, are derived from data themselves using cross-validation (CV). As an application of this technique, we present simulated observations of M87 with the EHT based on four physically motivated models. We confirm that ℓ1 + TV regularization can achieve an optimal resolution of ∼20%–30% of the diffraction limit λ/Dmax, which is the nominal spatial resolution of a radio interferometer. With the proposed technique, the EHT can robustly and reasonably achieve super-resolution sufficient to clearly resolve the black hole shadow. These results make it promising for the EHT to provide an unprecedented view of the event-horizon-scale structure in the vicinity of the supermassive black hole in M87 and also the Galactic center Sgr A*.
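A minimal sketch of ℓ1 + TV regularized reconstruction in the spirit of the method described above is given below, using CVXPY on a tiny synthetic problem (not EHT data); the measurement operator, image, and regularization weights are placeholders, with the weights standing in for values that would be chosen by cross-validation.

```python
# Sketch only: l1 + total-variation regularized reconstruction on toy data.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
npix = 16
truth = np.zeros((npix, npix))
truth[5:9, 6:10] = 1.0                               # small bright patch

A = rng.standard_normal((100, npix * npix))          # toy measurement operator
y = A @ truth.flatten(order="F") + 0.01 * rng.standard_normal(100)  # column-major, to match cp.vec

X = cp.Variable((npix, npix), nonneg=True)
lam_l1, lam_tv = 0.05, 0.05                          # stand-ins for CV-selected weights
objective = cp.Minimize(cp.sum_squares(A @ cp.vec(X) - y)
                        + lam_l1 * cp.norm1(cp.vec(X))
                        + lam_tv * cp.tv(X))
cp.Problem(objective).solve()
print("reconstruction error:", np.linalg.norm(X.value - truth))
```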
Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu
2013-01-01
With the advancements in semiconductor technology, high-power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control a multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE Standard 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLI, the selective harmonic elimination (SHE) technique is one of the traditionally preferred modulation control techniques at fundamental switching frequency, offering a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly non-linear in nature and may have multiple solutions, a single solution, or even no solution at a particular modulation index (MI). However, in some MV drive applications, it is required to operate over a range of MI. Providing analytical solutions for the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations by using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm which minimizes %THD with the least computational effort among all the optimization algorithms considered is presented. To validate the effectiveness of the proposed MPSO technique, an experiment was carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results show that the MPSO technique successfully solved the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines.
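A minimal sketch of particle swarm optimization applied to the SHE equations for a 5-angle (11-level) CHB inverter follows; the swarm parameters are generic choices rather than the paper's tuned MPSO, and the quarter-wave-symmetric residual targets the 5th, 7th, 11th, and 13th harmonics.

```python
# Sketch only: plain PSO on the quarter-wave-symmetric SHE residual (5 angles).
import numpy as np

def she_residual(theta, m_index):
    fund = np.sum(np.cos(theta)) - 5.0 * m_index                 # fundamental target
    harmonics = [np.sum(np.cos(n * theta)) for n in (5, 7, 11, 13)]
    return fund**2 + sum(h**2 for h in harmonics)

def pso(m_index, n_particles=60, n_iters=400, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = 0.0, np.pi / 2.0
    x = np.sort(rng.uniform(lo, hi, (n_particles, 5)), axis=1)   # ordered angles
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([she_residual(p, m_index) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(np.sort(x + v, axis=1), lo, hi)
        f = np.array([she_residual(p, m_index) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

angles, residual = pso(m_index=0.8)
print("switching angles (deg):", np.degrees(angles), "residual:", residual)
```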
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires N² complexity, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase locked loop (PLL) system, consisting of a phase detector, voltage controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
Intramedullary nailing: evolutions of femoral intramedullary nailing: first to fourth generations.
Russell, Thomas A
2011-12-01
Intramedullary femoral nailing is the gold standard for femoral shaft fixation but only in the past 27 years. This rapid replacement of closed traction and cast techniques in North America was a controversial and contentious evolution in surgery. As we enter the fourth generation of implant design, capabilities, and surgical technique, it is important to understand the driving forces for this technology. These forces included changes in radiographic imaging capabilities, biomaterial design and computer-assisted manufacturing, and the recognition of the importance of mobilization of the trauma patient to avoid systemic complications and optimize functional recovery.
Application of da Vinci(®) Robot in simple or radical hysterectomy: Tips and tricks.
Iavazzo, Christos; Gkegkes, Ioannis D
2016-01-01
The first robotic simple hysterectomy was performed more than 10 years ago. These days, robotic-assisted hysterectomy is accepted as an alternative surgical approach and is applied in both benign and malignant surgical entities. The two important points that should be taken into account to optimize postoperative outcomes in the early period of a surgeon's training are how to achieve optimal oncological results and how to achieve optimal functional results. Overcoming any technical challenge, as with any innovative surgical method, leads to shorter operative times as well as improved patient safety. The standardization of the technique and recognition of critical anatomical landmarks are essential for optimal oncological and clinical outcomes in both simple and radical robotic-assisted hysterectomy. Based on our experience, our intention is to present user-friendly tips and tricks to optimize the application of a da Vinci® robot in simple or radical hysterectomies.
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2003-01-01
This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
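The global response-surface idea can be sketched as follows: fit a single second-order polynomial surrogate to samples of an expensive response and reuse it during optimization. The design samples and the toy response below are synthetic placeholders, not the composite-cylinder buckling model.

```python
# Sketch only: fit a second-order (quadratic) response surface by least squares.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(60, 4))             # normalized design variables
y = (3.0 + X @ np.array([1.0, -0.5, 0.2, 0.0])
     + 0.8 * X[:, 0] * X[:, 1]
     + 0.05 * rng.standard_normal(60))               # toy response samples

def quad_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
surrogate = lambda Xq: quad_features(np.atleast_2d(Xq)) @ coef
print("surrogate prediction at the origin:", surrogate(np.zeros(4)))
```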
A new technique for measuring aerosols with moonlight observations and a sky background model
NASA Astrophysics Data System (ADS)
Jones, Amy; Noll, Stefan; Kausch, Wolfgang; Kimeswenger, Stefan; Szyszka, Ceszary; Unterguggenberger, Stefanie
2014-05-01
There have been an ample number of studies on aerosols in urban, daylight conditions, but few for remote, nocturnal aerosols. We have developed a new technique for investigating such aerosols using our sky background model and astronomical observations. With a dedicated observing proposal we have successfully tested this technique for nocturnal, remote aerosol studies. This technique relies on three requirements: (a) sky background model, (b) observations taken with scattered moonlight, and (c) spectrophotometric standard star observations for flux calibrations. The sky background model was developed for the European Southern Observatory and is optimized for the Very Large Telescope at Cerro Paranal in the Atacama desert in Chile. This is a remote location with almost no urban aerosols. It is well suited for studying remote background aerosols that are normally difficult to detect. Our sky background model has an uncertainty of around 20 percent and the scattered moonlight portion is even more accurate. The last two requirements are having astronomical observations with moonlight and of standard stars at different airmasses, all during the same night. We had a dedicated observing proposal at Cerro Paranal with the instrument X-Shooter to use as a case study for this method. X-Shooter is a medium resolution, echelle spectrograph which covers the wavelengths from 0.3 to 2.5 micrometers. We observed plain sky at six different distances (7, 13, 20, 45, 90, and 110 degrees) to the Moon for three different Moon phases (between full and half). Also direct observations of spectrophotometric standard stars were taken at two different airmasses for each night to measure the extinction curve via the Langley method. This is an ideal data set for testing this technique. The underlying assumption is that all components, other than the atmospheric conditions (specifically aerosols and airglow), can be calculated with the model for the given observing parameters. The scattered moonlight model is designed for the average atmospheric conditions at Cerro Paranal. The Mie scattering is calculated for the average distribution of aerosol particles, but this input can be modified. We can avoid the airglow emission lines, and near full Moon the airglow continuum can be ignored. In the case study, by comparing the scattered moonlight for the various angles and wavelengths along with the extinction curve from the standard stars, we can iteratively find the optimal aerosol size distribution for the time of observation. We will present this new technique, the results from this case study, and how it can be implemented for investigating aerosols using the X-Shooter archive and other astronomical archives.
Incorporating uncertainty and motion in Intensity Modulated Radiation Therapy treatment planning
NASA Astrophysics Data System (ADS)
Martin, Benjamin Charles
In radiation therapy, one seeks to destroy a tumor while minimizing the damage to surrounding healthy tissue. Intensity Modulated Radiation Therapy (IMRT) uses overlapping beams of x-rays that add up to a high dose within the target and a lower dose in the surrounding healthy tissue. IMRT relies on optimization techniques to create high quality treatments. Unfortunately, the possible conformality is limited by the need to ensure coverage even if there is organ movement or deformation. Currently, margins are added around the tumor to ensure coverage based on an assumed motion range. This approach does not ensure high quality treatments. In the standard IMRT optimization problem, an objective function measures the deviation of the dose from the clinical goals. The optimization then finds the beamlet intensities that minimize the objective function. When modeling uncertainty, the dose delivered from a given set of beamlet intensities is a random variable. Thus the objective function is also a random variable. In our stochastic formulation we minimize the expected value of this objective function. We developed a problem formulation that is both flexible and fast enough for use on real clinical cases. While working on accelerating the stochastic optimization, we developed a technique of voxel sampling. Voxel sampling is a randomized algorithms approach to a steepest descent problem based on estimating the gradient by only calculating the dose to a fraction of the voxels within the patient. When combined with an automatic sampling rate adaptation technique, voxel sampling produced an order of magnitude speed up in IMRT optimization. We also develop extensions of our results to Intensity Modulated Proton Therapy (IMPT). Due to the physics of proton beams the stochastic formulation yields visibly different and better plans than normal optimization. The results of our research have been incorporated into a software package OPT4D, which is an IMRT and IMPT optimization tool that we developed.
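A minimal sketch of the voxel-sampling idea is given below: estimate the gradient of a quadratic dose objective from a random subset of voxels at each descent step. The dose-influence matrix, prescription, sampling fraction, and step size are synthetic placeholders.

```python
# Sketch only: gradient of a quadratic dose objective estimated from sampled voxels.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 5000, 200
D = rng.random((n_voxels, n_beamlets)) * 0.01        # toy dose-influence matrix
prescription = rng.choice([0.0, 60.0], size=n_voxels, p=[0.8, 0.2])

def sampled_gradient(w, frac=0.1):
    idx = rng.choice(n_voxels, size=int(frac * n_voxels), replace=False)
    resid = D[idx] @ w - prescription[idx]
    return 2.0 * D[idx].T @ resid / frac             # rescaled estimate of the full gradient

w = np.zeros(n_beamlets)
for _ in range(500):
    w = np.maximum(0.0, w - 1e-3 * sampled_gradient(w))   # keep intensities nonnegative
print("objective:", np.sum((D @ w - prescription) ** 2))
```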
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
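A minimal sketch of weighted least-squares circular positioning from RSS-derived ranges is given below; the anchor layout, path-loss parameters, noise level, and weighting rule are illustrative assumptions rather than the calibrated values used in the experiments.

```python
# Sketch only: weighted least-squares circular positioning from RSS-derived ranges.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 6.0])

# Log-distance path-loss model: rss = p0 - 10*n*log10(d), plus noise.
p0, n_exp = -40.0, 2.5
rng = np.random.default_rng(3)
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = p0 - 10.0 * n_exp * np.log10(d_true) + rng.normal(0.0, 2.0, size=4)
d_est = 10.0 ** ((p0 - rss) / (10.0 * n_exp))        # ranges recovered from RSS

# Linearize the circle equations against the last anchor, then solve weighted LS.
ref = anchors[-1]
A = 2.0 * (anchors[:-1] - ref)
b = (d_est[-1] ** 2 - d_est[:-1] ** 2
     + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2))
W = np.diag(1.0 / d_est[:-1] ** 2)                   # trust short ranges more
pos = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
print("estimated position:", pos, "true position:", true_pos)
```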
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that could directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, the industry still uses traditional techniques to obtain those values. Lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. For modelling of the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique. This novel optimization technique can give accurate results besides being the fastest technique.
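A minimal sketch of the modelling stage is given below: an Extreme Learning Machine with a random hidden layer and least-squares output weights fitted to synthetic cutting data; a particle swarm optimizer (such as the one sketched earlier in this document) would then search this surrogate for optimal cutting parameters. All data and layer sizes are placeholders.

```python
# Sketch only: Extreme Learning Machine surrogate of a turning process response.
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform([50.0, 0.05, 0.5], [200.0, 0.3, 2.5], size=(80, 3))   # speed, feed, depth
y = 1.6 * X[:, 1] ** 0.8 * X[:, 2] ** 0.3 + 0.02 * rng.standard_normal(80)  # toy roughness

def elm_fit(X, y, n_hidden=40):
    mu, sd = X.mean(axis=0), X.std(axis=0)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, untrained hidden weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh((X - mu) / sd @ W + b)
    beta = np.linalg.pinv(H) @ y                      # output weights by least squares
    return lambda Xq: np.tanh((Xq - mu) / sd @ W + b) @ beta

model = elm_fit(X, y)
print("fit RMSE:", np.sqrt(np.mean((model(X) - y) ** 2)))
```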
NASA Technical Reports Server (NTRS)
Rawat, Banmali
2000-01-01
Multimode fiber bandwidth enhancement techniques to meet the Gigabit Ethernet standards for local area networks (LAN) of the Kennedy Space Center and other NASA centers are discussed. A connector with lateral offset coupling between the single mode launch fiber cable and the multimode fiber cable has been thoroughly investigated. An optimization of the connector position offset for an 8 km long optical fiber link at 1300 nm, coupling 9 micrometer diameter single mode fiber (SMF) to 50 micrometer diameter multimode fiber (MMF), has been obtained. The optimization is done in terms of bandwidth, eye-pattern, and bit pattern measurements. This approach is simpler, highly practical, and cheaper, as no additional cost is involved in manufacturing the offset type of connectors.
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.
Akhlaq, Muhammad; Khan, Gul Majid; Jan, Syed Umer; Wahab, Abdul; Hussain, Abid; Nawaz, Asif; Abdelkader, Hamdy
2014-11-01
Conventional oral tablets of diclofenac sodium (DCL-Na) exhibit serious side effects when given for longer periods, leading to noncompliance. Controlled release matrix tablets of diclofenac sodium were formulated using simple blending (F-1), solvent evaporation (F-2) and co-precipitation (F-3) techniques. Ethocel® Standard 7 FP Premium Polymer (15%) was used as the release controlling agent. The in vitro drug release study was conducted in pH 7.4 phosphate buffer solution as the dissolution medium. Pharmacokinetic parameters were evaluated using albino rabbits. The solvent evaporation technique was found to be the best release controlling technique, prolonging the release for up to 24 hours. Accelerated stability studies of the optimized test formulation (F-2) did not show any significant change (p<0.05) in the physicochemical characteristics and release rate when stored for six months. A simple and rapid method was developed for the DCL-Na active moiety using HPLC-UV at 276 nm. The optimized test tablets (F-2) significantly (p<0.05) exhibited a peak plasma concentration (Cmax = 237.66±1.98) and an extended time to peak (tmax = 4.63±0.24). A good in vitro-in vivo correlation (R² = 0.9883) was found between drug absorption and drug release. The study showed that once-daily controlled release matrix tablets of DCL-Na were successfully developed using Ethocel® Standard 7 FP Premium.
Zhang, Zach; Stein, Michael; Mercer, Nigel; Malic, Claudia
2017-03-09
There is a lack of high-level evidence on the surgical management of cleft palate. An appreciation of the differences in the complication rates between different surgical techniques and timing of repair is essential in optimizing cleft palate management. A comprehensive electronic database search will be conducted on the complication rates associated with cleft palate repair using MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials. Two independent reviewers with expertise in cleft pathology will screen all appropriate titles, abstracts, and full-text publications prior to deciding whether each meet the predetermined inclusion criteria. The study findings will be tabulated and summarized. The primary outcomes will be the rate of palatal fistula, the incidence and severity of velopharyngeal insufficiency, and the rate of maxillary hypoplasia with different techniques and also the timing of the repair. A meta-analysis will be conducted using a random effects model. The evidence behind the optimal surgical approach to cleft palate repair is minimal, with no gold standard technique identified to date for a certain type of cleft palate. It is essential to appreciate how the complication rates differ between each surgical technique and each time point of repair, in order to optimize the management of these patients. A more critical evaluation of the outcomes of different cleft palate repair methods may also provide insight into more effective surgical approaches for different types of cleft palates.
Multi-technique comparison of troposphere zenith delays and gradients during CONT08
NASA Astrophysics Data System (ADS)
Teke, Kamil; Böhm, Johannes; Nilsson, Tobias; Schuh, Harald; Steigenberger, Peter; Dach, Rolf; Heinkelmann, Robert; Willis, Pascal; Haas, Rüdiger; García-Espada, Susana; Hobiger, Thomas; Ichikawa, Ryuichi; Shimizu, Shingo
2011-07-01
CONT08 was a 15 days campaign of continuous Very Long Baseline Interferometry (VLBI) sessions during the second half of August 2008 carried out by the International VLBI Service for Geodesy and Astrometry (IVS). In this study, VLBI estimates of troposphere zenith total delays (ZTD) and gradients during CONT08 were compared with those derived from observations with the Global Positioning System (GPS), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and water vapor radiometers (WVR) co-located with the VLBI radio telescopes. Similar geophysical models were used for the analysis of the space geodetic data, whereas the parameterization for the least-squares adjustment of the space geodetic techniques was optimized for each technique. In addition to space geodetic techniques and WVR, ZTD and gradients from numerical weather models (NWM) were used from the European Centre for Medium-Range Weather Forecasts (ECMWF) (all sites), the Japan Meteorological Agency (JMA) and Cloud Resolving Storm Simulator (CReSS) (Tsukuba), and the High Resolution Limited Area Model (HIRLAM) (European sites). Biases, standard deviations, and correlation coefficients were computed between the troposphere estimates of the various techniques for all eleven CONT08 co-located sites. ZTD from space geodetic techniques generally agree at the sub-centimetre level during CONT08, and—as expected—the best agreement is found for intra-technique comparisons: between the Vienna VLBI Software and the combined IVS solutions as well as between the Center for Orbit Determination (CODE) solution and an IGS PPP time series; both intra-technique comparisons are with standard deviations of about 3-6 mm. The best inter space geodetic technique agreement of ZTD during CONT08 is found between the combined IVS and the IGS solutions with a mean standard deviation of about 6 mm over all sites, whereas the agreement with numerical weather models is between 6 and 20 mm. The standard deviations are generally larger at low latitude sites because of higher humidity, and the latter is also the reason why the standard deviations are larger at northern hemisphere stations during CONT08 in comparison to CONT02 which was observed in October 2002. The assessment of the troposphere gradients from the different techniques is not as clear because of different time intervals, different estimation properties, or different observables. However, the best inter-technique agreement is found between the IVS combined gradients and the GPS solutions with standard deviations between 0.2 and 0.7 mm.
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the inverse tone mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data of the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
On the Optimization of Aerospace Plane Ascent Trajectory
NASA Astrophysics Data System (ADS)
Al-Garni, Ahmed; Kassem, Ayman Hamdy
A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascending trajectories (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis is performed on the hybrid technique to compare it with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution time performance, while particle swarm optimization showed better convergence performance. The hybrid optimization technique, benefiting from both techniques, showed superior, robust performance that balances convergence trends and execution time.
NASA Astrophysics Data System (ADS)
Shupp, Aaron M.; Rodier, Dan; Rowley, Steven
2007-03-01
Monitoring and controlling Airborne Molecular Contamination (AMC) has become essential in deep ultraviolet (DUV) photolithography for both optimizing yields and protecting tool optics. A variety of technologies have been employed for both real-time and grab-sample monitoring. Real-time monitoring has the advantage of quickly identifying "spikes" and upset conditions, while grab sampling over 2-24 hours or longer allows for extremely low detection limits by concentrating the mass of the target contaminant over a period of time. Employing a combination of both monitoring techniques affords the highest degree of control, the lowest detection limits, and the most detailed data possible in terms of speciation. As happens with many technologies, there can be concern regarding the accuracy and agreement between real-time and grab-sample methods. This study utilizes side-by-side comparisons of two different real-time monitors operating in parallel with both liquid impingers and dry sorbent tubes to measure NIST traceable gas standards as well as real-world samples. By measuring in parallel, a truly valid comparison is made between methods while verifying the results against a certified standard. The final outcome of this investigation is that the dry sorbent tube grab-sample technique produced results that agreed, in terms of accuracy, with NIST traceable standards as well as with the two real-time techniques, Ion Mobility Spectrometry (IMS) and Pulsed Fluorescence Detection (PFD), while a traditional liquid impinger technique showed discrepancies.
NASA Astrophysics Data System (ADS)
Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A. Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah
2017-04-01
This research proposes several versions of a modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Specifically, the MCS algorithm incorporates a roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the position exploration of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard strength Pareto evolutionary algorithm (SPEA), forming the MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined for MO optimization on Zitzler-Deb-Thiele's (ZDT's) test functions. Pareto-optimal trade-offs are performed to generate a set of three non-dominated solutions, which are the locations, excitation amplitudes, and excitation phases of the array elements, respectively. Overall, the simulations demonstrate that the proposed MCSPSOSPEA outperforms comparable competitors in simultaneously attaining high antenna directivity, a small half-power beamwidth (HPBW), a low average side lobe level (SLL), and/or significant mitigation at predefined nulls.
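For readers unfamiliar with the selection step mentioned above, the sketch below shows a generic roulette-wheel selection operator in Python. It is only a minimal illustration: the fitness values, the population size, and the way the chosen nests would seed the MCS iterations are assumptions, not the configuration used by the authors.

```python
import random

def roulette_wheel_select(fitness, k):
    """Select k indices with probability proportional to (non-negative) fitness.

    Generic sketch of the roulette-wheel operator; the fitness scaling used in
    the MCS variants described above is an assumption, not reproduced here.
    """
    total = sum(fitness)
    if total == 0:                      # degenerate population: fall back to uniform choice
        return random.sample(range(len(fitness)), k)
    chosen = []
    for _ in range(k):
        r = random.uniform(0, total)
        acc = 0.0
        for i, f in enumerate(fitness):
            acc += f
            if acc >= r:
                chosen.append(i)
                break
    return chosen

# Example: pick 3 host nests out of 5 by fitness.
print(roulette_wheel_select([0.1, 0.4, 0.2, 0.8, 0.5], 3))
```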
NASA Astrophysics Data System (ADS)
Gunda, Naga Siva Kumar; Singh, Minashree; Norman, Lana; Kaur, Kamaljit; Mitra, Sushanta K.
2014-06-01
In the present work, we developed and optimized a technique to produce a thin, stable silane layer on a silicon substrate in a controlled environment using (3-aminopropyl)triethoxysilane (APTES). The effect of APTES concentration and silanization time on the formation of the silane layer is studied using spectroscopic ellipsometry and Fourier transform infrared spectroscopy (FTIR). Biomolecules of interest are immobilized on silicon substrates bearing the optimized silane layer using a glutaraldehyde linker. Surface analytical techniques such as ellipsometry, FTIR, contact angle measurement, and atomic force microscopy are employed to characterize the biochemically modified silicon surfaces at each step of the biomolecule immobilization process. It is observed that a uniform, homogeneous, and highly dense layer of biomolecules is immobilized on the optimized silane layer on the silicon substrate. The developed immobilization method is successfully implemented on different silicon substrates (flat and pillar). Also, different types of biomolecules, such as anti-human IgG (rabbit monoclonal to human IgG), Listeria monocytogenes, myoglobin, and dengue capture antibodies, were successfully immobilized. Further, a standard sandwich immunoassay (antibody-antigen-antibody) is employed on the respective capture antibody coated silicon substrates. Fluorescence microscopy is used to detect the respective FITC-tagged detection antibodies bound to the surface after the immunoassay.
Wood, Jessica L; Steiner, Robert R
2011-06-01
Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Purchasing analytical standards can be expensive or unattainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™-DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time efficient and cost saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.
PROPELLER technique to improve image quality of MRI of the shoulder.
Dietrich, Tobias J; Ulbrich, Erika J; Zanetti, Marco; Fucentese, Sandro F; Pfirrmann, Christian W A
2011-12-01
The purpose of this article is to evaluate the use of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) technique for artifact reduction and overall image quality improvement for intermediate-weighted and T2-weighted MRI of the shoulder. One hundred eleven patients undergoing MR arthrography of the shoulder were included. A coronal oblique intermediate-weighted turbo spin-echo (TSE) sequence with fat suppression and a sagittal oblique T2-weighted TSE sequence with fat suppression were obtained without (standard) and with the PROPELLER technique. Scanning time increased from 3 minutes 17 seconds to 4 minutes 17 seconds (coronal oblique plane) and from 2 minutes 52 seconds to 4 minutes 10 seconds (sagittal oblique) using PROPELLER. Two radiologists graded image artifacts, overall image quality, and delineation of several anatomic structures on a 5-point scale (5, no artifact, optimal diagnostic quality; and 1, severe artifacts, diagnostically not usable). The Wilcoxon signed rank test was used to compare the data of the standard and PROPELLER images. Motion artifacts were significantly reduced in PROPELLER images (p < 0.001). Observer 1 rated motion artifacts with diagnostic impairment in one patient on coronal oblique PROPELLER images compared with 33 patients on standard images. Ratings for the sequences with PROPELLER were significantly better for overall image quality (p < 0.001). Observer 1 noted an overall image quality with diagnostic impairment in nine patients on sagittal oblique PROPELLER images compared with 23 patients on standard MRI. The PROPELLER technique for MRI of the shoulder reduces the number of sequences with diagnostic impairment as a result of motion artifacts and increases image quality compared with standard TSE sequences. PROPELLER sequences increase the acquisition time.
Corteville, D M R; Kjørstad, Å; Henzler, T; Zöllner, F G; Schad, L R
2015-05-01
Fourier decomposition (FD) is a noninvasive method for assessing ventilation and perfusion-related information in the lungs. However, the technique has a low signal-to-noise ratio (SNR) in the lung parenchyma. We present an approach to increase the SNR in both morphological and functional images. The data used to create functional FD images are usually acquired using a standard balanced steady-state free precession (bSSFP) sequence. In the standard sequence, the possible range of the flip angle is restricted due to specific absorption rate (SAR) limitations. Thus, using a variable flip angle approach as an optimization is possible. This was validated using measurements from a phantom and six healthy volunteers. The SNR in both the morphological and functional FD images was increased by 32%, while the SAR restrictions were kept unchanged. Furthermore, due to the higher SNR, the effective resolution of the functional images was increased visibly. The variable flip angle approach did not introduce any new transient artifacts, and blurring artifacts were minimized. Both a gain in SNR and an effective resolution gain in functional lung images can be obtained using the FD method in conjunction with a variable flip angle optimized bSSFP sequence. © 2014 Wiley Periodicals, Inc.
Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory
NASA Technical Reports Server (NTRS)
Koppang, Paul; Leland, Robert
1996-01-01
Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.
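The following sketch illustrates, under assumed dynamics and penalties, how such a quadratic cost produces a feedback gain via the discrete Riccati recursion and how a Kalman state estimate would be turned into a steering correction. The two-state model (phase and frequency difference), the penalty matrices, and all numbers are placeholders, not the values used for the frequency standards in the study.

```python
import numpy as np

# Assumed two-state model: x = [phase difference; frequency difference],
# with the control acting as a frequency correction each update interval.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.diag([1.0, 1.0])   # designer-chosen state penalty
R = np.array([[10.0]])    # control penalty: larger R steers less aggressively

# Finite-horizon discrete Riccati recursion; iterate until the gain settles.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Example Kalman estimates: 5 ns phase offset, 1e-12 fractional frequency offset.
x_hat = np.array([[5e-9],
                  [1e-12]])
u = -K @ x_hat            # steering correction to apply to the local standard
print("feedback gain:", K.ravel(), "correction:", u.item())
```

Raising R in this sketch reproduces the qualitative behaviour described in the abstract: the controller drives the phase and frequency differences to zero less aggressively, trading steering speed against the stability of the steered standard.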
NASA Astrophysics Data System (ADS)
Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay
2018-03-01
Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve a high dynamic range video with stop motion. This technology improves low light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company's 65 nm 1.1 μm pixel technology 1 megapixel test chip array and is compared with a traditional 4 × oversampling technique using full charge transfer to show low light SNR superiority of the presented technology.
NASA Astrophysics Data System (ADS)
Fitri, Noor; Yandi, Nefri; Hermawati, Julianto, Tatang Shabur
2017-03-01
A comparative study of the quality of patchouli oil produced using the Water-Steam Distillation (WSD) and Water Bubble Distillation (WBD) techniques has been carried out. The raw materials were patchouli plants from Samigaluh village, Kulon Progo district, Yogyakarta. This study aims to compare the two distillation techniques in order to find the optimal distillation technique to increase the content of patchouli alcohol (patchoulol) and the quality of the patchouli oil. Pretreatments such as withering, drying, size reduction and light fermentation were intended to increase the yield. One kilogram of patchouli was moistened with 500 mL of distilled water. The light fermentation process was carried out for 20 hours in a dark container. Fermented patchouli was extracted for 6 hours using the Water-Steam and Water Bubble Distillation techniques. Physical and chemical property tests of the patchouli oil were performed according to SNI standard No. SNI-06-2385-2006, and the chemical composition of the patchouli oil was analysed by GC-MS. As a result, a higher oil yield is obtained using Water-Steam Distillation, i.e. 5.9% versus 2.4%. The specific gravity, refractive index and acid number of the patchouli oil from Water-Steam Distillation did not meet the SNI standard, i.e. 0.991, 1.623 and 13.19, while those from Water Bubble Distillation met the standard, i.e. 0.955, 1.510 and 6.61. The patchoulol content using the Water Bubble Distillation technique is 61.53%, significantly higher than that obtained using Water-Steam Distillation, i.e. 38.24%. Thus, Water Bubble Distillation is a promising technique to increase the content of patchoulol in patchouli oil.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Chirag; Vicini, Frank A., E-mail: fvicini@beaumont.edu
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment depends on the criteria used. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared in various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O
2015-12-01
To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm³) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
Inter-slice Leakage Artifact Reduction Technique for Simultaneous Multi-Slice Acquisitions
Cauley, Stephen F.; Polimeni, Jonathan R.; Bhat, Himanshu; Wang, Dingxin; Wald, Lawrence L.; Setsompop, Kawin
2015-01-01
Purpose Controlled aliasing techniques for simultaneously acquired EPI slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging (DWI) and fMRI studies. The “slice-GRAPPA” (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Methods Split slice-GRAPPA (SP-SG) is proposed as an alternative kernel optimization method. The performance of SP-SG is compared to standard SG using data collected on a spherical phantom and in-vivo on two subjects at 3T. Slice accelerated and non-accelerated data were collected for a spin-echo diffusion weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. Results The SP-SG optimization strategy significantly reduces leakage artifacts for both phantom and in-vivo acquisitions. In addition, a significant boost in time-series SNR for in-vivo diffusion weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the SP-SG approach. Conclusion By minimizing the influence of leakage artifacts during the training of slice-GRAPPA kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. PMID:23963964
How to optimize the lung donor.
Sales, Gabriele; Costamagna, Andrea; Fanelli, Vito; Boffini, Massimo; Pugliese, Francesco; Mascia, Luciana; Brazzi, Luca
2018-02-01
Over the last two decades, lung transplantation emerged as the standard of care for patients with advanced and terminal lung disease. Despite the increase in lung transplantation rates, in 2016 the overall mortality while on the waiting list in Italy reached 10%, whereas only 39% of the wait-listed patients were successfully transplanted. A number of approaches, including a protective ventilatory strategy, accurate management of fluid balance, and administration of hormonal resuscitation therapy, have been reported to improve lung donor performance before organ retrieval. These approaches, in conjunction with the use of the ex-vivo lung perfusion technique, have contributed to expanding the lung donor pool without affecting the harvest of other organs or the outcomes of lung recipients. However, several issues related to the ex-vivo lung perfusion technique, such as the optimal ventilation strategy, the management of ischemia-reperfusion induced lung injury, the prophylaxis of germ transmission from donor to recipient, and the application of targeted pharmacologic therapies to treat specific donor lung injuries, are still to be explored. The main objective of the present review is to summarize the "state-of-the-art" strategies to optimize the donor lungs and to present the actual role of ex-vivo lung perfusion in the lung transplant process. Moreover, different approaches to the technique reported in the literature and several issues under investigation to treat specific donor lung injury will be discussed.
Level-set techniques for facies identification in reservoir modeling
NASA Astrophysics Data System (ADS)
Iglesias, Marco A.; McLaughlin, Dennis
2011-03-01
In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical, ill-posed inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger for inverse obstacle problems (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82). The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
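A minimal sketch of one level-set update of the kind described above is given below. The velocity, which in the paper is derived from the shape derivative of the data misfit, is replaced here by a uniform placeholder value; the grid, time step, and initial facies shape are likewise assumptions for illustration only.

```python
import numpy as np

def level_set_step(phi, velocity, dt=0.05):
    # One explicit update of phi_t + v*|grad(phi)| = 0; the facies is the set phi < 0.
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
    return phi - dt * velocity * grad_norm

# Toy example: a circular facies under a uniform placeholder velocity.
# In the paper the velocity would come from the shape derivative of the misfit.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
phi = np.sqrt(x**2 + y**2) - 0.5               # signed distance to a circle
print("initial facies fraction:", float(np.mean(phi < 0)))
for _ in range(5):
    phi = level_set_step(phi, velocity=-1.0)   # negative normal speed shrinks the facies
print("updated facies fraction:", float(np.mean(phi < 0)))
```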
NASA Astrophysics Data System (ADS)
Zourabian, Anna; Boas, David A.
2001-06-01
Pulse oximetry (oxygen saturation monitoring) has markedly improved medical care in many fields, including anesthesiology, intensive care, and newborn intensive care. In obstetrics, fetal heart rate monitoring remains the standard for intrapartum assessment of fetal well-being. Fetal oxygen saturation monitoring is a new technique currently under development. It is potentially superior to electronic fetal heart rate monitoring (cardiotocography) because it allows direct assessment of both fetal oxygen status and fetal tissue perfusion. Here we present the analysis for determining the optimal wavelength selection for pulse oximetry. The wavelengths we chose as optimal are: the first in the range of 670-720 nm and the second in the range of 825-925 nm. Further, we discuss the possible systematic errors during our measurements and their contribution to the obtained saturation results.
New approaches to some methodological problems of meteor science
NASA Technical Reports Server (NTRS)
Meisel, David D.
1987-01-01
Several low cost approaches to continuous radioscatter monitoring of the incoming meteor flux are described. Preliminary experiments were attempted using standard time frequency stations WWVH and CHU (on frequencies near 15 MHz) during nighttime hours. Around-the-clock monitoring using the international standard aeronautical beacon frequency of 75 MHz was also attempted. The techniques are simple and can be managed routinely by amateur astronomers with relatively little technical expertise. Time series analysis can now be performed using relatively inexpensive microcomputers. Several algorithmic approaches to the analysis of meteor rates are discussed. Methods of obtaining optimal filter predictions of future meteor flux are also discussed.
Combustion and fires in low gravity
NASA Technical Reports Server (NTRS)
Friedman, Robert
1994-01-01
Fire safety always receives priority attention in NASA mission designs and operations, with emphasis on fire prevention and material acceptance standards. Recently, interest in spacecraft fire-safety research and development has increased because improved understanding of the significant differences between low-gravity and normal-gravity combustion suggests that present fire-safety techniques may be inadequate or, at best, non-optimal; and the complex and permanent orbital operations in Space Station Freedom demand a higher level of safety standards and practices. This presentation outlines current practices and problems in fire prevention and detection for spacecraft, specifically the Space Station Freedom's fire protection. Also addressed are current practices and problems in fire extinguishment for spacecraft.
Applying a Genetic Algorithm to Reconfigurable Hardware
NASA Technical Reports Server (NTRS)
Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim
2004-01-01
This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva™ graphical hardware description language.
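As a point of reference for the hardware implementation discussed above, the sketch below is a compact, software-only binary genetic algorithm on a standard test problem (OneMax). The operators (tournament selection, one-point crossover, bit-flip mutation) and all parameter values are generic assumptions and do not reproduce the configuration implemented on the Starbridge platform.

```python
import random

def genetic_algorithm(fitness, n_bits=32, pop_size=40, generations=100,
                      p_cross=0.9, p_mut=0.02):
    """Compact binary-coded GA: tournament selection, one-point crossover,
    bit-flip mutation, with elitism. All settings are illustrative assumptions."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]
        scored.sort(key=lambda s: s[0], reverse=True)
        new_pop = [scored[0][1]]                           # elitism: keep the best
        while len(new_pop) < pop_size:
            # Tournament selection of two parents (tournament size 3).
            p1 = max(random.sample(scored, 3), key=lambda s: s[0])[1]
            p2 = max(random.sample(scored, 3), key=lambda s: s[0])[1]
            if random.random() < p_cross:                  # one-point crossover
                cut = random.randint(1, n_bits - 1)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Standard test case: maximize the number of set bits (OneMax).
best = genetic_algorithm(fitness=sum)
print(sum(best), "of 32 bits set")
```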
CCD charge collection efficiency and the photon transfer technique
NASA Technical Reports Server (NTRS)
Janesick, J.; Klaasen, K.; Elliott, T.
1985-01-01
The charge-coupled device (CCD) has shown unprecedented performance as a photon detector in the areas of spectral response, charge transfer, and readout noise. Recent experience indicates, however, that the full potential of the CCD's charge collection efficiency (CCE) lies well beyond that which is realized in currently available devices. A definition of CCE performance is presented and a standard test tool (the photon transfer technique) for measuring and optimizing this important CCD parameter is introduced. CCE characteristics for different types of CCDs are compared, the primary limitations in achieving high CCE performance are discussed, and the prospects for future improvement are outlined.
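The sketch below illustrates one core output of the photon transfer technique mentioned above: estimating the camera gain from the mean-variance relation of flat-field frame pairs. The synthetic data, the zero bias level, and the fitting choices are simplifying assumptions, not the measurement procedure used in the paper.

```python
import numpy as np

def photon_transfer_gain(flat_pairs, bias_level=0.0):
    """Estimate gain K (e-/DN) from pairs of flat-field frames at increasing exposure.

    For a shot-noise-limited sensor, variance = mean / K, so K is the slope of
    mean signal versus temporal variance. Differencing each pair cancels
    fixed-pattern noise (the variance of the difference is twice the temporal variance).
    """
    means, variances = [], []
    for f1, f2 in flat_pairs:
        means.append(0.5 * (f1.mean() + f2.mean()) - bias_level)
        variances.append(np.var(f1 - f2) / 2.0)
    return np.polyfit(variances, means, 1)[0]   # slope = K in e-/DN

# Synthetic example with a true gain of 2 e-/DN (Poisson shot noise only).
rng = np.random.default_rng(0)
pairs = []
for electrons in [200, 1000, 5000, 20000]:
    f1 = rng.poisson(electrons, (256, 256)) / 2.0   # convert electrons to DN
    f2 = rng.poisson(electrons, (256, 256)) / 2.0
    pairs.append((f1, f2))
print("estimated gain (e-/DN):", photon_transfer_gain(pairs))
```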
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
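The lifting of a linear-fractional yield objective to a linear program, as described above, can be sketched with a toy three-reaction network. Everything below (the stoichiometry, the bounds, and the use of scipy's linprog) is an illustrative assumption of a Charnes-Cooper-type transformation under stated toy data, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (assumed): substrate uptake v1 splits into product v2 and
# by-product v3, i.e. v1 = v2 + v3, with 0 <= v <= 10.
A_eq = np.array([[1.0, -1.0, -1.0]])
c = np.array([0.0, 1.0, 0.0])   # numerator: product rate
d = np.array([1.0, 0.0, 0.0])   # denominator: substrate uptake rate
lb = np.zeros(3)
ub = np.full(3, 10.0)

# Lifting: w = t*v with t = 1/(d^T v), so the yield c^T v / d^T v becomes the
# linear objective c^T w under the normalization d^T w = 1.
A_eq_lift = np.vstack([np.hstack([A_eq, np.zeros((1, 1))]),   # steady state: A_eq w = 0
                       np.hstack([d, [0.0]])])                # normalization: d^T w = 1
b_eq_lift = np.array([0.0, 1.0])
A_ub_lift = np.vstack([np.hstack([np.eye(3), -ub[:, None]]),  # w <= ub * t
                       np.hstack([-np.eye(3), lb[:, None]])]) # w >= lb * t
b_ub_lift = np.zeros(6)

res = linprog(c=np.append(-c, 0.0),                           # linprog minimizes
              A_ub=A_ub_lift, b_ub=b_ub_lift,
              A_eq=A_eq_lift, b_eq=b_eq_lift,
              bounds=[(None, None)] * 3 + [(0, None)])
w, t = res.x[:3], res.x[3]
print("optimal yield:", -res.fun)             # expected: 1.0 (all substrate to product)
print("a yield-optimal flux vector:", w / t)  # recover v = w / t
```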
Acoustic-noise-optimized diffusion-weighted imaging.
Ott, Martin; Blaimer, Martin; Grodzki, David M; Breuer, Felix A; Roesch, Julie; Dörfler, Arnd; Heismann, Björn; Jakob, Peter M
2015-12-01
This work was aimed at reducing acoustic noise in diffusion-weighted MR imaging (DWI) that might reach acoustic noise levels of over 100 dB(A) in clinical practice. A diffusion-weighted readout-segmented echo-planar imaging (EPI) sequence was optimized for acoustic noise by utilizing small readout segment widths to obtain low gradient slew rates and amplitudes instead of faster k-space coverage. In addition, all other gradients were optimized for low slew rates. Volunteer and patient imaging experiments were conducted to demonstrate the feasibility of the method. Acoustic noise measurements were performed and analyzed for four different DWI measurement protocols at 1.5T and 3T. An acoustic noise reduction of up to 20 dB(A) was achieved, which corresponds to a fourfold reduction in acoustic perception. The image quality was preserved at the level of a standard single-shot (ss)-EPI sequence, with a 27-54% increase in scan time. The diffusion-weighted imaging technique proposed in this study allowed a substantial reduction in the level of acoustic noise compared to standard single-shot diffusion-weighted EPI. This is expected to afford considerably more patient comfort, but a larger study would be necessary to fully characterize the subjective changes in patient experience.
2003-03-01
organizations. Reducing attrition rates through optimal selection decisions can “reduce training cost, improve job performance, and enhance... capturing the weights for use in the SNR method is not straightforward. A special VBA application had to be written to capture and organize the network... before the VBA application can be used. Appendix D provides the VBA code used to import and organize the network weights and input standardization
Spatial Statistics of Large Astronomical Databases: An Algorithmic Approach
NASA Technical Reports Server (NTRS)
Szapudi, Istvan
2004-01-01
In this AISRP, we have demonstrated that i) the correlation function can be calculated for MAP in minutes (about 45 minutes for Planck) on a modest 500 MHz workstation, and ii) the corresponding method, although theoretically suboptimal, produces nearly optimal results for realistic noise and cut sky. This trillion-fold improvement in speed over the standard maximum likelihood technique opens up tremendous new possibilities, which will be pursued in the follow-up.
NASA Astrophysics Data System (ADS)
Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the iterative techniques prominently used in solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches under the given conditions. Theoretically, the proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. On the other hand, the NRM1 method performs better under the inexact line search than under the exact line search.
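The abstract does not state the NRM1 update formula, so the sketch below shows only the generic nonlinear conjugate gradient loop with an inexact (backtracking) line search, using the classical Fletcher-Reeves coefficient as a stand-in; swapping in a different hybrid beta would change a single line.

```python
import numpy as np

def cg_minimize(f, grad, x0, iters=2000, tol=1e-8):
    """Generic nonlinear CG loop with a backtracking (inexact) line search.
    The Fletcher-Reeves beta below is a stand-in for the unspecified NRM1 update."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:            # safeguard: restart with steepest descent
            d = -g
        alpha, fx = 1.0, f(x)
        for _ in range(50):          # capped backtracking (sufficient decrease test)
            if f(x + alpha * d) <= fx + 1e-4 * alpha * g.dot(d):
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves coefficient (stand-in)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Standard test problem: the 2-D Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
x_star = cg_minimize(f, grad, [-1.2, 1.0])
print("minimizer estimate:", x_star, "objective:", f(x_star))
```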
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
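The three pooling schemes compared above can be summarized in a few lines; the sketch below assumes the feature activations have already been computed and stored as a frames-by-features matrix, which is an assumption about the data layout rather than the authors' pipeline.

```python
import numpy as np

def pool_activations(activations, method="std"):
    """Summarize a (time_frames x features) activation matrix into one clip-level vector.

    Sketch of the three pooling schemes compared above; 'std' is the
    standard-deviation pooling reported to work best with proportional sampling.
    """
    if method == "max":
        return activations.max(axis=0)
    if method == "mean":
        return activations.mean(axis=0)
    if method == "std":
        return activations.std(axis=0)
    raise ValueError("unknown pooling method: " + method)

# Example: 100 frames of 32-dimensional sparse feature activations.
acts = np.abs(np.random.randn(100, 32))
clip_feature = pool_activations(acts, "std")
print(clip_feature.shape)   # (32,)
```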
Directly manipulated free-form deformation image registration.
Tustison, Nicholas J; Avants, Brian B; Gee, James C
2009-03-01
Previous contributions to both the research and open source software communities detailed a generalization of a fast scalar field fitting technique for cubic B-splines based on the work originally proposed by Lee. One advantage of our proposed generalized B-spline fitting approach is its immediate application to a class of nonrigid registration techniques frequently employed in medical image analysis. Specifically, these registration techniques fall under the rubric of free-form deformation (FFD) approaches in which the object to be registered is embedded within a B-spline object. The deformation of the B-spline object describes the transformation of the image registration solution. Representative of this class of techniques, and often cited within the relevant community, is the formulation of Rueckert, who employed cubic splines with normalized mutual information to study breast deformation. Similar techniques from various groups provided incremental novelty in the form of disparate explicit regularization terms, as well as the employment of various image metrics and tailored optimization methods. For several algorithms, the underlying gradient-based optimization retained the essential characteristics of Rueckert's original contribution. The contribution which we provide in this paper is two-fold: 1) the observation that the generic FFD framework is intrinsically susceptible to problematic energy topographies and 2) that the standard gradient used in FFD image registration can be modified to a well-understood preconditioned form which substantially improves performance. This is demonstrated with theoretical discussion and comparative evaluation experimentation.
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
Jan, Show-Li; Shieh, Gwowen
2016-08-31
The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor had only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
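For context, the existing allocation rule discussed above can be written down directly; the sketch below reproduces only that heuristic baseline (the paper's own procedure replaces it with a numerical search over power and cost, which is not shown here), and the example numbers are arbitrary.

```python
import math

def heuristic_allocation(sigma1, sigma2, cost1, cost2, n_total):
    """Group-size split from the existing rule of thumb discussed above:
    n1/n2 = (sigma1/sigma2) / sqrt(cost1/cost2).

    Only the heuristic baseline is reproduced; the paper argues this is
    generally not the cost-optimal allocation under a power constraint.
    """
    ratio = (sigma1 / sigma2) / math.sqrt(cost1 / cost2)
    n2 = n_total / (1.0 + ratio)
    n1 = n_total - n2
    return round(n1), round(n2)

# Example: group 1 is twice as variable but four times as costly to sample.
print(heuristic_allocation(sigma1=2.0, sigma2=1.0, cost1=4.0, cost2=1.0, n_total=60))
```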
Virtually optimized insoles for offloading the diabetic foot: A randomized crossover study.
Telfer, S; Woodburn, J; Collier, A; Cavanagh, P R
2017-07-26
Integration of objective biomechanical measures of foot function into the design process for insoles has been shown to provide enhanced plantar tissue protection for individuals at-risk of plantar ulceration. The use of virtual simulations utilizing numerical modeling techniques offers a potential approach to further optimize these devices. In a patient population at-risk of foot ulceration, we aimed to compare the pressure offloading performance of insoles that were optimized via numerical simulation techniques against shape-based devices. Twenty participants with diabetes and at-risk feet were enrolled in this study. Three pairs of personalized insoles: one based on shape data and subsequently manufactured via direct milling; and two were based on a design derived from shape, pressure, and ultrasound data which underwent a finite element analysis-based virtual optimization procedure. For the latter set of insole designs, one pair was manufactured via direct milling, and a second pair was manufactured through 3D printing. The offloading performance of the insoles was analyzed for forefoot regions identified as having elevated plantar pressures. In 88% of the regions of interest, the use of virtually optimized insoles resulted in lower peak plantar pressures compared to the shape-based devices. Overall, the virtually optimized insoles significantly reduced peak pressures by a mean of 41.3kPa (p<0.001, 95% CI [31.1, 51.5]) for milled and 40.5kPa (p<0.001, 95% CI [26.4, 54.5]) for printed devices compared to shape-based insoles. The integration of virtual optimization into the insole design process resulted in improved offloading performance compared to standard, shape-based devices. ISRCTN19805071, www.ISRCTN.org. Copyright © 2017 Elsevier Ltd. All rights reserved.
Simhon, David; Halpern, Marisa; Brosh, Tamar; Vasilyev, Tamar; Ravid, Avi; Tennenbaum, Tamar; Nevo, Zvi; Katzir, Abraham
2007-02-01
A feedback temperature-controlled laser soldering system (TCLS) was used for bonding skin incisions on the backs of pigs. The study aimed 1) to characterize the optimal soldering parameters, and 2) to compare the immediate and long-term wound healing outcomes with other wound closure modalities. A TCLS was used to bond the approximated wound margins of skin incisions on porcine backs. The reparative outcomes were evaluated macroscopically, microscopically, and immunohistochemically. The optimal soldering temperature was found to be 65 degrees C, and the operating time was significantly shorter than with suturing. The immediate tight sealing of the wound by the TCLS contributed to rapid, high-quality wound healing in comparison to Dermabond or Histoacryl cyanoacrylate glues or standard suturing. TCLS of incisions in porcine skin has numerous advantages, including a rapid procedure and high-quality reparative outcomes, over the common standard wound closure procedures. Further studies with a variety of skin lesions are needed before advocating this technique for clinical use.
Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus
2015-01-01
Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
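The cache-blocking idea described above can be illustrated with a simple tiled loop; the sketch below uses a blocked matrix transpose as a stand-in example with an arbitrary block size, and does not attempt to reproduce the per-cache-level tuning performed in Tomo3D (in compiled, vectorized code the same structure is what keeps the working set resident in cache).

```python
import numpy as np

def blocked_transpose(a, block=64):
    """Tiled matrix transpose: process square tiles so each element is touched
    while its neighbourhood is still resident in cache. The block size here is
    an arbitrary illustrative value, not a tuned cache-level parameter."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            tile = a[i0:i0 + block, j0:j0 + block]
            out[j0:j0 + block, i0:i0 + block] = tile.T
    return out

a = np.arange(1_000_000, dtype=np.float32).reshape(1000, 1000)
assert np.array_equal(blocked_transpose(a), a.T)
```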
Gas-cell atomic clocks for space: new results and alternative schemes
NASA Astrophysics Data System (ADS)
Affolderbach, C.; Breschi, E.; Schori, C.; Mileti, G.
2017-11-01
We present our development activities on compact Rubidium gas-cell atomic frequency standards, for use in space-borne and ground-based applications. We experimentally demonstrate a high-performance laser optically-pumped Rb clock for space applications such as telecommunications, science missions, and satellite navigation systems (e.g. GALILEO). Using a stabilised laser source and optimized gas cells, we reach clock stabilities as low as 1.5·10^-12 τ^-1/2 up to 10^3 s and 4·10^-14 at 10^4 s. The results demonstrate the feasibility of a laser-pumped Rb clock reaching < 1·10^-12 τ^-1/2 in a compact device (<2 liters, 2 kg, 20 W), given optimization of the implemented techniques. A second activity concerns more radically miniaturized gas-cell clocks, aiming for low power consumption and a total volume around 1 cm^3, at the expense of relaxed frequency stability. Here miniaturized "chip-scale" vapour cells and the use of coherent laser interrogation techniques are at the heart of the investigations.
Insights from Classifying Visual Concepts with Multiple Kernel Learning
Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki
2012-01-01
Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations for the genetic algorithm are created by selecting random code vectors from the training set for the codebooks, and IP-HMM performs the recognition. At this point, variation is introduced through one of the genetic operations, crossover. The proposed speech recognition technique offers 97.14% accuracy.
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, 57, pp. 131-139, April 1993.
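The basic computation behind an Allan deviation plot can be sketched in a few lines; the version below is the non-overlapping estimator applied to an arbitrary synthetic series, not Picarro's implementation, and the sampling rate and averaging times are illustrative assumptions.

```python
import numpy as np

def allan_deviation(y, taus, rate=1.0):
    """Non-overlapping Allan deviation of a time series sampled at 'rate' Hz.

    sigma_y^2(tau) = 0.5 * < (ybar_{i+1} - ybar_i)^2 >,
    where ybar_i are averages over adjacent windows of length tau.
    """
    y = np.asarray(y, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau * rate))          # samples per averaging window
        k = len(y) // m                     # number of complete windows
        if k < 2:
            out.append(np.nan)
            continue
        means = y[:k * m].reshape(k, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

# Example: white noise averages down roughly as tau^(-1/2) on a log-log plot.
y = np.random.randn(100_000)
print(allan_deviation(y, taus=[1, 10, 100, 1000]))
```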
Warlick, W B; O'Rear, J H; Earley, L; Moeller, J H; Gaffney, D K; Leavitt, D D
1997-01-01
The dose to the contralateral breast has been associated with an increased risk of developing a second breast malignancy. Varying techniques have been devised and described in the literature to minimize this dose. Metal beam modifiers such as standard wedges are used to improve the dose distribution in the treated breast, but unfortunately introduce an increased scatter dose outside the treatment field, in particular to the contralateral breast. The enhanced dynamic wedge is a means of remote wedging created by independently moving one collimator jaw through the treatment field during dose delivery. This study is an analysis of differing doses to the contralateral breast using two common clinical set-up techniques with the enhanced dynamic wedge versus the standard metal wedge. A tissue equivalent block (solid water), modeled to represent a typical breast outline, was designed as an insert in a Rando phantom to simulate a standard patient being treated for breast conservation. Tissue equivalent material was then used to complete the natural contour of the breast and to reproduce appropriate build-up and internal scatter. Thermoluminescent dosimeter (TLD) rods were placed at predetermined distances from the geometric beam's edge to measure the dose to the contralateral breast. A total of 35 locations were used with five TLDs in each location to verify the accuracy of the measured dose. The radiation techniques used were an isocentric set-up with co-planar, non divergent posterior borders and an isocentric set-up with a half beam block technique utilizing the asymmetric collimator jaw. Each technique used compensating wedges to optimize the dose distribution. A comparison of the dose to the contralateral breast was then made with the enhanced dynamic wedge vs. the standard metal wedge. The measurements revealed a significant reduction in the contralateral breast dose with the enhanced dynamic wedge compared to the standard metal wedge in both set-up techniques. The dose was measured at varying distances from the geometric field edge, ranging from 2 to 8 cm. The average dose with the enhanced dynamic wedge was 2.7-2.8%. The average dose with the standard wedge was 4.0-4.7%. Thermoluminescent dosimeter measurements suggest an increase in both scattered electrons and photons with metal wedges. The enhanced dynamic wedge is a practical clinical advance which improves the dose distribution in patients undergoing breast conservation while at the same time minimizing dose to the contralateral breast, thereby reducing the potential carcinogenic effects.
Optimal Design of an Automotive Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Fagehi, Hassan; Attar, Alaa; Lee, Hosung
2018-07-01
The consumption of energy continues to increase at an exponential rate, especially in terms of conventional automobiles. Approximately 40% of the fuel supplied to a vehicle is lost as waste heat exhausted to the environment. The desire for improved fuel efficiency by recovering the exhaust waste heat in automobiles has become an important subject. A thermoelectric generator (TEG) has the potential to convert exhaust waste heat into electricity as long as it improves fuel economy. The remarkable amount of research being conducted on TEGs indicates that this technology will have a bright future in terms of power generation. The current study discusses the optimal design of the automotive exhaust TEG. An experimental study has been conducted to verify the model, which used the ideal (standard) equations along with effective material properties. The model is reasonably verified by the experimental work, mainly due to the utilization of the effective material properties. Hence, the thermoelectric module that was used in the experiment was optimized using a developed optimal design theory (a dimensionless analysis technique).
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science, and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), the fireflies are ranked using a sorting algorithm. The original FA was proposed with bubble sort for ranking the fireflies; in this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used consists of unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, and mean values, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and by varying the dimension the algorithm performed better at lower dimensions than at higher ones.
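As a purely illustrative aside (not the paper's code), the ranking step discussed above can be sketched as follows: fireflies are ordered by their objective values, once with an O(n^2) bubble sort and once with Python's built-in O(n log n) sort standing in for the quick-sort variant; all names and data here are hypothetical.

```python
# Minimal sketch: ranking fireflies by brightness with an O(n^2) bubble sort
# versus the built-in sort, which plays the role of the faster variant.
import random

def bubble_rank(intensities):
    """Return firefly indices ordered by ascending objective value (O(n^2))."""
    order = list(range(len(intensities)))
    for i in range(len(order)):
        for j in range(len(order) - 1 - i):
            if intensities[order[j]] > intensities[order[j + 1]]:
                order[j], order[j + 1] = order[j + 1], order[j]
    return order

def fast_rank(intensities):
    """Same ranking via the built-in sort (O(n log n) comparisons)."""
    return sorted(range(len(intensities)), key=intensities.__getitem__)

intensities = [random.random() for _ in range(50)]  # objective values of 50 fireflies
assert bubble_rank(intensities) == fast_rank(intensities)
```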
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on subset selection from a large basis of climatic series, using an ad hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to accurately estimate the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e. transposable to most models with climatic data inputs), and can be combined with most “off-the-shelf” optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies. PMID:28542198
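A hedged sketch of the subset-selection idea described above, under the assumption that each climatic series can be summarized by a small feature vector and that cluster medoids serve as the representative subset; the features, the use of k-means, and the function names are illustrative choices, not the authors' implementation.

```python
# Hedged sketch of subset selection from a large basis of climatic series:
# summarize each series with a few features, cluster the summaries, and keep
# the series closest to each cluster centre. Feature choices are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def select_representative_series(series, n_subset=10, seed=0):
    """series: array of shape (n_years, n_timesteps). Returns indices of a subset."""
    feats = np.column_stack([
        series.mean(axis=1),                        # mean level of each yearly series
        series.std(axis=1),                         # variability proxy
        series.max(axis=1) - series.min(axis=1),    # range
    ])
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)
    km = KMeans(n_clusters=n_subset, n_init=10, random_state=seed).fit(feats)
    idx = []
    for c in range(n_subset):                       # medoid = member closest to centroid
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        idx.append(int(members[np.argmin(d)]))
    return sorted(idx)

rng = np.random.default_rng(1)
climate = rng.normal(size=(200, 365)).cumsum(axis=1)  # 200 synthetic yearly series
print(select_representative_series(climate, n_subset=8))
```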
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-01-01
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
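As a toy illustration of the noise-to-slope ratio used above as a figure of merit, the snippet below ranks a few hypothetical methods by NSR; all numbers are invented.

```python
# Toy illustration of the noise-to-slope ratio (NSR) figure of merit used to
# rank quantitative imaging methods by precision; the numbers are invented.
noise_sd = {"method_A": 0.12, "method_B": 0.08, "method_C": 0.15}   # estimated noise std
slope    = {"method_A": 0.95, "method_B": 0.80, "method_C": 1.05}   # estimated slope vs. true value

nsr = {m: noise_sd[m] / abs(slope[m]) for m in noise_sd}
for m in sorted(nsr, key=nsr.get):          # smaller NSR = better precision
    print(f"{m}: NSR = {nsr[m]:.3f}")
```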
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
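The hybrid strategy described above can be sketched, in heavily simplified form, as a bare-bones genetic algorithm that seeds a truncated-Newton local search; the toy misfit function below stands in for the groundwater-model objective, and SciPy's "TNC" method stands in for the truncated-Newton search.

```python
# Hedged sketch of combining a genetic algorithm with a truncated-Newton
# refinement; the groundwater model is replaced by a toy misfit function and
# the GA is reduced to its bare essentials (selection + Gaussian mutation).
import numpy as np
from scipy.optimize import minimize

true_params = np.array([2.0, -1.0, 0.5])
def misfit(p):                                  # stand-in for the head-data objective
    return float(np.sum((p - true_params) ** 2))

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(40, 3))          # initial population
for _ in range(30):                             # simple GA generations
    fitness = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]     # keep the 10 best
    children = parents[rng.integers(0, 10, size=30)] + rng.normal(0, 0.3, size=(30, 3))
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(p) for p in pop])]
result = minimize(misfit, best, method="TNC")   # truncated-Newton local search
print("GA seed:", best, "refined:", result.x)
```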
Performance evaluation of a digital mammography unit using a contrast-detail phantom
NASA Astrophysics Data System (ADS)
Elizalde-Cabrera, J.; Brandan, M.-E.
2015-01-01
The relation between image quality and mean glandular dose (MGD) has been studied for a Senographe 2000D mammographic unit used for research in our laboratory. The magnitudes were evaluated for a clinically relevant range of acrylic thicknesses and radiological techniques. The CDMAM phantom was used to determine the contrast-detail curve. Also, an alternative method based on the analysis of signal-to-noise (SNR) and contrast-to-noise (CNR) ratios from the CDMAM image was proposed and applied. A simple numerical model was utilized to successfully interpret the results. Optimum radiological techniques were determined using the figures-of-merit FOM_SNR = SNR^2/MGD and FOM_CNR = CNR^2/MGD. Main results were: the evaluation of the detector response flattening process (it reduces by about one half the spatial non-homogeneities due to the X-ray field), MGD measurements (the values comply with standards), and verification of the automatic exposure control performance (it is sensitive to fluence attenuation, not to contrast). For 4-5 cm phantom thicknesses, the optimum radiological techniques were Rh/Rh 34 kV to optimize SNR, and Rh/Rh 28 kV to optimize CNR.
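A toy computation of the figures of merit mentioned above; the SNR, CNR, and MGD values are invented and chosen only so that the ranking mirrors the reported outcome.

```python
# Toy computation of the figures of merit used to pick the radiological
# technique: FOM = SNR^2 / MGD (or CNR^2 / MGD). All numbers are invented.
techniques = {                      # (SNR, CNR, mean glandular dose in mGy)
    "Mo/Mo 28 kV": (55.0, 6.1, 1.9),
    "Rh/Rh 28 kV": (60.0, 7.0, 1.6),
    "Rh/Rh 34 kV": (72.0, 6.5, 1.5),
}
fom_snr = {t: snr**2 / mgd for t, (snr, cnr, mgd) in techniques.items()}
fom_cnr = {t: cnr**2 / mgd for t, (snr, cnr, mgd) in techniques.items()}
print("best for SNR:", max(fom_snr, key=fom_snr.get))
print("best for CNR:", max(fom_cnr, key=fom_cnr.get))
```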
Localized analysis of paint-coat drying using dynamic speckle interferometry
NASA Astrophysics Data System (ADS)
Sierra-Sosa, Daniel; Tebaldi, Myrian; Grumel, Eduardo; Rabal, Hector; Elmaghraby, Adel
2018-07-01
Paint-coating is part of several industrial processes, including the automotive industry, architectural coatings, machinery, and appliances. These coatings must comply with high quality standards, and for this reason evaluation techniques for paint-coatings are in constant development. One important factor in the paint-coating process is drying, as it influences the quality of the final result. In this work we present an assessment technique based on optical dynamic speckle interferometry; this technique allows for temporal evaluation of the activity of the paint-coating drying process, providing localized information on drying. This localized information is relevant for addressing drying homogeneity, optimal drying, and quality control. The technique relies on the definition of a new temporal history of the speckle patterns to obtain the local activity; this information is then clustered to provide a convenient indicator of the different stages of the drying process. The experimental results presented were validated using gravimetric drying curves.
Vacas, Susana; Van de Wiele, Barbara
2017-01-01
Background: Craniotomy is a relatively common surgical procedure with a high incidence of postoperative pain. Development of standardized pain management and enhanced recovery after surgery (ERAS) protocols are necessary and crucial to optimize outcomes and patient satisfaction and reduce health care costs. Methods: This work is based upon a literature search of published manuscripts (between 1996 and 2017) from Pubmed, Cochrane Central Register, and Google Scholar. It seeks to both synthesize and review our current scientific understanding of postcraniotomy pain and its part in neurosurgical ERAS protocols. Results: Strategies to ameliorate craniotomy pain demand interventions during all phases of patient care: preoperative, intraoperative, and postoperative interventions. Pain management should begin in the perioperative period with risk assessment, patient education, and premedication. In the intraoperative period, modifications in anesthesia technique, choice of opioids, acetaminophen and nonsteroidal anti-inflammatory drugs (NSAIDs), regional techniques, dexmedetomidine, ketamine, lidocaine, corticosteroids, and interdisciplinary communication are all strategies to consider and possibly deploy. Opioids remain the mainstay for pain relief, but patient-controlled analgesia, NSAIDs, standardization of pain management, bio/behavioral interventions, modification of head dressings as well as patient-centric management are useful opportunities that potentially improve patient care. Conclusions: Future research on mechanisms, predictors, treatments, and pain management pathways will help define the combinations of interventions that optimize pain outcomes. PMID:29285407
Encoding probabilistic brain atlases using Bayesian inference.
Van Leemput, Koen
2009-06-01
This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
NASA Technical Reports Server (NTRS)
Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle;
2016-01-01
The X-ray Integral Field Unit (X-IFU) microcalorimeter, on-board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance-space analysis, which also features very promising high-count-rate capabilities.
Chopped random-basis quantum optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
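A hedged, generic sketch of the CRAB idea on a toy two-level system (not the time-dependent DMRG setting of the paper): the control pulse is expanded in a few randomly detuned Fourier components and the expansion coefficients are optimized with a derivative-free simplex search; every parameter below is illustrative.

```python
# Hedged sketch of chopped random-basis (CRAB) style optimal control on a toy
# two-level system: the control is a sum of randomly detuned Fourier modes and
# the mode coefficients are optimized with Nelder-Mead. Illustrative only.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)          # start in |0>
target = np.array([0, 1], dtype=complex)        # want |1>
T, nsteps, ncomp = 3.0, 100, 4                  # horizon, time steps, basis size
t = np.linspace(0, T, nsteps)
rng = np.random.default_rng(2)
freqs = (np.arange(1, ncomp + 1) + rng.uniform(-0.5, 0.5, ncomp)) * np.pi / T  # "chopped" random frequencies

def pulse(coeffs):
    a, b = coeffs[:ncomp], coeffs[ncomp:]
    return sum(a[k] * np.sin(freqs[k] * t) + b[k] * np.cos(freqs[k] * t) for k in range(ncomp))

def infidelity(coeffs):
    u, psi, dt = pulse(coeffs), psi0.copy(), T / nsteps
    for uk in u:                                # piecewise-constant propagation
        psi = expm(-1j * (sz + uk * sx) * dt) @ psi
    return 1.0 - abs(np.vdot(target, psi)) ** 2

x0 = rng.normal(0, 0.5, 2 * ncomp)
res = minimize(infidelity, x0, method="Nelder-Mead", options={"maxiter": 1500})
print("final infidelity:", res.fun)
```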
Strategies for Fermentation Medium Optimization: An In-Depth Review
Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.
2017-01-01
Optimization of the production medium is required to maximize the metabolite yield. This can be achieved by using a wide range of techniques, from the classical “one-factor-at-a-time” approach to modern statistical and mathematical techniques such as artificial neural networks (ANN) and genetic algorithms (GA). Every technique has its own advantages and disadvantages, and despite their drawbacks some techniques are applied to obtain the best results. Using various optimization techniques in combination can also provide the desired results. In this article an attempt has been made to review the media optimization techniques currently applied during fermentation for metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been carried out, and a logical basis for selecting the design of the fermentation medium is given in the present review. Overall, this review provides the rationale for the selection of a suitable optimization technique for media design employed during the fermentation process of metabolite production. PMID:28111566
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
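A minimal sketch of the DCT-quantization step that such a perceptually derived matrix would feed into; the baseline JPEG luminance table is used here only as a stand-in for a viewer-optimized matrix, and the perceptual model itself is not reproduced.

```python
# Hedged sketch of DCT quantization as used in JPEG-style compression: an
# 8x8 image block is transformed, divided by a quantization matrix (here the
# baseline JPEG luminance table as a stand-in for a viewer-optimized matrix),
# rounded, and reconstructed. The perceptual-model step itself is not shown.
import numpy as np
from scipy.fft import dctn, idctn

Q_LUMA = np.array([                       # baseline JPEG luminance table
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def compress_block(block, Q):
    coeffs = dctn(block - 128.0, norm="ortho")      # level-shift then 2-D DCT
    return np.round(coeffs / Q)                     # perceptually weighted quantization

def decompress_block(quantized, Q):
    return idctn(quantized * Q, norm="ortho") + 128.0

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
rec = decompress_block(compress_block(block, Q_LUMA), Q_LUMA)
print("max abs error:", np.abs(rec - block).max())
```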
Control theory based airfoil design using the Euler equations
NASA Technical Reports Server (NTRS)
Jameson, Antony; Reuther, James
1994-01-01
This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using the potential flow equation with either a conformal mapping or a general coordinate system. The goal of our present work is to extend the development to treat the Euler equations in two-dimensions by procedures that can readily be generalized to treat complex shapes in three-dimensions. Therefore, we have developed methods which can address airfoil design through either an analytic mapping or an arbitrary grid perturbation method applied to a finite volume discretization of the Euler equations. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented for both the inverse problem and drag minimization problem.
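The coupling described above, in which an inexpensive (adjoint-style) gradient is handed to a standard numerical optimizer, can be sketched generically as follows; the quadratic misfit below is a placeholder for the flow-solver objective, not the authors' formulation.

```python
# Hedged illustration of the coupling used here: an "adjoint-style" routine
# supplies cheap gradients of the objective, which are handed to a standard
# gradient-based optimizer. The objective below is a toy misfit, not a flow solver.
import numpy as np
from scipy.optimize import minimize

target = np.linspace(0.0, 1.0, 20)              # toy target speed/pressure distribution

def objective_and_gradient(shape_params):
    # In the real method the gradient comes from solving an adjoint problem;
    # here it is just the analytic gradient of a quadratic misfit.
    misfit = shape_params - target
    return 0.5 * float(misfit @ misfit), misfit

res = minimize(objective_and_gradient, x0=np.zeros(20), jac=True, method="L-BFGS-B")
print("converged:", res.success, "objective:", res.fun)
```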
Stark, Michael; Mynbaev, Ospan; Vassilevski, Yuri; Rozenberg, Patrick
2016-01-01
Until today, there is no standardized Cesarean Section method and many variations exist. The main variations concern the type of abdominal incision, usage of abdominal packs, suturing the uterus in one or two layers, and suturing the peritoneal layers or leaving them open. One of the questions is the optimal location of opening the uterus. Recently, omission of the bladder flap was recommended. The anatomy and histology as results from the embryological knowledge might help to solve this question. The working thesis is that the higher the incision is done, the more damage to muscle tissue can take place contrary to incision in the lower segment, where fibrous tissue prevails. In this perspective, a call for participation in a two-armed prospective study is included, which could result in an optimal, evidence-based Cesarean Section for universal use. PMID:28078171
Control theory based airfoil design for potential flow and a finite volume discretization
NASA Technical Reports Server (NTRS)
Reuther, J.; Jameson, A.
1994-01-01
This paper describes the implementation of optimization techniques based on control theory for airfoil design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. The goal of our present work is to develop a method which does not depend on conformal mapping, so that it can be extended to treat three-dimensional problems. Therefore, we have developed a method which can address arbitrary geometric shapes through the use of a finite volume method to discretize the potential flow equation. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented, where both target speed distributions and minimum drag are used as objective functions.
NASA Astrophysics Data System (ADS)
Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2015-03-01
We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio system network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives the minimum video quality requirements, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved by using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
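A bare-bones sketch of maximizing a Nash product with particle swarm optimization; the concave toy utilities and minimum-quality values below are placeholders for the paper's cognitive-radio/SVC model.

```python
# Hedged sketch of maximizing a Nash product with a bare-bones particle swarm
# optimizer. Utilities are toy concave functions of each user's resource share.
import numpy as np

rng = np.random.default_rng(3)
n_users, n_particles, iters = 4, 30, 200
d_min = np.full(n_users, 0.05)                   # minimum-quality (disagreement) utilities

def utilities(shares):
    shares = np.abs(shares)
    shares = shares / shares.sum()               # resources must sum to the budget
    return np.log1p(5.0 * shares)                # toy concave rate/quality curve

def nash_product(x):
    gains = utilities(x) - d_min
    return np.prod(gains) if np.all(gains > 0) else -np.inf

pos = rng.uniform(0.1, 1.0, size=(n_particles, n_users))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([nash_product(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([nash_product(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("optimal shares:", np.abs(gbest) / np.abs(gbest).sum())
```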
Further developments in the controlled growth approach for optimal structural synthesis
NASA Technical Reports Server (NTRS)
Hajela, P.
1982-01-01
It is pointed out that the use of nonlinear programming methods in conjunction with finite element and other discrete analysis techniques has provided a powerful tool in the domain of optimal structural synthesis. The present investigation is concerned with new strategies that comprise an extension to the controlled growth method considered by Hajela and Sobieski-Sobieszczanski (1981). This method proposed an approach wherein the standard nonlinear programming (NLP) methodology of working with a very large number of design variables was replaced by a sequence of smaller optimization cycles, each involving a single 'dominant' variable. The current investigation outlines some new features. Attention is given to a modified cumulative constraint representation that is defined in both the feasible and infeasible domains of the design space. Other new features are related to the evaluation of the 'effectiveness measure' on which the choice of the dominant variable and the linking strategy is based.
Optimized MLAA for quantitative non-TOF PET/MR of the brain
NASA Astrophysics Data System (ADS)
Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan
2016-12-01
For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generation of the attenuation map, but in PET/MR the MR images do not have a similarly simple relationship with the attenuation map. Hence attenuation correction in PET/MR systems is more challenging. Typically either of two MR sequences is used: the Dixon or the ultra-short echo time (UTE) technique. However, these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR using information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an {αj} parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because the non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. The proposed algorithm is compared to common reconstruction methods such as OSEM using a CT attenuation map, considered as the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and the reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients and 1 [18F]FET patient is used. The results show that, by choosing an optimized value of {αj} in MLTR, the proposed algorithm improves the results compared to the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or the UTE attenuation maps), and the cross-talk and scale problems are limited.
Strong stabilization servo controller with optimization of performance criteria.
Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor
2011-07-01
Synthesis of a simple robust controller with a pole placement technique and an H(∞) metric is the method used for control of a servo mechanism with BLDC and BDC electric motors. The method includes solving a polynomial equation on the basis of the chosen characteristic polynomial using the Manabe standard polynomial form and parametric solutions. Parametric solutions are introduced directly into the structure of the servo controller. On the basis of the chosen parametric solutions, the robustness of the closed-loop system is assessed through uncertainty models and assessment of the norm ‖•‖(∞). The design procedure and the optimization are performed with a genetic algorithm, differential evolution (DE). The DE optimization method determines a suboptimal solution throughout the optimization on the basis of a spectrally square polynomial and Šiljak's absolute stability test. The stability of the designed controller during the optimization is checked with Lipatov's stability condition. Both utilized approaches, Šiljak's test and Lipatov's condition, check the robustness and stability characteristics on the basis of the polynomial coefficients, and are very convenient for automated design of closed-loop control and for application in optimization algorithms such as DE. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
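For readers unfamiliar with the underlying iteration, a generic SOR sweep (ω = 1 recovers Gauss-Seidel) applied to a simple diagonally dominant system is sketched below; the radiative-transfer operators of the paper are not reproduced.

```python
# Hedged illustration of successive over-relaxation (SOR) on a generic linear
# system Ax = b; omega = 1 recovers Gauss-Seidel. The non-LTE transfer operator
# of the paper is replaced here by a 1-D discrete Laplacian.
import numpy as np

def sor(A, b, omega=1.0, tol=1e-8, max_iter=20_000):
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, it + 1
    return x, max_iter

n = 50
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
b = np.ones(n)
for omega in (1.0, 1.9):                       # Gauss-Seidel vs. over-relaxed
    _, iters = sor(A, b, omega=omega)
    print(f"omega={omega}: converged in {iters} iterations")
```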
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guida, K; Qamar, K; Thompson, M
Purpose: The RTOG 1005 trial offered a hypofractionated arm in delivering WBRT+SIB. Traditionally, treatments were planned at our institution using field-in-field (FiF) tangents with a concurrent 3D conformal boost. With the availability of VMAT, it is possible that a hybrid VMAT-3D planning technique could provide another avenue in treating WBRT+SIB. Methods: A retrospective study of nine patients previously treated using RTOG 1005 guidelines was performed to compare FiF+3D plans with the hybrid technique. A combination of static tangents and partial VMAT arcs were used in base-dose optimization. The hybrid plans were optimized to deliver 4005cGy to the breast PTVeval and 4800cGy to the lumpectomy PTVeval over 15 fractions. Plans were optimized to meet the planning goals dictated by RTOG 1005. Results: Hybrid plans yielded similar coverage of breast and lumpectomy PTVs (average D95 of 4013cGy compared to 3990cGy for conventional), while reducing the volume of high dose within the breast; the average D30 and D50 for the hybrid technique were 4517cGy and 4288cGy, compared to 4704cGy and 4377cGy for conventional planning. Hybrid plans increased conformity as well, yielding CI95% values of 1.22 and 1.54 for breast and lumpectomy PTVeval volumes; in contrast, conventional plans averaged 1.49 and 2.27, respectively. The nearby organs at risk (OARs) received more low dose with the hybrid plans due to low dose spray from the partial arcs, but all hybrid plans did meet the acceptable constraints, at a minimum, from the protocol. Treatment planning time was also reduced, as plans were inversely optimized (VMAT) rather than forward optimized. Conclusion: Hybrid-VMAT could be a solution in delivering WB+SIB, as plans yield very conformal treatment plans and maintain clinical standards in OAR sparing. For treating breast cancer patients with a simultaneously-integrated boost, Hybrid-VMAT offers superiority in dosimetric conformity and planning time as compared to FiF techniques.
Luiz Oenning, Anderson; Lopes, Daniela; Neves Dias, Adriana; Merib, Josias; Carasek, Eduardo
2017-11-01
In this study, the viability of two membrane-based microextraction techniques for the determination of endocrine disruptors by high-performance liquid chromatography with diode array detection was evaluated: hollow fiber microporous membrane liquid-liquid extraction and hollow-fiber-supported dispersive liquid-liquid microextraction. The extraction efficiencies obtained for methylparaben, ethylparaben, bisphenol A, benzophenone, and 2-ethylhexyl-4-methoxycinnamate from aqueous matrices obtained using both approaches were compared and showed that hollow fiber microporous membrane liquid-liquid extraction exhibited higher extraction efficiency for most of the compounds studied. Therefore, a detailed optimization of the extraction procedure was carried out with this technique. The optimization of the extraction conditions and liquid desorption were performed by univariate analysis. The optimal conditions for the method were supported liquid membrane with 1-octanol for 10 s, sample pH 7, addition of 15% w/v of NaCl, extraction time of 30 min, and liquid desorption in 150 μL of acetonitrile/methanol (50:50 v/v) for 5 min. The linear correlation coefficients were higher than 0.9936. The limits of detection were 0.5-4.6 μg/L and the limits of quantification were 2-16 μg/L. The analyte relative recoveries were 67-116%, and the relative standard deviations were less than 15.5%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
Precision calibration of the silicon doping level in gallium arsenide epitaxial layers
NASA Astrophysics Data System (ADS)
Mokhov, D. V.; Berezovskaya, T. N.; Kuzmenkov, A. G.; Maleev, N. A.; Timoshnev, S. N.; Ustinov, V. M.
2017-10-01
An approach to precision calibration of the silicon doping level in gallium arsenide epitaxial layers is discussed that is based on studying the dependence of the carrier density in the test GaAs layer on the silicon-source temperature using the Hall-effect and CV profiling techniques. The parameters are measured by standard or certified measuring techniques and approved measuring instruments. It is demonstrated that the use of CV profiling for controlling the carrier density in the test GaAs layer, together with thorough optimization of the measuring procedure, ensures the highest accuracy and reliability of doping level calibration in the epitaxial layers with a relative error of no larger than 2.5%.
Summary of Optimization Techniques That Can Be Applied to Suspension System Design
DOT National Transportation Integrated Search
1973-03-01
Summaries are presented of the analytic techniques available for three levitated vehicle suspension optimization problems: optimization of passive elements for fixed configuration; optimization of a free passive configuration; optimization of a free ...
Jiang, Yuan; Liese, Eric; Zitney, Stephen E.; ...
2018-02-25
This paper presents a baseline design and optimization approach developed in Aspen Custom Modeler (ACM) for microtube shell-and-tube exchangers (MSTEs) used for high- and low-temperature recuperation in a 10 MWe indirect supercritical carbon dioxide (sCO2) recompression closed Brayton cycle (RCBC). The MSTE-type recuperators are designed using one-dimensional models with thermal-hydraulic correlations appropriate for sCO2 and property models that capture considerable nonlinear changes in CO2 properties near the critical and pseudo-critical points. Using the successive quadratic programming (SQP) algorithm in ACM, optimal recuperator designs are obtained for either custom or industry-standard microtubes considering constraints based on current advanced manufacturing techniques. The three decision variables are the number of tubes, tube pitch-to-diameter ratio, and tube diameter. Five different objective functions based on different key design measures are considered: minimization of total heat transfer area, heat exchanger volume, metal weight, thermal residence time, and maximization of compactness. Sensitivity studies indicate that the constraint on the maximum number of tubes per shell does affect the number of parallel heat exchanger trains but not the tube selection, total number of tubes, tube length, and other key design measures in the final optimal design when considering industry-standard tubes. In this study, the optimally designed high- and low-temperature recuperators have 47,000 3/32 inch tubes and 63,000 1/16 inch tubes, respectively. In addition, sensitivities to the design temperature approach and maximum allowable pressure drop are studied, since these specifications significantly impact the optimal design of the recuperators as well as the thermal efficiency and the economic performance of the entire sCO2 Brayton cycle.
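A heavily hedged toy version of such a constrained sizing problem, using SciPy's SLSQP (an SQP method) in place of the ACM implementation; the surrogate objective, constraints, and numbers below are placeholders, not the paper's model.

```python
# Toy recuperator-sizing sketch: SLSQP minimizes a bundle-volume proxy subject
# to placeholder area and flow-area constraints. All relations are illustrative.
import numpy as np
from scipy.optimize import minimize

A_REQ = 100.0                       # required heat-transfer area, m^2 (placeholder)
FLOW_AREA_REQ = 0.01                # required total tube flow area, m^2 (placeholder)
PITCH_RATIO = 1.25

# Decision variables are scaled for the optimizer:
#   x = (thousands of tubes, tube diameter in mm, tube length in m)
def volume(x):
    n_k, d_mm, L = x
    return n_k * 1e3 * (PITCH_RATIO * d_mm * 1e-3) ** 2 * L   # bundle-volume proxy, m^3

cons = [
    {"type": "ineq", "fun": lambda x: np.pi * x[1] * 1e-3 * x[2] * x[0] * 1e3 - A_REQ},
    {"type": "ineq", "fun": lambda x: x[0] * 1e3 * np.pi * (x[1] * 1e-3) ** 2 / 4 - FLOW_AREA_REQ},
]
bounds = [(1.0, 60.0), (1.0, 5.0), (0.3, 3.0)]
x0 = np.array([20.0, 2.0, 1.0])

res = minimize(volume, x0, method="SLSQP", bounds=bounds, constraints=cons)
n_k, d_mm, L = res.x
print(f"tubes ≈ {n_k*1e3:.0f}, diameter = {d_mm:.2f} mm, length = {L:.2f} m")
```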
Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping
2016-02-11
Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains as a time and labor-intensive task particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART requires firstly a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor was also employed for comparison, and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach among different instrumental platforms. This approach was further validated by applying to simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach are expected to find its wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
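The core idea of picking an optimal collision energy from stepped acquisitions can be sketched very simply: for each transition, choose the CE step with the largest extracted-ion response; the values below are invented.

```python
# Hedged sketch of choosing an optimal collision energy (CE) from stepped
# acquisitions: for a given precursor/product ion pair, pick the CE step whose
# extracted-ion intensity is highest. Intensities are invented.
import numpy as np

ce_steps = np.arange(10, 55, 5)                       # eV, stepped from low to high
# extracted-ion-chromatogram peak areas for one transition at each CE step
xic_area = np.array([1.2e4, 4.5e4, 9.8e4, 1.6e5, 2.1e5, 1.9e5, 1.4e5, 8.0e4, 3.5e4])

best = ce_steps[int(np.argmax(xic_area))]
print(f"predicted optimal CE for this transition: {best} eV")
```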
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10^-7 for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
Role of slack variables in quasi-Newton methods for constrained optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapia, R.A.
In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
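For reference, the squared-slack conversion discussed above can be written out explicitly (a standard textbook identity, not taken from this report):

```latex
% Squared-slack conversion of an inequality constraint (standard textbook form):
\begin{align*}
  \min_{x}\; f(x) \quad \text{s.t.}\; g(x) \le 0
  \quad\Longleftrightarrow\quad
  \min_{x,\,s}\; f(x) \quad \text{s.t.}\; g(x) + s^{2} = 0 ,
\end{align*}
% with Lagrangian L(x,s,\lambda) = f(x) + \lambda\,(g(x) + s^{2}).
% Stationarity in s gives 2\lambda s = 0, and since g(x) = -s^{2} this
% recovers the complementarity condition \lambda\, g(x) = 0 of the original
% inequality-constrained problem.
```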
Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert
2016-01-01
Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. PMID:26804778
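A hedged sketch of the Bayesian-optimization loop in its generic form (Gaussian-process surrogate plus expected improvement) on a one-dimensional toy "stimulus" parameter; the response function, kernel, and budget below are illustrative and unrelated to the actual fMRI pipeline.

```python
# Hedged sketch of the Bayesian-optimization idea: a Gaussian process models
# the (unknown) stimulus -> brain-response mapping, and expected improvement
# picks the next stimulus to present. The "response" here is a toy function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def response(x):                                # stand-in for the measured neural response
    return np.exp(-(x - 0.62) ** 2 / 0.02) + np.random.normal(0, 0.05)

def expected_improvement(mu, sigma, best, xi=0.01):
    z = (mu - best - xi) / np.maximum(sigma, 1e-9)
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))              # a few initial stimuli
y = np.array([response(x[0]) for x in X])
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(12):                             # sequential "runs"
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[int(np.argmax(expected_improvement(mu, sigma, y.max())))]
    X = np.vstack([X, x_next])
    y = np.append(y, response(x_next[0]))

print("estimated optimal stimulus parameter:", float(X[np.argmax(y)][0]))
```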
Outcome of Vaginoplasty in Male-to-Female Transgenders: A Systematic Review of Surgical Techniques.
Horbach, Sophie E R; Bouman, Mark-Bram; Smit, Jan Maerten; Özer, Müjde; Buncamper, Marlon E; Mullender, Margriet G
2015-06-01
Gender reassignment surgery is the keystone of the treatment of transgender patients. For male-to-female transgenders, this involves the creation of a neovagina. Many surgical methods for vaginoplasty have been opted. The penile skin inversion technique is the method of choice for most gender surgeons. However, the optimal surgical technique for vaginoplasty in transgender women has not yet been identified, as outcomes of the different techniques have never been compared. With this systematic review, we aim to give a detailed overview of the published outcomes of all currently available techniques for vaginoplasty in male-to-female transgenders. A PubMed and EMBASE search for relevant publications (1995-present), which provided data on the outcome of techniques for vaginoplasty in male-to-female transgender patients. Main outcome measures are complications, neovaginal depth and width, sexual function, patient satisfaction, and improvement in quality of life (QoL). Twenty-six studies satisfied the inclusion criteria. The majority of these studies were retrospective case series of low to intermediate quality. Outcome of the penile skin inversion technique was reported in 1,461 patients, bowel vaginoplasty in 102 patients. Neovaginal stenosis was the most frequent complication in both techniques. Sexual function and patient satisfaction were overall acceptable, but many different outcome measures were used. QoL was only reported in one study. Comparison between techniques was difficult due to the lack of standardization. The penile skin inversion technique is the most researched surgical procedure. Outcome of bowel vaginoplasty has been reported less frequently but does not seem to be inferior. The available literature is heterogeneous in patient groups, surgical procedure, outcome measurement tools, and follow-up. Standardized protocols and prospective study designs are mandatory for correct interpretation and comparability of data. © 2015 International Society for Sexual Medicine.
Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption
Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-01-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
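As a minimal illustration of the sparse-reconstruction idea, the sketch below runs an iterative soft-thresholding (ISTA-style) reconstruction from randomly undersampled Fourier data, assuming the image itself is sparse; real MRI reconstructions use wavelet or total-variation sparsity, coil sensitivities, and carefully tuned parameters.

```python
# Hedged sketch of a compressed-sensing-style reconstruction from undersampled
# Fourier ("k-space") data: iterative soft-thresholding with a data-consistency
# gradient step, assuming the image is sparse in the image domain itself.
import numpy as np

rng = np.random.default_rng(0)
N = 64
image = np.zeros((N, N))
image[rng.integers(0, N, 40), rng.integers(0, N, 40)] = 1.0   # sparse "phantom"

mask = rng.random((N, N)) < 0.35                 # random 35% k-space sampling
kspace = np.fft.fft2(image) * mask               # undersampled measurements

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

recon = np.zeros((N, N))
for _ in range(200):                             # ISTA-style iterations
    # gradient step toward data consistency (step size 1/N^2 matches the FFT scaling)
    resid = np.fft.fft2(recon) * mask - kspace
    recon = recon - np.real(np.fft.ifft2(resid))
    recon = soft(recon, 0.02)                    # sparsity-promoting shrinkage

print("relative reconstruction error:", np.linalg.norm(recon - image) / np.linalg.norm(image))
```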
Mohammed, Nazmi A; Solaiman, Mohammad; Aly, Moustafa H
2014-10-10
In this work, various dispersion compensation methods are designed and evaluated to search for a cost-effective technique with remarkable dispersion compensation and a good pulse shape. The techniques consist of different chirp functions applied to a tanh fiber Bragg grating (FBG), a dispersion compensation fiber (DCF), and a DCF merged with an optimized linearly chirped tanh FBG (joint technique). The techniques are evaluated using a standard 10 Gb/s optical link over a 100 km long haul. The linear chirp function is the most appropriate choice of chirping function, giving a pulse width reduction percentage (PWRP) of 75.15% at a lower cost, but with a poor pulse shape. The DCF yields an enhanced PWRP of 93.34% with a better pulse quality; however, it is the most costly of the evaluated techniques. Finally, the joint technique achieved the optimum PWRP (96.36%) among all the evaluated techniques and exhibited a remarkable pulse shape; it is less costly than the DCF, but more expensive than the chirped tanh FBG.
Simultaneous detection of resolved glutamate, glutamine, and γ-aminobutyric acid at 4 T
NASA Astrophysics Data System (ADS)
Hu, Jiani; Yang, Shaolin; Xuan, Yang; Jiang, Quan; Yang, Yihong; Haacke, E. Mark
2007-04-01
A new approach is introduced to simultaneously detect resolved glutamate (Glu), glutamine (Gln), and γ-aminobutyric acid (GABA) using a standard STEAM localization pulse sequence with the optimized sequence timing parameters. This approach exploits the dependence of the STEAM spectra of the strongly coupled spin systems of Glu, Gln, and GABA on the echo time TE and the mixing time TM at 4 T to find an optimized sequence parameter set, i.e., {TE, TM}, where the outer-wings of the Glu C4 multiplet resonances around 2.35 ppm, the Gln C4 multiplet resonances around 2.45 ppm, and the GABA C2 multiplet resonance around 2.28 ppm are significantly suppressed and the three resonances become virtual singlets simultaneously and thus resolved. Spectral simulation and optimization were conducted to find the optimized sequence parameters, and phantom and in vivo experiments (on normal human brains, one patient with traumatic brain injury, and one patient with brain tumor) were carried out for verification. The results have demonstrated that the Gln, Glu, and GABA signals at 2.2-2.5 ppm can be well resolved using a standard STEAM sequence with the optimized sequence timing parameters around {82 ms, 48 ms} at 4 T, while the other main metabolites, such as N-acetyl aspartate (NAA), choline (tCho), and creatine (tCr), are still preserved in the same spectrum. The technique can be easily implemented and should prove to be a useful tool for the basic and clinical studies associated with metabolism of Glu, Gln, and/or GABA.
Scaling maximal oxygen uptake to predict performance in elite-standard men cross-country skiers.
Carlsson, Tomas; Carlsson, Magnus; Felleki, Majbritt; Hammarström, Daniel; Heil, Daniel; Malm, Christer; Tonkonogi, Michail
2013-01-01
The purpose of this study was to: 1) establish the optimal body-mass exponent for maximal oxygen uptake (VO2max) to indicate performance in elite-standard men cross-country skiers; and 2) evaluate the influence of course inclination on the body-mass exponent. Twelve elite-standard men skiers completed an incremental treadmill roller-skiing test to determine VO2max, and performance data came from the 2008 Swedish National Championship 15-km classic-technique race. Log-transformation of power-function models was used to predict skiing speeds. The optimal models were found to be: Race speed = 7.86 · VO2max · m^(−0.48) and Section speed = 5.96 · VO2max · m^(−(0.38 + 0.03·α)) · e^(−0.003·Δ) (where m is body mass, α is the section's inclination, and Δ is the altitude difference of the previous section), which explained 68% and 84% of the variance in skiing speed, respectively. A body-mass exponent of 0.48 (95% confidence interval: 0.19 to 0.77) best described VO2max as an indicator of performance in elite-standard men skiers. The confidence interval did not support the use of either "1" (simple ratio-standard scaled) or "0" (absolute expression) as body-mass exponents for expressing VO2max as an indicator of performance. Moreover, the results suggest that course inclination increases the body-mass exponent for VO2max.
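The log-transformed power-function fit described above can be reproduced in outline with ordinary least squares on synthetic data; the sketch below is illustrative only (invented masses, VO2max values, and noise), and the recovered body-mass exponent is simply the coefficient of log(m).

```python
# Illustrative log-linear fit of a power-function performance model
# speed = c * VO2max^a * m^b  (synthetic data; not the study's data set).
import numpy as np

rng = np.random.default_rng(1)
n = 12                                   # e.g., twelve skiers
m = rng.uniform(65, 90, n)               # body mass [kg]
vo2max = rng.uniform(4.5, 6.5, n)        # absolute VO2max [L/min]
true_c, true_a, true_b = 7.9, 1.0, -0.48
speed = true_c * vo2max**true_a * m**true_b * np.exp(rng.normal(0, 0.02, n))

# Log-transform: log(speed) = log(c) + a*log(VO2max) + b*log(m)
X = np.column_stack([np.ones(n), np.log(vo2max), np.log(m)])
coef, *_ = np.linalg.lstsq(X, np.log(speed), rcond=None)
log_c, a_hat, b_hat = coef
print(f"c = {np.exp(log_c):.2f}, VO2max exponent = {a_hat:.2f}, "
      f"body-mass exponent = {b_hat:.2f}")
```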
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures. It motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such approach is a homotopy method based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
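A minimal sketch of the kind of "regular"/"severe" patient allocation problem described above, posed as a linear program, is given below. All coefficients (benefit, time, and cost per patient, and the resource limits) are invented for illustration and are not taken from the report.

```python
# Hypothetical constrained-optimization example: choose how many "regular" and
# "severe" patients to treat to maximize health benefit under time and budget limits.
from scipy.optimize import linprog

benefit = [2.0, 5.0]        # health benefit per regular / severe patient (invented)
time_hr = [1.0, 3.0]        # clinician hours required per patient (invented)
cost_k  = [0.5, 2.0]        # cost in $1000 per patient (invented)
max_hours, max_budget = 400.0, 300.0

# linprog minimizes, so negate the benefit coefficients to maximize benefit.
res = linprog(c=[-b for b in benefit],
              A_ub=[time_hr, cost_k],
              b_ub=[max_hours, max_budget],
              bounds=[(0, None), (0, None)],
              method="highs")
print("patients treated (regular, severe):", res.x, "total benefit:", -res.fun)
```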
NASA Astrophysics Data System (ADS)
Jolivet, S.; Mezghani, S.; El Mansori, M.
2016-09-01
The replication of topography has been generally restricted to optimizing material processing technologies in terms of statistical and single-scale features such as roughness. By contrast, manufactured surface topography is highly complex, irregular, and multiscale. In this work, we have demonstrated the use of multiscale analysis on replicates of surface finish to assess the precise control of the finished replica. Five commercial resins used for surface replication were compared. The topography of five standard surfaces representative of common finishing processes was acquired both directly and by a replication technique. Then, they were characterized using the ISO 25178 standard and multiscale decomposition based on a continuous wavelet transform, to compare the roughness transfer quality at different scales. Additionally, the atomic force microscope force modulation mode was used in order to compare the resins’ stiffness properties. The results showed that less stiff resins are able to replicate the surface finish along a larger wavelength band. The method was then tested for non-destructive quality control of automotive gear tooth surfaces.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
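A heavily simplified analogue of the interval-style prediction described in these two abstracts — fit a polynomial mean and the smallest constant band that contains every observation — can be posed as a linear program. The sketch below is a toy illustration with synthetic data, not the authors' RPM formulations.

```python
# Simplified "tightest enclosing band" predictor: fit a polynomial mean and the
# smallest half-width r such that every observation lies within +/- r of it.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 40)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.uniform(-0.2, 0.2, x.size)

deg = 2
Phi = np.vander(x, deg + 1, increasing=True)          # columns: 1, x, x^2
n, p = Phi.shape

# Decision variables z = [theta_0 .. theta_deg, r]; objective: minimize r.
c = np.zeros(p + 1); c[-1] = 1.0
A_ub = np.vstack([np.hstack([ Phi, -np.ones((n, 1))]),   #  Phi@theta - r <= y
                  np.hstack([-Phi, -np.ones((n, 1))])])  # -Phi@theta - r <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
theta, r = res.x[:p], res.x[-1]
print("mean-model coefficients:", theta, "band half-width:", r)
```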
Partial Thickness Rotator Cuff Tears: Current Concepts
Matthewson, Graeme; Beach, Cara J.; Nelson, Atiba A.; Woodmass, Jarret M.; Ono, Yohei; Boorman, Richard S.; Lo, Ian K. Y.; Thornton, Gail M.
2015-01-01
Partial thickness rotator cuff tears are a common cause of pain in the adult shoulder. Despite their high prevalence, the diagnosis and treatment of partial thickness rotator cuff tears remains controversial. While recent studies have helped to elucidate the anatomy and natural history of disease progression, the optimal treatment, both nonoperative and operative, is unclear. Although the advent of arthroscopy has improved the accuracy of the diagnosis of partial thickness rotator cuff tears, the number of surgical techniques used to repair these tears has also increased. While multiple repair techniques have been described, there is currently no significant clinical evidence supporting more complex surgical techniques over standard rotator cuff repair. Further research is required to determine the clinical indications for surgical and nonsurgical management, when formal rotator cuff repair is specifically indicated and when biologic adjunctive therapy may be utilized. PMID:26171251
Design of optimal groundwater remediation systems under flexible environmental-standard constraints.
Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing
2015-01-01
In developing optimal groundwater remediation strategies, limited effort has been exerted to address uncertainty in environmental quality standards. When such uncertainty is not considered, either overly optimistic or overly pessimistic optimization strategies may be developed, probably leading to the formulation of rigid remediation strategies. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. This approach not only prevents the formulation of overly optimistic and overly pessimistic optimization strategies but also provides a satisfaction level that indicates the degree to which the environmental quality standard is satisfied. The approach may therefore be expected to be significantly more acceptable to the decision maker than approaches that do not consider standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations can always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increasingly flexible environmental standards have a reduced effect on the optimal remediation strategy.
2014-01-22
FTEs would need to be reduced by 12.16, expenses reduced by $184.63K, and the number of interns adjusted by 7.67. The reference set for DMU D includes...efficiency: concept, measurement techniques and review of hospital efficiency studies. Malaysian Journal of Public Health Medicine, 10(2), 35-43...
Using Genotype Abundance to Improve Phylogenetic Inference
Mesin, Luka; Victora, Gabriel D; Minin, Vladimir N; Matsen, Frederick A
2018-01-01
Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. PMID:29474671
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1976-01-01
Atmospheric sampling has been carried out by flights using an available high-performance supersonic aircraft. Altitude potential of an off-the-shelf F-15 aircraft is examined. It is shown that the standard F-15 has a maximum altitude capability in excess of 100,000 feet for routine flight operation by NASA personnel. This altitude is well in excess of the minimum altitudes which must be achieved for monitoring the possible growth of suspected aerosol contaminants.
Cost considerations in selecting coronary artery revascularization therapy in the elderly.
Maziarz, David M; Koutlas, Theodore C
2004-01-01
This article presents some of the cost factors involved in selecting coronary artery revascularization therapy in an elderly patient. With the percentage of gross national product allocated to healthcare continuing to rise in the US, resource allocation has become an issue. Percutaneous coronary intervention continues to be a viable option for many patients, with lower initial costs. However, long-term angina-free results often require further interventions or eventual surgery. Once coronary artery revascularization therapy is selected, it is worthwhile to evaluate the cost considerations inherent to various techniques. Off-pump coronary artery bypass graft surgery has seen a resurgence, with improved technology and lower hospital costs than on-pump bypass surgery. Numerous factors contributing to cost in coronary surgery have been studied and several are documented here, including the potential benefits of early extubation and the use of standardized optimal care pathways. A wide range of hospital-level cost variation has been noted, and standardization issues remain. With the advent of advanced computer-assisted robotic techniques, a push toward totally endoscopic bypass surgery has begun, with the eventual hope of reducing hospital stays to a minimum while maximizing outcomes, thus reducing intensive care unit and stepdown care times, which contribute a great deal toward overall cost. At the present time, these techniques add a significant premium to hospital charges, outweighing any potential length-of-stay benefits from a cost standpoint. As our elderly population continues to grow, use of healthcare resource dollars will continue to be heavily scrutinized. Although the clinical outcome remains the ultimate benchmark, cost containment and optimization of resources will take on a larger role in the future. Copyright 2004 Adis Data Information BV
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling. This results in a more accurate particle-size distribution and particle injection height estimation when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation for the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
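The estimation idea — adjust uncertain source-term parameters until model-predicted concentrations best match the measurements in a least-squares sense — can be sketched as follows. The forward model here is a crude synthetic stand-in for a dispersion code (all parameters and data are invented), not ADPIC or the ARAC models.

```python
# Toy non-linear least-squares estimation of source parameters from sampled
# concentrations; the forward model is a simple stand-in, not ADPIC/ARAC.
import numpy as np
from scipy.optimize import least_squares

def toy_dispersion(params, x):
    """Predicted ground-level concentration at downwind distances x."""
    strength, height, spread = params
    sigma_z = spread * x                      # plume spread grows with distance
    return strength / (x * sigma_z) * np.exp(-height**2 / (2.0 * sigma_z**2))

rng = np.random.default_rng(3)
x_obs = np.linspace(0.5, 10.0, 15)            # sampler distances (arbitrary units)
true = (5.0, 2.0, 0.4)                        # strength, injection height, spread
c_obs = toy_dispersion(true, x_obs) * np.exp(rng.normal(0, 0.1, x_obs.size))

def residuals(params):
    # Log residuals keep the fit balanced over several orders of magnitude.
    return np.log(toy_dispersion(params, x_obs)) - np.log(c_obs)

fit = least_squares(residuals, x0=(1.0, 1.0, 0.2),
                    bounds=([0.1, 0.0, 0.05], [100.0, 10.0, 2.0]))
print("estimated (strength, height, spread):", fit.x)
```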
Gentilini, Fabio; Turba, Maria E
2014-01-01
A novel technique, called Divergent, for single-tube real-time PCR genotyping of point mutations without the use of fluorescently labeled probes has recently been reported. This novel PCR technique utilizes a set of four primers and a particular denaturation temperature for simultaneously amplifying two different amplicons which extend in opposite directions from the point mutation. The two amplicons can readily be detected using melt curve analysis downstream of a closed-tube real-time PCR. In the present study, some critical aspects of the original method were specifically addressed to further implement the technique for genotyping the DNM1 c.G767T mutation responsible for exercise-induced collapse in Labrador retriever dogs. The improved Divergent assay was easily set up using a standard two-step real-time PCR protocol. The melting temperature difference between the mutated and the wild-type amplicons was approximately 5°C, which could be promptly detected by all the thermal cyclers. The upgraded assay yielded accurate results with 157 pg of genomic DNA per reaction. This optimized technique represents a flexible and inexpensive alternative to the minor groove binder fluorescently labeled method and to high resolution melt analysis for high-throughput, robust and cheap genotyping of single nucleotide variations. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rauscher, Bernard J.; Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos
2017-10-01
Near-infrared array detectors, like the Teledyne H2RGs used by the James Webb Space Telescope (JWST) NIRSpec instrument, often provide reference pixels and a reference output. These are used to remove correlated noise. Improved reference sampling and subtraction (IRS2) is a statistical technique for using this reference information optimally in a least-squares sense. Compared with the traditional H2RG readout, IRS2 uses a different clocking pattern to interleave many more reference pixels into the data than is otherwise possible. Compared with standard reference correction techniques, IRS2 subtracts the reference pixels and reference output using a statistically optimized set of frequency-dependent weights. The benefits include somewhat lower noise variance and much less obvious correlated noise. NIRSpec’s IRS2 images are cosmetically clean, with less 1/f banding than in traditional data from the same system. This article describes the IRS2 clocking pattern and presents the equations needed to use IRS2 in systems other than NIRSpec. For NIRSpec, applying these equations is already an option in the calibration pipeline. As an aid to instrument builders, we provide our prototype IRS2 calibration software and sample JWST NIRSpec data. The same techniques are applicable to other detector systems, including those based on Teledyne’s H4RG arrays. The H4RG’s interleaved reference pixel readout mode is effectively one IRS2 pattern.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
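A generic version of a sequential random perturbation ("creeping random") search with box constraints on the parameters might look like the sketch below; the objective function is a toy surrogate, not the beam-transport model described above.

```python
# Generic creeping random search: perturb the current parameter vector, keep the
# move only if it stays feasible and improves the objective. Illustrative only.
import numpy as np

def creeping_random_search(objective, x0, lower, upper,
                           step=0.2, iters=5000, shrink=0.999, seed=0):
    """Sequential random perturbation search with simple box constraints."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    for _ in range(iters):
        trial = np.clip(x + step * rng.standard_normal(x.size), lower, upper)
        f_trial = objective(trial)
        if f_trial < fx:              # accept only improving, feasible moves
            x, fx = trial, f_trial
        else:
            step *= shrink            # gradually "creep" with smaller perturbations
    return x, fx

# Toy objective standing in for a beam-resolution figure of merit.
objective = lambda p: (p[0] - 1.0) ** 2 + 10.0 * (p[1] + 0.5) ** 2
best, best_value = creeping_random_search(objective, x0=[0.0, 0.0],
                                          lower=-2.0, upper=2.0)
print(best, best_value)
```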
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venghaus, Florian; Eisfeld, Wolfgang, E-mail: wolfgang.eisfeld@uni-bielefeld.de
2016-03-21
Robust diabatization techniques are key for the development of high-dimensional coupled potential energy surfaces (PESs) to be used in multi-state quantum dynamics simulations. In the present study we demonstrate that, besides the actual diabatization technique, common problems with the underlying electronic structure calculations can be the reason why a diabatization fails. After giving a short review of the theoretical background of diabatization, we propose a method based on block-diagonalization to analyse the electronic structure data. This analysis tool can be used in three different ways: First, it makes it possible to detect issues with the ab initio reference data and is used to optimize the setup of the electronic structure calculations. Second, the data from the block-diagonalization are utilized for the development of optimal parametrized diabatic model matrices by identifying the most significant couplings. Third, the block-diagonalization data are used to fit the parameters of the diabatic model, which yields an optimal initial guess for the non-linear fitting required by standard or more advanced energy-based diabatization methods. The new approach is demonstrated by the diabatization of 9 electronic states of the propargyl radical, yielding fully coupled full-dimensional (12D) PESs in closed form.
Krüger, Marie T; Coenen, Volker A; Egger, Karl; Shah, Mukesch; Reinacher, Peter C
2018-06-13
In recent years, simulations based on phantom models have become increasingly popular in the medical field. In the field of functional and stereotactic neurosurgery, a cranial phantom would be useful to train operative techniques, such as stereo-electroencephalography (SEEG), to establish new methods as well as to develop and modify radiological techniques. In this study, we describe the construction of a cranial phantom and show examples of its use in stereotactic and functional neurosurgery and of its applicability with different radiological modalities. We prepared a plaster skull filled with agar. A complete operation for deep brain stimulation (DBS) was simulated using directional leads. Moreover, a complete SEEG operation including planning, implantation of the electrodes, and intraoperative and postoperative imaging was simulated. An optimally customized cranial phantom is filled with 10% agar. At 7°C, it can be stored for approximately 4 months. A DBS and an SEEG procedure could be realistically simulated. Lead artifacts can be studied in CT, X-ray, rotational fluoroscopy, and MRI. This cranial phantom is a simple and effective model to simulate functional and stereotactic neurosurgical operations. This might be useful for teaching and training of neurosurgeons, establishing operations in a new center, and for optimization of radiological examinations. © 2018 S. Karger AG, Basel.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
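For readers unfamiliar with PSO, a bare-bones version of the position/velocity update is sketched below on a toy objective; it is not the SAG calibration code, and the inertia and acceleration coefficients are just typical illustrative values.

```python
# Bare-bones Particle Swarm Optimization: each particle is pulled toward its own
# best position and the swarm's best position. Illustrative sketch only.
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize a simple 2-parameter surrogate "misfit" function.
best, val = pso(lambda p: (p[0] - 0.3)**2 + (p[1] + 1.2)**2,
                bounds=[(-5, 5), (-5, 5)])
print(best, val)
```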
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.
Conforto, Egle; Joguet, Nicolas; Buisson, Pierre; Vendeville, Jean-Eudes; Chaigneau, Carine; Maugard, Thierry
2015-02-01
The aim of this paper is to describe an optimized methodology to study the surface characteristics and internal structure of biopolymer capsules using scanning electron microscopy (SEM) in environmental mode. The main advantage of this methodology is that no preparation is required and, significantly, no metallic coverage is deposited on the surface of the specimen, thus preserving the original capsule shape and its surface morphology. This avoids introducing preparation artefacts which could modify the capsule surface and mask information concerning important features like porosity or roughness. Using this method, gelatin and mainly fatty coatings, which are difficult to analyze by the standard SEM technique, unambiguously show fine details of their surface morphology without damage. Furthermore, chemical contrast is preserved in backscattered electron images of unprepared samples, allowing visualization of the internal organization of the capsule, the quality of the envelope, etc. This study provides pointers on how to obtain optimal conditions for the analysis of biological or sensitive material, which is not always studied using appropriate techniques. A reliable evaluation of the parameters used in capsule elaboration for research and industrial applications, as well as of capsule functionality, is provided by this methodology, which is essential for technological progress in this domain. Copyright © 2014 Elsevier B.V. All rights reserved.
Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato
2015-03-08
The objective of this study was to improve the visibility of anatomical details by applying off-line postimage processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were implemented to sample images and their visual appearances confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with other 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The mean and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference of details visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity values adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists.
NASA Astrophysics Data System (ADS)
Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain
2009-09-01
In some recognition applications which require multiple images (facial identification or sign language), many images should be transmitted or stored. This requires the use of communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). In the literature, several encryption and compression techniques can be found. However, in order to use optical correlation, encryption and compression techniques cannot be deployed independently and in a cascaded manner, for two major reasons. First, we cannot simply use these techniques in a cascaded manner without considering the impact of one technique on another. Second, a standard compression can affect the correlation decision, because the correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using a BPOF optimized filter. The main idea of our approach consists in multiplexing the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each of which corresponds to the spectrum of one image. On the other hand, encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys. A random phase key is widely used in optical encryption approaches. Finally, many simulations have been conducted. The obtained results corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantization technique to improve the overall compression rate.
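The multiplexing step described above — assigning each image's DCT spectrum to its own region of a shared spectral plane — can be illustrated schematically as follows. The sketch omits the encryption keys, rotation functions, and BPOF correlation of the cited approach; block size and image sizes are arbitrary choices for the example.

```python
# Schematic DCT-domain multiplexing of several images into one spectral plane:
# each image's low-frequency DCT block occupies its own quadrant.
import numpy as np
from scipy.fft import dctn, idctn

def multiplex(images, keep=64):
    """Pack the low-frequency keep x keep DCT block of each image into quadrants."""
    plane = np.zeros((2 * keep, 2 * keep))
    offsets = [(0, 0), (0, keep), (keep, 0), (keep, keep)]
    for img, (r, c) in zip(images, offsets):
        spec = dctn(img, norm="ortho")[:keep, :keep]     # low-frequency block
        plane[r:r + keep, c:c + keep] = spec
    return plane

def demultiplex(plane, shape, keep=64):
    """Recover approximations of the four images from their quadrants."""
    out, offsets = [], [(0, 0), (0, keep), (keep, 0), (keep, keep)]
    for r, c in offsets:
        full = np.zeros(shape)
        full[:keep, :keep] = plane[r:r + keep, c:c + keep]
        out.append(idctn(full, norm="ortho"))
    return out

imgs = [np.random.default_rng(i).random((128, 128)) for i in range(4)]
plane = multiplex(imgs)
recon = demultiplex(plane, imgs[0].shape)
print("reconstruction error (image 0):", np.abs(recon[0] - imgs[0]).mean())
```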
Algin, Oktay
2018-05-21
Phase-contrast cine magnetic resonance imaging (PC-MRI) is a widely used technique for determination of possible communication of arachnoid cysts (ACs). The three-dimensional (3D) sampling perfection with application-optimized contrasts using different flip-angle evolutions (3D-SPACE) technique is a relatively new method for 3D isotropic scanning of the entire cranium within a short time. In this research, the usage of the 3D-SPACE technique in differentiating communicating from noncommunicating ACs was evaluated. Thirty-five ACs in 34 patients were retrospectively examined. The 3D-SPACE, PC-MRI, and contrast material-enhanced cisternography (if present) images of the patients were analyzed. Each cyst was described according to cyst size/location, third ventricle diameter, Evans index, and presence of hydrocephalus. Communication was defined as absent (score 0), suspected (score 1), or present (score 2) on each sequence. Results of PC-MRI or cisternography (if available) examinations were used as criterion standard techniques to categorize all cysts as communicating or noncommunicating type. The results of 3D-SPACE were compared with the criterion standard techniques. The comparisons between groups were performed using Mann-Whitney and Fisher exact tests. For demonstration of the communication status of the cysts, criterion standard test results and 3D-SPACE findings showed almost perfect agreement (κ [95% confidence interval: 0.94]; P < 0.001). When evaluating the communicative properties, 3D-SPACE findings correlated with the other final results at a rate of 97%. There is a positive correlation between third ventricular diameters and the Evans index for all patients (r = 0.77, P < 0.001). For the other analyzed variables, there is no significant difference or correlation between the groups. The 3D-SPACE technique is an easy, useful, and noninvasive alternative for the evaluation of morphology, topographical relationships, and communication status of ACs.
Isotropic three-dimensional T2 mapping of knee cartilage: Development and validation.
Colotti, Roberto; Omoumi, Patrick; Bonanno, Gabriele; Ledoux, Jean-Baptiste; van Heeswijk, Ruud B
2018-02-01
1) To implement a higher-resolution isotropic 3D T2 mapping technique that uses sequential T2-prepared segmented gradient-recalled echo (Iso3DGRE) images for knee cartilage evaluation, and 2) to validate it both in vitro and in vivo in healthy volunteers and patients with knee osteoarthritis. The Iso3DGRE sequence with an isotropic 0.6 mm spatial resolution was developed on a clinical 3T MR scanner. Numerical simulations were performed to optimize the pulse sequence parameters. A phantom study was performed to validate the T2 estimation accuracy. The repeatability of the sequence was assessed in healthy volunteers (n = 7). T2 values were compared with those from a clinical standard 2D multislice multiecho (MSME) T2 mapping sequence in knees of healthy volunteers (n = 13) and in patients with knee osteoarthritis (OA, n = 5). The numerical simulations resulted in 100 excitations per segment and an optimal radiofrequency (RF) excitation angle of 15°. The phantom study demonstrated a good correlation of the technique with the reference standard (slope 0.9 ± 0.05, intercept 0.2 ± 1.7 msec, R2 ≥ 0.99). Repeated measurements of cartilage T2 values in healthy volunteers showed a coefficient of variation of 5.6%. Both Iso3DGRE and MSME techniques found significantly higher cartilage T2 values (P < 0.03) in OA patients. Iso3DGRE precision was equal to that of the MSME T2 mapping in healthy volunteers, and significantly higher in OA (P = 0.01). This study successfully demonstrated that high-resolution isotropic 3D T2 mapping for knee cartilage characterization is feasible, accurate, repeatable, and precise. The technique allows for multiplanar reformatting and thus T2 quantification in any plane of interest. 1 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:362-371. © 2017 International Society for Magnetic Resonance in Medicine.
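Independently of the acquisition details, T2 estimation from T2-prepared images reduces to fitting a mono-exponential decay across preparation times. The sketch below shows a generic single-voxel log-linear fit on synthetic data; it is not the Iso3DGRE reconstruction or fitting pipeline, and the preparation times and T2 value are invented.

```python
# Generic voxel-wise mono-exponential T2 fit, S(TEprep) = S0 * exp(-TEprep / T2),
# linearized by taking the logarithm. Illustrative only.
import numpy as np

te_prep = np.array([0.0, 30.0, 60.0, 90.0])           # preparation times [ms]
true_s0, true_t2 = 1000.0, 45.0                       # synthetic signal and T2 [ms]
rng = np.random.default_rng(4)
signal = true_s0 * np.exp(-te_prep / true_t2) + rng.normal(0, 5, te_prep.size)

# Linear fit of log(S) versus TEprep: slope = -1/T2, intercept = log(S0)
coef = np.polyfit(te_prep, np.log(np.clip(signal, 1e-6, None)), 1)
t2_est, s0_est = -1.0 / coef[0], np.exp(coef[1])
print(f"estimated T2 = {t2_est:.1f} ms, S0 = {s0_est:.0f}")
```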
Characterizing dielectric tensors of anisotropic materials from a single measurement
NASA Astrophysics Data System (ADS)
Smith, Paula Kay
Ellipsometry techniques look at changes in polarization states to measure optical properties of thin film materials. A beam reflected from a substrate measures the real and imaginary parts of the material's refractive index, represented as n and k, respectively. Measuring the substrate at several angles gives additional information that can be used to measure multilayer thin film stacks. However, the outstanding problem in standard ellipsometry is that it uses a limited number of incident polarization states (s and p). This limits the technique to isotropic materials. The technique discussed in this paper extends the standard process to measure anisotropic materials by using a larger set of incident polarization states. By using a polarimeter to generate several incident polarization states and measure the polarization properties of the sample, ellipsometry can be performed on biaxial materials. Use of an optimization algorithm in conjunction with biaxial ellipsometry can more accurately determine the dielectric tensor of individual layers in multilayer structures. Biaxial ellipsometry is a technique that measures the dielectric tensors of a biaxial substrate, single-layer thin film, or multi-layer structure. The dielectric tensor of a biaxial material consists of the real and imaginary parts of the three orthogonal principal indices (nx + i·kx, ny + i·ky, and nz + i·kz) as well as three Euler angles (alpha, beta, and gamma) to describe its orientation. The method utilized in this work measures an angle-of-incidence Mueller matrix from a Mueller matrix imaging polarimeter equipped with a pair of microscope objectives that have low polarization properties. To accurately determine the dielectric tensors for multilayer samples, the angle-of-incidence Mueller matrix images are collected for multiple wavelengths. This is done in either a transmission mode or a reflection mode, each incorporating an appropriate dispersion model. Given approximate a priori knowledge of the dielectric tensor and film thickness, a Jones reflectivity matrix is calculated by solving Maxwell's equations at each surface. Converting the Jones matrix into a Mueller matrix provides a starting point for optimization. An optimization algorithm then finds the best fit dielectric tensor based on the measured angle-of-incidence Mueller matrix image. This process can be applied to polarizing materials, birefringent crystals, and the multilayer structures of liquid crystal displays. In particular, the need for such accuracy in liquid crystal displays is growing as their applications in industry evolve.
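The Jones-to-Mueller conversion mentioned above is commonly written as M = A (J ⊗ J*) A⁻¹, with A the standard coherency-to-Stokes transformation matrix. The sketch below applies this generic relation to an ideal horizontal polarizer as a sanity check; it is not the dissertation's optimization code.

```python
# Standard conversion of a 2x2 Jones matrix to a 4x4 Mueller matrix via
# M = A (J kron J*) A^-1, with A the usual coherency-to-Stokes transform.
import numpy as np

A = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, 1j, -1j, 0]], dtype=complex)
A_inv = np.linalg.inv(A)

def jones_to_mueller(J):
    M = A @ np.kron(J, J.conj()) @ A_inv
    return M.real              # imaginary part is numerically ~0 for physical J

# Sanity check: an ideal horizontal linear polarizer should give
# 0.5 * [[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]].
J_pol = np.array([[1, 0], [0, 0]], dtype=complex)
print(np.round(jones_to_mueller(J_pol), 3))
```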
Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.
Jiménez, Fernando; Sánchez, Gracia; Juárez, José M
2014-03-01
This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severe burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA), and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient's data set from an intensive care burn unit and a standard data set from a machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results were compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers, compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial and based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Somavarapu, Dhathri H.
This thesis proposes a new parallel computing genetic algorithm framework for designing fuel-optimal trajectories for interplanetary spacecraft missions. The framework can capture the deep search space of the problem with the use of a fixed chromosome structure and a hidden-genes concept, can explore a diverse set of candidate solutions with the use of adaptive and twin-space crowding techniques, and can execute on any high-performance computing (HPC) platform through adoption of the portable Message Passing Interface (MPI) standard. The algorithm is implemented in C++ with the use of the MPICH implementation of the MPI standard. The algorithm uses a patched-conic approach with two-body dynamics assumptions. New procedures are developed for determining trajectories in the V-infinity-leveraging legs of the flight from the launch and non-launch planets and in the deep-space maneuver legs of the flight from the launch and non-launch planets. The chromosome structure maintains the time of flight as a free parameter within certain boundaries. The fitness, or cost, function of the algorithm uses only the mission Delta V and does not include time of flight. The optimization is conducted with two variations for the minimum mission gravity-assist sequence, the 4-gravity-assist and the 3-gravity-assist, with a maximum of 5 gravity assists allowed in both cases. The optimal trajectories discovered using the framework in both cases demonstrate the success of this framework.
Madu, C N; Quint, D J; Normolle, D P; Marsh, R B; Wang, E Y; Pierce, L J
2001-11-01
To delineate with computed tomography (CT) the anatomic regions containing the supraclavicular (SCV) and infraclavicular (IFV) nodal groups, to define the course of the brachial plexus, to estimate the actual radiation dose received by these regions in a series of patients treated in the traditional manner, and to compare these doses to those received with an optimized dosimetric technique. Twenty patients underwent contrast material-enhanced CT for the purpose of radiation therapy planning. CT scans were used to study the location of the SCV and IFV nodal regions by using outlining of readily identifiable anatomic structures that define the nodal groups. The brachial plexus was also outlined by using similar methods. Radiation therapy doses to the SCV and IFV were then estimated by using traditional dose calculations and optimized planning. A repeated measures analysis of covariance was used to compare the SCV and IFV depths and to compare the doses achieved with the traditional and optimized methods. Coverage by the 90% isodose surface was significantly decreased with traditional planning versus conformal planning as the depth to the SCV nodes increased (P < .001). Significantly decreased coverage by using the 90% isodose surface was demonstrated for traditional planning versus conformal planning with increasing IFV depth (P = .015). A linear correlation was found between brachial plexus depth and SCV depth up to 7 cm. Conformal optimized planning provided improved dosimetric coverage compared with standard techniques.
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pablant, N. A.; Bell, R. E.; Bitter, M.
2014-11-15
Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allow for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
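As a toy illustration of how a simple physical constraint (here, non-negative emissivity) stabilizes an inversion of line-integrated data, the sketch below compares an unconstrained least-squares solution with a bounded one. The geometry matrix is a crude Abel-type discretization with invented chord positions, not the XICS spline-based scheme described above.

```python
# Toy constrained tomographic inversion: recover a non-negative radial emissivity
# profile from noisy line integrals using bounded least squares.
import numpy as np
from scipy.optimize import lsq_linear

n_shells, n_chords = 20, 15
r = np.linspace(0.0, 1.0, n_shells)
dr = r[1] - r[0]
true_emiss = np.exp(-((r - 0.3) / 0.2) ** 2)           # synthetic emissivity profile

# Crude Abel-type geometry matrix: chord i sees shells outside its impact parameter.
p = np.linspace(0.02, 0.95, n_chords)
L = np.zeros((n_chords, n_shells))
for i, pi in enumerate(p):
    outside = r > pi
    L[i, outside] = 2.0 * r[outside] * dr / np.sqrt(r[outside] ** 2 - pi ** 2 + 1e-4)

signal = L @ true_emiss
rng = np.random.default_rng(5)
noisy = signal + rng.normal(0.0, 0.02 * signal.max(), n_chords)

# Unconstrained minimum-norm solution vs. non-negativity-constrained inversion.
unconstrained = np.linalg.lstsq(L, noisy, rcond=None)[0]
constrained = lsq_linear(L, noisy, bounds=(0.0, np.inf)).x
print("min of unconstrained profile:", unconstrained.min())
print("min of constrained profile:  ", constrained.min())
```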
Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A
2007-01-15
Two digestion procedures have been tested on nut samples for application in the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES). These included wet digestions with HNO3/H2SO4 and HNO3/H2SO4/H2O2. The latter is recommended for better analyte recoveries (relative error <11%). Two calibration procedures (aqueous standard and standard addition) were studied, and standard addition proved preferable for all analytes. Experimental designs for seven factors (HNO3, H2SO4 and H2O2 volumes, digestion time, pre-digestion time, temperature of the hot plate and sample weight) were used for optimization of the sample digestion procedures. For this purpose a Plackett-Burman fractional factorial design, which involves eight experiments, was adopted. The HNO3 volume, the H2O2 volume, and the digestion time were found to be the most important parameters. The instrumental conditions were also optimized (using a peanut matrix rather than aqueous standard solutions) considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, in terms of limits of detection (LOD < 0.74 μg g⁻¹), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%), was assessed statistically to evaluate the developed analytical procedures. The good agreement between measured and certified values for all analytes (relative error <11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.
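For readers unfamiliar with the screening design mentioned above, the sketch below builds the standard 8-run Plackett-Burman design for seven two-level factors and estimates main effects from a purely hypothetical response vector; the factor labels and recovery values are illustrative and are not data from the study.

```python
import numpy as np

# Minimal sketch: 8-run Plackett-Burman design for 7 two-level factors
# (e.g. acid/peroxide volumes, digestion times, hot-plate temperature, sample weight).
gen = np.array([1, 1, 1, -1, 1, -1, -1])                   # standard PB generator, N = 8
X = np.vstack([np.roll(gen, i) for i in range(7)] + [-np.ones(7, dtype=int)])
assert np.allclose(X.T @ X, 8 * np.eye(7))                 # columns are mutually orthogonal

y = np.array([92., 95., 89., 97., 90., 88., 94., 85.])     # hypothetical recoveries (%)
effects = X.T @ y / 4.0                                     # high-minus-low mean difference
for name, e in zip(["HNO3", "H2SO4", "H2O2", "t_dig", "t_pre", "T", "mass"], effects):
    print(f"{name:6s} effect = {e:+.2f}")
```

Because the columns are orthogonal, each main effect is a simple contrast, which is what makes the eight-run screening design economical for seven factors.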
NASA Technical Reports Server (NTRS)
Rasmussen, John
1990-01-01
Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long series of computer integrated manufacturing (CIM) systems. The systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information equipped with a number of tools with the purpose of helping the user in the design process. Among these tools are facilities for structural analysis and optimization as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large number of mathematical and mechanical techniques are available for the solution of single problems. By implementing collections of the available techniques into general software systems, operational environments for structural optimization have been created. The forthcoming years must bring solutions to the problem of integrating such systems into more general design environments. The result of this work should be CAD systems for rational design in which structural optimization is one important design tool among many others.
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for the other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.
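To make the repair-based constrained-EA idea concrete, here is a minimal real-valued genetic algorithm sketch with a projection-style repair step. It is not the GATool/REPA implementation; the test problem (minimize ||x||² subject to sum(x) ≥ 1), the operators, and all hyperparameters are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dim, pop_size, n_gen = 5, 40, 100

def objective(x):                      # minimize sum of squares
    return float(np.sum(x ** 2))

def repair(x):                         # project onto the feasible half-space sum(x) >= 1
    gap = 1.0 - x.sum()
    return x + gap / n_dim if gap > 0 else x

pop = np.array([repair(ind) for ind in rng.uniform(-2, 2, size=(pop_size, n_dim))])

for _ in range(n_gen):
    fitness = np.array([objective(ind) for ind in pop])
    children = []
    for _ in range(pop_size):
        i, j = rng.integers(pop_size, size=2)            # tournament selection, parent 1
        p1 = pop[i] if fitness[i] < fitness[j] else pop[j]
        i, j = rng.integers(pop_size, size=2)            # tournament selection, parent 2
        p2 = pop[i] if fitness[i] < fitness[j] else pop[j]
        alpha = rng.uniform(size=n_dim)                  # blend crossover
        child = alpha * p1 + (1 - alpha) * p2
        child += rng.normal(0, 0.1, size=n_dim)          # Gaussian mutation
        children.append(repair(child))                   # repair infeasible offspring
    pop = np.array(children)

best = min(pop, key=objective)
print("best feasible solution:", np.round(best, 3), "f =", round(objective(best), 4))
```

The analytic optimum of this toy problem is x_i = 0.2 with f = 0.2, so the printed result gives a quick sanity check that the repair step keeps the search feasible without destroying progress.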
Forced expiratory technique, directed cough, and autogenic drainage.
Fink, James B
2007-09-01
In health, secretions produced in the respiratory tract are cleared by mucociliary transport, cephalad airflow bias, and cough. In disease, increased secretion viscosity and volume, dyskinesia of the cilia, and ineffective cough combine to reduce secretion clearance, leading to increased risk of infection. In obstructive lung disease these conditions are further complicated by early collapse of airways, due to airway compression, which traps both gas and secretions. Techniques have been developed to optimize expiratory flow and promote airway clearance. Directed cough, forced expiratory technique, active cycle of breathing, and autogenic drainage are all more effective than placebo and comparable in therapeutic effects to postural drainage; they require no special equipment or care-provider assistance for routine use. Researchers have suggested that standard chest physical therapy with active cycle of breathing and forced expiratory technique is more effective than chest physical therapy alone. Evidence-based reviews have suggested that, though successful adoption of techniques such as autogenic drainage may require greater control and training, patients with long-term secretion management problems should be taught as many of these techniques as they can master for adoption in their therapeutic routines.
Simhon, David; Halpern, Marisa; Brosh, Tamar; Vasilyev, Tamar; Ravid, Avi; Tennenbaum, Tamar; Nevo, Zvi; Katzir, Abraham
2007-01-01
Background: A feedback temperature-controlled laser soldering system (TCLS) was used for bonding skin incisions on the backs of pigs. The study was aimed: 1) to characterize the optimal soldering parameters, and 2) to compare the immediate and long-term wound healing outcomes with other wound closure modalities. Materials and Methods: A TCLS was used to bond the approximated wound margins of skin incisions on porcine backs. The reparative outcomes were evaluated macroscopically, microscopically, and immunohistochemically. Results: The optimal soldering temperature was found to be 65°C and the operating time was significantly shorter than with suturing. The immediate tight sealing of the wound by the TCLS contributed to rapid, high quality wound healing in comparison to Dermabond or Histoacryl cyanoacrylate glues or standard suturing. Conclusions: TCLS of incisions in porcine skin has numerous advantages, including rapid procedure and high quality reparative outcomes, over the common standard wound closure procedures. Further studies with a variety of skin lesions are needed before advocating this technique for clinical use. PMID:17245173
Optimized tokamak power exhaust with double radiative feedback in ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Kallenbach, A.; Bernert, M.; Eich, T.; Fuchs, J. C.; Giannone, L.; Herrmann, A.; Schweinzer, J.; Treutterer, W.; the ASDEX Upgrade Team
2012-12-01
A double radiative feedback technique has been developed on the ASDEX Upgrade tokamak for optimization of power exhaust with a standard vertical target divertor. The main chamber radiation is measured in real time by a subset of three foil bolometer channels and controlled by argon injection in the outer midplane. The target heat flux is in addition controlled by nitrogen injection in the divertor private flux region using either a thermoelectric sensor or the scaled divertor radiation obtained by a bolometer channel in the outer divertor. No negative interference of the two radiation controllers has been observed so far. The combination of main chamber and divertor radiative cooling extends the operational space of a standard divertor configuration towards high values of P/R. Pheat/R = 14 MW m⁻¹ has been achieved so far with nitrogen seeding alone as well as with combined N + Ar injection, with the time-averaged divertor peak heat flux below 5 MW m⁻². Good plasma performance can be maintained under these conditions, namely H98(y,2) = 1 and βN = 3.
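The abstract does not give the control law, so the following toy sketch only illustrates the general double-feedback idea: two simple integral controllers act on an invented static plant model (argon rate driving main-chamber radiation, nitrogen rate driving divertor heat flux, with a small artificial cross-coupling). Gains, setpoints, and plant coefficients are made up and bear no relation to ASDEX Upgrade values.

```python
import numpy as np

# Toy static plant with invented coefficients: more seeding -> more radiation,
# less divertor heat flux; small cross terms mimic weak coupling between the loops.
def plant(ar_rate, n2_rate):
    p_rad_main = 2.0 * ar_rate + 0.2 * n2_rate        # MW, main-chamber radiation
    q_div = 10.0 - 1.5 * n2_rate - 0.3 * ar_rate      # MW/m^2, divertor peak heat flux
    return p_rad_main, max(q_div, 0.0)

rad_setpoint, q_setpoint = 4.0, 4.5                    # arbitrary targets
ki_ar, ki_n2 = 0.05, 0.05                              # integral gains (invented)
ar_rate = n2_rate = 0.0

for _ in range(200):
    p_rad, q = plant(ar_rate, n2_rate)
    ar_rate = max(ar_rate + ki_ar * (rad_setpoint - p_rad), 0.0)  # argon loop: radiation
    n2_rate = max(n2_rate + ki_n2 * (q - q_setpoint), 0.0)        # nitrogen loop: heat flux

print(f"Ar={ar_rate:.2f}, N2={n2_rate:.2f}, P_rad={p_rad:.2f} MW, q_div={q:.2f} MW/m^2")
```

With weak coupling and small gains the two loops settle to their setpoints independently, which is the behaviour the abstract describes qualitatively ("no negative interference").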
Superresolution imaging of Drosophila tissues using expansion microscopy.
Jiang, Nan; Kim, Hyeon-Jin; Chozinski, Tyler J; Azpurua, Jorge E; Eaton, Benjamin A; Vaughan, Joshua C; Parrish, Jay Z
2018-06-15
The limited resolving power of conventional diffraction-limited microscopy hinders analysis of small, densely packed structural elements in cells. Expansion microscopy (ExM) provides an elegant solution to this problem, allowing for increased resolution with standard microscopes via physical expansion of the specimen in a swellable polymer hydrogel. Here, we apply, validate, and optimize ExM protocols that enable the study of Drosophila embryos, larval brains, and larval and adult body walls. We achieve a lateral resolution of ∼70 nm in Drosophila tissues using a standard confocal microscope, and we use ExM to analyze fine intracellular structures and intercellular interactions. First, we find that ExM reveals features of presynaptic active zone (AZ) structure that are observable with other superresolution imaging techniques but not with standard confocal microscopy. We further show that synapses known to exhibit age-dependent changes in activity also exhibit age-dependent changes in AZ structure. Finally, we use the significantly improved axial resolution of ExM to show that dendrites of somatosensory neurons are inserted into epithelial cells at a higher frequency than previously reported in confocal microscopy studies. Altogether, our study provides a foundation for the application of ExM to Drosophila tissues and underscores the importance of tissue-specific optimization of ExM procedures.
Conversion coefficients for determining organ doses in paediatric spine radiography.
Seidenbusch, Michael; Schneider, Karl
2014-04-01
Knowledge of organ and effective doses achieved during paediatric x-ray examinations is an important prerequisite for assessment of the radiation burden to the patient. Conversion coefficients for reconstruction of organ and effective doses from entrance doses for segmental spine radiographs of 0-, 1-, 5-, 10-, 15- and 30-year-old patients are provided in accordance with the Guidelines of Good Radiographic Technique of the European Commission. Using the personal computer program PCXMC developed by the Finnish Centre for Radiation and Nuclear Safety (Säteilyturvakeskus, STUK), conversion coefficients for conventional segmental spine radiographs were calculated by performing Monte Carlo simulations in mathematical hermaphrodite phantom models describing patients of different ages. The clinical variation of beam collimation was taken into consideration by defining optimal and suboptimal radiation field settings. Conversion coefficients for the reconstruction of organ doses in about 40 organs and tissues from measured entrance doses during cervical, thoracic and lumbar spine radiographs of 0-, 1-, 5-, 10-, 15- and 30-year-old patients were calculated for the standard sagittal and lateral beam projections and the standard focus-detector distance of 115 cm. The conversion coefficients presented may be used for organ dose assessments from entrance doses measured during spine radiographs of patients of all age groups and all field settings within the optimal and suboptimal standard field settings.
Pushing the speed limit in enantioselective supercritical fluid chromatography.
Regalado, Erik L; Welch, Christopher J
2015-08-01
Chromatographic enantioseparations on the order of a few seconds can be achieved by supercritical fluid chromatography using short columns packed with chiral stationary phases. The evolution of 'world record' speeds for the chromatographic separation of enantiomers has steadily dropped from an industry standard of 20-40 min just two decades ago, to a current ability to perform many enantioseparations in well under a minute. Improvements in instrument and column technologies enabled this revolution, but the ability to predict optimal separation time from an initial method development screening assay using the t(min cc) predictor greatly simplifies the development and optimization of high-speed chiral chromatographic separations. In this study, we illustrate how the use of this simple tool in combination with the workhorse technique of supercritical fluid chromatography on customized short chiral columns (1-2 cm length) allows us to achieve ultrafast enantioseparations of pharmaceutically relevant compounds on the 5-20 s scale, bringing the technique of high-throughput enantiopurity analysis out of the specialist realm and into the laboratories of most researchers. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hensel, Kendi L; Carnes, Michael S; Stoll, Scott T
2016-11-01
The structural and physiologic changes in a woman's body during pregnancy can predispose pregnant women to low back pain and its associated disability, as well as to complications of pregnancy, labor, and delivery. Anecdotal and empirical evidence has indicated that osteopathic manipulative treatment (OMT) may be efficacious in improving pain and functionality in women who are pregnant. Based on that premise, the Pregnancy Research on Osteopathic Manipulation Optimizing Treatment Effects (PROMOTE) study was designed as a prospective, randomized, placebo-controlled, and blinded clinical trial to evaluate the efficacy of an OMT protocol for pain during third-trimester pregnancy. The OMT protocol developed for the PROMOTE study was based on physiologic theory and the concept of the interrelationship of structure and function. The 12 well-defined, standardized OMT techniques used in the protocol are commonly taught at osteopathic medical schools in the United States. These techniques can be easily replicated as a 20-minute protocol applied in conjunction with usual prenatal care, thus making it feasible to implement into clinical practice. This article presents an overview of the study design and treatment protocols used in the PROMOTE study.
Optimal control in adaptive optics modeling of nonlinear systems
NASA Astrophysics Data System (ADS)
Herrmann, J.
The problem of using an adaptive optics system to correct for nonlinear effects like thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open-loop optimal control problem. This treatment gives a limit for the best possible correction. Aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine the control in the transmitter plane which minimizes the time-averaged area or maximizes the fluence in the target plane. The standard minimization procedure leads to a two-point boundary-value problem, which is ill-conditioned in this case. The optimal control problem was solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. For these cases the steady-state correction becomes better than the instantaneous correction and approaches the optimum correction.
Richardson, Daniel R; Stauffer, Hans U; Roy, Sukesh; Gord, James R
2017-04-10
A comparison is made between two ultrashort-pulse coherent anti-Stokes Raman scattering (CARS) thermometry techniques-hybrid femtosecond/picosecond (fs/ps) CARS and chirped-probe-pulse (CPP) fs-CARS-that have become standards for high-repetition-rate thermometry in the combustion diagnostics community. These two variants of fs-CARS differ only in the characteristics of the ps-duration probe pulse; in hybrid fs/ps CARS a spectrally narrow, time-asymmetric probe pulse is used, whereas a highly chirped, spectrally broad probe pulse is used in CPP fs-CARS. Temperature measurements were performed using both techniques in near-adiabatic flames in the temperature range 1600-2400 K and for probe time delays of 0-30 ps. Under these conditions, both techniques are shown to exhibit similar temperature measurement accuracies and precisions to previously reported values and to each other. However, it is observed that initial calibration fits to the spectrally broad CPP results require more fitting parameters and a more robust optimization algorithm and therefore significantly increased computational cost and complexity compared to the fitting of hybrid fs/ps CARS data. The optimized model parameters varied more for the CPP measurements than for the hybrid fs/ps measurements for different experimental conditions.
Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm
Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed
2008-01-01
Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.
Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed
2004-01-01
Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.
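As a toy version of the calibration idea (not the VRX geometry or the authors' parametric curve), the sketch below fits a simple pin-trace model u(θ) = c + r·sin(θ + φ) to noisy synthetic sinogram samples with SciPy's Nelder-Mead ("Amoeba") simplex optimizer; the model and the data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
true = dict(c=256.0, r=80.0, phi=0.7)
u_meas = true["c"] + true["r"] * np.sin(theta + true["phi"]) + rng.normal(0, 1.0, theta.size)

def sse(params):                       # sum of squared deviations from the pin-trace model
    c, r, phi = params
    return np.sum((c + r * np.sin(theta + phi) - u_meas) ** 2)

fit = minimize(sse, x0=[200.0, 50.0, 0.0], method="Nelder-Mead")
print("fitted (c, r, phi):", np.round(fit.x, 3))
```

The derivative-free simplex search is attractive here for the same reason cited in the abstract: the objective is easy to evaluate but its gradient with respect to the geometric parameters is awkward to write down.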
Meleşcanu Imre, M; Preoteasa, E; Țâncu, AM; Preoteasa, CT
2013-01-01
Rationale: Imaging methods are increasingly used in the clinical workflow of modern dentistry. As implant-based treatment alternatives are nowadays seen as the standard of care in edentulous patients, these techniques must be integrated into complete denture treatment. Aim: The study presents evaluation techniques for the edentulous patient treated with conventional dentures or mini dental implant (mini SKY Bredent) overdentures, using profile teleradiography. These provide data useful for optimal positioning of the artificial teeth and the mini dental implants, helping to achieve an esthetic and functional treatment outcome. We also propose a method for designing a simple surgical guide that allows prosthetically driven implant placement. Material and method: Clinical case reports were made, highlighting the importance of cephalometric evaluation on lateral teleradiographs in completely edentulous patients. A clinical case illustrating step-by-step preparation of the surgical guide (Bredent radiopaque silicone), used to place the mini dental implants under the best prosthetic and anatomic conditions, is presented. Conclusions: The profile teleradiograph is a useful tool for the practitioner. It allows the optimal site for implant placement to be established, in good relation with the overdenture. A conventional denture can be easily and at relatively low cost transformed into a surgical guide used during implant placement. PMID:23599828
Organizational Decision Making
1975-08-01
the lack of formal techniques typically used by large organizations, digress on the advantages of formal over informal... optimization; for example, one might do a number of optimization calculations, each time using a different measure of effectiveness as the optimized ... final decision. The next level of computer application involves the use of computerized optimization techniques. Optimization
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
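To show the kind of regression step the abstract refers to (on made-up data, not CFD output), the sketch below recovers the static pitch-stiffness derivative Cm_alpha from noisy samples of pitching-moment coefficient versus angle of attack using an ordinary least-squares fit, and then checks the usual static-stability condition Cm_alpha < 0.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = np.deg2rad(np.linspace(-4, 8, 13))                 # angle-of-attack samples (rad)
cm_true = 0.02 - 0.8 * alpha                               # hypothetical linear moment curve
cm = cm_true + rng.normal(0, 0.002, alpha.size)            # "noisy CFD" samples (synthetic)

A = np.column_stack([np.ones_like(alpha), alpha])          # design matrix [1, alpha]
(cm0, cm_alpha), *_ = np.linalg.lstsq(A, cm, rcond=None)   # least-squares regression

print(f"Cm0 = {cm0:.4f}, Cm_alpha = {cm_alpha:.3f} per rad")
print("statically stable in pitch:", cm_alpha < 0.0)
```

In an optimization loop, an inequality of this form (fitted Cm_alpha below zero, possibly with margin) is the shape a static stability constraint typically takes.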
Development of polypyrrole based solid-state on-chip microactuators using photolithography
NASA Astrophysics Data System (ADS)
Zhong, Yong; Lundemo, Staffan; Jager, Edwin W. H.
2018-07-01
There is a need for soft microactuators, especially for biomedical applications. We have developed a microfabrication process to create such soft, on-chip polymer based microactuators that can operate in air. The on-chip microactuators were fabricated using standard photolithographic techniques and wet etching, combined with special designed process to micropattern the electroactive polymer polypyrrole that drives the microactuators. By immobilizing a UV-patternable gel containing a liquid electrolyte on top of the electroactive polypyrrole layer, actuation in air was achieved although with reduced movement. Further optimization of the processing is currently on-going. The result shows the possibility to batch fabricate complex microsystems such as microrobotics and micromanipulators based on these solid-state on-chip microactuators using microfabrication methods including standard photolithographic processes.
Stencils and problem partitionings: Their influence on the performance of multiple processor systems
NASA Technical Reports Server (NTRS)
Reed, D. A.; Adams, L. M.; Patrick, M. L.
1986-01-01
Given a discretization stencil, partitioning the problem domain is an important first step for the efficient solution of partial differential equations on multiple processor systems. Partitions are derived that minimize interprocessor communication when the number of processors is known a priori and each domain partition is assigned to a different processor. This partitioning technique uses the stencil structure to select appropriate partition shapes. For square problem domains, it is shown that non-standard partitions (e.g., hexagons) are frequently preferable to the standard square partitions for a variety of commonly used stencils. This investigation is concluded with a formalization of the relationship between partition shape, stencil structure, and architecture, allowing selection of optimal partitions for a variety of parallel systems.
Developing a Framework for Evaluating Organizational Information Assurance Metrics Programs
2007-03-01
least cost. Standards such as ISO /IEC 17799 and ISO /IEC 27001 provide guidance on the domains that security management should consider when... ISO /IEC 17799, 2000; ISO /IEC 27001 , 2005). 6 In order to attempt to find this optimal mix, organizations can make risk decisions weighing...Electronic version]. International Organization of Standards. (2000). ISO /IEC 27001 . Information Technology Security Techniques: Information
Sepehrband, Farshid; O'Brien, Kieran; Barth, Markus
2017-12-01
Several diffusion-weighted MRI techniques have been developed and validated during the past 2 decades. While offering various neuroanatomical inferences, these techniques differ in their proposed optimal acquisition design, preventing clinicians and researchers benefiting from all potential inference methods, particularly when limited time is available. This study reports an optimal design that enables for a time-efficient diffusion-weighted MRI acquisition scheme at 7 Tesla. The primary audience of this article is the typical end user, interested in diffusion-weighted microstructural imaging at 7 Tesla. We tested b-values in the range of 700 to 3000 s/mm 2 with different number of angular diffusion-encoding samples, against a data-driven "gold standard." The suggested design is a protocol with b-values of 1000 and 2500 s/mm 2 , with 25 and 50 samples, uniformly distributed over two shells. We also report a range of protocols in which the results of fitting microstructural models to the diffusion-weighted data had high correlation with the gold standard. We estimated minimum acquisition requirements that enable diffusion tensor imaging, higher angular resolution diffusion-weighted imaging, neurite orientation dispersion, and density imaging and white matter tract integrity across whole brain with isotropic resolution of 1.8 mm in less than 11 min. Magn Reson Med 78:2170-2184, 2017. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Practical approach to subject-specific estimation of knee joint contact force.
Knarr, Brian A; Higginson, Jill S
2015-08-20
Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models' predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
Practical approach to subject-specific estimation of knee joint contact force
Knarr, Brian A.; Higginson, Jill S.
2015-01-01
Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however, this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models’ predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. PMID:25952546
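As a minimal stand-in for the "standard static optimization" step (not OpenSim and not the authors' weighting or strength-adjustment schemes), the sketch below distributes a required knee-flexion moment over three hypothetical muscles by minimizing the sum of squared activations subject to a moment-balance constraint. Moment arms, maximum isometric forces, and the required moment are invented numbers.

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, 0.035, 0.05])            # hypothetical moment arms (m)
f_max = np.array([3000.0, 2500.0, 1500.0])   # hypothetical max isometric forces (N)
m_required = 60.0                            # required joint moment (N*m), made up

cons = {"type": "eq", "fun": lambda a: np.dot(a * f_max, r) - m_required}
res = minimize(lambda a: np.sum(a ** 2), x0=np.full(3, 0.2),
               bounds=[(0.0, 1.0)] * 3, constraints=[cons], method="SLSQP")

activations = res.x
muscle_forces = activations * f_max
print("activations:", np.round(activations, 3))
print("muscle forces (N):", np.round(muscle_forces, 1))
print("compressive load proxy (sum of muscle forces, N):", round(muscle_forces.sum(), 1))
```

A full knee-contact-force prediction would add the intersegmental joint reaction and resolve the muscle forces along the tibial axis; the sketch only shows the optimization core that the compared methods all build on.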
Program and Project Management Framework
NASA Technical Reports Server (NTRS)
Butler, Cassandra D.
2002-01-01
The primary objective of this project was to develop a framework and system architecture for integrating program and project management tools that may be applied consistently throughout Kennedy Space Center (KSC) to optimize planning, cost estimating, risk management, and project control. Project management methodology used in building interactive systems to accommodate the needs of the project managers is applied as a key component in assessing the usefulness and applicability of the framework and tools developed. Research for the project included investigation and analysis of industrial practices, KSC standards, policies, and techniques, Systems Management Office (SMO) personnel, and other documented experiences of project management experts. In addition, this project documents best practices derived from the literature as well as new or developing project management models, practices, and techniques.
[Current status and future perspectives of hepatocyte transplantation].
Pareja, Eugenia; Cortés, Miriam; Gómez-Lechón, M José; Maupoey, Javier; San Juan, Fernando; López, Rafael; Mir, Jose
2014-02-01
The imbalance between the number of potential beneficiaries and available organs has prompted the search for new therapeutic alternatives, such as hepatocyte transplantation (HT). Even though this is a treatment option for these patients, the lack of unanimity of criteria regarding indications and technique, different cryopreservation protocols, as well as the different methodologies used to assess the response to this therapy, highlight the need for a Consensus Conference to standardize criteria and consider future strategies to improve the technique and optimize the results. Our aim is to review and update the current state of hepatocyte transplantation, emphasizing future research aimed at solving the problems and improving the results of this treatment. Copyright © 2013 AEC. Published by Elsevier Espana. All rights reserved.
Iliac Arteries: How Registries Can Help Improve Outcomes
Tapping, Charles Ross; Uberoi, Raman
2014-01-01
There are many publications reporting excellent short- and long-term results with endovascular techniques. Patients included in trials are often highly selected and may not represent real-world practice. Registries are important to interventional radiologists for several reasons: they reflect prevailing practice and can be used to establish real-world standards of care and safety profiles. This information allows individuals and centers to evaluate their outcomes compared with national norms. The British Iliac Angioplasty and Stenting (BIAS) registry is an example of a mature registry that has been collecting data since 2000 and has been reporting outcomes since 2001. This article discusses the evidence to support both endovascular and surgical intervention for aortoiliac occlusive disease, the role of registries, and optimal techniques for aortoiliac intervention. PMID:25435659
Early driver fatigue detection from electroencephalography signals using artificial neural networks.
King, L M; Nguyen, H T; Lal, S K L
2006-01-01
This paper describes a driver fatigue detection system using an artificial neural network (ANN). Using electroencephalogram (EEG) data sampled from 20 professional truck drivers and 35 non-professional drivers, the time-domain data are processed into alpha, beta, delta and theta bands and then presented to the neural network to detect the onset of driver fatigue. The neural network uses a training optimization technique called the magnified gradient function (MGF). This technique reduces the time required for training by modifying the standard back-propagation (SBP) algorithm. The MGF is shown to classify professional driver fatigue with 81.49% accuracy (80.53% sensitivity, 82.44% specificity) and non-professional driver fatigue with 83.06% accuracy (84.04% sensitivity and 82.08% specificity).
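The classifier and the MGF training rule are not reproduced here, but the band-power features such a network consumes can be sketched as follows: Welch power spectral densities of a synthetic EEG trace are integrated over the conventional delta, theta, alpha, and beta bands. The sampling rate is assumed, and random noise stands in for real EEG.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                                        # sampling rate (Hz), assumed
rng = np.random.default_rng(4)
eeg = rng.normal(size=30 * fs)                  # 30 s of noise standing in for one EEG channel

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)  # Welch power spectral density
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

features = {}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    features[name] = trapezoid(psd[mask], freqs[mask])   # band power by integration

print({k: round(float(v), 4) for k, v in features.items()})
```

Feature vectors of this kind, one per EEG epoch, are what would then be fed to the ANN for fatigue classification.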
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
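A tiny numerical illustration of the criteria being compared (not the paper's Prohorov-metric framework, and not its SE-optimal criterion): for the Verhulst-Pearl logistic model, the sketch below builds a Fisher Information Matrix from finite-difference sensitivities at two candidate sets of sampling times and evaluates the D-criterion (determinant) and E-criterion (smallest eigenvalue) for each. Parameter values, noise level, and time grids are arbitrary.

```python
import numpy as np

def logistic(t, K, r, x0=1.0):                   # Verhulst-Pearl logistic growth
    return K / (1.0 + (K - x0) / x0 * np.exp(-r * t))

def fim(times, K=10.0, r=0.5, sigma=0.1, h=1e-6):
    # Fisher Information Matrix from finite-difference sensitivities w.r.t. (K, r)
    s_K = (logistic(times, K + h, r) - logistic(times, K - h, r)) / (2 * h)
    s_r = (logistic(times, K, r + h) - logistic(times, K, r - h)) / (2 * h)
    S = np.column_stack([s_K, s_r])
    return S.T @ S / sigma ** 2

designs = {"early-heavy": np.linspace(0.5, 8.0, 8), "spread-out": np.linspace(0.5, 20.0, 8)}
for name, times in designs.items():
    F = fim(times)
    print(f"{name:11s}  D-criterion det(F) = {np.linalg.det(F):10.1f}   "
          f"E-criterion min eig = {np.linalg.eigvalsh(F).min():8.3f}")
```

Standard errors for a chosen design would then follow from the inverse FIM, which is the link between these criteria and the parameter-uncertainty comparison described in the abstract.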
Particle Swarm Optimization for inverse modeling of solute transport in fractured gneiss aquifer
NASA Astrophysics Data System (ADS)
Abdelaziz, Ramadan; Zambrano-Bigiarini, Mauricio
2014-08-01
Particle Swarm Optimization (PSO) has received considerable attention as a global optimization technique from scientists of different disciplines around the world. In this article, we illustrate how to use PSO for inverse modeling of a coupled flow and transport groundwater model (MODFLOW2005-MT3DMS) in a fractured gneiss aquifer. In particular, the hydroPSO R package is used as the optimization engine, because it has been specifically designed to calibrate environmental, hydrological and hydrogeological models. In addition, hydroPSO implements the latest Standard Particle Swarm Optimization algorithm (SPSO-2011), with an adaptive random topology and rotational invariance constituting the main advancements over previous PSO versions. A tracer test conducted in the experimental field at TU Bergakademie Freiberg (Germany) is used as a case study. A double-porosity approach is used to simulate the solute transport in the fractured gneiss aquifer. Tracer concentrations obtained with hydroPSO were in good agreement with the corresponding observations, as measured by a high value of the coefficient of determination and a low sum of squared residuals. Several graphical outputs automatically generated by hydroPSO provided useful insights to assess the quality of the calibration results. It was found that hydroPSO required a small number of model runs to reach the region of the global optimum, and it proved to be both an effective and efficient optimization technique to calibrate the movement of solute transport over time in a fractured aquifer. In addition, the parallel feature of hydroPSO allowed the total computation time used in the inverse modeling process to be reduced to as little as an eighth of the time required without that feature. This work provides a first attempt to demonstrate the capability and versatility of hydroPSO to work as an optimizer of a coupled flow and transport model for contaminant migration.
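hydroPSO itself is an R package, and its SPSO-2011 machinery (adaptive random topology, rotational invariance) is not reproduced here. The sketch below is only the textbook global-best PSO on a simple test function, to convey the basic mechanics referred to above; swarm size, coefficients, and the objective are generic choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):                                   # simple test objective
    return np.sum(x ** 2, axis=-1)

n_particles, n_dim, n_iter = 30, 5, 200
w, c1, c2 = 0.72, 1.49, 1.49                     # commonly used default coefficients

pos = rng.uniform(-5, 5, size=(n_particles, n_dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best value found:", round(float(sphere(gbest)), 8))
```

In a calibration setting, the objective would be the misfit between simulated and observed tracer concentrations, with one groundwater-model run per particle evaluation, which is why the parallel evaluation mentioned in the abstract matters so much.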
Development and optimization of a miniaturized fiber-optic photoplethysmographic sensor
NASA Astrophysics Data System (ADS)
Morley, Aisha; Davenport, John J.; Hickey, Michelle; Phillips, Justin P.
2017-11-01
Photoplethysmography (PPG) is a widely used technique for measuring blood oxygen saturation, commonly using an external pulse oximeter applied to a finger, toe, or earlobe. Previous research has demonstrated the utility of direct monitoring of the oxygen saturation of internal organs, using optical fibers to transmit light between the photodiode/light emitting diode and internal site. However, little research into the optimization and standardization of such a probe has yet been carried out. This research establishes the relationship between fiber separation distance and PPG signal, and between fiber core width and PPG signal. An ideal setup is suggested: 1000-μm fibers at a separation distance of 3 to 3.5 mm, which was found to produce signals around 0.35 V in amplitude with a low variation coefficient.
Adaptive Power Control for Space Communications
NASA Technical Reports Server (NTRS)
Thompson, Willie L., II; Israel, David J.
2008-01-01
This paper investigates the implementation of power control techniques for crosslink communications during a rendezvous scenario of the Crew Exploration Vehicle (CEV) and the Lunar Surface Access Module (LSAM). During the rendezvous, NASA requires that the CEV support two communication links simultaneously: space-to-ground and crosslink. The crosslink will generate excess interference to the space-to-ground link as the distance between the two vehicles decreases, if the output power is fixed and optimized for the worst-case link analysis at the maximum distance range. As a result, power control is required to maintain the optimal power level for the crosslink without interfering with the space-to-ground link. A proof of concept is described and implemented with the Goddard Space Flight Center (GSFC) Communications, Standards, and Technology Lab (CSTL).
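A back-of-the-envelope sketch of why power control is needed during closure (not NASA's actual link budget or protocol): using the standard free-space path-loss formula, the code computes the transmit power that keeps the received crosslink power at a fixed target as the inter-vehicle distance shrinks. The carrier frequency, target level, and antenna gains are assumed values.

```python
import numpy as np

c = 3.0e8
f = 2.2e9                    # assumed S-band carrier frequency (Hz)
g_tx_db = g_rx_db = 3.0      # assumed antenna gains (dBi)
p_rx_target_dbm = -90.0      # assumed required received power (dBm)

def fspl_db(d_m):            # free-space path loss in dB
    return 20 * np.log10(4 * np.pi * d_m * f / c)

for d in [10e3, 1e3, 100.0, 10.0]:   # rendezvous range closing from 10 km to 10 m
    p_tx_dbm = p_rx_target_dbm + fspl_db(d) - g_tx_db - g_rx_db
    print(f"range {d:8.0f} m -> required transmit power {p_tx_dbm:6.1f} dBm")
```

Each factor-of-ten reduction in range cuts the required transmit power by 20 dB, which quantifies how much interference margin a fixed worst-case power setting would waste at close range.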
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
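A small self-contained sketch of the string-averaging incremental subgradient idea on a toy non-smooth problem (minimizing the sum of absolute residuals of an overdetermined linear system): the data indices are split into strings, each string is processed by incremental subgradient steps starting from the common iterate, and the string end-points are averaged. Step sizes, string layout, and data are all invented for illustration, and no parallelism is used.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 60, 5
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

strings = np.array_split(rng.permutation(m), 4)     # split data indices into 4 strings

x = np.zeros(n)
for k in range(1, 201):
    step = 0.2 / k                                  # diminishing step size
    endpoints = []
    for s in strings:                               # each string starts from the common iterate
        y = x.copy()
        for i in s:                                 # incremental subgradient steps along the string
            g = np.sign(A[i] @ y - b[i]) * A[i]     # subgradient of |a_i^T y - b_i|
            y = y - step * g
        endpoints.append(y)
    x = np.mean(endpoints, axis=0)                  # string averaging

print("objective sum|Ax-b| =", round(float(np.abs(A @ x - b).sum()), 4))
print("distance to x_true  =", round(float(np.linalg.norm(x - x_true)), 4))
```

In a tomographic setting each index i would correspond to one measured ray, and the per-string loops are the parts that can run in parallel.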
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
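The Newton and quasi-Newton subproblem solvers of the paper are not reproduced here; as a simpler point of reference, the sketch below runs the classical multiplicative updates for a Kullback-Leibler nonnegative matrix factorization on a random Poisson count matrix. The tensor case alternates over factor matrices in the same spirit, and the data and rank are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
V = rng.poisson(lam=2.0, size=(40, 30)).astype(float)    # synthetic count data
rank, eps = 5, 1e-9

W = rng.random((V.shape[0], rank)) + eps
H = rng.random((rank, V.shape[1])) + eps

def kl_div(V, WH):
    return np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH)

for _ in range(200):                                     # classical multiplicative updates
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)

print("final KL divergence:", round(float(kl_div(V, W @ H)), 3))
```

Multiplicative updates are cheap per iteration but converge slowly near the solution, which is the gap that the Newton-type subproblem solvers described above are designed to close.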
Optimizing inhaler use by pharmacist-provided education to community-dwelling elderly.
Bouwmeester, Carla; Kraft, Jacqueline; Bungay, Kathleen M
2015-10-01
Objective: To assess, using a standard observational tool, the ability of patients to demonstrate and maintain proper inhaled medication administration techniques following pharmacist education. Design: Six-month observational study. Setting: Patients' homes or adult day health center. Participants: Patients in a Program for All-inclusive Care for the Elderly (PACE) prescribed one or more inhaled medications used at least once daily. Intervention: Instruction by an on-site clinical pharmacist. Main outcome measure: Hickey's Pharmacies Inhaler Technique assessment (score range: 0-20, higher better). Results: Forty-two patients were evaluated at baseline, taught proper techniques for using inhaled medications, assessed immediately following the education, and re-assessed 4-6 weeks later. The mean pre-assessment score was 14 (SD 4.5, range 0-20); the initial post-assessment score increased to 18 (SD 3, range 10-20). The second post-assessment (4-6 weeks later) mean score was 17.7 (SD 3, range 10-20). Both follow-up scores were significantly improved from baseline (p < 0.05). Multivariable analysis indicated that the strongest predictors of the second post-training score were the score after initial pharmacist training and enrollment in auto-refill. These characteristics predicted ∼70% of the variance in the second score (p < 0.001). Conclusion: These results indicate that education by a pharmacist combined with an auto-refill program can improve and sustain appropriate inhaler use by community-dwelling elders in a PACE program. The improved score was maintained 4-6 weeks later, indicating a sustained benefit of medication administration education. Optimal inhaler use ensures optimal dosing and supports appropriate inhaler treatment in lieu of oral agents. Copyright © 2015 Elsevier Ltd. All rights reserved.
Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.
Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J
2016-03-01
To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal fat volume reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
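To make the point-counting estimator concrete (a generic Cavalieri-type sketch, not the authors' CT workflow), the code below estimates the volume of a synthetic sphere from simulated slices: on each slice a regular grid of test points is overlaid, the points falling inside the cross-section are counted, and the volume is estimated as (area per point) x (slice spacing) x (total point count). Radius, grid spacing, and slice gap are arbitrary.

```python
import numpy as np

radius = 50.0          # mm, synthetic "organ"
slice_gap = 8.0        # mm between sampled slices (systematic sampling)
grid_step = 5.0        # mm spacing of the point-counting grid
area_per_point = grid_step ** 2

z_slices = np.arange(-radius + slice_gap / 2, radius, slice_gap)
xs = np.arange(-radius, radius + grid_step, grid_step)
xx, yy = np.meshgrid(xs, xs)

total_points = 0
for z in z_slices:
    r_sq = radius ** 2 - z ** 2                      # squared radius of the circular cross-section
    total_points += int(np.count_nonzero(xx ** 2 + yy ** 2 <= r_sq))

v_est = area_per_point * slice_gap * total_points    # Cavalieri / point-counting estimator
v_true = 4.0 / 3.0 * np.pi * radius ** 3
print(f"estimated volume = {v_est / 1000:.1f} cm^3, true volume = {v_true / 1000:.1f} cm^3")
```

The trade-off studied in the abstract, between sampling intensity (grid density, number of points) and estimation time, maps directly onto the grid_step and slice_gap choices in this toy.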
A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.
Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan
2015-06-01
Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion which can be used as objective function in optimal experiment design requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed form expression exists for the computation of eigenvalues of a matrix larger than a 4 by 4 one. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
Optimal Signal Processing of Frequency-Stepped CW Radar Data
NASA Technical Reports Server (NTRS)
Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.
1995-01-01
An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
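A stripped-down illustration of the two-step idea (not the paper's algorithm or its measured data): for a synthetic two-echo channel sampled at stepped frequencies, the sketch scans a grid of candidate delay pairs, solves a linear least-squares problem for the complex echo amplitudes at each pair, and keeps the pair with the smallest residual. Frequencies, delays, and noise level are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
freqs = 10e9 + 1e6 * np.arange(101)                 # 101 stepped frequencies (Hz), assumed
tau_true = np.array([10.0e-9, 11.5e-9])             # two closely spaced echo delays (s)
amp_true = np.array([1.0 + 0j, 0.6 * np.exp(1j * 0.5)])

def steering(taus):
    return np.exp(-2j * np.pi * np.outer(freqs, taus))

h_meas = (steering(tau_true) @ amp_true
          + 0.01 * (rng.normal(size=101) + 1j * rng.normal(size=101)))

taus_grid = np.arange(8.0e-9, 14.0e-9, 0.05e-9)
best = (np.inf, None)
for t1, t2 in combinations(taus_grid, 2):                # scan candidate delay pairs
    E = steering(np.array([t1, t2]))
    amps, *_ = np.linalg.lstsq(E, h_meas, rcond=None)    # amplitudes by least squares
    resid = np.linalg.norm(E @ amps - h_meas)
    if resid < best[0]:
        best = (resid, (t1, t2))

print("estimated delays (ns):", np.round(np.array(best[1]) * 1e9, 2))
```

With 100 MHz of total swept bandwidth the conventional IFFT range resolution is about 10 ns, so resolving the 1.5 ns spacing in this toy is exactly the kind of model-based resolution enhancement the abstract claims over the IFFT approach.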
Optimal Signal Processing of Frequency-Stepped CW Radar Data
NASA Technical Reports Server (NTRS)
Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.
1995-01-01
An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
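A toy sketch of the core idea, under strong simplifying assumptions: treat the network as a tree, discretize each nodal variable onto an interval grid, and run min-sum dynamic programming from the leaves to the root. Real OPF couples voltages, flows, and injections; here each node carries a single scalar and the node and edge costs are invented for illustration.

```python
import numpy as np

children = {0: [1, 2], 1: [3], 2: [], 3: []}   # a small tree rooted at node 0
levels = np.linspace(0.95, 1.05, 21)           # discretized nodal variable (e.g. voltage, p.u.)

def node_cost(n, v):
    return (v - 1.0) ** 2                      # placeholder local (generation/penalty) cost

def edge_cost(vp, vc):
    return 10.0 * (vp - vc) ** 2               # placeholder coupling cost along a line

def cost_to_go(n):
    # Returns, for each discretized value of node n, the best achievable cost of its subtree.
    cost = np.array([node_cost(n, v) for v in levels])
    for c in children[n]:
        child = cost_to_go(c)
        # Min-sum message: for each parent value, pick the best child value.
        cost += np.array([min(child[j] + edge_cost(vp, levels[j])
                              for j in range(levels.size)) for vp in levels])
    return cost

root = cost_to_go(0)
print("optimal root value:", round(levels[root.argmin()], 3),
      "total cost:", round(root.min(), 4))
```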
Monosomy 3 by FISH in uveal melanoma: variability in techniques and results.
Aronow, Mary; Sun, Yang; Saunthararajah, Yogen; Biscotti, Charles; Tubbs, Raymond; Triozzi, Pierre; Singh, Arun D
2012-09-01
Tumor monosomy 3 confers a poor prognosis in patients with uveal melanoma. We critically review the techniques used for fluorescence in situ hybridization (FISH) detection of monosomy 3 in order to assess variability in practice patterns and to explain differences in results. Significant variability that has likely affected reported results was found in tissue sampling methods, selection of FISH probes, number of cells counted, and the cut-off point used to determine monosomy 3 status. Clinical parameters and specific techniques employed to report FISH results should be specified so as to allow meta-analysis of published studies. FISH-based detection of monosomy 3 in uveal melanoma has not been performed in a standardized manner, which limits conclusions regarding its clinical utility. FISH is a widely available, versatile technology, and when performed optimally has the potential to be a valuable tool for determining the prognosis of uveal melanoma. Copyright © 2012 Elsevier Inc. All rights reserved.
Stochastic Optical Reconstruction Microscopy (STORM).
Xu, Jianquan; Ma, Hongqiang; Liu, Yang
2017-07-05
Super-resolution (SR) fluorescence microscopy, a class of optical microscopy techniques at a spatial resolution below the diffraction limit, has revolutionized the way we study biology, as recognized by the Nobel Prize in Chemistry in 2014. Stochastic optical reconstruction microscopy (STORM), a widely used SR technique, is based on the principle of single molecule localization. STORM routinely achieves a spatial resolution of 20 to 30 nm, a ten-fold improvement compared to conventional optical microscopy. Among all SR techniques, STORM offers a high spatial resolution with simple optical instrumentation and standard organic fluorescent dyes, but it is also prone to image artifacts and degraded image resolution due to improper sample preparation or imaging conditions. It requires careful optimization of all three aspects (sample preparation, image acquisition, and image reconstruction) to ensure a high-quality STORM image, which will be extensively discussed in this unit. Copyright © 2017 John Wiley & Sons, Inc.
Jones, Siana; Shun-Shin, Matthew J; Cole, Graham D; Sau, Arunashis; March, Katherine; Williams, Suzanne; Kyriacou, Andreas; Hughes, Alun D; Mayet, Jamil; Frenneaux, Michael; Manisty, Charlotte H; Whinnett, Zachary I; Francis, Darrel P
2014-04-01
Full-disclosure study describing Doppler patterns during iterative atrioventricular delay (AVD) optimization of biventricular pacemakers (cardiac resynchronization therapy, CRT). Doppler traces of the first 50 eligible patients undergoing iterative Doppler AVD optimization in the BRAVO trial were examined. Three experienced observers classified conformity to guideline-described patterns. Each observer then selected the optimum AVD on two separate occasions: blinded and unblinded to AVD. Four Doppler E-A patterns occurred: A (always merged, 18% of patients), B (incrementally less fusion at short AVDs, 12%), C (full separation at short AVDs, as described by the guidelines, 28%), and D (always separated, 42%). In Groups A and D (60%), the iterative guidelines therefore cannot specify one single AVD. On the kappa scale (0 = chance alone; 1 = perfect agreement), observer agreement for the ideal AVD in Classes B and C was poor (0.32) and appeared worse in Groups A and D (0.22). Blinding caused the scatter of the AVDs selected as optimal to widen (standard deviation rising from 37 to 49 ms, P < 0.001). With blinding, 28% of the selected optimum AVDs were ≤60 or ≥200 ms. All 50 Doppler datasets are presented, to support future methodological testing. In most patients, the iterative method does not clearly specify one AVD. In all the patients, agreement on the ideal AVD between skilled observers viewing identical images is poor. The iterative protocol may successfully exclude some extremely unsuitable AVDs, but so might simply accepting the factory default. Irreproducibility of the gold standard also prevents alternative physiological optimization methods from being validated honestly.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet the requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
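The binomial reasoning behind the 29-flaw point estimate can be reproduced in a few lines. The POD curve and flaw sizes below are assumed for illustration; the only facts carried over from the abstract are the 29/29 pass criterion and the link between 0.9^29 < 0.05 and the 90/95 interpretation.

```python
import numpy as np

def pod(a, a50=0.04, slope=60.0):
    # Assumed logistic POD-vs-flaw-size curve (illustrative only).
    return 1.0 / (1.0 + np.exp(-slope * (a - a50)))

flaw_sizes = np.full(29, 0.06)        # 29 nominally equal flaw sizes (inches, assumed)
ppd = np.prod(pod(flaw_sizes))        # probability of passing a 29/29 demonstration
print("PPD for this flaw set:", round(float(ppd), 4))

# The classical 90/95 check: if the true POD at the demonstrated size were only 0.90,
# a 29/29 outcome would occur less than 5% of the time.
print("0.9**29 =", round(0.9 ** 29, 4), "(< 0.05, hence 95% confidence)")
```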
The aggregated unfitted finite element method for elliptic problems
NASA Astrophysics Data System (ADS)
Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.
2018-07-01
Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.
Trajectory optimization for dynamic couch rotation during volumetric modulated arc radiotherapy
NASA Astrophysics Data System (ADS)
Smyth, Gregory; Bamber, Jeffrey C.; Evans, Philip M.; Bedford, James L.
2013-11-01
Non-coplanar radiation beams are often used in three-dimensional conformal and intensity modulated radiotherapy to reduce dose to organs at risk (OAR) by geometric avoidance. In volumetric modulated arc radiotherapy (VMAT) non-coplanar geometries are generally achieved by applying patient couch rotations to single or multiple full or partial arcs. This paper presents a trajectory optimization method for a non-coplanar technique, dynamic couch rotation during VMAT (DCR-VMAT), which combines ray tracing with a graph search algorithm. Four clinical test cases (partial breast, brain, prostate only, and prostate and pelvic nodes) were used to evaluate the potential OAR sparing for trajectory-optimized DCR-VMAT plans, compared with standard coplanar VMAT. In each case, ray tracing was performed and a cost map reflecting the number of OAR voxels intersected for each potential source position was generated. The least-cost path through the cost map, corresponding to an optimal DCR-VMAT trajectory, was determined using Dijkstra’s algorithm. Results show that trajectory optimization can reduce dose to specified OARs for plans otherwise comparable to conventional coplanar VMAT techniques. For the partial breast case, the mean heart dose was reduced by 53%. In the brain case, the maximum lens doses were reduced by 61% (left) and 77% (right) and the globes by 37% (left) and 40% (right). Bowel mean dose was reduced by 15% in the prostate only case. For the prostate and pelvic nodes case, the bowel V50 Gy and V60 Gy were reduced by 9% and 45% respectively. Future work will involve further development of the algorithm and assessment of its performance over a larger number of cases in site-specific cohorts.
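A minimal sketch of the graph-search step, assuming a cost map indexed by (control point, couch angle): Dijkstra's algorithm returns the least-cost couch trajectory subject to a simple couch-speed limit. The random cost values stand in for the ray-traced OAR-intersection counts used in the paper.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)
n_cp, n_couch = 90, 37                    # control points along the arc, candidate couch angles
cost_map = rng.random((n_cp, n_couch))    # placeholder OAR-intersection costs

def best_trajectory(cost_map, max_step=1):
    n_cp, n_couch = cost_map.shape
    dist = np.full((n_cp, n_couch), np.inf)
    prev = {}
    pq = []
    for j in range(n_couch):                       # any starting couch angle is allowed
        dist[0, j] = cost_map[0, j]
        heapq.heappush(pq, (dist[0, j], 0, j))
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > dist[i, j] or i == n_cp - 1:
            continue
        for dj in range(-max_step, max_step + 1):  # limit couch motion between control points
            nj = j + dj
            if 0 <= nj < n_couch:
                nd = d + cost_map[i + 1, nj]
                if nd < dist[i + 1, nj]:
                    dist[i + 1, nj] = nd
                    prev[(i + 1, nj)] = (i, j)
                    heapq.heappush(pq, (nd, i + 1, nj))
    # Trace back the least-cost trajectory from the best final couch angle.
    j = int(dist[-1].argmin())
    path = [(n_cp - 1, j)]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return path[::-1], dist[-1, j]

path, total = best_trajectory(cost_map)
print("trajectory length:", len(path), "total cost:", round(float(total), 3))
```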
Performance of Grey Wolf Optimizer on large scale problems
NASA Astrophysics Data System (ADS)
Gupta, Shubham; Deep, Kusum
2017-01-01
Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, including real-life problems where conventional techniques cannot be applied. The Grey Wolf Optimizer is one such technique that has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on five common scalable problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, with the exception of Rosenbrock, which is a unimodal function.
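A concise Grey Wolf Optimizer sketch on one of the scalable benchmarks named above (the Sphere function). The population size, iteration count, and dimension are illustrative choices, not the paper's experimental settings.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def gwo(obj, dim=50, n_wolves=30, n_iter=500, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([obj(x) for x in X])
        order = fitness.argsort()
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 - 2.0 * t / n_iter                 # 'a' decreases linearly from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random(dim) - 1.0)
                C = 2.0 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                new += leader - A * D              # pull toward each of the three leaders
            X[i] = np.clip(new / 3.0, lb, ub)      # position is the average of the three guides
    fitness = np.array([obj(x) for x in X])
    return X[fitness.argmin()], fitness.min()

best_x, best_f = gwo(sphere)
print("best Sphere value:", best_f)
```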
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
Improving cerebellar segmentation with statistical fusion
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.
2016-03-01
The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
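The approach of letting the optimizer determine all constants, including the exponential time constants, can be sketched with a general-purpose nonlinear least-squares routine. The synthetic relaxation data, two-term series, and starting guesses below are illustrative; the report's PRONY program and the VMA Engineering tool are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.logspace(-2, 4, 80)                                  # time points (s), assumed
g_true = 2.0 + 5.0 * np.exp(-t / 0.5) + 3.0 * np.exp(-t / 50.0)
data = g_true * (1.0 + 0.01 * rng.standard_normal(t.size))  # synthetic "measurements"

def prony(params, t):
    g_inf, rest = params[0], params[1:]
    gi, tau = rest[0::2], rest[1::2]                        # term magnitudes and time constants
    return g_inf + sum(g * np.exp(-t / tt) for g, tt in zip(gi, tau))

def residuals(params):
    return prony(params, t) - data

# Two-term series: [g_inf, g1, tau1, g2, tau2]; bounds keep every constant positive,
# and the tau values are optimized rather than being assumed up front.
p0 = np.array([1.0, 1.0, 1.0, 1.0, 100.0])
fit = least_squares(residuals, p0, bounds=(1e-6, np.inf))
print("fitted constants [g_inf, g1, tau1, g2, tau2]:", np.round(fit.x, 3))
```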
NASA Astrophysics Data System (ADS)
Zhou, Sheng; Han, Yanling; Li, Bincheng
2018-02-01
Nitric oxide (NO) in exhaled breath has gained increasing interest in recent years, mainly driven by the clinical need to monitor inflammatory status in respiratory disorders, such as asthma and other pulmonary conditions. Mid-infrared cavity ring-down spectroscopy (CRDS) using an external cavity, widely tunable continuous-wave quantum cascade laser operating at 5.3 µm was employed for NO detection. The detection pressure was reduced in steps to improve the sensitivity, and the optimal pressure was determined to be 15 kPa based on the fitting residual analysis of measured absorption spectra. A detection limit (1σ, or one standard deviation) of 0.41 ppb was experimentally achieved for NO detection in human breath under the optimized condition in a total of 60 s acquisition time (2 s per data point). A diurnal measurement session was conducted for exhaled NO. The experimental results indicated that the mid-infrared CRDS technique has great potential for various applications in health diagnosis.
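A hedged sketch of the basic CRDS data-reduction step: fit an exponential decay to a ring-down transient to extract the ring-down time, from which an absorption coefficient follows by comparison with the empty-cavity value. All decay constants, the noise level, and the empty-cavity ring-down time are assumed numbers, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
c = 2.998e8                                     # speed of light (m/s)
t = np.linspace(0.0, 50e-6, 500)                # time axis (s)
tau_true, tau_empty = 8.0e-6, 10.0e-6           # assumed ring-down times with/without absorber
signal = np.exp(-t / tau_true) + 0.005 * rng.standard_normal(t.size)

def ringdown(t, a, tau, offset):
    return a * np.exp(-t / tau) + offset

popt, _ = curve_fit(ringdown, t, signal, p0=(1.0, 5e-6, 0.0))
tau_fit = popt[1]
alpha = (1.0 / tau_fit - 1.0 / tau_empty) / c   # absorption coefficient (1/m)
print("fitted ring-down time (us):", round(tau_fit * 1e6, 3))
print("absorption coefficient (1/m):", alpha)
```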
NASA Astrophysics Data System (ADS)
Sun, Ning; Wu, Yiming; Chen, He; Fang, Yongchun
2018-03-01
Underactuated cranes play an important role in modern industry. Specifically, in most practical applications, crane systems exhibit significant double pendulum characteristics, which makes the control problem quite challenging. Moreover, most existing planners/controllers obtained with standard methods/techniques for double pendulum cranes cannot minimize the energy consumption when fulfilling the transportation tasks. Therefore, from a practical perspective, this paper proposes an energy-optimal solution for transportation control of double pendulum cranes. By applying the presented approach, the transportation objective, including fast trolley positioning and swing elimination, is achieved with minimized energy consumption, and the residual oscillations are suppressed effectively with all the state constraints being satisfied during the entire transportation process. As far as we know, this is the first energy-optimal solution for transportation control of underactuated double pendulum cranes with various state and control constraints. Hardware experimental results are included to verify the effectiveness of the proposed approach, whose superior performance is reflected by experimental comparison with several comparative controllers.
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff
1992-01-01
The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapert, M.; Glaser, S. J.; Assémat, E.
We show to which extent the signal to noise ratio per unit time of a spin 1/2 particle can be maximized. We consider a cyclic repetition of experiments made of a measurement followed by a radio-frequency magnetic field excitation of the system, in the case of unbounded amplitude. In the periodic regime, the objective of the control problem is to design the initial state of the system and the pulse sequence which leads to the best signal to noise performance. We focus on two specific issues relevant in nuclear magnetic resonance, the crusher gradient and the radiation damping cases. Optimal control techniques are used to solve this non-standard control problem. We discuss the optimality of the Ernst angle solution, which is commonly applied in spectroscopic and medical imaging applications. In the radiation damping situation, we show that in some cases, the optimal solution differs from the Ernst one.
Mulder, Samuel A; Wunsch, Donald C
2003-01-01
The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
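A down-scaled sketch of the divide-and-conquer idea: cluster the cities, improve each cluster tour with a local search, and stitch the clusters together. For brevity, k-means stands in for the ART clustering and plain 2-opt stands in for Lin-Kernighan, so this is only a structural illustration of the paper's approach, not a reimplementation of it.

```python
import numpy as np

rng = np.random.default_rng(0)
cities = rng.random((300, 2))

def kmeans(points, k=6, iters=20):
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

def tour_length(pts, order):
    p = pts[order]
    return np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1).sum()

def two_opt(pts, order):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 2):
            for j in range(i + 1, len(order) - 1):
                new = np.concatenate([order[:i], order[i:j + 1][::-1], order[j + 1:]])
                if tour_length(pts, new) < tour_length(pts, order):
                    order, improved = new, True
    return order

labels, centers = kmeans(cities)
# Visit clusters in a nearest-neighbour order of their centroids, then chain the
# locally optimized sub-tours into one global tour.
cluster_order = [0]
while len(cluster_order) < len(centers):
    last = centers[cluster_order[-1]]
    rest = [j for j in range(len(centers)) if j not in cluster_order]
    cluster_order.append(min(rest, key=lambda j: np.linalg.norm(centers[j] - last)))

tour = []
for j in cluster_order:
    idx = np.where(labels == j)[0]
    tour.extend(idx[two_opt(cities[idx], np.arange(len(idx)))])
print("tour length:", round(tour_length(cities, np.array(tour)), 2))
```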
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Optimization of freeform surfaces using intelligent deformation techniques for LED applications
NASA Astrophysics Data System (ADS)
Isaac, Annie Shalom; Neumann, Cornelius
2018-04-01
For many years, optical designers have had great interest in designing efficient optimization algorithms that bring significant improvement to their initial designs. However, optimization is limited by the large number of parameters present in Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid. The vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no direct relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, which is a problem faced by any optimization technique that creates freeform surfaces. Therefore, this research addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street lighting lens and a stop lamp for automobiles.
Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu
2017-05-01
This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.
Fahimian, Benjamin; Yu, Victoria; Horst, Kathleen; Xing, Lei; Hristov, Dimitre
2013-12-01
External beam radiation therapy (EBRT) provides a non-invasive treatment alternative for accelerated partial breast irradiation (APBI), however, limitations in achievable dose conformity of current EBRT techniques have been correlated to reported toxicity. To enhance the conformity of EBRT APBI, a technique for conventional LINACs is developed, which through combined motion of the couch, intensity modulated delivery, and a prone breast setup, enables wide-angular coronal arc irradiation of the ipsilateral breast without irradiating through the thorax and contralateral breast. A couch trajectory optimization technique was developed to determine the trajectories that concurrently avoid collision with the LINAC and maintain the target within the MLC apertures. Inverse treatment planning was performed along the derived trajectory. The technique was experimentally implemented by programming the Varian TrueBeam™ STx in Developer Mode. The dosimetric accuracy of the delivery was evaluated by ion chamber and film measurements in phantom. The resulting optimized trajectory was shown to be necessarily non-isocentric, and contain both translation and rotations of the couch. Film measurements resulted in 93% of the points in the measured two-dimensional dose maps passing the 3%/3mm Gamma criterion. Preliminary treatment plan comparison to 5-field 3D-conformal, IMRT, and VMAT demonstrated enhancement in conformity, and reduction of the normal tissue V50% and V100% parameters that have been correlated with EBRT toxicity. The feasibility of wide-angular intensity modulated partial breast irradiation using motion of the couch has been demonstrated experimentally on a standard LINAC for the first time. For patients eligible for a prone setup, the technique may enable improvement of dose conformity and associated dose-volume parameters correlated with toxicity. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Liquifying PLDLLA Anchor Fixation in Achilles Reconstruction for Insertional Tendinopathy.
Boden, Stephanie A; Boden, Allison L; Mignemi, Danielle; Bariteau, Jason T
2018-04-01
Insertional Achilles tendinopathy (IAT) is a frequent cause of posterior heel pain and is often associated with Haglund's deformity. Surgical correction for refractory cases of IAT has been well studied; however, the method of tendon fixation to bone in these procedures remains controversial, and to date, no standard technique has been identified for tendon fixation in these surgeries. Often, after Haglund's resection, there is large exposed cancellous surface for Achilles reattachment, which may require unique fixation to optimize outcomes. Previous studies have consistently demonstrated improved patient outcomes after Achilles tendon reconstruction with early rehabilitation with protected weight bearing, evidencing the need for a strong and stable anchoring of the Achilles tendon that allows early weight bearing without tendon morbidity. In this report, we highlight the design, biomechanics, and surgical technique of Achilles tendon reconstruction with Haglund's deformity using a novel technique that utilizes ultrasonic energy to liquefy the suture anchor, allowing it to incorporate into surrounding bone. Biomechanical studies have demonstrated superior strength of the suture anchor utilizing this novel technique as compared with prior techniques. However, future research is needed to ensure that outcomes of this technique are favorable when compared with outcomes using traditional suture anchoring methods. Level V: Operative technique.
Fukao, Mari; Kawamoto, Kiyosumi; Matsuzawa, Hiroaki; Honda, Osamu; Iwaki, Takeshi; Doi, Tsukasa
2015-01-01
We aimed to optimize the exposure conditions in the acquisition of soft-tissue images using dual-energy subtraction chest radiography with a direct-conversion flat-panel detector system. Two separate chest images were acquired at high- and low-energy exposures with standard or thick chest phantoms. The high-energy exposure was fixed at 120 kVp with the use of an auto-exposure control technique. For the low-energy exposure, the tube voltages and entrance surface doses ranged 40-80 kVp and 20-100 % of the dose required for high-energy exposure, respectively. Further, a repetitive processing algorithm was used for reduction of the image noise generated by the subtraction process. Seven radiology technicians ranked soft-tissue images, and these results were analyzed using the normalized-rank method. Images acquired at 60 kVp were of acceptable quality regardless of the entrance surface dose and phantom size. Using a repetitive processing algorithm, the minimum acceptable doses were reduced from 75 to 40 % for the standard phantom and to 50 % for the thick phantom. We determined that the optimum low-energy exposure was 60 kVp at 50 % of the dose required for the high-energy exposure. This allowed the simultaneous acquisition of standard radiographs and soft-tissue images at 1.5 times the dose required for a standard radiograph, which is significantly lower than the values reported previously.
TH-A-BRF-05: MRI of Individual Lymph Nodes to Guide Regional Breast Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heijst, T van; Asselen, B van; Lagendijk, J
2014-06-15
Purpose: In regional radiotherapy (RT) for breast-cancer patients, direct visualization of individual lymph nodes (LNs) may reduce target volumes and result in lower toxicity (i.e. reduced radiation pneumonitis, arm edema, arm morbidity), relative to standard CT-based delineations. To this end, newly designed magnetic resonance imaging (MRI) sequences were optimized and assessed qualitatively and quantitatively. Methods: In ten healthy female volunteers, a scanning protocol was developed and optimized. Coronal images were acquired in supine RT position on a wedge board on a 1.5 T Ingenia (Philips) wide-bore MRI. In four volunteers the optimized MRI protocol was applied, including a 3-dimensional (3D) T1-weighted (T1w) fast-field-echo (FFE). T2w sequences, including 3D FFE, 3D and 2D fast spin echo (FSE), and diffusion-weighted single-shot echo-planar imaging (DWI), were also performed. Several fat-suppression techniques were used. Qualitative evaluation parameters included LN contrast, motion susceptibility, visibility of anatomical structures, and fat suppression. The number of visible axillary and supraclavicular LNs was also determined. Results: T1 FFE, insensitive to motion, lacked contrast of LNs, which often blended in with soft tissue and blood. T2 FFE showed high contrast, but some LNs were obscured due to motion. Both 2D and 3D FSE were motion-insensitive and had high contrast, although some blood remained visible. 2D FSE showed more anatomical details, while in 3D FSE some blurring occurred. DWI showed high LN contrast, but suffered from geometric distortions and low resolution. Fat suppression by mDixon was the most reliable in regions with magnetic-field inhomogeneities. The FSE sequences showed the highest sensitivity for LN detection. Conclusion: MRI of regional LNs was achieved in volunteers. The FSE techniques were robust and the most sensitive. Our optimized MRI sequences can facilitate direct delineation of individual LNs. This can result in smaller target volumes and reduced toxicity in regional RT compared to standard CT planning.
Optimized radiation-hardened erbium doped fiber amplifiers for long space missions
NASA Astrophysics Data System (ADS)
Ladaci, A.; Girard, S.; Mescia, L.; Robin, T.; Laurent, A.; Cadier, B.; Boutillier, M.; Ouerdane, Y.; Boukenter, A.
2017-04-01
In this work, we developed and exploited simulation tools to optimize the performance of rare earth doped fiber amplifiers (REDFAs) for space missions. To describe these systems, a state-of-the-art model based on the rate equations and the particle swarm optimization technique is developed, in which we also consider the main radiation effect on REDFAs: the radiation induced attenuation (RIA). After the validation of this tool set by confrontation between theoretical and experimental results, we investigate how the deleterious radiation effects on the amplifier performance can be mitigated by following adequate strategies to conceive the REDFA architecture. The tool set was validated by comparing the calculated Erbium-doped fiber amplifier (EDFA) gain degradation under X-rays at ~300 krad(SiO2) with the corresponding experimental results. Two versions of the same fibers were used in this work, a standard optical fiber and a radiation hardened fiber, obtained by loading the previous fiber with hydrogen gas. Based on these fibers, standard and radiation hardened EDFAs were manufactured and tested in different operating configurations, and the obtained data were compared with simulation results for the same EDFA structure and fiber properties. This comparison reveals a good agreement between simulated gain and experimental data (<10% maximum error for the highest doses). Compared to our previous results obtained on Er/Yb amplifiers, these results reveal the importance of the photo-bleaching mechanism competing with the RIA, which cannot be neglected when modeling the radiation-induced gain degradation of EDFAs. This implies measuring, under representative conditions, the RIA at the pump and signal wavelengths, which is then used as an input parameter for the simulation. The validated numerical codes have then been used to evaluate the potential of some EDFA architecture evolutions on the amplifier performance during the space mission. Optimization of both the fiber length and the EDFA pumping scheme allows us to strongly reduce its radiation vulnerability in terms of gain. The presented approach is a complementary and effective tool for hardening-by-device techniques and opens new perspectives for the applications of REDFAs and lasers in harsh environments.
Optimization and standardization of pavement management processes.
DOT National Transportation Integrated Search
2004-08-01
This report addresses issues related to optimization and standardization of current pavement management processes in Kentucky. Historical pavement management records were analyzed, which indicates that standardization is necessary in future pavement ...
NASA Astrophysics Data System (ADS)
Llopis-Albert, C.; Peña-Haro, S.; Pulido-Velazquez, M.; Molina, J.
2012-04-01
Water quality management is complex due to the inter-relations between socio-political, environmental and economic constraints and objectives. In order to choose an appropriate policy to reduce nitrate pollution in groundwater it is necessary to consider different objectives, often in conflict. In this paper, a hydro-economic modeling framework, based on a non-linear optimization technique (CONOPT), which embeds simulation of groundwater mass transport through concentration response matrices, is used to study optimal policies for groundwater nitrate pollution control under different objectives and constraints. Three objectives were considered: recovery time (for meeting the environmental standards, as required by the EU Water Framework Directive and Groundwater Directive), maximum nitrate concentration in groundwater, and net benefits in agriculture. Another criterion was added: the reliability of meeting the nitrate concentration standards. The approach allows deriving the trade-offs between the reliability of meeting the standard, the net benefits from agricultural production and the recovery time. Two different policies were considered: spatially distributed fertilizer standards or quotas (obtained through multi-objective optimization) and fertilizer prices. The multi-objective analysis allows comparison of the achievement of the different policies, the Pareto fronts (or efficiency frontiers), and the trade-offs for the set of mutually conflicting objectives. The constraint method is applied to generate the set of non-dominated solutions. The multi-objective framework can be used to design groundwater management policies taking into consideration different stakeholders' interests (e.g., policy makers, farmers or environmental groups). The methodology was applied to the El Salobral-Los Llanos aquifer in Spain. Over the past 30 years the area has undergone significant socioeconomic development, mainly due to the intensive groundwater use for irrigated crops, which has provoked a steady decline of groundwater levels as well as high nitrate concentrations at certain locations (above 50 mg/l). The results showed the usefulness of this multi-objective hydro-economic approach for designing sustainable nitrate pollution control policies (such as fertilizer quotas or efficient fertilizer pricing policies) with insight into the economic cost of satisfying the environmental constraints and the trade-offs with different time horizons.
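The constraint (epsilon-constraint) method for tracing the non-dominated set can be illustrated on a deliberately small stand-in problem: maximize an invented agricultural net-benefit function of fertilizer application subject to a cap on a simple nitrate proxy, sweeping the cap to obtain the trade-off curve. The functional forms and numbers are placeholders, not the calibrated hydro-economic model used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def net_benefit(x):                     # x: fertilizer rates in two zones (kg/ha)
    return 40.0 * np.sqrt(x[0]) + 55.0 * np.sqrt(x[1]) - 0.5 * (x[0] + x[1])

def nitrate(x):                         # crude linear leaching proxy (mg/l), assumed
    return 0.15 * x[0] + 0.25 * x[1]

front = []
for cap in np.linspace(10.0, 60.0, 11):           # nitrate caps (mg/l), incl. the 50 mg/l standard
    res = minimize(lambda x: -net_benefit(x), x0=[50.0, 50.0],
                   bounds=[(0.0, 300.0)] * 2, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x, c=cap: c - nitrate(x)}])
    front.append((cap, -res.fun, res.x))

for cap, benefit, x in front:
    print(f"cap {cap:5.1f} mg/l  benefit {benefit:7.1f}  rates {np.round(x, 1)}")
```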
[Optimizing primary total hip replacement--a technique to effect saving of manpower].
Huber, J F; Rink, M; Broger, I; Zumstein, M; Ruflin, G B
2003-01-01
Development of a standardized surgical technique for total hip replacement that saves manpower (one assistant) by using a retractor system. Total hip replacement is performed with the patient in a true lateral position on a tunnel cushion. By means of a direct lateral approach the pelvitrochanteric muscles are partially detached using an omega-shaped cut. The Bookwalter retractor is fixed dorsally on the operating table. The ring is centered keeping the greater trochanter in the middle. The Hohmann retractors are fixed to the ring to sufficiently expose the acetabulum. To insert the femoral stem the ring needs to be opened dorsally and the patient's leg is bent 90 degrees at the hip and the knee over the tunnel cushion. The muscles inserting at the greater trochanter are retracted by a separate Hohmann retractor with a weight. In a case-control study with matched pairs, the patients treated with this technique were compared with those treated in supine position with the transgluteal approach. The number of assistants required and the operating time were assessed. All the hip replacements with the patient in side position were performed with one assistant, and those in supine position with two assistants. The operating time did not differ significantly (supine position 110 min/side position 112 min). The complication rate in both groups was comparable (one secondary wound healing, one transient ischialgia). The process of total hip replacement can thus be optimized. The described technique allows one surgical assistant to be spared without prolonging the operating time.
Hernandez, Wilmar
2007-01-01
In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. A comparison between classical filters and optimal filters for automotive sensors is presented, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is illustrated through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
Where are aphasia theory and management "headed"?
Tippett, Donna C; Hillis, Argye E
2017-01-01
The sequelae of post-stroke aphasia are considerable, necessitating an understanding of the functional neuroanatomy of language, cognitive processes underlying various language tasks, and the mechanisms of recovery after stroke. This knowledge is vital in providing optimal care of individuals with aphasia and counseling to their families and caregivers. The standard of care in the rehabilitation of aphasia dictates that treatment be evidence-based and person-centered. Promising techniques, such as cortical stimulation as an adjunct to behavioral therapy, are just beginning to be explored. These topics are discussed in this review.
NASA Astrophysics Data System (ADS)
Hicks-Jalali, Shannon; Sica, R. J.; Haefele, Alexander; Martucci, Giovanni
2018-04-01
With only 50% downtime from 2007-2016, the RALMO lidar in Payerne, Switzerland, has one of the largest continuous lidar data sets available. These measurements will be used to produce an extensive lidar water vapour climatology using the Optimal Estimation Method introduced by Sica and Haefele (2016). We will compare our improved technique for external calibration using radiosonde trajectories with the standard external methods, and present the evolution of the lidar constant from 2007 to 2016.
Design of WLAN microstrip antenna for 5.17 - 5.835 GHz
NASA Astrophysics Data System (ADS)
Bugaj, Jarosław; Bugaj, Marek; Wnuk, Marian
2017-04-01
This paper presents the design of a miniaturized WLAN antenna made in microstrip technique, working at a frequency of 5.17-5.835 GHz in the IEEE 802.11ac standard. This dual-layer antenna is designed on a Rogers Corporation RT/duroid 5870 substrate with a dielectric constant of 2.33 and a thickness of 3.175 mm. The antenna parameters such as return loss, VSWR, gain and directivity are simulated and optimized using the commercial Computer Simulation Technology Microwave Studio (CST MWS) package. The paper presents the results of the numerical analysis discussed.
How to mathematically optimize drug regimens using optimal control.
Moore, Helen
2018-02-01
This article gives an overview of a technique called optimal control, which is used to optimize real-world quantities represented by mathematical models. I include background information about the historical development of the technique and applications in a variety of fields. The main focus here is the application to diseases and therapies, particularly the optimization of combination therapies, and I highlight several such examples. I also describe the basic theory of optimal control, and illustrate each of the steps with an example that optimizes the doses in a combination regimen for leukemia. References are provided for more complex cases. The article is aimed at modelers working in drug development, who have not used optimal control previously. My goal is to make this technique more accessible in the biopharma community.
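A heavily simplified illustration of the workflow the article describes: pose a small tumor-dynamics model, define an objective trading tumor burden against cumulative dose, and optimize a discretized dose schedule. This direct-transcription sketch with invented parameters stands in for the article's Pontryagin-style optimal control of a calibrated leukemia model.

```python
import numpy as np
from scipy.optimize import minimize

T, n_steps = 28.0, 28                  # 28 days of daily dose decisions (illustrative)
dt = T / n_steps
growth, kill = 0.08, 0.06              # assumed tumor growth and per-unit-dose kill rates

def simulate(doses, n0=1.0):
    n, burden = n0, 0.0
    for u in doses:                    # forward Euler on dN/dt = (growth - kill * u) * N
        n += dt * (growth - kill * u) * n
        burden += dt * n
    return n, burden

def objective(doses, dose_weight=0.05):
    n_final, burden = simulate(doses)
    return burden + dose_weight * dt * np.sum(doses ** 2)   # burden plus dose penalty

res = minimize(objective, x0=np.ones(n_steps), bounds=[(0.0, 2.0)] * n_steps,
               method="L-BFGS-B")
print("optimized daily doses:", np.round(res.x, 2))
print("final tumor size:", round(simulate(res.x)[0], 3))
```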
Optimization techniques applied to passive measures for in-orbit spacecraft survivability
NASA Technical Reports Server (NTRS)
Mog, Robert A.; Price, D. Marvin
1987-01-01
'Optimization techniques applied to passive measures for in-orbit spacecraft survivability' is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impact problems at the lowest weight and cost.
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
An improved method of fitting a Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques. The method yields closer correlation with the data than the traditional method. It involves no assumptions regarding the exponential constants (the γ'_i terms) and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially-constrained-optimization techniques.
NASA Astrophysics Data System (ADS)
Echer, L.; Marczak, R. J.
2018-02-01
The objective of the present work is to introduce a methodology capable of modelling welded components for structural stress analysis. The modelling technique was based on the recommendations of the International Institute of Welding; however, some geometrical features of the weld fillet were used as design parameters in an optimization problem. Namely, the weld leg length and thickness of the shell elements representing the weld fillet were optimized in such a way that the first natural frequencies were not changed significantly when compared to a reference result. Sequential linear programming was performed for T-joint structures corresponding to two different structural details: with and without full penetration weld fillets. Both structural details were tested in scenarios of various plate thicknesses and depths. Once the optimal parameters were found, a modelling procedure was proposed for T-shaped components. Furthermore, the proposed modelling technique was extended for overlapped welded joints. The results obtained were compared to well-established methodologies presented in standards and in the literature. The comparisons included results for natural frequencies, total mass and structural stress. By these comparisons, it was observed that some established practices produce significant errors in the overall stiffness and inertia. The methodology proposed herein does not share this issue and can be easily extended to other types of structure.
An incremental database access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, Nicholas; Sellis, Timos
1994-01-01
We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
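The query-feedback idea can be sketched as a small loop: after each (simulated) query, record the predicate value and the observed selectivity, and periodically re-fit a low-order polynomial so later estimates use the refined curve. The workload and hidden selectivity function below are fabricated for illustration; the paper itself also considers spline regression.

```python
import numpy as np

rng = np.random.default_rng(3)

def true_selectivity(v):               # hidden data distribution the optimizer must learn
    return np.clip(0.9 * np.exp(-((v - 40.0) / 25.0) ** 2), 0.0, 1.0)

feedback = []                          # (predicate value, observed selectivity) pairs
coeffs = np.array([0.5])               # initial flat estimate

def estimate(v):
    return float(np.clip(np.polyval(coeffs, v), 0.0, 1.0))

for step in range(200):                # simulated query stream providing feedback
    v = rng.uniform(0.0, 100.0)
    observed = true_selectivity(v) + 0.02 * rng.standard_normal()
    feedback.append((v, float(np.clip(observed, 0.0, 1.0))))
    if len(feedback) >= 10:            # re-fit once enough feedback has accumulated
        xs, ys = np.array(feedback).T
        coeffs = np.polyfit(xs, ys, deg=3)

for v in (20.0, 40.0, 60.0):
    print(f"value {v:5.1f}: estimated {estimate(v):.2f}  actual {true_selectivity(v):.2f}")
```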
Pisarska, Margareta D; Akhlaghpour, Marzieh; Lee, Bora; Barlow, Gillian M; Xu, Ning; Wang, Erica T; Mackey, Aaron J; Farber, Charles R; Rich, Stephen S; Rotter, Jerome I; Chen, Yii-der I; Goodarzi, Mark O; Guller, Seth; Williams, John
2016-11-01
Multiple testing to understand global changes in gene expression based on genetic and epigenetic modifications is evolving. Chorionic villi, obtained for prenatal testing, is limited, but can be used to understand ongoing human pregnancies. However, optimal storage, processing and utilization of CVS for multiple platform testing have not been established. Leftover CVS samples were flash-frozen or preserved in RNAlater. Modifications to standard isolation kits were performed to isolate quality DNA and RNA from samples as small as 2-5 mg. RNAlater samples had significantly higher RNA yields and quality and were successfully used in microarray and RNA-sequencing (RNA-seq). RNA-seq libraries generated using 200 versus 800-ng RNA showed similar biological coefficients of variation. RNAlater samples had lower DNA yields and quality, which improved by heating the elution buffer to 70 °C. Purification of DNA was not necessary for bisulfite-conversion and genome-wide methylation profiling. CVS cells were propagated and continue to express genes found in freshly isolated chorionic villi. CVS samples preserved in RNAlater are superior. Our optimized techniques provide specimens for genetic, epigenetic and gene expression studies from a single small sample which can be used to develop diagnostics and treatments using a systems biology approach in the prenatal period. © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.
1992-01-01
Results of the investigation of formal nonlinear programming-based numerical optimization techniques of helicopter airframe vibration reduction are summarized. The objective and constraint function and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
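A minimal Monte Carlo sketch of the POS metric: sample the uncontrollable variables, evaluate the criteria, and count how often all goals are met jointly (an empirical joint distribution evaluated over the criterion range of interest). The two criteria, their response models, and the goal values are placeholders, not the thesis's aircraft models.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples = 50_000

def evaluate(design, noise):
    # design: controllable variables; noise: sampled uncontrollable variables (both invented).
    wing_area, thrust = design
    fuel_price, headwind = noise.T
    range_nm = 3000.0 + 4.0 * wing_area + 0.8 * thrust - 30.0 * headwind
    cost_per_seat = 80.0 + 0.05 * thrust + 12.0 * fuel_price - 0.02 * wing_area
    return range_nm, cost_per_seat

def probability_of_success(design, range_goal=3500.0, cost_goal=110.0):
    noise = rng.normal([2.0, 10.0], [0.5, 5.0], size=(n_samples, 2))
    range_nm, cost = evaluate(design, noise)
    success = (range_nm >= range_goal) & (cost <= cost_goal)   # joint satisfaction of all goals
    return success.mean()

for design in [(100.0, 400.0), (130.0, 350.0)]:
    print("design", design, "POS =", round(probability_of_success(design), 3))
```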
NASA Astrophysics Data System (ADS)
Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui
2018-02-01
Due to its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To address this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to the optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability with a small additional investment.
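To make the hybrid strategy concrete, the sketch below runs a standard PSO pass and then polishes the best particle with the downhill simplex (Nelder-Mead) method; the `hybrid_pso_simplex` helper, its parameter values, and the toy objective standing in for a reservoir-operation cost function are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_pso_simplex(f, lb, ub, n_particles=30, iters=100, seed=1):
    """Global PSO search followed by a downhill-simplex (Nelder-Mead) polish."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # standard PSO coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    res = minimize(f, g, method="Nelder-Mead")     # local downhill simplex refinement
    return res.x, res.fun

# Toy objective standing in for a reservoir-operation cost function.
sphere = lambda z: float(np.sum((z - 1.0) ** 2))
xbest, fbest = hybrid_pso_simplex(sphere, lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]))
print(xbest, fbest)
```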
Hamedi, Raheleh; Hadjmohammadi, Mohammad Reza
2017-09-01
A novel design of hollow-fiber liquid-phase microextraction containing multiwalled carbon nanotubes as a solid sorbent, which is immobilized in the pores and lumen of the hollow fiber by the sol-gel technique, was developed for the pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. The proposed method utilized both solid- and liquid-phase microextraction media. Parameters that affect the extraction of polycyclic aromatic hydrocarbons were optimized in two successive steps as follows. Firstly, a methodology based on a quarter factorial design was used to choose the significant variables. Then, these significant factors were optimized utilizing central composite design. Under the optimized conditions (extraction time = 25 min, amount of multiwalled carbon nanotubes = 78 mg, sample volume = 8 mL, and desorption time = 5 min), the calibration curves showed high linearity (R² = 0.99) in the range of 0.01-500 ng/mL and the limits of detection were in the range of 0.007-1.47 ng/mL. The obtained extraction recoveries for 10 ng/mL of polycyclic aromatic hydrocarbons standard solution were in the range of 85-92%. Replicating the experiment under these conditions five times gave relative standard deviations lower than 6%. Finally, the method was successfully applied for pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Muscle optimization techniques impact the magnitude of calculated hip joint contact forces.
Wesseling, Mariska; Derikx, Loes C; de Groote, Friedl; Bartels, Ward; Meyer, Christophe; Verdonschot, Nico; Jonkers, Ilse
2015-03-01
In musculoskeletal modelling, several optimization techniques are used to calculate muscle forces, which strongly influence resultant hip contact forces (HCF). The goal of this study was to calculate muscle forces using four different optimization techniques, i.e., two different static optimization techniques, computed muscle control (CMC) and the physiological inverse approach (PIA). We investigated their subsequent effects on HCFs during gait and sit to stand and found that at the first peak in gait at 15-20% of the gait cycle, CMC calculated the highest HCFs (median 3.9 times peak GRF (pGRF)). When comparing calculated HCFs to experimental HCFs reported in literature, the former were up to 238% larger. Both static optimization techniques produced lower HCFs (median 3.0 and 3.1 pGRF), while PIA included muscle dynamics without an excessive increase in HCF (median 3.2 pGRF). The increased HCFs in CMC were potentially caused by higher muscle forces resulting from co-contraction of agonists and antagonists around the hip. Alternatively, these higher HCFs may be caused by the slightly poorer tracking of the net joint moment by the muscle moments calculated by CMC. We conclude that the use of different optimization techniques affects calculated HCFs, and static optimization approached experimental values best. © 2014 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
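For readers unfamiliar with the static optimization approach compared above, the minimal sketch below shows the standard formulation for a single joint: minimize the sum of squared muscle activations subject to the muscles reproducing the net joint moment. The moment arms, maximum isometric forces, and net moment are hypothetical values for illustration, not the musculoskeletal model used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-joint example: three muscles with moment arms r (m) and
# maximum isometric forces fmax (N) must reproduce a net joint moment (N*m).
r = np.array([0.05, 0.03, 0.04])
fmax = np.array([3000.0, 1500.0, 2000.0])
net_moment = 80.0

def objective(f):
    # Classic static-optimization cost: sum of squared muscle activations
    return float(np.sum((f / fmax) ** 2))

constraints = [{"type": "eq", "fun": lambda f: float(r @ f - net_moment)}]
bounds = [(0.0, fm) for fm in fmax]

res = minimize(objective, x0=fmax * 0.1, bounds=bounds,
               constraints=constraints, method="SLSQP")
muscle_forces = res.x
# The joint contact force would then follow from summing these muscle forces
# with the external joint reaction (omitted in this sketch).
print(muscle_forces, r @ muscle_forces)
```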
Stability characterization of two multi-channel GPS receivers for accurate frequency transfer.
NASA Astrophysics Data System (ADS)
Taris, F.; Uhrich, P.; Thomas, C.; Petit, G.; Jiang, Z.
In recent years, wide-spread use of the GPS common-view technique has led to major improvements, making it possible to compare remote clocks at their full level of performance. For integration times of 1 to 3 days, their frequency differences are consistently measured to about one part in 10¹⁴. Recent developments in atomic frequency standards suggest, however, that this performance may no longer be sufficient. The caesium fountain LPTF FO1, built at the BNM-LPTF, Paris, France, shows a short-term white frequency noise characterized by an Allan deviation σy(τ = 1 s) = 5×10⁻¹⁴ and a type B uncertainty of 2×10⁻¹⁵. To compare the frequencies of such highly stable standards would call for GPS common-view results to be averaged over times far exceeding the intervals of their optimal performance. Previous studies have shown the potential of carrier-phase and code measurements from geodetic GPS receivers for clock frequency comparisons. The experiment reported here is an attempt to determine the stability limit that could be reached using this technique.
Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR.
Stein, Erica V; Duewer, David L; Farkas, Natalia; Romsos, Erica L; Wang, Lili; Cole, Kenneth D
2017-01-01
Droplet digital PCR (ddPCR) is being advocated as a reference method to measure rare genomic targets. It has consistently been proven to be more sensitive and direct at discerning copy numbers of DNA than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends upon the target, meaning small RNA nucleotide composition directly affects primer specificity in a manner that prevents traditional quantitation optimization strategies. Additionally, the use of reagents that are optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appears to cause either false-positive or false-negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to using inadequate enzymes, primers and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged the use of synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations as well as proposed ways to circumvent identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Xuanfeng, E-mail: Xuanfeng.ding@beaumont.org; Li, Xiaoqiang; Zhang, J. Michele
Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc_multi-field) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, it was evident that the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc_multi-field plans. Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be implemented into routine clinical practice.
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
Ma, Ning; Duncan, Joanna K; Scarfe, Anje J; Schuhmann, Susanne; Cameron, Alun L
2017-06-01
Transversus abdominis plane (TAP) blocks can provide analgesia postoperatively for a range of surgeries. Abundant clinical trials have assessed TAP block showing positive analgesic effects. This systematic review assesses safety and effectiveness outcomes of TAP block in all clinical settings, comparing with both active (standard care) and inactive (placebo) comparators. PubMed, EMBASE, The Cochrane Library and the University of York CRD databases were searched. RCTs were screened for their eligibility and assessed for risk of bias. Meta-analyses were performed on available data. TAP block showed an equivalent safety profile to all comparators in the incidence of nausea (OR = 1.07) and vomiting (OR = 0.81). TAP block was more effective in reducing morphine consumption [MD = 13.05, 95% CI (8.33, 51.23)] and in delaying time to first analgesic request [MD = 123.49, 95% CI (48.59, 198.39)]. Postoperative pain within 24 h was reduced or at least equivalent in TAP block compared to its comparators. Therefore, TAP block is a safe and effective procedure compared to standard care, placebo and other analgesic techniques. Further research is warranted to investigate whether the TAP block technique can be improved by optimizing dose and technique-related factors.
Santiago-Moreno, Julian; Esteso, Milagros Cristina; Villaverde-Morcillo, Silvia; Toledano-Díaz, Adolfo; Castaño, Cristina; Velázquez, Rosario; López-Sebastián, Antonio; Goya, Agustín López; Martínez, Javier Gimeno
2016-01-01
Postcopulatory sexual selection through sperm competition may be an important evolutionary force affecting many reproductive traits, including sperm morphometrics. Environmental factors such as pollutants, pesticides, and climate change may affect different sperm traits, and thus reproduction, in sensitive bird species. Many sperm-handling processes used in assisted reproductive techniques may also affect the size of sperm cells. The accurately measured dimensions of sperm cell structures (especially the head) can thus be used as indicators of environmental influences, in improving our understanding of reproductive and evolutionary strategies, and for optimizing assisted reproductive techniques (e.g., sperm cryopreservation) for use with birds. Computer-assisted sperm morphometry analysis (CASA-Morph) provides an accurate and reliable method for assessing sperm morphometry, reducing the problem of subjectivity associated with human visual assessment. Computerized systems have been standardized for use with semen from different mammalian species. Avian spermatozoa, however, are filiform, limiting their analysis with such systems, which were developed to examine the approximately spherical heads of mammalian sperm cells. To help overcome this, the standardization of staining techniques to be used in computer-assessed light microscopical methods is a priority. The present review discusses these points and describes the sperm morphometric characteristics of several wild and domestic bird species. PMID:27678467
Actuation of atomic force microscopy microcantilevers using contact acoustic nonlinearities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, D.; Degertekin, F. Levent, E-mail: levent.degertekin@me.gatech.edu
2013-11-15
A new method of actuating atomic force microscopy (AFM) cantilevers is proposed in which a high frequency (>5 MHz) wave modulated by a lower frequency (∼300 kHz) wave passes through a contact acoustic nonlinearity at the contact interface between the actuator and the cantilever chip. The nonlinearity converts the high frequency, modulated signal to a low frequency drive signal suitable for actuation of tapping-mode AFM probes. The higher harmonic content of this signal is filtered out mechanically by the cantilever transfer function, providing for clean output. A custom probe holder was designed and constructed using rapid prototyping technologies and off-the-shelf components and was interfaced with an Asylum Research MFP-3D AFM, which was then used to evaluate the performance characteristics with respect to standard hardware and linear actuation techniques. Using a carrier frequency of 14.19 MHz, it was observed that the cantilever output was cleaner with this actuation technique and added no significant noise to the system. This setup, without any optimization, was determined to have an actuation bandwidth on the order of 10 MHz, suitable for high speed imaging applications. Using this method, an image was taken that demonstrates the viability of the technique and is compared favorably to images taken with a standard AFM setup.
Extended lymphadenectomy in bladder cancer.
Dorin, Ryan P; Skinner, Eila C
2010-09-01
Radical cystectomy with pelvic lymph node dissection (PLND) is the preferred treatment for invasive bladder cancer. It not only results in the best long-term disease-free survival rates, but also provides the most accurate disease staging and most effective local symptom control. Recent investigations have demonstrated a clinical benefit to performing an extended PLND, including all lymphatic tissue to the level of the aortic bifurcation. This review will summarize recent findings regarding the clinical benefits of radical cystectomy with extended lymphadenectomy, and will also examine the latest surgical techniques for optimizing the performance of this technically demanding procedure. Recent studies have demonstrated increased recurrence-free survival and overall survival rates in patients undergoing radical cystectomy with extended PLND, even in cases of pathologically lymph node negative disease. The growing use of minimally invasive techniques has prompted interest in robotic radical cystectomy and extended PLND, and recent reports have demonstrated the feasibility of this technique. The standardization of extended PLND templates has also been a focus of contemporary research. Contemporary research strongly suggests that all patients undergoing radical cystectomy for bladder cancer should undergo concomitant extended PLND. Randomized trials are still needed to confirm the benefits of extended over 'standard' PLND, and to clarify which patients may receive the greatest benefit from this procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Hossain, S; Hildebrand, K
Purpose: To show improvements in dose conformity and normal brain tissue sparing using an optimal planning technique (OPT) against a clinically acceptable planning technique (CAP) in the treatment of multiple brain metastases. Methods: A standardized international benchmark case with 12 intracranial tumors was planned using two different VMAT optimization methods. Plans were split into four groups with 3, 6, 9, and 12 targets, each planned with 3, 5, and 7 arcs using the Eclipse TPS. The beam geometries were 1 full coplanar and half non-coplanar arcs. A prescription dose of 20 Gy was used for all targets. The following optimization criteria were used (OPT vs. CAP): no upper limit vs. 108% upper limit for the target volume, priority 140–150 vs. 75–85 for normal brain tissue, and selection of the automatic-sparing Normal Tissue Objective (NTO) vs. a manual NTO. Both had priority 50 for critical structures such as the brainstem and optic chiasm, and both had an NTO priority of 150. Normal-brain-tissue doses along with the Paddick Conformity Index (PCI) were evaluated. Results: In all cases PCI was higher for OPT plans. The average PCI (OPT, CAP) for all targets was (0.81, 0.64), (0.81, 0.63), (0.79, 0.57), and (0.72, 0.55) for 3, 6, 9, and 12 target plans respectively. The percent decrease in normal brain tissue volume (OPT/CAP*100) achieved by OPT plans was (reported as follows: V4, V8, V12, V16, V20) (184, 343, 350, 294, 371%), (192, 417, 380, 299, 360%), and (235, 390, 299, 281, 502%) for the 3, 5, 7 arc 12 target plans, respectively. The maximum brainstem dose decreased for the OPT plan by 4.93, 4.89, and 5.30 Gy for 3, 5, 7 arc 12 target plans, respectively. Conclusion: Substantial increases in PCI and critical structure sparing, and decreases in normal brain tissue dose, were achieved by eliminating upper limits from optimization, using the automatic normal-tissue-sparing function with high priority, and assigning a high priority to normal brain tissue.
Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu
2017-01-01
The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer brain contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with this threshold we could not always define the ROI for the non-specific binding concentration (the reference region) or calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. We used data from 20 patients who had undergone DAT imaging in our hospital to calculate the non-specific binding concentration by two methods: the threshold defining the reference region was fixed at specific values (the fixing method), or the reference region was visually optimized by an examiner for each examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
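As a purely illustrative sketch of the ratio defined above (not the study's implementation, which erodes the thresholded brain contour by 20 mm and works on reconstructed SPECT volumes), the snippet below computes an SBR from a 2D image given a striatal mask and a threshold-defined reference region; the 30% default mirrors the threshold value favoured in the study, and the synthetic image is invented for demonstration.

```python
import numpy as np

def specific_binding_ratio(image, striatal_mask, threshold_frac=0.30):
    """Simplified SBR: striatal concentration above background divided by the
    background (non-specific) concentration, with the reference region taken as
    the thresholded brain outline minus the striatal voxels."""
    brain = image >= threshold_frac * image.max()   # threshold-defined outline
    reference = brain & ~striatal_mask              # non-specific binding region
    c_ns = image[reference].mean()                  # non-specific concentration
    c_str = image[striatal_mask].mean()             # striatal concentration
    return (c_str - c_ns) / c_ns

# Synthetic demonstration: uniform "brain" background with a hot striatal patch.
img = np.zeros((64, 64))
img[10:54, 10:54] = 20.0
img[28:36, 28:36] = 60.0
striatum = np.zeros_like(img, dtype=bool)
striatum[28:36, 28:36] = True
print(specific_binding_ratio(img, striatum))        # -> 2.0 for this toy image
```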
Multispectral tissue characterization for intestinal anastomosis optimization.
Cha, Jaepyeong; Shademan, Azad; Le, Hanh N D; Decker, Ryan; Kim, Peter C W; Kang, Jin U; Krieger, Axel
2015-10-01
Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement.
Accurate EPR radiosensitivity calibration using small sample masses
NASA Astrophysics Data System (ADS)
Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.
2000-03-01
We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.
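For context, the standard full-sample additive-dose technique used above as the comparison method amounts to fitting a line to the EPR signal versus added laboratory dose and extrapolating to the dose axis. A minimal sketch follows; the signal values are invented for illustration and no cavity, mass, anisotropy, or frequency-drift corrections from the described procedure are included.

```python
import numpy as np

# Illustrative additive-dose data: EPR signal after adding known doses (Gy).
added_dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])        # Gy
epr_signal = np.array([1.05, 1.52, 2.01, 3.04, 4.98])   # arbitrary units

# Linear fit; the accumulated (reconstructed) dose is the magnitude of the
# extrapolated dose-axis intercept, i.e. intercept/slope.
slope, intercept = np.polyfit(added_dose, epr_signal, 1)
reconstructed_dose = intercept / slope
print(f"reconstructed dose = {reconstructed_dose:.2f} Gy")
```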
Three-Dimensional Printing: An Aid to Epidural Access for Neuromodulation.
Taverner, Murray G; Monagle, John P
2017-08-01
This case report details the use of three-dimensional (3D) printing as an aid to neuromodulation. A patient is described in whom previous attempts at spinal neuromodulation had failed due to lack of epidural or intrathecal access, and in whom the use of a 3D printed model allowed for improved planning and, ultimately, success. Successful spinal cord stimulation was achieved with the plan developed from access to a 3D model of the patient's spine. Neuromodulation techniques can provide optimal analgesia for individual patients. At times these techniques can fail due to lack of access to the site of intervention, in this case epidural access. 3D printing may provide additional information to improve the likelihood of access when anatomy is distorted and standard approaches prove difficult. © 2017 International Neuromodulation Society.
Diagnosis and quantification of iron overload by magnetic resonance.
Alústiza Echeverría, J M; Barrera Portillo, M C; Guisasola Iñiguiz, A; Ugarte Muño, A
There are different magnetic resonance techniques and models to quantify liver iron concentration. T2 relaxometry methods evaluate the iron concentration in the myocardium and are able to discriminate all levels of iron overload in the liver. Signal intensity ratio methods saturate at high levels of liver overload and cannot assess iron concentration in the myocardium, but they are more accessible and highly standardized. This article reviews, for different clinical scenarios, when magnetic resonance should be used to assess iron overload in the liver and myocardium, and analyzes the current challenges in optimizing the application of the technique and incorporating it into clinical guidelines. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Non-parametric PCM to ADM conversion [Pulse Code to Adaptive Delta Modulation]
NASA Technical Reports Server (NTRS)
Locicero, J. L.; Schilling, D. L.
1977-01-01
An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
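For orientation, the sketch below shows what a basic software ADM encoder of a PCM sample stream looks like, using a simple Jayant-style adaptive step size (grow the step when consecutive bits agree, shrink it when they alternate). It is a generic illustration under stated assumptions, not the non-parametric converter described in the paper; the step-adaptation constants are arbitrary.

```python
import numpy as np

def adm_encode(pcm, step0=0.05, k=1.5, step_min=1e-3, step_max=1.0):
    """Encode a PCM sample stream as adaptive delta modulation bits (+1/-1)."""
    bits, est, step, prev_bit = [], 0.0, step0, 0
    for x in pcm:
        bit = 1 if x >= est else -1                 # sign of the tracking error
        # Jayant-style adaptation: expand on repeated bits, contract otherwise.
        step = np.clip(step * (k if bit == prev_bit else 1.0 / k), step_min, step_max)
        est += bit * step                           # update the staircase estimate
        bits.append(bit)
        prev_bit = bit
    return np.array(bits)

# Sinusoidal test signal, as used for the performance curves in the paper.
t = np.linspace(0.0, 1.0, 800)
pcm = 0.8 * np.sin(2.0 * np.pi * 5.0 * t)
print(adm_encode(pcm)[:20])
```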
NASA Astrophysics Data System (ADS)
Chen, Shiyu; Li, Haiyang; Baoyin, Hexi
2018-06-01
This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are the main challenges in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results may then be used as an initial guess for the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are overcome, and a homotopic approach is employed to improve the convergence of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.
Aboal, J R; Boquete, M T; Carballeira, A; Casanova, A; Debén, S; Fernández, J A
2017-05-01
In this study we examined 6080 data points gathered by our research group during more than 20 years of research on the moss biomonitoring technique, in order to quantify the variability generated by different aspects of the protocol and to calculate the overall measurement uncertainty associated with the technique. The median variance of the concentrations of different pollutants measured in moss tissues attributed to the different methodological aspects was high, reaching values of 2851 (ng·g⁻¹)² for Cd (sample treatment), 35.1 (μg·g⁻¹)² for Cu (sample treatment), and 861.7 (ng·g⁻¹)² for Hg (material selection). These variances correspond to standard deviations that constitute 67, 126 and 59% of the regional background levels of these elements in the study region. The overall measurement uncertainty associated with the worst experimental protocol (5 subsamples, refrigerated, washed, 5 × 5 m size of the sampling area and once a year sampling) was between 2 and 6 times higher than that associated with the optimal protocol (30 subsamples, dried, unwashed, 20 × 20 m size of the sampling area and once a week sampling), and between 1.5 and 7 times higher than that associated with the standardized protocol (30 subsamples and once a year sampling). The overall measurement uncertainty associated with the standardized protocol could generate variations of between 14 and 47% in the regional background levels of Cd, Cu, Hg, Pb and Zn in the study area and much higher levels of variation in polluted sampling sites. We demonstrated that although the overall measurement uncertainty of the technique is still high, it can be reduced by using already well defined aspects of the protocol. Further standardization of the protocol together with application of the information on the overall measurement uncertainty would improve the reliability and comparability of the results of different biomonitoring studies, thus extending use of the technique beyond the context of scientific research. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Fujimoto, R
Purpose: The purpose was to demonstrate a newly developed acceleration technique for dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a single threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process of the TPS and investigated how the speedup depends on the target volume and the technique's applicability to worst-case optimization (WCO) in benchmark tests. Results: We created irradiation plans for various cubic targets and measured the optimization time while varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm³ target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and the OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization. The technique was effective particularly for large target cases.
NASA Astrophysics Data System (ADS)
Petrov, Dimitar; Michielsen, Koen; Cockmartin, Lesley; Zhang, Gouzhi; Young, Kenneth; Marshall, Nicholas; Bosmans, Hilde
2016-03-01
Digital breast tomosynthesis (DBT) is a 3D mammography technique that promises better visualization of low contrast lesions than conventional 2D mammography. A wide range of parameters influence the diagnostic information in DBT images and a systematic means of DBT system optimization is needed. The gold standard for image quality assessment is to perform a human observer experiment with experienced readers. Using human observers for optimization is time consuming and not feasible for the large parameter space of DBT. Our goal was to develop a model observer (MO) that can predict human reading performance for standard detection tasks of target objects within a structured phantom and subsequently apply it in a first comparative study. The phantom consists of an acrylic semi-cylindrical container with acrylic spheres of different sizes and the remaining space filled with water. Three types of lesions were included: 3D printed spiculated and non-spiculated mass lesions along with calcification groups. The images of the two mass lesion types were reconstructed with 3 different reconstruction methods (FBP, FBP with SRSAR, MLTRpr) and read by human readers. A Channelized Hotelling model observer was created for the non-spiculated lesion detection task using five Laguerre-Gauss channels, tuned for better performance. For the non-spiculated mass lesions a linear relation between the MO and human observer results was found, with correlation coefficients of 0.956 for standard FBP, 0.998 for FBP with SRSAR and 0.940 for MLTRpr. Both the MO and human observer percentage correct results for the spiculated masses were close to 100%, and showed no difference from each other for every reconstruction algorithm.
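A compact sketch of the core of such a model observer follows: it builds five rotationally symmetric Laguerre-Gauss channels, forms the channelized Hotelling template from signal-present and signal-absent training images, and reports a detectability index d' along with the corresponding two-alternative forced-choice percent correct. The grid size, channel width, and synthetic images are assumptions for demonstration, not the phantom data or the tuned channel parameters used in the study.

```python
import numpy as np
from scipy.special import eval_laguerre
from scipy.stats import norm

def laguerre_gauss_channels(size, a, n_channels=5):
    """Rotationally symmetric Laguerre-Gauss channels on a size x size grid."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    chans = []
    for n in range(n_channels):
        u = np.exp(-np.pi * r2 / a ** 2) * eval_laguerre(n, 2.0 * np.pi * r2 / a ** 2)
        chans.append(u.ravel() / np.linalg.norm(u))
    return np.column_stack(chans)                       # (pixels, channels)

def cho_dprime(signal_imgs, background_imgs, channels):
    """Channelized Hotelling observer detectability index d'."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels
    vb = background_imgs.reshape(len(background_imgs), -1) @ channels
    s = vs.mean(0) - vb.mean(0)                         # mean channel-space signal
    cov = 0.5 * (np.cov(vs.T) + np.cov(vb.T))           # pooled channel covariance
    w = np.linalg.solve(cov, s)                         # Hotelling template
    return np.sqrt(s @ w)

# Synthetic demonstration: a weak square "lesion" in white-noise backgrounds.
rng = np.random.default_rng(0)
size = 64
U = laguerre_gauss_channels(size, a=15.0)
sig = np.zeros((size, size)); sig[28:36, 28:36] = 0.5
bkg = rng.normal(0.0, 1.0, (200, size, size))
dprime = cho_dprime(bkg[:100] + sig, bkg[100:], U)
print(dprime, norm.cdf(dprime / np.sqrt(2)))            # d' and 2-AFC percent correct
```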
Modern concepts in facial nerve reconstruction
2010-01-01
Background: Reconstructive surgery of the facial nerve is not daily routine for most head and neck surgeons. The published experience on strategies to ensure optimal functional results for the patients is based on small case series with a large variety of surgical techniques. Against this background, it is worthwhile to develop a standardized approach for diagnosis and treatment of patients asking for facial rehabilitation. Conclusion: A standardized approach is feasible: patients with chronic facial palsy first need an exact classification of the palsy's aetiology. A step-by-step clinical examination, if necessary MRI imaging and electromyographic examination, allows a classification of the palsy's aetiology as well as the determination of the severity of the palsy and the functional deficits. Considering the patient's desire, age and life expectancy, an individual surgical concept is applicable using three main approaches: a) early extratemporal reconstruction, b) early reconstruction of proximal lesions if extratemporal reconstruction is not possible, c) late reconstruction or reconstruction in cases of congenital palsy. Twelve to 24 months after the last step of surgical reconstruction, a standardized evaluation of the therapeutic results is recommended to evaluate the necessity for adjuvant surgical procedures or other adjuvant measures, e.g. botulinum toxin application. To date, controlled trials on the value of physiotherapy and other adjuvant measures are lacking, so no recommendation can be given on their optimal application. PMID:21040532
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
Application of optimal design methodologies in clinical pharmacology experiments.
Ogungbenro, Kayode; Dokoumetzidis, Aristides; Aarons, Leon
2009-01-01
Pharmacokinetic and pharmacodynamic data are often analysed by mixed-effects modelling techniques (also known as population analysis), which have become a standard tool in the pharmaceutical industry for drug development. The last 10 years have witnessed considerable interest in the application of experimental design theories to population pharmacokinetic and pharmacodynamic experiments. Design of population pharmacokinetic experiments involves selection and a careful balance of a number of design factors. Optimal design theory uses prior information about the model and parameter estimates to optimize a function of the Fisher information matrix to obtain the best combination of the design factors. This paper provides a review of the different approaches that have been described in the literature for optimal design of population pharmacokinetic and pharmacodynamic experiments. It describes the options that are available and highlights some of the issues that could be of concern in practical application. It also discusses areas of application of optimal design theories in clinical pharmacology experiments. It is expected that as awareness of the benefits of this approach increases, more people will embrace it, ultimately leading to more efficient population pharmacokinetic and pharmacodynamic experiments and helping to reduce both the cost and time of drug development. Copyright (c) 2008 John Wiley & Sons, Ltd.
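To make the Fisher-information idea concrete, the sketch below picks D-optimal sampling times for a simple one-compartment IV-bolus model by exhaustively scoring candidate three-point designs with the log-determinant of the information matrix. The model, parameter values, residual error, and candidate time grid are illustrative assumptions, not an example taken from the review.

```python
import numpy as np
from itertools import combinations

# One-compartment IV-bolus model C(t) = (dose/V) * exp(-k*t); design for theta = (V, k).
dose, V, k, sigma = 100.0, 20.0, 0.25, 0.1
candidates = np.arange(0.5, 24.5, 0.5)                   # candidate sampling times (h)

def sensitivities(t):
    c = (dose / V) * np.exp(-k * t)
    return np.array([-c / V,                              # dC/dV
                     -t * c])                             # dC/dk

def log_det_fim(times):
    # FIM = J J^T / sigma^2 for additive homoscedastic error
    J = np.column_stack([sensitivities(t) for t in times])
    return np.linalg.slogdet(J @ J.T / sigma ** 2)[1]

# Exhaustive D-optimal search over all 3-point designs on the candidate grid.
best = max(combinations(candidates, 3), key=log_det_fim)
print("D-optimal sampling times (h):", best)
```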
NASA Technical Reports Server (NTRS)
Wincheski, Buzz; Williams, Phillip; Simpson, John
2007-01-01
The use of eddy current techniques for the detection of outer diameter damage in tubing and many complex aerospace structures often requires the use of an inner diameter probe due to a lack of access to the outside of the part. In small bore structures the probe size and orientation are constrained by the inner diameter of the part, complicating the optimization of the inspection technique. Detection of flaws through a significant remaining wall thickness becomes limited not only by the standard depth of penetration, but also geometrical aspects of the probe. Recently, an orthogonal eddy current probe was developed for detection of such flaws in Space Shuttle Primary Reaction Control System (PRCS) Thrusters. In this case, the detection of deeply buried stress corrosion cracking by an inner diameter eddy current probe was sought. Probe optimization was performed based upon the limiting spatial dimensions, flaw orientation, and required detection sensitivity. Analysis of the probe/flaw interaction was performed through the use of finite and boundary element modeling techniques. Experimental data for the flaw detection capabilities, including a probability of detection study, will be presented along with the simulation data. The results of this work have led to the successful deployment of an inspection system for the detection of stress corrosion cracking in Space Shuttle Primary Reaction Control System (PRCS) Thrusters.
Oloibiri, Violet; Ufomba, Innocent; Chys, Michael; Audenaert, Wim; Demeestere, Kristof; Van Hulle, Stijn W H
2015-01-01
A major concern for landfilling facilities is the treatment of their leachate. To optimize organic matter removal from this leachate, the combination of two or more techniques is preferred in order to meet stringent effluent standards. In our study, coagulation-flocculation and ozonation are compared as pre-treatment steps for stabilized landfill leachate prior to granular activated carbon (GAC) adsorption. The efficiency of the pre-treatment techniques is evaluated using COD and UVA254 measurements. For coagulation-flocculation, different chemicals are compared and optimal dosages are determined. After this, iron (III) chloride is selected for subsequent adsorption studies due to its high percentage of COD and UVA254 removal and good sludge settleability. Our findings show that ozonation as a single treatment is effective in reducing COD in landfill leachate by 66% compared to coagulation-flocculation (33%). Meanwhile, coagulation performs better in UVA254 reduction than ozonation. Subsequent GAC adsorption of ozonated effluent, coagulated effluent and untreated leachate resulted in 77%, 53% and 8% total COD removal respectively (after 6 bed volumes). The effect of the pre-treatment techniques on GAC adsorption properties is evaluated experimentally and mathematically using the Thomas and Yoon-Nelson models. Mathematical modelling of the experimental GAC adsorption data shows that ozonation increases the adsorption capacity and breakthrough time by a factor of 2.5 compared to coagulation-flocculation.
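As an illustration of how one of the two breakthrough models named above can be fitted to column data, the sketch below fits the Yoon-Nelson model, C/C0 = 1 / (1 + exp(k_YN (tau - t))), to an invented breakthrough curve; the data points and initial guesses are assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def yoon_nelson(t, k_yn, tau):
    """Yoon-Nelson breakthrough model; tau is the time to 50% breakthrough."""
    return 1.0 / (1.0 + np.exp(k_yn * (tau - t)))

# Illustrative breakthrough data: effluent/influent concentration ratio vs. time.
t_obs = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)          # h
c_ratio = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.76, 0.90, 0.96])

(k_yn, tau), _ = curve_fit(yoon_nelson, t_obs, c_ratio, p0=[1.0, 5.0])
print(f"k_YN = {k_yn:.2f} 1/h, tau = {tau:.2f} h")
```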
Planning hybrid intensity modulated radiation therapy for whole-breast irradiation.
Farace, Paolo; Zucca, Sergio; Solla, Ignazio; Fadda, Giuseppina; Durzu, Silvia; Porru, Sergio; Meleddu, Gianfranco; Deidda, Maria Assunta; Possanzini, Marco; Orrù, Sivia; Lay, Giancarlo
2012-09-01
To test tangential and non-tangential hybrid intensity modulated radiation therapy (IMRT) for whole-breast irradiation. Seventy-eight (36 right-, 42 left-) breast patients were randomly selected. Hybrid IMRT was performed by direct aperture optimization. A semiautomated method for planning hybrid IMRT was implemented using Pinnacle scripts. A plan optimization volume (POV), defined as the portion of the planning target volume covered by the open beams, was used as the target objective during inverse planning. Treatment goals were to prescribe a minimum dose of 47.5 Gy to greater than 90% of the POV and to minimize the POV and/or normal tissue receiving a dose greater than 107%. When treatment goals were not achieved by using a 4-field technique (2 conventional open plus 2 IMRT tangents), a 6-field technique was applied, adding 2 non-tangential (anterior-oblique) IMRT beams. Using scripts, manual procedures were minimized (choice of optimal beam angle, setting monitor units for open tangentials, and POV definition). Treatment goals were achieved by using the 4-field technique in 61 of 78 (78%) patients. The 6-field technique was applied in the remaining 17 of 78 (22%) patients, allowing for significantly better achievement of goals, at the expense of an increase of low-dose (∼5 Gy) distribution in the contralateral tissue, heart, and lungs but with no significant increase of higher doses (∼20 Gy) in heart and lungs. The mean monitor unit contribution to IMRT beams was significantly greater (18.7% vs 9.9%) in the group of patients who required the 6-field procedure. Because hybrid IMRT can be performed semiautomatically, it can be planned for a large number of patients with little impact on human or departmental resources, promoting it as the standard practice for whole-breast irradiation. Copyright © 2012 Elsevier Inc. All rights reserved.
2014-01-01
Background: Heterologous gene expression is an important tool for synthetic biology that enables metabolic engineering and the production of non-natural biologics in a variety of host organisms. The translational efficiency of heterologous genes can often be improved by optimizing synonymous codon usage to better match the host organism. However, traditional approaches for optimization neglect to take into account many factors known to influence synonymous codon distributions. Results: Here we define an alternative approach for codon optimization that utilizes systems level information and codon context for the condition under which heterologous genes are being expressed. Furthermore, we utilize a probabilistic algorithm to generate multiple variants of a given gene. We demonstrate improved translational efficiency using this condition-specific codon optimization approach with two heterologous genes, the fluorescent protein-encoding eGFP and the catechol 1,2-dioxygenase gene CatA, expressed in S. cerevisiae. For the latter case, optimization for stationary phase production resulted in nearly 2.9-fold improvements over commercial gene optimization algorithms. Conclusions: Codon optimization is now often a standard tool for protein expression, and while a variety of tools and approaches have been developed, they do not guarantee improved performance for all hosts or applications. Here, we suggest an alternative method for condition-specific codon optimization and demonstrate its utility in Saccharomyces cerevisiae as a proof of concept. However, this technique should be applicable to any organism for which gene expression data can be generated and is thus of potential interest for a variety of applications in metabolic and cellular engineering. PMID:24636000
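A stripped-down sketch of the probabilistic idea (weighted sampling of synonymous codons, so that repeated runs yield multiple gene variants rather than a single most-frequent-codon sequence) is shown below. The codon-usage table is hypothetical and the snippet does not capture the systems-level, condition-specific and codon-context weighting described above.

```python
import random

# Illustrative usage frequencies only; a real table would be derived from
# condition-specific expression data for the host organism.
CODON_USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.58, "AAG": 0.42},
    "L": {"TTA": 0.28, "TTG": 0.29, "CTT": 0.13, "CTC": 0.05, "CTA": 0.14, "CTG": 0.11},
    "*": {"TAA": 0.47, "TAG": 0.23, "TGA": 0.30},
}

def optimize(protein, usage=CODON_USAGE, seed=None):
    """Back-translate a protein by sampling synonymous codons in proportion
    to the usage table, producing a different variant on each run."""
    rng = random.Random(seed)
    out = []
    for aa in protein:
        codons, weights = zip(*usage[aa].items())
        out.append(rng.choices(codons, weights=weights, k=1)[0])
    return "".join(out)

print(optimize("MKL*", seed=1))
print(optimize("MKL*", seed=2))   # a different, equally valid variant
```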
Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig
2016-10-01
To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve the search efficiency by avoiding stagnation at a sub-optimal result. The performance of the PSO variants is validated against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system with complex constraints.
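For readers unfamiliar with the underlying problem, the snippet below solves the classical single-area ED (no tie-line, ramp-rate, or area constraints) for three units with quadratic cost curves, using the equal-incremental-cost rule with a bisection on the system lambda. The unit data are invented for illustration and are unrelated to the 140-unit test system studied above.

```python
import numpy as np

# Fuel cost of unit i: a_i + b_i*P_i + c_i*P_i^2 ($/h); illustrative data only.
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
pmin = np.array([200.0, 150.0, 100.0])
pmax = np.array([450.0, 350.0, 225.0])
demand = 800.0                                   # MW

def dispatch(lmbda):
    # Equal incremental cost: dC_i/dP_i = b_i + 2*c_i*P_i = lambda, clipped to limits
    return np.clip((lmbda - b) / (2.0 * c), pmin, pmax)

lo, hi = 0.0, 50.0                               # bisection on lambda ($/MWh)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if dispatch(mid).sum() < demand:
        lo = mid
    else:
        hi = mid

P = dispatch(0.5 * (lo + hi))
total_cost = np.sum(a + b * P + c * P ** 2)
print(P, P.sum(), total_cost)
```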
NASA Astrophysics Data System (ADS)
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
Hashim, H A; Abido, M A
2015-01-01
This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multioutput (MIMO) system (TRMS) considering most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasser, M.N.; Schultze Kool, L.J.; Roos, A. de
Our goal was to assess the value of MRA for detecting stenoses in the celiac (CA) and superior mesenteric (SMA) arteries in patients suspected of having chronic mesenteric ischemia, using an optimized systolically gated 3D phase contrast technique. In an initial study in 24 patients who underwent conventional angiography of the abdominal vessels for different clinical indications, a 3D phase contrast MRA technique (3D-PCA) was evaluated and optimized to image the CAs and SMAs. Subsequently, a prospective study was performed to assess the value of systolically gated 3D-PCA in evaluation of the mesenteric arteries in 10 patients with signs and symptoms of chronic mesenteric ischemia. Intraarterial digital subtraction angiography and surgical findings were used as the reference standard. In the initial study, systolic gating appeared to be essential in imaging the SMA on 3D-PCA. In 10 patients suspected of mesenteric ischemia, systolically gated 3D-PCA identified significant proximal disease in the two mesenteric vessels in 4 patients. These patients underwent successful reconstruction of their stenotic vessels. Cardiac-gated MRA may become a useful tool in selection of patients suspected of having mesenteric ischemia who may benefit from surgery.
Optimization of oncological ¹⁸F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in ¹⁸F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
A PC program to optimize system configuration for desired reliability at minimum cost
NASA Technical Reports Server (NTRS)
Hills, Steven W.; Siahpush, Ali S.
1994-01-01
High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system of multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
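The paper's pair-wise comparative progression algorithm is not reproduced in the abstract; as a rough illustration of the underlying allocation problem, the following Python sketch greedily adds redundant components to a series system until a cost budget is exhausted. The component reliabilities, costs, and budget are illustrative assumptions, and a greedy rule is only an approximation of the true optimum.

```python
import math

def greedy_redundancy(reliab, costs, budget):
    """Greedy allocation of redundant components in a series system.

    reliab[i]: reliability of one component of subsystem i
    costs[i]:  cost of one component of subsystem i
    Starts with one component per subsystem and repeatedly adds the
    component giving the largest log-reliability gain per unit cost.
    """
    n = [1] * len(reliab)
    spent = sum(costs)

    def sys_rel(counts):
        # Series system of parallel-redundant subsystems
        return math.prod(1.0 - (1.0 - r) ** k for r, k in zip(reliab, counts))

    while True:
        best_i, best_gain = None, 0.0
        for i, (r, c) in enumerate(zip(reliab, costs)):
            if spent + c > budget:
                continue
            old = 1.0 - (1.0 - r) ** n[i]
            new = 1.0 - (1.0 - r) ** (n[i] + 1)
            gain = (math.log(new) - math.log(old)) / c
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:              # nothing affordable improves reliability
            return n, sys_rel(n)
        n[best_i] += 1
        spent += costs[best_i]

# Illustrative three-subsystem example
alloc, R = greedy_redundancy([0.90, 0.95, 0.85], [3.0, 5.0, 2.0], budget=25.0)
```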
Wu, Xiaoling; Yang, Miyi; Zeng, Haozhe; Xi, Xuefei; Zhang, Sanbing; Lu, Runhua; Gao, Haixiang; Zhou, Wenfeng
2016-11-01
In this study, a simple effervescence-assisted dispersive solid-phase extraction method was developed to detect fungicides in honey and juice. Most significantly, an innovative ionic-liquid-modified magnetic β-cyclodextrin/attapulgite sorbent was used because its large specific surface area enhanced the extraction capacity and also led to facile separation. A one-factor-at-a-time approach and orthogonal design were employed to optimize the experimental parameters. Under the optimized conditions, the entire extraction procedure was completed within 3 min. In addition, the calibration curves exhibited good linearity, and high enrichment factors were achieved for pure water and honey samples. For the honey samples, the extraction efficiencies for the target fungicides ranged from 77.0 to 94.3% with relative standard deviations of 2.3-5.44%. The detection and quantitation limits were in the ranges of 0.07-0.38 and 0.23-1.27 μg/L, respectively. Finally, the developed technique was successfully applied to real samples, and satisfactory results were achieved. This analytical technique is cost-effective, environmentally friendly, and time-saving. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira
2009-09-28
Two simple and rapid pre-concentration techniques, cloud point extraction (CPE) and solid phase extraction (SPE), were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. Total iAs in water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of the experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.
Significance of Algal Polymer in Designing Amphotericin B Nanoparticles
Bhatia, Saurabh; Kumar, Vikash; Sharma, Kiran; Nagpal, Kalpana; Bera, Tanmoy
2014-01-01
Development of oral amphotericin B (AmB) loaded nanoparticles (NPs) demands a novel technique which reduces its toxicity and other associated problems. Packing AmB between two oppositely charged polyions by a polyelectrolyte complexation technique proved to be a successful strategy. We have developed a novel carrier system in the form of a polyelectrolyte complex of AmB by using chitosan (CS) and porphyran (POR) as two oppositely charged polymers with TPP as a crosslinking agent. POR was first isolated from Porphyra vietnamensis, after which a safe alkali-induced reduction in its molecular weight was achieved. The formulation was optimized using a three-factor, three-level (3^3) central composite design. The high concentration of POR in the NPs was confirmed by a sulfated polysaccharide (SP) assay. Degradation and dissolution studies suggested the stability of the NPs over a wide pH range. Hemolytic toxicity data suggested the safety of the prepared formulation. In vivo and in vitro antifungal activity demonstrated the high antifungal potential of the optimized formulation when compared with the standard drug and marketed formulations. Throughout the study, TPP addition did not cause any significant changes. Therefore, these experimental oral NPs may represent an interesting carrier system for the delivery of AmB. PMID:25478596
Marković, Aleksa; Calvo-Guirado, José Luís; Lazić, Zoran; Gómez-Moreno, Gerardo; Ćalasan, Dejan; Guardia, Javier; Čolic, Snježana; Aguilar-Salvatierra, Antonio; Gačić, Bojan; Delgado-Ruiz, Rafael; Janjić, Bojan; Mišić, Tijana
2013-06-01
The aim of this study was to investigate the relationship between surgical techniques and implant macro-design (self-tapping/non-self-tapping) for the optimization of implant stability in the low-density bone present in the posterior maxilla using resonance frequency analysis (RFA). A total of 102 implants were studied. Fifty-six self-tapping BlueSkyBredent® (Bredent GmbH&Co.Kg®, Senden, Germany) and 56 non-self-tapping Standard Plus Straumann® (Institut Straumann AG®, Waldenburg, Switzerland) were placed in the posterior segment of the maxilla. Implants of both types were placed in sites prepared with either lateral bone-condensing or with bone-drilling techniques. Implant stability measurements were performed using RFA immediately after implant placement and weekly during a 12-week follow-up period. Both types of implants placed after bone condensing achieved significantly higher stability immediately after surgery, as well as during the entire 12-week observation period compared with those placed following bone drilling. After bone condensation, there were no significant differences in primary stability or in implant stability after the first week between both implant types. From 2 to 12 postoperative weeks, significantly higher stability was shown by self-tapping implants. After bone drilling, self-tapping implants achieved significantly higher stability than non-self-tapping implants during the entire follow-up period. The outcomes of the present study indicate that bone drilling is not an effective technique for improving implant stability and, following this technique, the use of self-tapping implants is highly recommended. Implant stability optimization in the soft bone can be achieved by lateral bone-condensing technique, regardless of implant macro-design. © 2011 Wiley Periodicals, Inc.
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2017-01-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to those with the AS based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm was as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349
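The basic shroud construction can be illustrated with a simplified Python/NumPy sketch. The adaptive robust z-normalization and the paper's two-step optimization are not reproduced here; the crude centre-of-mass trace below is only a stand-in for the published signal-extraction step, and the input file name is hypothetical.

```python
import numpy as np

def amsterdam_shroud(projections):
    """Build a simplified Amsterdam Shroud image from CBCT projections.

    projections: array (n_proj, rows, cols) of raw intensity images.
    Each projection is converted to attenuation (log transform),
    differentiated along the cranio-caudal axis to enhance moving
    edges such as the diaphragm, and averaged over the lateral axis
    to form one column of the shroud image.
    """
    atten = -np.log(np.clip(projections, 1e-6, None))
    deriv = np.diff(atten, axis=1)          # gradient along detector rows
    shroud = deriv.mean(axis=2).T           # shape: (rows-1, n_proj)
    return shroud

def breathing_trace(shroud):
    """Crude respiratory signal: row-wise centre of mass of |gradient|
    in each shroud column, followed by moving-average smoothing.
    This stands in for the paper's two-step optimization."""
    w = np.abs(shroud)
    rows = np.arange(w.shape[0])[:, None]
    trace = (rows * w).sum(axis=0) / (w.sum(axis=0) + 1e-12)
    kernel = np.ones(5) / 5.0
    return np.convolve(trace, kernel, mode="same")

# proj = np.load("cbct_projections.npy")   # hypothetical input file
# signal = breathing_trace(amsterdam_shroud(proj))
```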
Lock, Martin; Alvira, Mauricio R; Chen, Shu-Jen; Wilson, James M
2014-04-01
Accurate titration of adeno-associated viral (AAV) vector genome copies is critical for ensuring correct and reproducible dosing in both preclinical and clinical settings. Quantitative PCR (qPCR) is the current method of choice for titrating AAV genomes because of the simplicity, accuracy, and robustness of the assay. However, issues with qPCR-based determination of self-complementary AAV vector genome titers, due to primer-probe exclusion through genome self-annealing or through packaging of prematurely terminated defective interfering (DI) genomes, have been reported. Alternative qPCR, gel-based, or Southern blotting titering methods have been designed to overcome these issues but may represent a backward step from standard qPCR methods in terms of simplicity, robustness, and precision. Droplet digital PCR (ddPCR) is a new PCR technique that directly quantifies DNA copies with an unparalleled degree of precision and without the need for a standard curve or for a high degree of amplification efficiency; all properties that lend themselves to the accurate quantification of both single-stranded and self-complementary AAV genomes. Here we compare a ddPCR-based AAV genome titer assay with a standard and an optimized qPCR assay for the titration of both single-stranded and self-complementary AAV genomes. We demonstrate absolute quantification of single-stranded AAV vector genomes by ddPCR with up to 4-fold increases in titer over a standard qPCR titration but with equivalent readout to an optimized qPCR assay. In the case of self-complementary vectors, ddPCR titers were on average 5-, 1.9-, and 2.3-fold higher than those determined by standard qPCR, optimized qPCR, and agarose gel assays, respectively. Droplet digital PCR-based genome titering was superior to qPCR in terms of both intra- and interassay precision and is more resistant to PCR inhibitors, a desirable feature for in-process monitoring of early-stage vector production and for vector genome biodistribution analysis in inhibitory tissues.
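Although the comparison above is experimental, the Poisson arithmetic behind ddPCR absolute quantification is standard and can be sketched in a few lines of Python. The nominal droplet volume and the dilution factor in the example are assumptions that must be replaced with instrument- and assay-specific values.

```python
import math

def ddpcr_titer(n_positive, n_total, droplet_volume_nl=0.85,
                dilution_factor=1.0):
    """Estimate target copies per microliter from droplet counts.

    Poisson correction: lambda = -ln(1 - p), where p is the fraction
    of positive droplets; concentration = lambda / droplet volume.
    droplet_volume_nl is an assumed nominal value; use the volume
    specified for the instrument actually in use.
    """
    p = n_positive / n_total
    if p >= 1.0:
        raise ValueError("All droplets positive: sample too concentrated.")
    lam = -math.log(1.0 - p)                           # mean copies per droplet
    copies_per_ul = lam / (droplet_volume_nl * 1e-3)   # nL -> uL
    return copies_per_ul * dilution_factor

# Example: 12,000 positive droplets out of 18,000 accepted droplets,
# measured on a 1:10,000 dilution of the vector preparation.
titer = ddpcr_titer(12000, 18000, dilution_factor=1e4)
```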
Polgár, L; Soós, P; Lajkó, E; Láng, O; Merkely, B; Kőhidai, L
2018-06-01
Thrombogenesis plays an important role in today's morbidity and mortality. Antithrombotics are among the most frequently prescribed drugs. Thorough knowledge of platelet function is needed for optimal clinical care. Platelet adhesion is a separate subprocess of platelet thrombus formation; still, no well-standardized technique for the isolated measurement of platelet adhesion exists. Impedimetry is one of the most reliable, state-of-the-art techniques to analyze cell adhesion, proliferation, viability, and cytotoxicity. We propose impedimetry as a feasible novel method for the isolated measurement of 2 significant platelet functions: adhesion and spreading. Laboratory reference platelet agonists (epinephrine, ADP, and collagen) were applied to characterize platelet functions by impedimetry using the xCELLigence SP system. Platelet samples were obtained from 20 healthy patients under no drug therapy. Standard laboratory parameters and clinical patient history were also analyzed. Epinephrine and ADP increased platelet adhesion in a concentration-dependent manner, while collagen tended to have a negative effect. Serum sodium and calcium levels and age had a negative correlation with platelet adhesion induced by epinephrine and ADP, while increased immunoreactivity connected with allergic diseases was associated with increased platelet adhesion induced by epinephrine and ADP. ADP increased platelet spreading in a concentration-dependent manner. Impedimetry proved to be a useful and sensitive method for the qualitative and quantitative measurement of platelet adhesion, even differentiating between subgroups of a healthy population. This novel technique is offered as an important method in the further investigation of platelet function. © 2018 John Wiley & Sons Ltd.
Unbiased, scalable sampling of protein loop conformations from probabilistic priors.
Zhang, Yajia; Hauser, Kris
2013-01-01
Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
Anticoagulative strategies in reconstructive surgery – clinical significance and applicability
Jokuszies, Andreas; Herold, Christian; Niederbichler, Andreas D.; Vogt, Peter M.
2012-01-01
Advanced strategies in reconstructive microsurgery and especially free tissue transfer with advanced microvascular techniques have been routinely applied and continuously refined for more than three decades in day-to-day clinical work. Bearing in mind the success rates of more than 95%, the value of these techniques in patient care and comfort (one-step reconstruction of even the most complex tissue defects) cannot be overestimated. However, anticoagulative protocols and practices are far from general acceptance and – most importantly – lack an evidence base, while the reconstructive and microsurgical methods are mostly standardized. Therefore, the aim of our work was to review the actual literature and synoptically lay out the mechanisms of action of the plethora of anticoagulative substances. The pharmacologic prevention and the surgical intervention of thromboembolic events represent an established and essential part of microsurgery. The high success rates of microvascular free tissue transfer as of today are due to treatment of patients in reconstructive centers where proper patient selection, excellent microsurgical technique, tissue transfer to adequate recipient vessels, and early anastomotic revision in case of thrombosis is provided. Whether the choice of antithrombotic agents is a factor of success remains unclear. Undoubtedly, however, the lack of microsurgical experience and bad technique can never be compensated by any regimen of antithrombotic therapy. Above all, the development of consistent standards and algorithms in reconstructive microsurgery is absolutely essential to optimize clinical outcomes and increase multicentric and international comparability of postoperative results and complications. PMID:22294976
Piper, Timm; Piper, Jörg
2012-04-01
Variable bright-darkfield contrast (VBDC) is a new technique in light microscopy which promises significant improvements in imaging of transparent colorless specimens especially when characterized by a high regional thickness and a complex three-dimensional architecture. By a particular light pathway, two brightfield- and darkfield-like partial images are simultaneously superimposed so that the brightfield-like absorption image based on the principal zeroth order maximum interferes with the darkfield-like reflection image which is based on the secondary maxima. The background brightness and character of the resulting image can be continuously modulated from a brightfield-dominated to a darkfield-dominated appearance. When the weighting of the dark- and brightfield components is balanced, medium background brightness will result showing the specimen in a phase- or interference contrast-like manner. Specimens can either be illuminated axially/concentrically or obliquely/eccentrically. In oblique illumination, the angle of incidence and grade of eccentricity can be continuously changed. The condenser aperture diaphragm can be used for improvements of the image quality in the same manner as usual in standard brightfield illumination. By this means, the illumination can be optimally adjusted to the specific properties of the specimen. In VBDC, the image contrast is higher than in normal brightfield illumination, blooming and scattering are lower than in standard darkfield examinations, and any haloing is significantly reduced or absent. Although axial resolution and depth of field are higher than in concurrent standard techniques, the lateral resolution is not visibly reduced. Three dimensional structures, reliefs and fine textures can be perceived in superior clarity. Copyright © 2011 Wiley-Liss, Inc.
Umoh, J. U.; Blenden, D. C.
1981-01-01
Formalin-fixed central nervous system tissue from clinically rabid animals was treated with 0.25% trypsin and tested for the presence of rabies virus antigen by direct immunofluorescent (IF) staining. The results were comparable with those obtained from direct IF staining of acetone-fixed standard smears or fresh frozen-cut sections. Experiments were conducted using coded brain specimens (classified as IF-negative, weakly positive, or strongly positive) and showed a specificity of 100% for sections and 92% for smears; the latter figure was subsequently improved by modifying the preparation technique. The specificity of the technique was checked by standard virus neutralization of the conjugate, and by known antibody neutralization of the virus antigen in the specimens. The optimal duration for the trypsin digestion was found to be a minimum of 60 minutes at 37 °C or 120 minutes at 4 °C. The tissues could be held in buffered formalin for between 3 days and 7 weeks with no apparent difference in the results. Satisfactory concentrations of formalin were 0.125% or 0.25%. Trypsin was found to have no effect on non-formalinized tissues, with the exception that softening occurred, making tissues harder to cut and process. The results suggest that trypsinization of formalin-fixed tissue is a valid procedure for the preparation of tissues for IF examination, which would be useful in cases where the current standard techniques cannot be used. However, further evaluation of the method is still required. PMID:6172212
2017-11-01
ARL-TR-8225 ● NOV 2017 ● US Army Research Laboratory ● Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques
Research on an augmented Lagrangian penalty function algorithm for nonlinear programming
NASA Technical Reports Server (NTRS)
Frair, L.
1978-01-01
The augmented Lagrangian (ALAG) Penalty Function Algorithm for optimizing nonlinear mathematical models is discussed. The mathematical models of interest are deterministic in nature and finite dimensional optimization is assumed. A detailed review of penalty function techniques in general and the ALAG technique in particular is presented. Numerical experiments are conducted utilizing a number of nonlinear optimization problems to identify an efficient ALAG Penalty Function Technique for computer implementation.
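As a minimal illustration of the ALAG idea (not the report's specific algorithm or test problems), the following Python sketch alternates an unconstrained minimization of the augmented Lagrangian with multiplier and penalty updates for a single equality constraint; the toy problem, penalty schedule, and use of SciPy's BFGS for the inner solves are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, mu0=1.0, lam0=0.0,
                         outer_iters=20, tol=1e-8):
    """Basic ALAG iteration for one equality constraint h(x) = 0.

    Inner unconstrained minimizations use BFGS; the multiplier is
    updated as lam <- lam + mu * h(x), and the penalty parameter mu
    is increased when the constraint violation shrinks too slowly.
    """
    x, lam, mu = np.asarray(x0, float), lam0, mu0
    prev_viol = np.inf
    for _ in range(outer_iters):
        L = lambda z: f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
        x = minimize(L, x, method="BFGS").x
        viol = abs(h(x))
        if viol < tol:
            break
        lam += mu * h(x)
        if viol > 0.25 * prev_viol:   # slow progress -> stiffer penalty
            mu *= 10.0
        prev_viol = viol
    return x, lam

# Toy problem: min (x0-2)^2 + (x1-1)^2  subject to  x0 + x1 - 1 = 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
h = lambda x: x[0] + x[1] - 1.0
x_opt, lam_opt = augmented_lagrangian(f, h, x0=[0.0, 0.0])  # approx (1, 0)
```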
Nguyen, Phong Thanh; Abbosh, Amin; Crozier, Stuart
2017-06-01
In this paper, a technique for noninvasive microwave hyperthermia treatment for breast cancer is presented. In the proposed technique, microwave hyperthermia of patient-specific breast models is implemented using a three-dimensional (3-D) antenna array based on differential beam-steering subarrays to locally raise the temperature of the tumor to therapeutic values while keeping healthy tissue at normal body temperature. This approach is realized by optimizing the excitations (phases and amplitudes) of the antenna elements using the global optimization method particle swarm optimization. The antennae excitation phases are optimized to maximize the power at the tumor, whereas the amplitudes are optimized to accomplish the required temperature at the tumor. During the optimization, the technique ensures that no hotspots exist in healthy tissue. To implement the technique, a combination of linked electromagnetic and thermal analyses using MATLAB and the full-wave electromagnetic simulator is conducted. The technique is tested at 4.2 GHz, which is a compromise between the required power penetration and focusing, in a realistic simulation environment, which is built using a 3-D antenna array of 4 × 6 unidirectional antenna elements. The presented results on very dense 3-D breast models, which have the realistic dielectric and thermal properties, validate the capability of the proposed technique in focusing power at the exact location and volume of tumor even in the challenging cases where tumors are embedded in glands. Moreover, the models indicate the capability of the technique in dealing with tumors at different on- and off-axis locations within the breast with high efficiency in using the microwave power.
Design of a modulated orthovoltage stereotactic radiosurgery system.
Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S
2017-07-01
To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
Optimization of a chondrogenic medium through the use of factorial design of experiments.
Enochson, Lars; Brittberg, Mats; Lindahl, Anders
2012-12-01
The standard culture system for in vitro cartilage research is based on cells in a three-dimensional micromass culture and a defined medium containing the chondrogenic key growth factor, transforming growth factor (TGF)-β1. The aim of this study was to optimize the medium for chondrocyte micromass culture. Human chondrocytes were cultured in different media formulations, designed with a factorial design of experiments (DoE) approach and based on the standard medium for redifferentiation. The significant factors for the redifferentiation of the chondrocytes were determined and optimized in a two-step process through the use of response surface methodology. TGF-β1, dexamethasone, and glucose were significant factors for differentiating the chondrocytes. Compared to the standard medium, TGF-β1 was increased 30%, dexamethasone reduced 50%, and glucose increased 22%. The potency of the optimized medium was validated in a comparative study against the standard medium. The optimized medium resulted in micromass cultures with increased expression of genes important for the articular chondrocyte phenotype and in cultures with increased glycosaminoglycan/DNA content. Optimizing the standard medium with the efficient DoE method, a new medium that gave better redifferentiation for articular chondrocytes was determined.
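The response-surface step can be sketched as an ordinary least-squares fit of a second-order polynomial in the three significant factors, followed by solving for the stationary point. The coded design points and responses below are illustrative placeholders, not the study's measurements.

```python
import numpy as np

def quad_design_matrix(X):
    """Second-order model columns: 1, x_i, x_i*x_j (i<j), x_i^2."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares fit of the quadratic model; returns coefficients."""
    A = quad_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Illustrative coded design (3 factors) with a made-up response;
# real data would come from the micromass redifferentiation assays.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 3))
y = (5 + 1.3 * X[:, 0] - 0.8 * X[:, 1] + 0.4 * X[:, 2]
     - X[:, 0] ** 2 - 0.5 * X[:, 1] ** 2 - 0.3 * X[:, 2] ** 2
     + rng.normal(0, 0.05, 20))
beta = fit_response_surface(X, y)

# Stationary point of the fitted surface: solve  b + 2 B x = 0
k = 3
b = beta[1:1 + k]
B = np.zeros((k, k))
idx = 1 + k
for i in range(k):
    for j in range(i + 1, k):
        B[i, j] = B[j, i] = beta[idx] / 2.0
        idx += 1
for i in range(k):
    B[i, i] = beta[idx]
    idx += 1
x_star = np.linalg.solve(-2.0 * B, b)   # coded factor levels at the optimum
```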
NASA Astrophysics Data System (ADS)
Tian, Lunfu; Wang, Lili; Gao, Wei; Weng, Xiaodong; Liu, Jianhui; Zou, Deshuang; Dai, Yichun; Huang, Shuke
2018-03-01
For the quantitative analysis of the principal elements in lead-antimony-tin alloys, direct X-ray fluorescence (XRF) analysis of solid metal disks introduces considerable errors due to microstructure inhomogeneity. To solve this problem, an aqueous solution XRF method is proposed for determining major amounts of Sb, Sn, Pb in lead-based bearing alloys. The alloy samples were dissolved in a mixture of nitric acid and tartaric acid to eliminate the effect of the alloy microstructure on the XRF analysis. Rh Compton scattering was used as the internal standard for Sb and Sn, and Bi was added as the internal standard for Pb, to correct for matrix effects and for instrumental and operational variations. High-purity lead, antimony and tin were used to prepare synthetic standards. Using these standards, calibration curves were constructed for the three elements after optimizing the spectrometer parameters. The method has been successfully applied to the analysis of lead-based bearing alloys and is more rapid than the classical titration methods normally used. The determination results are consistent with certified values or those obtained by titrations.
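The internal-standard calibration described above reduces to fitting a straight line through intensity ratios measured on synthetic standards and inverting it for unknowns. A minimal Python sketch is shown below; the Sb concentrations and the Sb and Rh Compton count rates are made up for illustration.

```python
import numpy as np

def internal_standard_calibration(conc_std, I_analyte_std, I_internal_std):
    """Fit ratio = a * concentration + b by least squares and return a
    function that converts a measured ratio back to concentration."""
    ratio = np.asarray(I_analyte_std) / np.asarray(I_internal_std)
    a, b = np.polyfit(conc_std, ratio, deg=1)   # slope, intercept
    return lambda I_analyte, I_internal: (I_analyte / I_internal - b) / a

# Illustrative Sb calibration against the Rh Compton line (made-up counts)
conc = np.array([2.0, 5.0, 10.0, 15.0])           # wt% Sb in synthetic standards
I_sb = np.array([4100., 10150., 20300., 30100.])  # Sb line counts
I_rh = np.array([2000., 1990., 2010., 1995.])     # Rh Compton counts
to_conc = internal_standard_calibration(conc, I_sb, I_rh)
sb_in_sample = to_conc(14500.0, 2005.0)           # wt% Sb in an unknown
```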
The integrated manual and automatic control of complex flight systems
NASA Technical Reports Server (NTRS)
Schmidt, D. K.
1985-01-01
Pilot/vehicle analysis techniques for optimizing aircraft handling qualities are presented. The analysis approach considered is based on optimal control frequency-domain techniques. These techniques stem from a Neal-Smith-like optimal control analysis of aircraft attitude dynamics, extended to analyze the flared landing task. Some modifications to the technique are suggested and discussed. An in-depth analysis of the effect of the experimental variables, such as the prefilter, is conducted to gain further insight into the flared landing task for this class of vehicle dynamics.
NASA Astrophysics Data System (ADS)
Saunders, R.; Samei, E.; Badea, C.; Yuan, H.; Ghaghada, K.; Qi, Y.; Hedlund, L. W.; Mukundan, S.
2008-03-01
Dual-energy contrast-enhanced breast tomosynthesis has been proposed as a technique to improve the detection of early-stage cancer in young, high-risk women. This study focused on optimizing this technique using computer simulations. The computer simulation used analytical calculations to optimize the signal difference to noise ratio (SdNR) of resulting images from such a technique at constant dose. The optimization included the optimal radiographic technique, optimal distribution of dose between the two single-energy projection images, and the optimal weighting factor for the dual energy subtraction. Importantly, the SdNR included both anatomical and quantum noise sources, as dual energy imaging reduces anatomical noise at the expense of increases in quantum noise. Assuming a tungsten anode, the maximum SdNR at constant dose was achieved for a high energy beam at 49 kVp with 92.5 μm copper filtration and a low energy beam at 49 kVp with 95 μm tin filtration. These analytical calculations were followed by Monte Carlo simulations that included the effects of scattered radiation and detector properties. Finally, the feasibility of this technique was tested in a small animal imaging experiment using a novel iodinated liposomal contrast agent. The results illustrated the utility of dual energy imaging and determined the optimal acquisition parameters for this technique. This work was supported in part by grants from the Komen Foundation (PDF55806), the Cancer Research and Prevention Foundation, and the NIH (NCI R21 CA124584-01). CIVM is a NCRR/NCI National Resource under P41-05959/U24-CA092656.
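The weighting step of dual-energy subtraction can be illustrated with a toy Python simulation in which the weight w in S = ln(I_H) − w·ln(I_L) is scanned to maximize an SdNR containing both anatomical and quantum noise. All signal and noise levels below are invented for illustration and are not the optimized values reported in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000                                    # pixels per region

# Illustrative log-signal model: shared anatomical fluctuation plus
# independent quantum noise; iodine gives more contrast at low energy.
anat = rng.normal(0.0, 0.05, n)               # anatomical noise, correlated across energies
log_hi_bg = 1.00 + anat + rng.normal(0, 0.010, n)
log_lo_bg = 1.60 + 1.5 * anat + rng.normal(0, 0.015, n)
log_hi_le = log_hi_bg + 0.020                 # lesion adds a small signal at high energy
log_lo_le = log_lo_bg + 0.060                 # ...and a larger one at low energy

def sdnr(w):
    """Signal difference to noise ratio of the weighted subtraction."""
    s_bg = log_hi_bg - w * log_lo_bg
    s_le = log_hi_le - w * log_lo_le
    return abs(s_le.mean() - s_bg.mean()) / s_bg.std()

weights = np.linspace(0.0, 1.5, 301)
w_opt = weights[np.argmax([sdnr(w) for w in weights])]
```

The scan makes the trade-off explicit: weights that cancel the anatomical term amplify quantum noise, so the best w sits between the two extremes.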
Optimal time-domain technique for pulse width modulation in power electronics
NASA Astrophysics Data System (ADS)
Mayergoyz, I.; Tyagi, S.
2018-05-01
Optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimal criteria are discussed and illustrated by numerical examples.
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results have been obtained for non-linear engineering problems in the past using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic surveying. It involves minimizing an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by hybrid neuro-genetic programming approaches. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results compared to the stand-alone genetic programming method.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
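A minimal one-dimensional example of the least-squares PCE construction is sketched below using plain Monte Carlo sampling with a fixed oversampling ratio; the coherence-optimal and alphabetic-optimal strategies discussed in the review are not reproduced, and the model function is an arbitrary smooth stand-in.

```python
import numpy as np
from numpy.polynomial import legendre

def lsq_pce_1d(model, order=8, oversampling=2.0, seed=0):
    """Least-squares PCE of a scalar model with one uniform input on [-1, 1].

    Plain Monte Carlo sampling; the number of samples is the
    oversampling ratio times the number of basis terms.
    """
    rng = np.random.default_rng(seed)
    n_terms = order + 1
    n_samp = int(np.ceil(oversampling * n_terms))
    xi = rng.uniform(-1.0, 1.0, n_samp)
    Psi = legendre.legvander(xi, order)     # Legendre P_0..P_order at the samples
    y = model(xi)
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return coeffs

# Example: surrogate of a smooth nonlinear response
model = lambda x: np.exp(0.7 * x) * np.sin(2.0 * x)
c = lsq_pce_1d(model, order=10, oversampling=1.5)
surrogate = lambda x: legendre.legval(x, c)
```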
Optimized Orthovoltage Stereotactic Radiosurgery
NASA Astrophysics Data System (ADS)
Fagerstrom, Jessica M.
Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular function dose distributions, which are desirable for some targets such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. In this work, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4pi, isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field size and energy dependent, and values were found for prescription isodose that optimize dose gradients. Next, a pencil-beam model was used with a Genetic Algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones were able to achieve both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. Measurements were performed in water using radiochromic film scanned with both a standard white light flatbed scanner as well as a prototype laser densitometry system. Measured beam profiles showed that the modulated beams could more closely approach rectangular function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.
Advanced optimal design concepts for composite material aircraft repair
NASA Astrophysics Data System (ADS)
Renaud, Guillaume
The application of an automated optimization approach for bonded composite patch design is investigated. To do so, a finite element computer analysis tool to evaluate patch design quality was developed. This tool examines both the mechanical and the thermal issues of the problem. The optimized shape is obtained with a bi-quadratic B-spline surface that represents the top surface of the patch. Additional design variables corresponding to the ply angles are also used. Furthermore, a multi-objective optimization approach was developed to treat multiple and uncertain loads. This formulation aims at designing according to the most unfavorable mechanical and thermal loads. The problem of finding the optimal patch shape for several situations is addressed. The objective is to minimize a stress component at a specific point in the host structure (plate) while ensuring acceptable stress levels in the adhesive. A parametric study is performed in order to identify the effects of various shape parameters on the quality of the repair and its optimal configuration. The effects of mechanical loads and service temperature are also investigated. Two bonding methods are considered, as they imply different thermal histories. It is shown that the proposed techniques are effective and inexpensive for analyzing and optimizing composite patch repairs. It is also shown that thermal effects should not only be present in the analysis, but that they play a paramount role on the resulting quality of the optimized design. In all cases, the optimized configuration results in a significant reduction of the desired stress level by deflecting the loads away from rather than over the damage zone, as is the case with standard designs. Furthermore, the automated optimization ensures the safety of the patch design for all considered operating conditions.
van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J
2018-03-01
In gait studies, body pose reconstruction (BPR) techniques have been widely explored, but no previous protocols have been developed for speed skating, while the peculiarities of the skating posture and technique do not automatically allow for the transfer of the results of those explorations to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics of speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight body segment model together with a global optimization method with revolute joint in the knee and in the lumbosacral joint, while keeping the other joints spherical, would be the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-square error method for the inverse dynamics. Reporting on the BPR technique and the inverse dynamic method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least square error approach. Although these results are aimed at speed skating, reporting on the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.
Delaminated Transfer of CVD Graphene
NASA Astrophysics Data System (ADS)
Clavijo, Alexis; Mao, Jinhai; Tilak, Nikhil; Altvater, Michael; Andrei, Eva
Single layer graphene is commonly synthesized by dissociation of a carbonaceous gas at high temperatures in the presence of a metallic catalyst in a process known as Chemical Vapor Deposition or CVD. Although it is possible to achieve high quality graphene by CVD, the standard transfer technique of etching away the metallic catalyst is wasteful and jeopardizes the quality of the graphene film by contamination from etchants. Thus, development of a clean transfer technique and preservation of the parent substrate remain prominent hurdles to overcome. In this study, we employ a copper pretreatment technique and optimized parameters for growth of high quality single layer graphene at atmospheric pressure. We address the transfer challenge by utilizing the adhesive properties between a polymer film and graphene to achieve etchant-free transfer of graphene films from a copper substrate. Based on this concept we developed a technique for dry delamination and transferring of graphene to hexagonal boron nitride substrates, which produced high quality graphene films while at the same time preserving the integrity of the copper catalyst for reuse. DOE-FG02-99ER45742, Ronald E. McNair Postbaccalaureate Achievement Program.
Improvements to III-nitride light-emitting diodes through characterization and material growth
NASA Astrophysics Data System (ADS)
Getty, Amorette Rose Klug
A variety of experiments were conducted to improve or aid the improvement of the efficiency of III-nitride light-emitting diodes (LEDs), which are a critical area of research for multiple applications, including high-efficiency solid state lighting. To enhance the light extraction in ultraviolet LEDs grown on SiC substrates, a distributed Bragg reflector (DBR) optimized for operation in the range from 250 to 280 nm has been developed using MBE growth techniques. The best devices had a peak reflectivity of 80% with 19.5 periods, which is acceptable for the intended application. DBR surfaces were sufficiently smooth for subsequent epitaxy of the LED device. During the course of this work, pros and cons of AlGaN growth techniques, including analog versus digital alloying, were examined. This work highlighted a need for more accurate values of the refractive index of high-Al-content AlxGa1-xN in the UV wavelength range. We present refractive index results for a wide variety of materials pertinent to the fabrication of optical III-nitride devices. Characterization was done using Variable-Angle Spectroscopic Ellipsometry. The three binary nitrides, and all three ternaries, have been characterized to a greater or lesser extent depending on material compositions available. Semi-transparent p-contact materials and other thin metals for reflecting contacts have been examined to allow optimization of deposition conditions and to allow highly accurate modeling of the behavior of light within these devices. Standard substrate materials have also been characterized for completeness and as an indicator of the accuracy of our modeling technique. We have demonstrated a new technique for estimating the internal quantum efficiency (IQE) of nitride light-emitting diodes. This method is advantageous over the standard low-temperature photoluminescence-based method of estimating IQE, as the new method is conducted under the same conditions as normal device operation. We have developed processing techniques and have characterized patternable absorbing materials which eliminate scattered light within the device, allowing an accurate simulation of the device extraction efficiency. This efficiency, with measurements of the input current and optical output power, allow a straightforward calculation of the IQE. Two sets of devices were measured, one of material grown in-house, with a rough p-GaN surface, and one of commercial LED material, with smooth interfaces and very high internal quantum efficiency.
Lichte, F.E.; Meier, A.L.; Crock, J.G.
1987-01-01
A method of analysis of geological materials for the determination of the rare-earth elements using the Inductively coupled plasma mass spectrometric technique (ICP-MS) has been developed. Instrumental parameters and factors affecting analytical results have been first studied and then optimized. Samples are analyzed directly following an acid digestion, without the need for separation or preconcentration, with limits of detection of 2-11 ng/g, precision of ±2.5% relative standard deviation, and accuracy comparable to inductively coupled plasma emission spectrometry and instrumental neutron activation analysis. A commercially available ICP-MS instrument is used with modifications to the sample introduction system, torch, and sampler orifice to reduce the effects of high salt content of sample solutions prepared from geologic materials. Corrections for isobaric interferences from oxide ions and other diatomic and triatomic ions are made mathematically. Special internal standard procedures are used to compensate for drift in metal/metal-oxide ratios and sensitivity. Reference standard values are used to verify the accuracy and utility of the method.
Chuard, C.; Reller, L. B.
1998-01-01
The bile-esculin test is used to differentiate enterococci and group D streptococci from non-group D viridans group streptococci. The effects on test performance of the concentration of bile salts, inoculum, and duration of incubation were examined with 110 strains of enterococci, 30 strains of Streptococcus bovis, and 110 strains of non-group D viridans group streptococci. Optimal sensitivity (>99%) and specificity (97%) of the bile-esculin test can be obtained with a bile concentration of 40%, a standardized inoculum of 10(6) CFU, and incubation for 24 h. PMID:9542954
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
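The program itself is SPSS syntax; a rough Python analogue of the described workflow (cross-validation split, regression fit, standardized coefficients as a simple importance proxy, and hold-out R^2) is sketched below with illustrative data. It is not a re-implementation of the published relative-importance procedure.

```python
import numpy as np

def split_fit_validate(X, y, train_frac=0.6, seed=0):
    """Cross-validation split, OLS fit, standardized coefficients, and
    R^2 on the hold-out part (a simple stand-in for the SPSS tool)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    n_tr = int(train_frac * n)
    tr, te = idx[:n_tr], idx[n_tr:]
    Xtr = np.column_stack([np.ones(len(tr)), X[tr]])
    beta, *_ = np.linalg.lstsq(Xtr, y[tr], rcond=None)
    # Standardized coefficients as a rough relative-importance proxy
    std_beta = beta[1:] * X[tr].std(axis=0) / y[tr].std()
    Xte = np.column_stack([np.ones(len(te)), X[te]])
    resid = y[te] - Xte @ beta
    r2 = 1.0 - resid.var() / y[te].var()
    return beta, std_beta, r2

# Illustrative data with three predictors of unequal importance
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=200)
beta, std_beta, r2_holdout = split_fit_validate(X, y)
```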
Laboratory Diagnosis of Infective Endocarditis
Liesman, Rachael M.; Pritt, Bobbi S.; Maleszewski, Joseph J.
2017-01-01
ABSTRACT Infective endocarditis is life-threatening; identification of the underlying etiology informs optimized individual patient management. Changing epidemiology, advances in blood culture techniques, and new diagnostics guide the application of laboratory testing for diagnosis of endocarditis. Blood cultures remain the standard test for microbial diagnosis, with directed serological testing (i.e., Q fever serology, Bartonella serology) in culture-negative cases. Histopathology and molecular diagnostics (e.g., 16S rRNA gene PCR/sequencing, Tropheryma whipplei PCR) may be applied to resected valves to aid in diagnosis. Herein, we summarize recent knowledge in this area and propose a microbiologic and pathological algorithm for endocarditis diagnosis. PMID:28659319
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.
Chen, Wen; Zhu, Ming-Dong; Yan, Xiao-Lan; Lin, Li-Jun; Zhang, Jian-Feng; Li, Li; Wen, Li-Yong
2011-06-01
To understand and evaluate the quality of feces examination for schistosomiasis in province-level laboratories of Zhejiang Province, stool samples were examined, using a single-blind design, with the hatching method and the sediment detection method. In the 3 quality control assessments in 2006, 2008 and 2009, most laboratories finished the examinations on time. The accordance rates of the detections were 88.9%, 100% and 93.9%, respectively. The province-level laboratories for schistosomiasis feces examination in Zhejiang Province are becoming standardized, and the techniques of schistosomiasis feces examination are being optimized gradually.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bank, Tracy L.; Roth, Elliot A.; Tinker, Phillip
2016-04-17
Inductively Coupled Plasma Mass Spectrometry (ICP-MS) is used to measure the concentrations of rare earth elements (REE) in certified standard reference materials including shale and coal. The instrument used in this study is a Perkin Elmer Nexion 300D ICP-MS. The goal of the study is to identify sample preparation and operating conditions that optimize recovery of each element of concern. Additionally, the precision and accuracy of the technique are summarized and the drawbacks and limitations of the method are outlined.
Optical trapping performance of dielectric-metallic patchy particles
Lawson, Joseph L.; Jenness, Nathan J.; Clark, Robert L.
2015-01-01
We demonstrate a series of simulation experiments examining the optical trapping behavior of composite micro-particles consisting of a small metallic patch on a spherical dielectric bead. A full parameter space of patch shapes, based on current state of the art manufacturing techniques, and optical properties of the metallic film stack is examined. Stable trapping locations and optical trap stiffness of these particles are determined based on the particle design and potential particle design optimizations are discussed. A final test is performed examining the ability to incorporate these composite particles with standard optical trap metrology technologies. PMID:26832054
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO is generated by observing the behaviour of cats and composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution would be the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO based approach have been compared to those of other well-known optimization methods such as Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performances of the CSO based designed FIR filters have proven to be superior as compared to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that the CSO is the best optimizer among other relevant techniques, not only in the convergence speed but also in the optimal performances of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
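Independently of the metaheuristic used, designs of this kind score each candidate coefficient vector against an ideal frequency response. A minimal Python sketch of such a low-pass error function is shown below; the CSO algorithm itself is not reproduced, and the band edges, filter length, and starting point are illustrative.

```python
import numpy as np
from scipy.signal import freqz

def lowpass_fitness(h, wp=0.3 * np.pi, ws=0.4 * np.pi, n_grid=512):
    """Error of an FIR impulse response against an ideal low-pass
    template: unit gain in the passband, zero in the stopband, with
    the transition band ignored. Lower is better."""
    w, H = freqz(h, worN=n_grid)
    mag = np.abs(H)
    err = np.sum((mag[w <= wp] - 1.0) ** 2)    # passband ripple
    err += np.sum(mag[w >= ws] ** 2)           # stopband leakage
    return err

# Any candidate coefficient vector produced by CSO, PSO, DE, or RGA can
# be scored this way; here a crude windowed-sinc start point is scored.
N = 21
n = np.arange(N) - (N - 1) / 2
h0 = 0.35 * np.sinc(0.35 * n) * np.hamming(N)
score = lowpass_fitness(h0)
```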
Quaternion error-based optimal control applied to pinpoint landing
NASA Astrophysics Data System (ADS)
Ghiglino, Pablo
Accurate control techniques for pinpoint planetary landing - i.e., the goal of achieving landing errors on the order of 100 m for unmanned missions - pose a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is also affected by the well-known trade-off in UAV control: for complex underlying models the control is sub-optimal, while optimal control can only be applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the result is two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with industry-standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research in order to tune, in an online manner, the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, the adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrix tuning, reduced computation time, and no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers can achieve landing errors on the order of the pinpoint landing requirements, making them not only very precise UAV controllers but also potential candidates for pinpoint-landing unmanned missions.
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain the set of best-fit model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific classes of functions. However, Sherpa, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: a Levenberg-Marquardt algorithm, obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and a Nelder-Mead simplex method implemented in-house based on variations of the algorithm described in the literature. A global-search Monte-Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their usage cases, focusing on the application to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
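The three classes of optimizers named above (Levenberg-Marquardt, Nelder-Mead simplex, and differential evolution) are available generically in SciPy; the hedged sketch below fits a simple power-law model to synthetic data with all three, purely to illustrate the minimization step, and does not use Sherpa's own API.

```python
import numpy as np
from scipy.optimize import least_squares, minimize, differential_evolution

rng = np.random.default_rng(1)
energy = np.linspace(0.5, 8.0, 200)                      # keV, illustrative grid
true = (2.0, 1.7)                                        # (normalization, photon index)
counts = true[0] * energy ** (-true[1]) + 0.05 * rng.normal(size=energy.size)

def model(p, x):                 # simple power law, standing in for an astrophysical model
    return p[0] * x ** (-p[1])

def residuals(p):
    return model(p, energy) - counts

def chi2(p):
    return float(np.sum(residuals(p) ** 2))

# local methods: Levenberg-Marquardt-type least squares and Nelder-Mead simplex
fit_lm = least_squares(residuals, x0=[1.0, 1.0], method="lm")
fit_nm = minimize(chi2, x0=[1.0, 1.0], method="Nelder-Mead")

# global method: differential evolution (Storn & Price 1997) over parameter bounds
fit_de = differential_evolution(chi2, bounds=[(0.1, 10.0), (0.1, 5.0)], seed=1)

print(fit_lm.x, fit_nm.x, fit_de.x)
```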
Pushpalatha, Hulikal Basavarajaiah; Pramod, Kumar; Sundaram, Ramachandran; Shyam, Ramakrishnan
2014-10-01
Irradiation and the use of preservatives are routine procedures for controlling bio-burden in solid herbal dosage forms. Although the use of steam or pasteurization is reported in the literature, few studies address its application to reducing the bio-burden of herbal drug formulations. Hence, we undertook a series of studies to explore the suitability of pasteurization as a method to reduce bio-burden during the formulation and development of herbal dosage forms, which would pave the way for preparing preservative-free formulations. Optimized Ashoka (Saraca indica) tablets were formulated and developed. The optimized formula was then subjected to pasteurization during formulation, with the aim of keeping the microbial count well within the limits of pharmacopoeial standards. Three variants of the optimized Ashoka formulation - with preservative, without preservative, and without preservative but subjected to pasteurization - were then compared using routine in-process parameters and stability studies. The results indicate that Ashoka tablets manufactured with the pasteurization step not only showed a bio-burden within the limits of pharmacopoeial standards but also complied with other parameters, such as stability and quality. The outcome of this pilot study shows that pasteurization can be employed as a distinctive method for reducing bio-burden during the formulation and development of herbal dosage forms, such as tablets.
The promise of macromolecular crystallization in microfluidic chips
NASA Technical Reports Server (NTRS)
van der Woerd, Mark; Ferree, Darren; Pusey, Marc
2003-01-01
Microfluidics, or lab-on-a-chip technology, is proving to be a powerful, rapid, and efficient approach to a wide variety of bioanalytical and microscale biopreparative needs. The low materials consumption, combined with the potential for packing a large number of experiments into a few cubic centimeters, makes it an attractive technique for both initial screening and subsequent optimization of macromolecular crystallization conditions. Screening operations, which require combining a macromolecule solution with a standard set of premixed solutions, are relatively straightforward and have been successfully demonstrated in a microfluidics platform. Optimization methods, in which crystallization solutions are independently formulated from a range of stock solutions, are considerably more complex and have yet to be demonstrated. To be competitive with either approach, a microfluidics system must offer ease of operation, be able to maintain a sealed environment over several weeks to months, and give ready access for the observation and harvesting of crystals as they are grown.
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, together with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.
2016-02-01
We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.
NASA Astrophysics Data System (ADS)
Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud
2014-02-01
The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.
Thermosonication and optimization of stingless bee honey processing.
Chong, K Y; Chin, N L; Yusof, Y A
2017-10-01
The effects of thermosonication on the quality of a stingless bee honey, the Kelulut, were studied using processing temperature from 45 to 90 ℃ and processing time from 30 to 120 minutes. Physicochemical properties including water activity, moisture content, color intensity, viscosity, hydroxymethylfurfural content, total phenolic content, and radical scavenging activity were determined. Thermosonication reduced the water activity and moisture content by 7.9% and 16.6%, respectively, compared to 3.5% and 6.9% for conventional heating. For thermosonicated honey, color intensity increased by 68.2%, viscosity increased by 275.0%, total phenolic content increased by 58.1%, and radical scavenging activity increased by 63.0% when compared to its raw form. The increase of hydroxymethylfurfural to 62.46 mg/kg was still within the limits of international standards. Optimized thermosonication conditions using response surface methodology were predicted at 90 ℃ for 111 minutes. Thermosonication was revealed as an effective alternative technique for honey processing.
NASA Astrophysics Data System (ADS)
Swarnalatha, Kalaiyar; Kamalesu, Subramaniam; Subramanian, Ramasamy
2016-11-01
New ruthenium complexes I, II and III were synthesized using 5-chlorothiophene-2-carboxylic acid (5TPC) as the ligand, and the complexes were characterized by elemental analysis, FT-IR, 1H and 13C NMR, and mass spectrometric techniques. Photophysical and electrochemical studies were carried out, and the structures of the synthesized complexes were optimized using density functional theory (DFT). The molecular geometry, the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies, and the Mulliken atomic charges of the molecules were determined with the B3LYP method and the standard 6-311++G(d,p) basis set, starting from the optimized geometry. The complexes possess excellent stability, and their thermal decomposition temperatures are 185 °C, 180 °C and 200 °C, respectively, indicating that the metal complexes are suitable for the fabrication processes of optoelectronic devices.
Methodological Variables in the Analysis of Cell-Free DNA.
Bronkhorst, Abel Jacobus; Aucamp, Janine; Pretorius, Piet J
2016-01-01
In recent years, cell-free DNA (cfDNA) analysis has received increasing attention as a potential non-invasive screening tool for the early detection of genetic aberrations and a wide variety of diseases, especially cancer. However, except for some prenatal tests and BEAMing, a technique used to detect mutations in various genes of cancer patients, cfDNA analysis is not yet routinely applied in clinical practice. Although some confounding biological factors inherent to the in vivo setting play a key part, it is becoming increasingly clear that this struggle is mainly due to the lack of an analytical consensus, especially as regards quantitative analyses of cfDNA. In order to use quantitative analysis of cfDNA with confidence, process optimization and standardization are crucial. In this work we aim to elucidate the most confounding variables of each preanalytical step that must be considered for process optimization and equivalence of procedures.
Mathematical Optimization Techniques
NASA Technical Reports Server (NTRS)
Bellman, R. (Editor)
1963-01-01
The papers collected in this volume were presented at the Symposium on Mathematical Optimization Techniques held in the Santa Monica Civic Auditorium, Santa Monica, California, on October 18-20, 1960. The objective of the symposium was to bring together, for the purpose of mutual education, mathematicians, scientists, and engineers interested in modern optimization techniques. Some 250 persons attended. The techniques discussed included recent developments in linear, integer, convex, and dynamic programming as well as the variational processes surrounding optimal guidance, flight trajectories, statistical decisions, structural configurations, and adaptive control systems. The symposium was sponsored jointly by the University of California, with assistance from the National Science Foundation, the Office of Naval Research, the National Aeronautics and Space Administration, and The RAND Corporation, through Air Force Project RAND.
Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang
2018-01-01
Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated 1H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable peaks (referred to as proton exchange) in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as a suitable internal standard. Experimental conditions, including the relaxation delay time, the number of scans, and the pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high performance liquid chromatography, as provided by the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and for the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
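For context, a worked example of the standard internal-standard qNMR relation is sketched below; every numerical value (signal areas, proton counts, weighed masses, purity) is a hypothetical placeholder and not taken from the paper.

```python
# Standard internal-standard qNMR relation (all numbers are illustrative):
#   P_a = (I_a / I_is) * (N_is / N_a) * (M_a / M_is) * (m_is / m_a) * P_is
I_a, I_is = 0.386, 1.000      # integrated areas of the quantified analyte and IS signals
N_a, N_is = 2, 6              # protons contributing to each quantified signal (assumed)
M_a, M_is = 1019.24, 299.66   # molar masses, g/mol (octreotide ~1019; gemcitabine HCl ~300)
m_a, m_is = 20.0, 5.0         # weighed masses of sample and internal standard, mg (assumed)
P_is = 0.995                  # certified purity of the internal standard (assumed)

P_a = (I_a / I_is) * (N_is / N_a) * (M_a / M_is) * (m_is / m_a) * P_is
print(f"assay (mass fraction) = {P_a:.3f}")   # ~0.98 with these placeholder numbers
```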
Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L
2012-07-06
A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
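A hedged sketch of the internal-standard calibration arithmetic follows; the concentrations and area ratios are invented placeholders, not the paper's data, and the labeled-analog correction is reduced to a simple linear calibration for illustration.

```python
import numpy as np

# Internal-standard calibration sketch (illustrative values, not the paper's data).
# Each calibration standard contains the analyte at a known level plus a fixed
# amount of the isotopically labeled internal standard (IS).
c_std = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0])            # analyte conc., ng/L
area_ratio_std = np.array([0.051, 0.098, 0.26, 0.49, 1.02, 2.48])  # analyte area / IS area

# linear calibration of the response ratio against concentration
slope, intercept = np.polyfit(c_std, area_ratio_std, 1)

# Quantify an unknown drinking-water sample from its measured area ratio; because
# the labeled IS experiences the same matrix, the ratio compensates for ionization
# suppression caused by coexisting inorganic anions.
area_ratio_sample = 0.62
c_sample = (area_ratio_sample - intercept) / slope
print(f"estimated concentration: {c_sample:.1f} ng/L")
```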
Recommendations for standardized reporting of protein electrophoresis in Australia and New Zealand.
Tate, Jillian; Caldwell, Grahame; Daly, James; Gillis, David; Jenkins, Margaret; Jovanovich, Sue; Martin, Helen; Steele, Richard; Wienholt, Louise; Mollee, Peter
2012-05-01
Although protein electrophoresis of serum (SPEP) and urine (UPEP) specimens is a well-established laboratory technique, the reporting of results using this important method varies considerably between laboratories. The Australasian Association of Clinical Biochemists recognized a need to adopt a standardized approach to reporting SPEP and UPEP by clinical laboratories. A Working Party considered available data including published literature and clinical studies, together with expert opinion in order to establish optimal reporting practices. A position paper was produced, which was subsequently revised through a consensus process involving scientists and pathologists with expertise in the field throughout Australia and New Zealand. Recommendations for standardized reporting of protein electrophoresis have been produced. These cover analytical requirements: detection systems; serum protein and albumin quantification; fractionation into alpha-1, alpha-2, beta and gamma fractions; paraprotein quantification; urine Bence Jones protein quantification; paraprotein characterization; and laboratory performance, expertise and staffing. The recommendations also include general interpretive commenting and commenting for specimens with paraproteins and small bands together with illustrative examples of reports. Recommendations are provided for standardized reporting of protein electrophoresis in Australia and New Zealand. It is expected that such standardized reporting formats will reduce both variation between laboratories and the risk of misinterpretation of results.
Total energy expenditure in burned children using the doubly labeled water technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goran, M.I.; Peters, E.J.; Herndon, D.N.
Total energy expenditure (TEE) was measured in 15 burned children with the doubly labeled water technique. Application of the technique in burned children required evaluation of potential errors resulting from nutritional intake altering background enrichments during studies and from the high rate of water turnover relative to CO2 production. Five studies were discarded because of these potential problems. TEE was 1.33 +/- 0.27 times predicted basal energy expenditure (BEE), and in studies where resting energy expenditure (REE) was simultaneously measured, TEE was 1.18 +/- 0.17 times REE, which in turn was 1.16 +/- 0.10 times predicted BEE. TEE was significantly correlated with measured REE (r2 = 0.92) but not with predicted BEE. These studies substantiate the advantage of measuring REE to predict TEE in severely burned patients as opposed to relying on standardized equations. Therefore we recommend that optimal nutritional support will be achieved in convalescent burned children by multiplying REE by an activity factor of 1.2.
Response Surface Methods for Spatially-Resolved Optical Measurement Techniques
NASA Technical Reports Server (NTRS)
Danehy, P. M.; Dorrington, A. A.; Cutler, A. D.; DeLoach, R.
2003-01-01
Response surface methods (or methodology), RSM, have been applied to improve data quality for two vastly different spatially-resolved optical measurement techniques. In the first application, modern design of experiments (MDOE) methods, including RSM, are employed to map the temperature field in a direct-connect supersonic combustion test facility at NASA Langley Research Center. The laser-based measurement technique known as coherent anti-Stokes Raman spectroscopy (CARS) is used to measure temperature at various locations in the combustor. RSM is then used to develop temperature maps of the flow. Even though the temperature fluctuations at a single point in the flowfield have a standard deviation on the order of 300 K, RSM provides analytic fits to the data having 95% confidence interval half width uncertainties in the fit as low as +/-30 K. Methods of optimizing future CARS experiments are explored. The second application of RSM is to quantify the shape of a 5-meter diameter, ultra-light, inflatable space antenna at NASA Langley Research Center.
NASA Astrophysics Data System (ADS)
Lynam, Alfred E.
2015-04-01
Multiple-satellite-aided capture is an efficient technique for capturing a spacecraft into orbit at Jupiter. However, finding the times when the Galilean moons of Jupiter align such that three or four of them can be encountered in a single pass is difficult using standard astrodynamics algorithms such as Lambert's problem. In this paper, we present simple but powerful techniques that simplify the dynamics and geometry of the Galilean satellites so that many of these triple- and quadruple-satellite-aided capture sequences can be found quickly over an extended 60-year time period from 2020 to 2080. The techniques find many low-fidelity trajectories that could be used as initial guesses for future high-fidelity optimization. Results indicate the existence of approximately 3,100 unique triple-satellite-aided capture trajectories and 6 unique quadruple-satellite-aided capture trajectories during the 60-year time period. The entire search takes less than one minute of computational time.
Generalized ISAR--part II: interferometric techniques for three-dimensional location of scatterers.
Given, James A; Schmidt, William R
2005-11-01
This paper is the second part of a study dedicated to optimizing diagnostic inverse synthetic aperture radar (ISAR) studies of large naval vessels. The method developed here provides accurate determination of the position of important radio-frequency scatterers by combining accurate knowledge of ship position and orientation with specialized signal processing. The method allows for the simultaneous presence of substantial Doppler returns from both change of roll angle and change of aspect angle by introducing a generalized ISAR formulation. The first paper provides two modes of interpreting ISAR plots, one valid when roll Doppler is dominant, the other valid when the aspect angle Doppler is dominant. Here, we provide, for each type of ISAR plot technique, a corresponding interferometric ISAR (InSAR) technique. The former, aspect-angle dominated InSAR, is a generalization of standard InSAR; the latter, roll-angle dominated InSAR, appears to be new to this work. Both methods are shown to be efficient at identifying localized scatterers under simulation conditions.
Osendarp, Saskia J M; Broersen, Britt; van Liere, Marti J; De-Regil, Luz M; Bahirathan, Lavannya; Klassen, Eva; Neufeld, Lynnette M
2016-12-01
The question of whether diets composed of local foods can meet recommended nutrient intakes in children aged 6 to 23 months living in low- and middle-income countries is contested. To review evidence from studies evaluating whether (1) macro- and micronutrient requirements of children aged 6 to 23 months from low- and middle-income countries are met by the consumption of locally available foods ("observed intake") and (2) nutrient requirements can be met when the use of local foods is optimized using modeling techniques ("modeled intake"). Twenty-three articles were included after conducting a systematic literature search. To allow for comparisons between studies, findings of 15 observed-intake studies were compared against their contribution to a standardized recommended nutrient intake from complementary foods. For studies with data on intake distribution, the percentage of children below the estimated average requirement was calculated. Data from the observed-intake studies indicate that children aged 6 to 23 months meet requirements for protein, while diets are inadequate in calcium, iron, and zinc. For energy, vitamin A, thiamin, riboflavin, niacin, folate, and vitamin C, children also did not always fulfill their requirements. Very few studies reported on vitamin B6, B12, and magnesium, and no conclusions can be drawn for these nutrients. When diets are optimized using modeling techniques, most of these nutrient requirements can be met, with the exception of iron and zinc and, in some settings, calcium, folate, and B vitamins. Our findings suggest that optimizing the use of local foods in the diets of children aged 6 to 23 months can improve nutrient intakes; however, additional cost-effective strategies are needed to ensure adequate intakes of iron and zinc. © The Author(s) 2016.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of area-source pollutant strength is a relevant issue for the atmospheric environment and characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area-source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem whose objective function is given by the squared difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
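A hedged sketch of the regularized source-receptor inversion structure is shown below. It substitutes a second-order Tikhonov (smoothness) penalty and a closed-form solve for the paper's maximum entropy regularization and quasi-Newton/PSO minimizers, and all matrices and data are synthetic.

```python
import numpy as np

# Synthetic source-receptor setup: y = M s + noise, with M the transition matrix,
# s the unknown area-source strengths and y the measured concentrations.
rng = np.random.default_rng(0)
n_src, n_rec = 30, 6
M = rng.random((n_rec, n_src)) * np.exp(-0.1 * np.arange(n_src))   # synthetic transition matrix
s_true = np.exp(-0.5 * ((np.arange(n_src) - 12.0) / 4.0) ** 2)     # smooth synthetic source field
y = M @ s_true + 0.01 * rng.normal(size=n_rec)

# Regularized objective J(s) = ||M s - y||^2 + alpha ||L s||^2 with a second-order
# difference operator L (a Tikhonov smoothness stand-in for max-entropy regularization).
L = np.diff(np.eye(n_src), n=2, axis=0)

def solve(alpha):
    return np.linalg.solve(M.T @ M + alpha * (L.T @ L), M.T @ y)

# Crude stand-in for the L-curve: scan alpha and inspect the misfit/roughness trade-off.
for alpha in np.logspace(-5, 0, 6):
    s_hat = solve(alpha)
    print(alpha, np.linalg.norm(M @ s_hat - y), np.linalg.norm(L @ s_hat))
```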
Reprocessing of LiH in Molten Chlorides
NASA Astrophysics Data System (ADS)
Masset, Patrick J.; Gabriel, Armand; Poignet, Jean-Claude
2008-06-01
LiH was used as an inactive material to simulate the reprocessing of lithium tritide in molten chlorides. The electrochemical properties (diffusion coefficients, apparent standard potentials) were measured by means of transient electrochemical techniques (cyclic voltammetry and chronopotentiometry). At 425 °C the diffusion coefficient and the apparent standard potential were 2.5 · 10-5 cm2 s-1 and -1.8 V vs. Ag/AgCl, respectively. For the process design, the LiH solubility was measured by means of DTA in order to optimize the LiH concentration in the molten phase. In addition, electrolysis tests were carried out at 460 °C with current densities up to 1 A cm-2 over 24 h. These results show that LiH may be reprocessed in molten chlorides, producing hydrogen gas at the anode and molten metallic lithium at the cathode.
Schlue, Danijela; Mate, Sebastian; Haier, Jörg; Kadioglu, Dennis; Prokosch, Hans-Ulrich; Breil, Bernhard
2017-01-01
Heterogeneous tumor documentation and the challenges of interpreting medical terms lead to problems in the analysis of data from clinical and epidemiological cancer registries. The objective of this project was to design, implement and improve a national content delivery portal for oncological terms. Data elements of existing handbooks and documentation sources were analyzed, combined and summarized by medical experts from different comprehensive cancer centers. Informatics experts created a generic data model based on an existing metadata repository. In order to establish a national knowledge management system for standardized cancer documentation, a prototypical tumor wiki was designed and implemented. Requirements engineering techniques were applied to optimize this platform. It targets user groups such as documentation officers, physicians and patients. Linkage to other information sources, such as PubMed and MeSH, was realized.
Method of optimization onboard communication network
NASA Astrophysics Data System (ADS)
Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.
2018-02-01
In this article, optimization levels for the onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we also identify a set of initial data for possible modeling of the OCN. We further propose a mathematical technique for implementing the OCN optimization procedure. This technique is based on the principles and ideas of binary programming. It is shown that the binary programming technique yields an inherently optimal solution for the avionics tasks. An example of applying the proposed approach to the problem of device assignment in an OCN is considered.
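As a hedged illustration of posing device assignment as a binary optimization, the sketch below solves a small one-to-one assignment with a standard solver; the cost matrix is hypothetical and the article's actual binary-programming model and constraints are not reproduced.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i, j] = penalty (e.g., latency or cabling weight)
# of attaching device i to network node j; the binary decision x[i, j] = 1 means
# "device i assigned to node j", with one device per node.
cost = np.array([
    [4.0, 2.0, 8.0, 5.0],
    [3.0, 7.0, 6.0, 1.0],
    [9.0, 4.0, 2.0, 6.0],
    [5.0, 3.0, 7.0, 2.0],
])

rows, cols = linear_sum_assignment(cost)        # optimal one-to-one binary assignment
print(list(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum()))
```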
Feedback laws for fuel minimization for transport aircraft
NASA Technical Reports Server (NTRS)
Price, D. B.; Gracey, C.
1984-01-01
The Theoretical Mechanics Branch has, as one of its long-range goals, to work toward solving real-time trajectory optimization problems on board an aircraft. This is a generic problem with application to all aspects of aviation, from general aviation through commercial to military. The overall interest is in the generic problem, but specific problems are examined to achieve concrete results. The problem is to develop control laws that generate approximately optimal trajectories with respect to some criterion such as minimum time, minimum fuel, or some combination of the two. These laws must be simple enough to be implemented on a computer that is flown on board an aircraft, which implies a major simplification from the two-point boundary value problem generated by a standard trajectory optimization formulation. In addition, the control laws must allow for changes in end conditions during the flight and for changes in weather along a planned flight path. Therefore, a feedback control law that generates commands based on the current state, rather than a precomputed open-loop control law, is desired. This requirement, along with the need for order reduction, argues for the application of singular perturbation techniques.
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. Over the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, including its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on an adaptive weight is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
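A minimal sketch of log-linear (softmax) behavior selection is given below; the behavior names, feature vectors and weights are illustrative assumptions rather than the paper's model.

```python
import numpy as np

def select_behavior(features, weights, rng):
    """Log-linear (softmax) behavior selection sketch.

    features: dict behavior -> feature vector describing the current state
              (e.g., local crowding, distance to the best food concentration);
    weights:  shared weight vector of the log-linear model.
    Returns one behavior sampled with probability proportional to exp(w . f).
    """
    names = list(features)
    scores = np.array([weights @ features[b] for b in names])
    probs = np.exp(scores - scores.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(names, p=probs)

rng = np.random.default_rng(0)
feats = {                      # illustrative 3-feature descriptions of candidate behaviors
    "prey":          np.array([0.8, 0.1, 0.3]),
    "swarm":         np.array([0.2, 0.9, 0.4]),
    "follow":        np.array([0.5, 0.4, 0.7]),
    "adaptive_move": np.array([0.3, 0.3, 0.9]),
}
w = np.array([1.0, 0.5, 1.5])  # hypothetical learned weights
print(select_behavior(feats, w, rng))
```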
NASA Astrophysics Data System (ADS)
Zhao, Dang-Jun; Song, Zheng-Yu
2017-08-01
This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints for Common Aerial Vehicles (CAVs). Because the time when the vehicle reaches each waypoint is unknown, the trajectory of the vehicle is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Because rapid generation is required, the minimum flight time of each phase is preferred over other performance indices in this research. Sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on heat rate, dynamic pressure, and normal load; meanwhile, convexification techniques are proposed to relax the concave constraints on the control variables. The original multiphase optimization problem is then reformulated as a standard second-order convex programming problem. Theoretical analysis shows that the original problem and the converted problem have the same solution. Numerical results are presented to demonstrate that the proposed approach is efficient and effective.
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com
2016-06-15
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometer has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on the statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. Main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in a kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometer are performed to verify the performance of PSO-ASVR-based method compared to conventional data fitting methods.
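The sketch below illustrates the core PSO-over-SVR-hyperparameters idea with scikit-learn and a plain PSO loop; the adaptive-processing component of ASVR is omitted, and the calibration-like data, swarm settings and search ranges are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for (radiometer response, standard-source radiance) pairs.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 120)).reshape(-1, 1)
y = 2.0 * np.log1p(x.ravel()) + 0.05 * rng.normal(size=x.shape[0])

def fitness(p):
    C, eps, gamma = np.exp(p)                   # search in log space to keep values positive
    model = SVR(C=C, epsilon=eps, gamma=gamma)
    return -cross_val_score(model, x, y, cv=5, scoring="neg_mean_squared_error").mean()

# plain PSO over the three SVR hyperparameters
n_part, dim, iters = 12, 3, 30
pos = rng.uniform(-3, 3, (n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

C, eps, gamma = np.exp(gbest)
print(f"tuned SVR: C={C:.3g}, epsilon={eps:.3g}, gamma={gamma:.3g}")
```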
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry-standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to look not for specific radon progeny values but rather for transuranic activity. By using a method that provides reasonably conservative estimates of the transuranic activity present on a filter, some credit can be taken for the shape of the decay curve. Rigorous statistical analysis of the curve fits to over 65 samples with no transuranic activity, collected over a 10-month period, was used to optimize the fitting function and the associated quality tests.
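A hedged sketch of the decay-curve-fitting idea: the gross filter count rate is modeled as a short-lived radon-progeny term plus a constant long-lived (transuranic-like) component. The model form, effective half-life and data values are illustrative and are not the authors' fitting function.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gross filter count rate modeled as short-lived radon progeny (single effective
# half-life) plus a constant long-lived, transuranic-like component.
t = np.array([10, 30, 60, 120, 240, 480, 960, 1440], dtype=float)   # minutes after sampling
rate = np.array([520, 388, 243, 105, 24, 9, 8, 8], dtype=float)     # counts per minute

def decay_model(t, a_progeny, t_half, a_longlived):
    return a_progeny * np.exp(-np.log(2) * t / t_half) + a_longlived

popt, pcov = curve_fit(decay_model, t, rate, p0=(500.0, 45.0, 5.0))
perr = np.sqrt(np.diag(pcov))
print(f"long-lived component: {popt[2]:.1f} +/- {perr[2]:.1f} cpm")
```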
Purification of Bacteriophages Using Anion-Exchange Chromatography.
Vandenheuvel, Dieter; Rombouts, Sofie; Adriaenssens, Evelien M
2018-01-01
In bacteriophage research and therapy, most applications require highly purified phage suspensions. The standard technique for this is ultracentrifugation in cesium chloride gradients. This technique is cumbersome, elaborate and expensive, and it is unsuitable for the purification of large quantities of phage suspension. The protocol described here uses anion-exchange chromatography to bind phages to a stationary phase. This is done using an FPLC system combined with Convective Interaction Media (CIM®) monoliths. Afterward, the column is washed to remove impurities from the CIM® disk. By using a buffer solution with a high ionic strength, the phages are subsequently eluted from the column and collected. In this way phages can be efficiently purified and concentrated. This protocol can also be used to determine the optimal buffers, stationary phase chemistry and elution conditions, as well as the maximal capacity and recovery of the columns.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
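The sketch below illustrates the Fisher-information step for one of the cited examples, the Verhulst-Pearl logistic model: asymptotic standard errors are computed for two candidate sampling-time designs. It uses numerical sensitivities and plain asymptotic theory, not the paper's Prohorov-metric framework, and all parameter values are illustrative.

```python
import numpy as np

# Compare two sampling-time designs through the Fisher information matrix (FIM)
# of the Verhulst-Pearl logistic model x(t) = K x0 / (x0 + (K - x0) exp(-r t)).
K, r, x0, sigma = 17.5, 0.7, 0.1, 0.5          # illustrative parameter and noise values
theta = np.array([K, r, x0])

def logistic(t, K, r, x0):
    return K * x0 / (x0 + (K - x0) * np.exp(-r * t))

def fisher_information(times, theta, sigma, h=1e-6):
    S = np.zeros((times.size, theta.size))      # sensitivity matrix d model / d theta
    for j in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        S[:, j] = (logistic(times, *tp) - logistic(times, *tm)) / (2 * h)
    return S.T @ S / sigma ** 2

def standard_errors(times):
    return np.sqrt(np.diag(np.linalg.inv(fisher_information(times, theta, sigma))))

uniform_design = np.linspace(0.5, 25, 15)                                    # evenly spread times
clustered_design = np.concatenate([np.linspace(4, 10, 12), [20, 23, 25]])    # near the growth phase

print("SE (uniform):  ", standard_errors(uniform_design))
print("SE (clustered):", standard_errors(clustered_design))
```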
Optimal systems of geoscience surveying: A preliminary discussion
NASA Astrophysics Data System (ADS)
Shoji, Tetsuya
2006-10-01
In any geoscience survey, each survey technique must be applied effectively, and many techniques are often combined optimally. An important task is to obtain the necessary and sufficient information to meet the requirements of the survey. A prize-penalty function quantifies the effectiveness of the survey and hence can be used to determine the best survey technique. On the other hand, an information-cost function can be used to determine the optimal combination of survey techniques on the basis of the geoinformation obtained. Entropy can be used to evaluate geoinformation. A simple model suggests that low-resolvability techniques are generally applied at the early stages of a survey, and that higher-resolvability techniques should alternate with lower-resolvability ones as the survey progresses.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of predicting next-day interest rate variation. In particular, the multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction, with the particle swarm optimization technique adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques with a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time-critical targets and the need for multiple-scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitations, and heating, waypoint, and no-fly zone constraints.
Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.
2012-01-01
We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727
Acceleration techniques in the univariate Lipschitz global optimization
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela
2016-10-01
Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. The novel powerful local tuning and local improvement techniques are described in the contribution as well as the traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on the class of 100 widely used test functions.
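As a hedged illustration of the geometric approach, the sketch below implements a Piyavskii-Shubert-style univariate minimizer that uses a single global Lipschitz estimate; the local tuning and local improvement accelerations discussed in the contribution are not included, and the test function and constant are illustrative.

```python
import numpy as np

def piyavskii_shubert(f, a, b, lipschitz, iters=60):
    """Geometric univariate Lipschitz global minimization sketch.

    Keeps the evaluated points; at each step the next trial point is placed where
    the piecewise-linear lower bound built from the Lipschitz constant is lowest.
    """
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(iters):
        best_lb, best_x = np.inf, None
        order = np.argsort(xs)
        for i, j in zip(order[:-1], order[1:]):
            x1, x2, y1, y2 = xs[i], xs[j], ys[i], ys[j]
            # intersection of the two downward cones over the interval [x1, x2]
            x_new = 0.5 * (x1 + x2) + (y1 - y2) / (2 * lipschitz)
            lb = 0.5 * (y1 + y2) - 0.5 * lipschitz * (x2 - x1)
            if lb < best_lb:
                best_lb, best_x = lb, x_new
        xs.append(best_x)
        ys.append(f(best_x))
    k = int(np.argmin(ys))
    return xs[k], ys[k]

# toy usage on a multiextremal test function over [0, 10]
f = lambda x: np.sin(x) + np.sin(10 * x / 3)
print(piyavskii_shubert(f, 0.0, 10.0, lipschitz=4.5))
```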
Modeling OPC complexity for design for manufacturability
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon but increase mask complexity and cost. The growth in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of the features printed on the mask. Aggressive RET increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax the OPC tolerances of layout features to minimize mask cost without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires, and it should be aware of the impact of OPC correction levels on the mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data from the OPC and mask data preparation runs, we build models of FC as a function of OPC tolerances and layout parameters.
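A hedged sketch of the fracture-count modeling step follows: a small, entirely hypothetical MCC table is fitted with a simple multiplicative model of OPC tolerance and layout parameters. The chosen predictors and model form are assumptions for illustration, not the paper's characterization flow.

```python
import numpy as np

# Hypothetical MCC table: each row is one OPC + fracturing run on a cell or wire
# pattern.  Columns: OPC tolerance (nm), layout edge count, minimum pitch (nm),
# and the observed mask fracture count (FC).  All values are invented.
runs = np.array([
    [2.0, 120.0, 200.0,  5400.0],
    [4.0, 120.0, 200.0,  3100.0],
    [8.0, 120.0, 200.0,  1750.0],
    [2.0, 260.0, 200.0, 11800.0],
    [4.0, 260.0, 200.0,  6900.0],
    [8.0, 260.0, 200.0,  3800.0],
    [2.0, 120.0, 140.0,  7300.0],
    [4.0, 120.0, 140.0,  4200.0],
    [8.0, 120.0, 140.0,  2300.0],
])
tol, edges, pitch, fc = runs.T

# Multiplicative model FC ~ a * tol^b * edges^c * pitch^d, fitted in log space.
X = np.column_stack([np.ones_like(tol), np.log(tol), np.log(edges), np.log(pitch)])
coef, *_ = np.linalg.lstsq(X, np.log(fc), rcond=None)

def predict_fc(tol_nm, edge_count, pitch_nm):
    x = np.array([1.0, np.log(tol_nm), np.log(edge_count), np.log(pitch_nm)])
    return float(np.exp(x @ coef))

# A design-side optimizer could now trade relaxed OPC tolerance against predicted mask cost.
print(round(predict_fc(6.0, 180.0, 170.0)))
```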
Character Recognition Using Genetically Trained Neural Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diniz, C.; Stantz, K.M.; Trahan, M.W.
1998-10-01
Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and the alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphical user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases used when genetically training the net. Finally, noise significantly degrades character recognition efficiency, some of which can be overcome by adding noise during training and optimizing the form of the network's activation function.
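A compact, hedged sketch of genetically training a feed-forward net on 8 x 8 bitmaps is given below; the synthetic "characters", layer sizes, GA operators and settings are illustrative assumptions and do not reproduce the Neural Network Designer software.

```python
import numpy as np

# Five synthetic prototype bitmaps stand in for letters; noisy copies form the training set.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 64, 12, 5
n_weights = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

prototypes = (rng.random((n_out, n_in)) > 0.5).astype(float)
X = np.vstack([np.clip(p + 0.2 * rng.normal(size=(20, n_in)), 0, 1) for p in prototypes])
y = np.repeat(np.arange(n_out), 20)

def forward(w, x):
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = w[i:i + n_out]
    h = np.tanh(x @ W1 + b1)          # hidden layer
    return h @ W2 + b2                # output scores, one per character class

def fitness(w):
    return float(np.mean(forward(w, X).argmax(axis=1) == y))   # classification accuracy

pop_size, generations, sigma = 60, 80, 0.1
pop = rng.normal(scale=0.5, size=(pop_size, n_weights))

for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    # tournament selection of parents
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(scores[idx[:, 0]] >= scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # uniform crossover + Gaussian mutation, keeping the best individual (elitism)
    mates = parents[rng.permutation(pop_size)]
    mask = rng.random((pop_size, n_weights)) < 0.5
    children = np.where(mask, parents, mates) + sigma * rng.normal(size=(pop_size, n_weights))
    children[0] = pop[scores.argmax()]
    pop = children

best = pop[np.array([fitness(ind) for ind in pop]).argmax()]
print("training accuracy:", fitness(best))
```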
Yang, Ching-Ching; Yang, Bang-Hung; Tu, Chun-Yuan; Wu, Tung-Hsin; Liu, Shu-Hsin
2017-06-01
This study aimed to evaluate the efficacy of automatic exposure control (AEC) in order to optimize low-dose computed tomography (CT) protocols for patients of different ages undergoing cardiac PET/CT and single-photon emission computed tomography/computed tomography (SPECT/CT). One PET/CT and one SPECT/CT were used to acquire CT images for four anthropomorphic phantoms representative of 1-year-old, 5-year-old and 10-year-old children and an adult. For the hybrid systems investigated in this study, the radiation dose and image quality of cardiac CT scans performed with AEC activated depend mainly on the selection of a predefined image quality index. Multiple linear regression methods were used to analyse image data from anthropomorphic phantom studies to investigate the effects of body size and predefined image quality index on CT radiation dose in cardiac PET/CT and SPECT/CT scans. The regression relationships have a coefficient of determination larger than 0.9, indicating a good fit to the data. According to the regression models, low-dose protocols using the AEC technique were optimized for patients of different ages. In comparison with the standard protocol with AEC activated for adult cardiac examinations used in our clinical routine practice, the optimized paediatric protocols in PET/CT allow 32.2, 63.7 and 79.2% CT dose reductions for anthropomorphic phantoms simulating 10-year-old, 5-year-old and 1-year-old children, respectively. The corresponding results for cardiac SPECT/CT are 8.4, 51.5 and 72.7%. AEC is a practical way to reduce CT radiation dose in cardiac PET/CT and SPECT/CT, but the AEC settings should be determined properly for optimal effect. Our results show that AEC does not eliminate the need for paediatric protocols and CT examinations using the AEC technique should be optimized for paediatric patients to reduce the radiation dose as low as reasonably achievable.
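A hedged sketch of the regression step described above: dose is related to phantom size and the predefined image-quality index by multiple linear regression, and the fitted model is then queried when tailoring an age-specific protocol. All numbers are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical phantom measurements: effective diameter (cm), predefined image
# quality index (noise setting), observed CTDIvol (mGy).
data = np.array([
    [12.0, 10.0, 0.6],
    [12.0, 15.0, 2.0],
    [17.0, 10.0, 2.2],
    [17.0, 15.0, 3.5],
    [22.0, 10.0, 3.7],
    [22.0, 15.0, 5.0],
    [30.0, 10.0, 6.1],
    [30.0, 15.0, 7.4],
])
size, quality, dose = data.T

# multiple linear regression: dose ~ b0 + b1 * size + b2 * quality_index
X = np.column_stack([np.ones_like(size), size, quality])
coef, *_ = np.linalg.lstsq(X, dose, rcond=None)

def predicted_dose(size_cm, quality_index):
    return float(coef @ np.array([1.0, size_cm, quality_index]))

# Protocol optimization: pick the lowest quality index whose predicted dose for a
# small (e.g., paediatric-sized) phantom still meets the clinical target.
print(f"predicted CTDIvol for 17 cm phantom at index 12: {predicted_dose(17.0, 12.0):.2f} mGy")
```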
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
Jamema, Swamidas V; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D; Shrivastava, Shyam K; Pötter, Richard
2010-12-01
Comparison of inverse planning with the standard clinical plan and with the manually optimized plan based on dose-volume parameters and loading patterns. Twenty-eight patients who underwent MRI based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manually optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. The volume of HR CTV ranged from 9-68 cc with a mean of 41(±16.2) cc. The differences in mean V100 between the standard, manually optimized and inverse plans were not significant (p=0.35, 0.38, 0.4). Doses to the bladder (7.8±1.6 Gy) and sigmoid (5.6±1.4 Gy) were high for standard plans; manual optimization reduced the dose to the bladder (7.1±1.7 Gy, p=0.006) and sigmoid (4.5±1.0 Gy, p=0.005) without compromising HR CTV coverage. The inverse plan resulted in a significant reduction in bladder dose (6.5±1.4 Gy, p=0.002). TRAK was found to be 0.49(±0.02), 0.44(±0.04) and 0.40(±0.04) cGy m(-2) for the standard loading, manually optimized and inverse plans, respectively. TRAK(O/T) was 0.82(±0.05), 1.7(±1.04) and 1.41(±0.93) for the standard loading, manually optimized and inverse plans, respectively, while this ratio is 1 for the traditional loading pattern. Inverse planning offers good sparing of critical structures without compromising target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), widely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling framework optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series, which are routed along the channel using a two-dimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events: channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model obviates the need for synthetic design hyetograph and hydrograph estimation, which constitute the main source of subjective analysis and uncertainty in standard flood mapping methods. Selected case studies show results and performance of the proposed procedure with respect to standard event-based approaches.
Blind multirigid retrospective motion correction of MR images.
Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard
2015-04-01
Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that needs only raw data as an input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that can be optimized with respect to both the unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and allows correction of nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code, using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
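As a concrete illustration of an architecture-independent, source-level optimization of the kind the survey discusses, the sketch below performs constant folding over Python expressions; it is a generic example, not code from the surveyed report.

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold arithmetic on literal operands, e.g. 2 * 3 + x  ->  6 + x."""
    def visit_BinOp(self, node):
        self.generic_visit(node)                      # fold the children first
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                value = eval(compile(ast.Expression(body=node), "<fold>", "eval"))
                return ast.copy_location(ast.Constant(value=value), node)
            except Exception:
                pass                                  # e.g. division by zero: leave as is
        return node

tree = ast.parse("y = 2 * 3 + x * (4 - 4)")
folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(folded))                            # prints: y = 6 + x * 0
```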
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO's performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
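A common way to keep swarm particles feasible for mixture designs is to project each updated position back onto the probability simplex (components nonnegative and summing to one). The sketch below shows one such Euclidean projection step; the projection rule and its use after each PSO update are illustrative assumptions, not necessarily the exact mechanism of ProjPSO.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum(w) = 1}
    (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# After every ordinary PSO velocity/position update, a mixture-design particle is
# mapped back onto the simplex so its component proportions remain a valid design.
rng = np.random.default_rng(1)
raw_particle = rng.normal(size=4)        # unconstrained position of a 4-component mixture
weights = project_to_simplex(raw_particle)
assert np.all(weights >= 0) and abs(weights.sum() - 1.0) < 1e-9
print(weights)
```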
Project officer's perspective: quality assurance as a management tool.
Heiby, J
1993-06-01
Advances in the management of health programs in less developed countries (LDCs) have not kept pace with the progress of the technology used. The US Agency for International Development mandated the Quality Assurance Project (QAP) to provide quality improvement technical assistance to primary health care systems in LDCs while developing appropriate quality assurance (QA) strategies. Work on the quality of health care in the US and Europe in recent years has focused on the introduction of management techniques developed for industry into health systems. The experience of the QAP and its predecessor, the PRICOR Project, shows that quality improvement techniques facilitate measurement of quality of care. A recently developed WHO model for the management of the sick child provides scientifically based standards for actual care. Since 1988, outside investigators measuring how LDC clinicians perform have revealed serious deficiencies in quality compared with the program's own standards. This prompted the development of new QA management initiatives: 1) communicating standards clearly to the program staff; 2) actively monitoring whether actual performance corresponds to these standards; and 3) taking action to improve performance. QA means that managers are expected to monitor service delivery, undertake problem solving, and set specific targets for quality improvement. Quality improvement methods strengthen supervision, as supervisors can objectively assess health worker performance. QA strengthens the management functions that support service delivery, e.g., training, records management, finance, logistics, and supervision. Attention to quality can contribute to improved health worker motivation and effective incentive programs through recognition of a job well done and opportunities for learning new skills. These standards can also address patient satisfaction. QA challenges managers to aim for the optimal level of care attainable.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization algorithm. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
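Since the abstract credits higher-precision gradients combined with L-BFGS for the optimal-control speedups, the sketch below shows how a limited-memory BFGS optimizer is typically driven for a small spectral fit, here through SciPy; the Lorentzian model, data, and bounds are invented and unrelated to SIMPSON's actual interface.

```python
import numpy as np
from scipy.optimize import minimize

# Toy multiple-parameter fit: adjust (amplitude, width, centre) of a Lorentzian
# line to match a noisy "measured" spectrum with the limited-memory BFGS family
# of optimizer mentioned above (SciPy's L-BFGS-B).
freq = np.linspace(-50.0, 50.0, 501)                                  # arbitrary kHz axis
truth = 3.0 / (1.0 + ((freq - 5.0) / 4.0) ** 2)
data = truth + 0.05 * np.random.default_rng(0).normal(size=freq.size)

def sq_residual(p):
    amp, width, centre = p
    model = amp / (1.0 + ((freq - centre) / width) ** 2)
    return np.sum((model - data) ** 2)

fit = minimize(sq_residual, x0=[1.0, 1.0, 0.0], method="L-BFGS-B",
               bounds=[(0.0, None), (0.1, None), (None, None)])
print(fit.x)   # should land near (3.0, 4.0, 5.0)
```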
Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne
2010-04-01
Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. A response surface regression model was applied to determine optimal levels of two inputs--dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, and 8 weeks)--to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing systolic blood pressure of 208 mmHg, critical oxygen delivery of 4.03 mL/min, and M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods will ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of shock with clinically interacting comorbidities.
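A minimal sketch of the desirability-function aggregation used for simultaneous optimization of several responses: each response is mapped to [0, 1] and the geometric mean is maximized over the design grid. The response surfaces, targets, and coefficients below are invented for illustration, not the paper's fitted models.

```python
import numpy as np

def desirability_target(y, low, target, high):
    """Map a response to [0, 1]: 1 at the target value, 0 outside [low, high]."""
    d = np.where(y <= target, (y - low) / (target - low),
                              (high - y) / (high - target))
    return np.clip(d, 0.0, 1.0)

def desirability_maximize(y, low, high):
    """Larger-is-better desirability: 0 at or below low, 1 at or above high."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

# Invented response surfaces over (dietary NaCl %, weeks on diet).
nacl, weeks = np.meshgrid(np.linspace(0.5, 8.0, 60), np.linspace(4.0, 8.0, 60))
sbp  = 130 + 14 * nacl - 0.6 * nacl ** 2 + 3 * weeks             # systolic blood pressure
do2  = 5.0 - 0.08 * (nacl - 3.5) ** 2 - 0.05 * (weeks - 5) ** 2  # critical O2 delivery
mass = 320 - 4 * nacl - 2 * weeks                                # body mass

# Aggregate desirability = geometric mean of the individual desirabilities.
D = (desirability_target(sbp, 170, 208, 240)
     * desirability_maximize(do2, 3.0, 5.0)
     * desirability_maximize(mass, 250, 320)) ** (1.0 / 3.0)
i, j = np.unravel_index(np.argmax(D), D.shape)
print(f"most desirable settings: {nacl[i, j]:.1f}% NaCl for {weeks[i, j]:.1f} weeks")
```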
Addressing forecast uncertainty impact on CSP annual performance
NASA Astrophysics Data System (ADS)
Ferretti, Fabio; Hogendijk, Christopher; Aga, Vipluv; Ehrsam, Andreas
2017-06-01
This work analyzes the impact of weather forecast uncertainty on the annual performance of a Concentrated Solar Power (CSP) plant. Forecast time series were produced by a commercial forecast provider using the technique of hindcasting for the full year 2011 in hourly resolution for Ouarzazate, Morocco. The impact of forecast uncertainty was measured on three case studies, representing typical tariff schemes observed in recent CSP projects plus a spot market price scenario. The analysis was carried out using an annual performance model and a standard dispatch optimization algorithm based on dynamic programming. The dispatch optimizer proved to be a key requisite for maximizing annual revenues, depending on the price scenario, harvesting the maximum potential out of the CSP plant. Forecast uncertainty affects the revenue enhancement achieved by a dispatch optimizer, depending on the error level and the price function. Results show that forecasting accuracy of direct normal irradiance (DNI) is important to make best use of an optimized dispatch, but also that a higher number of calculation updates can partially compensate for this uncertainty. The improvement in revenues can be significant depending on the price profile and the optimal operation strategy. Pathways to better performance are presented: performing more updates, both by repeatedly generating new optimized dispatch trajectories and by updating weather forecasts more often. This study shows the importance of working on DNI weather forecasting for revenue enhancement, as well as of selecting weather services that can provide multiple updates a day and probabilistic forecast information.
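A minimal sketch of a dynamic-programming dispatch optimizer of the general kind described above: backward induction over discretized thermal-storage states against an hourly price series. The plant parameters, price and solar profiles, and grid resolution are placeholders, not values from the study.

```python
import numpy as np

# Backward-induction dispatch of a storage-backed CSP plant against an hourly
# price curve. State = discretized thermal-storage level; decision = thermal
# energy sent to the power block each hour. All numbers are placeholders.
H = 24
price = 40 + 60 * np.exp(-((np.arange(H) - 19) ** 2) / 8.0)              # evening price peak
solar = 10 * np.round(5 * np.clip(np.sin((np.arange(H) - 6) / 12 * np.pi), 0, 1))  # MWh_th/h

levels = np.arange(0, 201, 10)     # storage states in MWh_th
turbine_max = 40.0                 # max MWh_th to the power block per hour
eff = 0.40                         # thermal-to-electric efficiency

value = np.zeros(len(levels))      # value-to-go at the end of the horizon
policy = np.zeros((H, len(levels)))
for t in range(H - 1, -1, -1):
    new_value = np.full(len(levels), -np.inf)
    for i, s in enumerate(levels):
        for dispatch in np.arange(0.0, turbine_max + 1, 10.0):
            s_next = min(s + solar[t] - dispatch, levels[-1])   # overflow is curtailed
            if s_next < 0:                      # cannot dispatch what is not there
                continue
            j = int(s_next // 10)
            reward = price[t] * eff * dispatch + value[j]
            if reward > new_value[i]:
                new_value[i], policy[t, i] = reward, dispatch   # store best decision
    value = new_value

print("optimal revenue starting from empty storage:", round(value[0], 1))
```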
Shape and Reinforcement Optimization of Underground Tunnels
NASA Astrophysics Data System (ADS)
Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang
Designing the support system and selecting an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, shape selection and support design are based mainly on the designer's judgment and experience. Both of these problems can be viewed as material distribution problems in which one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved to be useful in solving these kinds of problems in structural design. Recently, the application of topology optimization techniques to reinforcement design around underground excavations has been studied by several researchers. In this paper a three-phase material model is introduced that switches between normal rock, reinforced rock, and void. Using such a material model, both the shape and the reinforcement design problems can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique is extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. The validity and capability of the proposed approach are investigated through several examples.
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Fanali, Chiara; Dugo, Laura; D'Orazio, Giovanni; Lirangi, Melania; Dachà, Marina; Dugo, Paola; Mondello, Luigi
2011-01-01
Nano-LC and conventional HPLC techniques were applied for the analysis of anthocyanins present in commercial fruit juices using a capillary column of 100 μm id and a 2.1 mm id narrow-bore C(18) column. Analytes were detected by UV-Vis at 518 nm and ESI-ion trap MS with HPLC and nano-LC, respectively. Commercial blueberry juice (14 anthocyanins detected) was used to optimize chromatographic separation of analytes and other analysis parameters. Qualitative identification of anthocyanins was performed by comparing the recorded mass spectral data with those of published papers. The use of the same mobile phase composition in both techniques revealed that the miniaturized method exhibited shorter analysis time and higher sensitivity than narrow-bore chromatography. Good intra-day and day-to-day precision of retention time was obtained in both methods with values of RSD less than 3.4 and 0.8% for nano-LC and HPLC, respectively. Quantitative analysis was performed by external standard curve calibration of cyanidin-3-O-glucoside standard. Calibration curves were linear in the concentration ranges studied, 0.1-50 and 6-50 μg/mL for HPLC-UV/Vis and nano-LC-MS, respectively. LOD and LOQ values were good for both methods. In addition to commercial blueberry juice, qualitative and quantitative analysis of other juices (e.g. raspberry, sweet cherry and pomegranate) was performed. The optimized nano-LC-MS method allowed an easy and selective identification and quantification of anthocyanins in commercial fruit juices; it offered good results, shorter analysis time and reduced mobile phase volume with respect to narrow-bore HPLC. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reconstruction of reflectance data using an interpolation technique.
Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh
2009-03-01
A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as the source space is preferable to the CIELAB color space. In addition, the colorimetric position of a desired sample is a key factor in the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as in CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.
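A minimal LUT-style sketch of the approach: piecewise-linear interpolation from tristimulus values to full reflectance spectra using SciPy. The training chips, weighting curves, and query points below are synthetic stand-ins, not the Munsell or ColorChecker SG data or the CIE colour-matching functions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Synthetic stand-in for the training set: 500 chips, each with a 31-band
# reflectance (400-700 nm) and a crude XYZ surrogate under one illuminant.
rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 31)
reflectance = np.clip(rng.normal(0.5, 0.2, size=(500, 31)), 0, 1)

# Three fixed Gaussian weighting curves stand in for the CIE colour-matching
# functions; a real implementation would also include the illuminant spectrum.
weights = np.stack([np.exp(-((wavelengths - c) / 60.0) ** 2) for c in (600, 550, 450)])
xyz = reflectance @ weights.T                     # shape (500, 3)

# The LUT itself: piecewise-linear interpolation from XYZ to the 31-band spectrum.
lut = LinearNDInterpolator(xyz, reflectance)

# Query points taken as midpoints of training pairs, guaranteed to lie inside the
# convex hull of the dataset; outside the gamut the LUT would return NaN.
query = 0.5 * (xyz[:5] + xyz[5:10])
recovered = lut(query)                            # shape (5, 31): reconstructed spectra
print(recovered.shape, np.isnan(recovered).any())
```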
Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R
2015-09-01
Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, Victor, E-mail: vhernandezmasgrau@gmail.com; Arenas, Meritxell; Müller, Katrin
2013-01-01
To assess the advantages of an optimized posterior axillary (AX) boost technique for the irradiation of supraclavicular (SC) and AX lymph nodes. Five techniques for the treatment of SC and levels I, II, and III AX lymph nodes were evaluated for 10 patients selected at random: a direct anterior field (AP); an anterior to posterior parallel pair (AP-PA); an anterior field with a posterior axillary boost (PAB); an anterior field with an anterior axillary boost (AAB); and an optimized PAB technique (OptPAB). The target coverage, hot spots, irradiated volume, and dose to organs at risk were evaluated, and a statistical comparison was performed. The AP technique delivered insufficient dose to the deeper AX nodes. The AP-PA technique produced larger irradiated volumes and higher mean lung doses than the other techniques. The PAB and AAB techniques produced excessive hot spots in most of the cases. The OptPAB technique produced moderate hot spots while maintaining a similar planning target volume (PTV) coverage, irradiated volume, and dose to organs at risk. This optimized technique combines the advantages of the PAB and AP-PA techniques, with moderate hot spots, sufficient target coverage, and adequate sparing of normal tissues. The presented technique is simple, fast, and easy to implement in routine clinical practice and is superior to the techniques historically used for the treatment of SC and AX lymph nodes.
1980-12-01
System Optimization of the Glow Discharge Optical Spectroscopy Technique Used for Impurity Profiling of Ion Implanted Gallium Arsenide (report AFIT/GEO/EE/80D-1). The report concerns semiconductors, specifically annealed and unannealed ion implanted gallium arsenide (GaAs), and methods to improve the sensitivity of the GDOS system.
Halladay, Jason S; Delarosa, Erlie Marie; Tran, Daniel; Wang, Leslie; Wong, Susan; Khojasteh, S Cyrus
2011-08-01
Here we describe a high capacity and high-throughput, automated, 384-well CYP inhibition assay using well-known HLM-based MS probes. We provide consistently robust IC(50) values at the lead optimization stage of the drug discovery process. Our method uses the Agilent Technologies/Velocity11 BioCel 1200 system, timesaving techniques for sample analysis, and streamlined data processing steps. For each experiment, we generate IC(50) values for up to 344 compounds and positive controls for five major CYP isoforms (probe substrate): CYP1A2 (phenacetin), CYP2C9 ((S)-warfarin), CYP2C19 ((S)-mephenytoin), CYP2D6 (dextromethorphan), and CYP3A4/5 (testosterone and midazolam). Each compound is incubated separately at four concentrations with each CYP probe substrate under the optimized incubation condition. Each incubation is quenched with acetonitrile containing the deuterated internal standard of the respective metabolite for each probe substrate. To minimize the number of samples to be analyzed by LC-MS/MS and reduce the amount of valuable MS runtime, we utilize timesaving techniques of cassette analysis (pooling the incubation samples at the end of each CYP probe incubation into one) and column switching (reducing the amount of MS runtime). Here we also report on the comparison of IC(50) results for five major CYP isoforms using our method compared to values reported in the literature.
Thermography based prescreening software tool for veterinary clinics
NASA Astrophysics Data System (ADS)
Dahal, Rohini; Umbaugh, Scott E.; Mishra, Deependra; Lama, Norsang; Alvandipour, Mehrdad; Umbaugh, David; Marino, Dominic J.; Sackman, Joseph
2017-05-01
Under development is a clinical software tool that can be used in veterinary clinics as a prescreening tool for three pathologies: anterior cruciate ligament (ACL) disease, bone cancer, and feline hyperthyroidism. Currently, veterinary clinical practice uses several imaging techniques including radiology, computed tomography (CT), and magnetic resonance imaging (MRI). However, the harmful radiation involved during imaging, expensive equipment setup, excessive time consumption, and the need for a cooperative patient during imaging are major drawbacks of these techniques. In veterinary procedures, it is very difficult for animals to remain still for the time periods necessary for standard imaging without resorting to sedation - which creates another set of complexities. Therefore, clinical application software integrated with a thermal imaging system, together with algorithms of high sensitivity and specificity for these pathologies, can address the major drawbacks of the existing imaging techniques. A graphical user interface (GUI) has been created to allow ease of use for the clinical technician. The technician inputs an image, enters patient information, and selects the camera view associated with the image and the pathology to be diagnosed. The software classifies the image using an optimized classification algorithm that has been developed through thousands of experiments. Optimal image features are extracted, and the feature vector is then used in conjunction with the stored image database for classification. Classification success rates as high as 88% for bone cancer, 75% for ACL disease, and 90% for feline hyperthyroidism have been achieved. The software is currently undergoing preliminary clinical testing.
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
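The discrete renormalized solution is characterized above as a minimum weighted norm solution obtained through a generalized inverse. The sketch below computes that weighted generalized inverse for a small underdetermined linear system; the forward matrix, the single-point source, and the diagonal weight matrix are placeholders, not the renormalization weights derived in the paper.

```python
import numpy as np

def min_weighted_norm_solution(A, y, W):
    """Solution of the underdetermined system A x = y minimizing x^T W x:
    x = W^{-1} A^T (A W^{-1} A^T)^{-1} y, i.e. a weighted generalized inverse."""
    Winv_At = np.linalg.solve(W, A.T)                   # W^{-1} A^T
    return Winv_At @ np.linalg.solve(A @ Winv_At, y)

# Toy source-term estimation setup: a handful of concentration measurements
# (rows of A) constrain a discretized source field x. The diagonal weight
# matrix is a placeholder, not the renormalization weights of the paper.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 40))                            # 5 sensors, 40 source cells
x_true = np.zeros(40)
x_true[12] = 3.0                                        # single point source
y = A @ x_true
W = np.diag(1.0 + rng.uniform(0.0, 1.0, size=40))
x_hat = min_weighted_norm_solution(A, y, W)
print("measurements reproduced:", np.allclose(A @ x_hat, y))
```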
Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial
Ibrahim, Ahmed; Alfa, Attahiru
2017-01-01
This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to mathematical optimization and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in WSN design problems. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and the experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes. PMID:28763039
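As a small taste of casting a WSN design question as a mathematical program (a generic illustration, not an example drawn from the surveyed papers), the sketch below solves the linear-programming relaxation of a sensor-activation coverage problem with SciPy; the positions, sensing radius, and energy costs are invented.

```python
import numpy as np
from scipy.optimize import linprog

# LP relaxation of a sensor-activation coverage problem: choose activation
# levels x_i in [0, 1] minimizing total energy while every target receives at
# least one unit of activation from sensors within range. Everything is invented.
rng = np.random.default_rng(2)
sensors = rng.uniform(0, 100, size=(15, 2))                       # sensor positions (m)
# Each target is placed near some sensor so the instance is always feasible.
targets = sensors[rng.integers(0, len(sensors), 25)] + rng.uniform(-20, 20, size=(25, 2))
energy = rng.uniform(1.0, 3.0, size=len(sensors))                 # per-sensor activation cost

dist = np.linalg.norm(targets[:, None, :] - sensors[None, :, :], axis=2)
cover = (dist <= 30.0).astype(float)          # cover[t, s] = 1 if sensor s can see target t

# linprog minimizes c @ x subject to A_ub @ x <= b_ub; "coverage sum >= 1"
# becomes "-coverage sum <= -1".
res = linprog(c=energy, A_ub=-cover, b_ub=-np.ones(len(targets)),
              bounds=[(0, 1)] * len(sensors), method="highs")
print("feasible:", res.success, " total energy:", round(res.fun, 2))
```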
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or a parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters - in our case, the sparsity of the sensors - to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental design involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluation of the objective function and gradient, each involving the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
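A minimal sketch of the D-optimality criterion in a sensor-selection setting: greedily choosing observation rows that maximize the log-determinant of a linear-Gaussian information matrix. The greedy heuristic, random forward operator, and prior term are illustrative only and are far simpler than the randomized estimators proposed in the abstract.

```python
import numpy as np

def greedy_d_optimal(G, k, prior_precision=1e-2):
    """Greedily pick k sensor rows of G that maximize
    log det(prior_precision * I + G_S^T G_S), a D-optimality surrogate for a
    linear-Gaussian inverse problem with unit observation noise."""
    n_sensors, n_params = G.shape
    chosen = []
    M = prior_precision * np.eye(n_params)
    for _ in range(k):
        best, best_gain = None, -np.inf
        for s in range(n_sensors):
            if s in chosen:
                continue
            # Full slogdet per candidate; a rank-one determinant-lemma update
            # (or the paper's randomized estimators) would be far cheaper.
            gain = np.linalg.slogdet(M + np.outer(G[s], G[s]))[1]
            if gain > best_gain:
                best, best_gain = s, gain
        chosen.append(best)
        M += np.outer(G[best], G[best])
    return chosen

G = np.random.default_rng(3).normal(size=(60, 20))   # 60 candidate sensors, 20 parameters
print(greedy_d_optimal(G, k=8))                      # indices of the selected sensors
```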
Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.
McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim
2016-09-01
We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented, which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP-referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP-referenced pressure in cadaveric eyes demonstrates substantial equivalence of the CATS prism to Goldmann applanation tonometry (GAT) in nominal eyes, as predicted by modeling theory. A CATS-modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed, but the analysis indicates that with the CATS prism the error due to central corneal thickness (CCT) alone is reduced to less than ±2 mm Hg in 100% of a standard population, compared with only 54% of the population showing less than ±2 mm Hg error with the present Goldmann prism.
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low-rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing, with several orders of magnitude speed-up compared to the exact optimization algorithms.
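For sparse models, the proximal descent iteration from which such learnable pursuit architectures are derived is the ISTA update with a soft-thresholding proximal operator; unrolling a fixed, small number of these iterations and learning the matrices and thresholds is the basic construction referred to above. The sketch below shows the plain (unlearned) iteration on a toy dictionary; all sizes and parameters are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (elementwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=100):
    """Iterative shrinkage-thresholding for min_z 0.5*||y - D z||^2 + lam*||z||_1.
    A truncated, fixed n_iter version of this loop, with learned matrices and
    thresholds, is what gets unrolled into a pursuit network."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + D.T @ (y - D @ z) / L, lam / L)
    return z

rng = np.random.default_rng(4)
D = rng.normal(size=(30, 100))
D /= np.linalg.norm(D, axis=0)             # column-normalized dictionary
z_true = np.zeros(100)
z_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
y = D @ z_true + 0.01 * rng.normal(size=30)
z_hat = ista(D, y, lam=0.05)
print(np.count_nonzero(z_hat), "nonzeros recovered")
```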
Genetic algorithms and their use in Geophysical Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Paul B.
1999-04-01
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
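A minimal genetic-algorithm sketch using the settings the abstract singles out (tournament selection, a mutation rate near half the inverse of the population size, a small population), applied to a toy fitness function; it is a generic GA, not the geophysical inversion code.

```python
import numpy as np

rng = np.random.default_rng(5)

def fitness(x):
    """Toy objective: maximize a smooth multimodal function of 8 'model' genes."""
    return -np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sum(np.cos(10 * x), axis=-1)

pop_size, n_genes, n_gen = 40, 8, 200
mutation_rate = 1.0 / (2 * pop_size)          # about half the inverse population size
pop = rng.uniform(0, 1, size=(pop_size, n_genes))

def tournament(pop, fit, k=2):
    """Pick one parent as the best of k randomly drawn individuals."""
    idx = rng.integers(0, len(pop), size=k)
    return pop[idx[np.argmax(fit[idx])]]

for _ in range(n_gen):
    fit = fitness(pop)
    children = []
    for _ in range(pop_size):
        p1, p2 = tournament(pop, fit), tournament(pop, fit)
        cut = rng.integers(1, n_genes)                     # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        mutate = rng.random(n_genes) < mutation_rate       # low mutation rate
        child[mutate] = rng.uniform(0, 1, size=mutate.sum())
        children.append(child)
    pop = np.array(children)

print("best fitness found:", fitness(pop).max())
```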
Genetic algorithms and their use in geophysical problems
NASA Astrophysics Data System (ADS)
Parker, Paul Bradley
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.