Science.gov

Sample records for advanced optimization techniques

  1. Techniques for developing approximate optimal advanced launch system guidance

    NASA Technical Reports Server (NTRS)

    Feeley, Timothy S.; Speyer, Jason L.

    1991-01-01

    An extension to the authors' previous technique used to develop a real-time guidance scheme for the Advanced Launch System is presented. The approach is to construct an optimal guidance law based upon an asymptotic expansion associated with a small physical parameter, epsilon. The trajectory of a rocket modeled as a point mass is considered, with the flight restricted to an equatorial plane while reaching an orbital altitude at orbital injection speeds. The dynamics of this problem can be separated into primary effects due to thrust and gravitational forces, and perturbation effects, which include the aerodynamic forces and the remaining inertial forces. An analytic solution to the reduced-order problem represented by the primary dynamics is possible. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in an asymptotic series where the zeroth-order term (epsilon = 0) can be obtained in closed form.
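    The expansion strategy described above can be sketched schematically (generic notation assumed, not taken from the paper):

    ```latex
    % HJB equation split into primary dynamics H_0 and perturbations H_1:
    0 = \min_u \left[ H_0(x, u, V_x) + \epsilon\, H_1(x, u, V_x) \right]
    % Asymptotic series for the optimal return function:
    V(x;\epsilon) = V^{(0)}(x) + \epsilon\, V^{(1)}(x) + \epsilon^{2} V^{(2)}(x) + \cdots
    % Collecting powers of epsilon, the zeroth-order problem
    0 = \min_u H_0\!\left(x, u, V^{(0)}_x\right)
    % involves only thrust and gravity and admits a closed-form solution;
    % higher-order terms correct for aerodynamic and remaining inertial effects.
    ```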

  2. Techniques Optimized for Reducing Instabilities in Advanced Nickel-Base Superalloys for Turbine Blades

    NASA Technical Reports Server (NTRS)

    MacKay, Rebecca A.; Locci, Ivan E.; Garg, Anita; Ritzert, Frank J.

    2002-01-01

    is a three-phase constituent composed of TCP and stringers of gamma phase in a matrix of gamma prime. An incoherent grain boundary separates the SRZ from the gamma/gamma prime microstructure of the superalloy. The SRZ is believed to form as a result of local chemistry changes in the superalloy due to the application of the diffusion aluminide bondcoat. Locally high surface stresses also appear to promote the formation of the SRZ. Thus, techniques that change the local alloy chemistry or reduce surface stresses have been examined for their effectiveness in reducing SRZ. These SRZ-reduction steps are performed on the test specimen or the turbine blade before the bondcoat is applied. Stress-relief heat treatments developed at NASA Glenn have been demonstrated to reduce significantly the amount of SRZ that develops during subsequent high-temperature exposures. Stress-relief heat treatments reduce surface stresses by recrystallizing a thin surface layer of the superalloy. However, in alloys with very high propensities to form SRZ, stress-relief heat treatments alone do not eliminate SRZ entirely. Thus, techniques that modify the local chemistry under the bondcoat have been emphasized and optimized successfully at Glenn. One such technique is carburization, which changes the local chemistry by forming submicron carbides near the surface of the superalloy. Detailed characterizations have demonstrated that the depth and uniform distribution of these carbides are enhanced when a stress-relief treatment and an appropriate surface preparation are employed in advance of the carburization treatment. Even in alloys that have the propensity to develop a continuous SRZ layer beneath the diffusion zone, the SRZ has been completely eliminated or reduced to low, manageable levels when this combination of techniques is utilized. Now that the techniques to mitigate SRZ have been established at Glenn, TCP phase formation is being emphasized in ongoing work under the UEET Program. The

  3. TOOLKIT FOR ADVANCED OPTIMIZATION

    2000-10-13

    The TAO project focuses on the development of software for large scale optimization problems. TAO uses an object-oriented design to create a flexible toolkit with strong emphasis on the reuse of external tools where appropriate. Our design enables bi-directional connection to lower level linear algebra support (for example, parallel sparse matrix data structures) as well as higher level application frameworks. The Toolkit for Advanced Optimization (TAO) is aimed at the solution of large-scale optimization problems on high-performance architectures. Our main goals are portability, performance, scalable parallelism, and an interface independent of the architecture. TAO is suitable for both single-processor and massively-parallel architectures. The current version of TAO has algorithms for unconstrained and bound-constrained optimization.
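    The abstract's closing sentence names the problem class. A minimal, generic sketch of bound-constrained minimization is projected gradient descent; the toy quadratic objective, box bounds, and step size below are illustrative assumptions, not TAO code:

    ```python
    # Projected gradient descent for box-constrained minimization:
    # take a gradient step, then clip each coordinate back into [lo, hi].

    def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
        """Minimize subject to lo[i] <= x[i] <= hi[i]."""
        x = list(x0)
        for _ in range(iters):
            g = grad(x)
            # gradient step followed by projection onto the box
            x = [min(hi[i], max(lo[i], x[i] - step * g[i])) for i in range(len(x))]
        return x

    # Toy problem: minimize (x-3)^2 + (y+1)^2 over the box [0,2] x [0,2];
    # the constrained optimum sits on the boundary at (2, 0).
    grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
    sol = projected_gradient(grad, [1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
    ```

    Real toolkits such as TAO use far more sophisticated variants (active-set and Newton-type bound-constrained solvers), but the project-then-step structure is the common core.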

  4. MR enterography in Crohn's disease: current consensus on optimal imaging technique and future advances from the SAR Crohn's disease-focused panel.

    PubMed

    Grand, David J; Guglielmo, Flavius F; Al-Hawary, Mahmoud M

    2015-06-01

    MR enterography is a powerful tool for the non-invasive evaluation of patients with Crohn's disease (CD) without ionizing radiation. The following paper describes the current consensus on optimal imaging technique, interpretation, and future advances from the Society of Abdominal Radiology CD-focused panel. PMID:25666967

  5. Improving target dose coverage and organ-at-risk sparing in intensity-modulated radiotherapy of advanced laryngeal cancer by a simple optimization technique

    PubMed Central

    Lu, J-Y; Wu, L-L; Zhang, J-Y; Zheng, J; Cheung, M L-M; Ma, C-C; Xie, L-X

    2015-01-01

    Objective: To evaluate a simple optimization technique intended to improve planning target volume (PTV) dose coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) of advanced laryngeal cancer. Methods: Generally acceptable initial IMRT plans were generated for 12 patients and were improved individually by the following two techniques: (1) base dose function-based (BDF) technique, in which the treatment plans were reoptimized based on the initial IMRT plans; (2) dose-controlling structure-based (DCS) technique, in which the initial IMRT plans were reoptimized by adding constraints for hot and cold spots. The initial, BDF and DCS IMRT plans and additionally generated volumetric modulated arc therapy (VMAT) plans were compared concerning homogeneity index (HI) and conformity index (CI) of PTVs prescribed at 70 Gy/60 Gy (PTV70/PTV60), OAR sparing, monitor units (MUs) per fraction and total planning time. Results: Compared with the initial IMRT and DCS IMRT plans, the BDF technique provided superior HI/CI, by approximately 19–37%/4–11%, and lower doses to most OARs, by approximately 1–7%, except for the comparable HI of PTV60 to DCS IMRT plans. Compared with VMAT plans, the BDF technique provided comparable HI, CI and most-OAR sparing, except for the superior HI of PTV70, by approximately 13%. The BDF technique produced more MUs and reduced the planning time. Conclusion: The BDF optimization technique for IMRT of advanced laryngeal cancer can improve target dose homogeneity and conformity, spare most OARs and is efficient. Advances in knowledge: A novel optimization technique for improving IMRT was assessed and found to be effective and efficient. PMID:25494885

  6. Code Optimization Techniques

    SciTech Connect

    MAGEE,GLEN I.

    2000-08-03

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
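    One representative optimization of the kind applied to Reed-Solomon encoders is replacing bitwise GF(2^8) multiplication with log/antilog lookup tables. The sketch below is a generic illustration; the field polynomial 0x11d and generator element 2 are conventional choices, not details taken from the AURA implementation:

    ```python
    # Classic Reed-Solomon encoder speed-up: table-driven GF(2^8) multiply.

    def gf_mul_slow(a, b, poly=0x11d):
        """Bitwise carry-less multiply with modular reduction: the slow baseline."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= poly  # reduce modulo the field polynomial
            b >>= 1
        return r

    # Build exp/log tables once; 2 is a primitive element for poly 0x11d,
    # so powers of 2 enumerate every nonzero field element.
    EXP = [0] * 512   # doubled so LOG[a] + LOG[b] never needs a modulo
    LOG = [0] * 256
    x = 1
    for i in range(255):
        EXP[i] = x
        LOG[x] = i
        x = gf_mul_slow(x, 2)
    for i in range(255, 512):
        EXP[i] = EXP[i - 255]

    def gf_mul_fast(a, b):
        """Two table lookups and an add instead of an 8-iteration bit loop."""
        if a == 0 or b == 0:
            return 0
        return EXP[LOG[a] + LOG[b]]
    ```

    Trading memory for a handful of array lookups in the innermost loop is typical of the optimizations needed to hit a tight processor-time budget.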

  7. A feasibility and optimization study to determine cooling time and burnup of advanced test reactor fuels using a nondestructive technique

    SciTech Connect

    Navarro, Jorge

    2013-12-01

    The goal of the study presented is to determine the best available nondestructive technique for collecting validation data and for determining burnup and cooling time of the fuel elements onsite at the Advanced Test Reactor (ATR) canal. The study makes a recommendation on the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study consisted first in determining whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra could be obtained at the ATR canal, the next step was to determine which detector and which configuration were better suited to predict burnup and cooling time of fuel elements nondestructively. Three detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), were used in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was better suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method

  8. Advanced radiographic imaging techniques.

    NASA Technical Reports Server (NTRS)

    Beal, J. B.; Brown, R. L.

    1973-01-01

    Examination of the nature and operational constraints of conventional X-radiographic and neutron imaging methods, providing a foundation for a discussion of advanced radiographic imaging systems. Two types of solid-state image amplifiers designed to image X rays are described. Operational theory, panel construction, and performance characteristics are discussed. A closed-circuit television system for imaging neutrons is then described and the system design, operational theory, and performance characteristics are outlined. Emphasis is placed on a description of the advantages of these imaging systems over conventional methods.

  9. OPTIMIZING EXPOSURE MEASUREMENT TECHNIQUES

    EPA Science Inventory

    The research reported in this task description addresses one of a series of interrelated NERL tasks with the common goal of optimizing the predictive power of low cost, reliable exposure measurements for the planned Interagency National Children's Study (NCS). Specifically, we w...

  10. Advanced Coating Removal Techniques

    NASA Technical Reports Server (NTRS)

    Seibert, Jon

    2006-01-01

    An important step in the repair and protection against corrosion damage is the safe removal of the oxidation and protective coatings without further damaging the integrity of the substrate. Two such methods that are proving to be safe and effective in this task are liquid nitrogen and laser removal operations. Laser technology used for the removal of protective coatings is currently being researched and implemented in various areas of the aerospace industry. Delivering thousands of focused energy pulses, the laser ablates the coating surface by heating and dissolving the material applied to the substrate. The metal substrate will reflect the laser and redirect the energy to any remaining protective coating, thus preventing any collateral damage the substrate may suffer throughout the process. Liquid nitrogen jets are comparable to blasting with an ultra high-pressure water jet but without the residual liquid that requires collection and removal. As the liquid nitrogen reaches the surface, it is transformed into gaseous nitrogen and reenters the atmosphere without any contamination to surrounding hardware. These innovative technologies simplify corrosion repair by eliminating hazardous chemicals and repetitive manual labor from the coating removal process. One very significant advantage is the reduction of particulate contamination exposure to personnel. With the removal of coatings adjacent to sensitive flight hardware, a benefit of each technique for the space program is that no contamination such as beads, water, or sanding residue is left behind when the job is finished. One primary concern is the safe removal of coatings from thin aluminum honeycomb face sheet. NASA recently conducted thermal testing on liquid nitrogen systems and found that no damage occurred on 1/6 in. aluminum substrates. Wright Patterson Air Force Base in conjunction with Boeing and NASA is currently testing the laser removal technique for process qualification. Other applications of liquid

  11. Advanced Wavefront Control Techniques

    SciTech Connect

    Olivier, S S; Brase, J M; Avicola, K; Thompson, C A; Kartz, M W; Winters, S; Hartley, R; Wihelmsen, J; Dowla, F V; Carrano, C J; Bauman, B J; Pennington, D M; Lande, D; Sawvel, R M; Silva, D A; Cooke, J B; Brown, C G

    2001-02-21

    In this project, work was performed in four areas: (1) advanced modeling tools for deformable mirrors, (2) low-order wavefront correctors with Alvarez lenses, (3) a direct phase-measuring heterodyne wavefront sensor, and (4) high-spatial-frequency wavefront control using spatial light modulators.

  12. Advanced qualification techniques

    SciTech Connect

    Winokur, P.S; Shaneyfelt, M.R.; Meisenheimer, T.L.; Fleetwood, D.M.

    1993-12-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML "builds in" the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish "process capability" is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883D, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  13. Advanced qualification techniques

    NASA Astrophysics Data System (ADS)

    Winokur, P. S.; Shaneyfelt, M. R.; Meisenheimer, T. L.; Fleetwood, D. M.

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML 'builds in' the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish 'process capability' is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883D, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  14. Advanced qualification techniques

    SciTech Connect

    Winokur, P.S.; Shaneyfelt, M.R.; Meisenheimer, T.L.; Fleetwood, D.M.

    1994-06-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML "builds in" the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish "process capability" is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  15. Advanced qualification techniques

    NASA Astrophysics Data System (ADS)

    Winokur, P. S.; Shaneyfelt, M. R.; Meisenheimer, T. L.; Fleetwood, D. M.

    1994-06-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML 'builds in' the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish 'process capability' is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  16. Techniques and tactics for optimizing CT dose in adults and children: state of the art and future advances.

    PubMed

    Lambert, Jack; MacKenzie, John D; Cody, Dianna D; Gould, Robert

    2014-03-01

    With growing concern over radiation exposure from CT, dose reduction and optimization have become important considerations. Many protocol factors and CT technologies influence this dose reduction effort, and as such, users should maintain a working knowledge of developments in the field. Individual patient factors and scanner-specific details also require care and expertise, which are vital to the success of any dose reduction effort. The authors review the content of the Virtual Symposium on Radiation Safety in Computed Tomography (University of California Dose Optimization and Standardization Endeavor), specifically that pertaining to the more practical aspects of dose optimization. These range from prescan tips to postscan factors, as well as protocol definition itself. Topics discussed include localizer radiograph acquisition, tube current modulation, reconstruction methods, and pediatric considerations, with the content biased toward a CT technologist or protocol manager. Near-term innovations, including new iterative reconstruction methods, tube potential modulation, and dual-energy CT, are presented, and their capability for dose reduction is briefly discussed. PMID:24589401

  17. Advances in cell culture process development: tools and techniques for improving cell line development and process optimization.

    PubMed

    Sharfstein, Susan T

    2008-01-01

    At the 234th National Meeting of the American Chemical Society, held in Boston, MA, August 19-23, 2007, the ACS BIOT division held two oral sessions on Cell Culture Process Development. In addition, a number of posters were presented in this area. The critical issues facing cell culture process development today are how to effectively respond to the increase in product demands and decreased process timelines while maintaining robust process performance and product quality and responding to the Quality by Design initiative promulgated by the Food and Drug Administration. Two main areas were addressed in the presentations: first, to understand the effects of process conditions on productivity and product quality, and second, to achieve improved production cell lines. A variety of techniques to achieve these goals were presented, including automated flow cytometric analysis, a high-throughput cell analysis and selection method, transcriptional and epigenetic techniques for analysis of cell lines and cell culture systems, and novel techniques for glycoform analysis. PMID:18426245

  18. Techniques for shuttle trajectory optimization

    NASA Technical Reports Server (NTRS)

    Edge, E. R.; Shieh, C. J.; Powers, W. F.

    1973-01-01

    The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.

  19. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

    1998-04-30

    Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The ''inverted candle filter system'' uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The ''sheet filter system'' uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. 
A plan for the Option I, bench-scale test program has also been developed based

  20. Advances in Procedural Techniques - Antegrade

    PubMed Central

    Wilson, William; Spratt, James C.

    2014-01-01

    There have been many technological advances in antegrade CTO PCI, but perhaps the most important has been the evolution of the "hybrid" approach, where ideally there exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited "interventional" collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure, contrast use and procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter, whilst re-entry will be achieved in the most reproducible and reliable fashion by the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR, and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms which have been developed to guide CTO operators allow for a better transfer of knowledge and skills to increase uptake and acceptance of CTO PCI as a whole. PMID:24694104

  1. Advanced Spectroscopy Technique for Biomedicine

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhua; Zeng, Haishan

    This chapter presents an overview of the applications of optical spectroscopy in biomedicine. We focus on the optical design aspects of advanced biomedical spectroscopy systems, Raman spectroscopy system in particular. Detailed components and system integration are provided. As examples, two real-time in vivo Raman spectroscopy systems, one for skin cancer detection and the other for endoscopic lung cancer detection, and an in vivo confocal Raman spectroscopy system for skin assessment are presented. The applications of Raman spectroscopy in cancer diagnosis of the skin, lung, colon, oral cavity, gastrointestinal tract, breast, and cervix are summarized.

  2. Stitching Techniques Advance Optics Manufacturing

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Because NASA depends on the fabrication and testing of large, high-quality aspheric (nonspherical) optics for applications like the James Webb Space Telescope, it sought an improved method for measuring large aspheres. Through Small Business Innovation Research (SBIR) awards from Goddard Space Flight Center, QED Technologies, of Rochester, New York, upgraded and enhanced its stitching technology for aspheres. QED developed the SSI-A, which earned the company an R&D 100 award, and also developed a breakthrough machine tool called the aspheric stitching interferometer. The equipment is applied to advanced optics in telescopes, microscopes, cameras, medical scopes, binoculars, and photolithography.

  3. Advanced measurement techniques, part 1

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce J.; Carraway, Debra L.; Manuel, Gregory S.; Croom, Cynthia C.

    1987-01-01

    In modern laminar flow flight and wind tunnel research, it is important to understand the specific cause(s) of laminar to turbulent boundary layer transition. Such information is crucial to the exploration of the limits of practical application of laminar flow for drag reduction on aircraft. The process of transition involves both the possible modes of disturbance growth, and the environmental conditioning of the instabilities by freestream or surface conditions. The possible modes of disturbance growth include viscous, inviscid, and modes which may bypass these natural ones. Theory provides information on the possible modes of disturbance amplification, but experimentation must be relied upon to determine which of those modes actually dominates the transition process in a given environment. The results to date of research on advanced devices and methods used for the study of transition phenomena in the subsonic and transonic flight and wind tunnel environments are presented.

  4. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108
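    The spiral dynamics the abstract describes can be sketched generically: a population of search points repeatedly rotates around, and contracts toward, the current best point. The toy 2-D version below is a minimal sketch; the step angle, contraction rate, and sphere objective are illustrative assumptions, not the paper's filter-design setup:

    ```python
    import math
    import random

    # Minimal 2-D spiral optimization: each point is rotated by theta and
    # contracted by factor r about the current best point each iteration.

    def spiral_optimize(f, n_points=20, iters=200, r=0.95, theta=math.pi / 4,
                        bounds=(-5.0, 5.0), seed=0):
        rng = random.Random(seed)
        lo, hi = bounds
        pts = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n_points)]
        best = min(pts, key=f)[:]
        c, s = math.cos(theta), math.sin(theta)
        for _ in range(iters):
            for p in pts:
                dx, dy = p[0] - best[0], p[1] - best[1]
                # rotate by theta, contract by r, re-center on the best point
                p[0] = best[0] + r * (c * dx - s * dy)
                p[1] = best[1] + r * (s * dx + c * dy)
            cand = min(pts, key=f)
            if f(cand) < f(best):
                best = cand[:]
        return best

    # Toy objective: the sphere function, minimized at the origin.
    sphere = lambda p: p[0] ** 2 + p[1] ** 2
    best = spiral_optimize(sphere)
    ```

    A real filter-design run would replace the sphere objective with a weighted frequency-response error over the filter coefficients, but the rotate-and-contract update is the heart of the metaheuristic.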

  5. Nuclear material investigations by advanced analytical techniques

    NASA Astrophysics Data System (ADS)

    Degueldre, C.; Kuri, G.; Martin, M.; Froideval, A.; Cammelli, S.; Orlov, A.; Bertsch, J.; Pouchon, M. A.

    2010-10-01

    Advanced analytical techniques have been used to characterize nuclear materials at the Paul Scherrer Institute during the last decade. The analysed materials ranged from reactor pressure vessel (RPV) steels and Zircaloy claddings to fuel samples. The processes studied included copper cluster build-up in RPV steels; corrosion, mechanical, and irradiation damage behaviour of PWR and BWR cladding materials; and fuel defect development. The advanced techniques used included muon spin resonance spectroscopy for zirconium alloy defect characterization, while fuel element materials were analysed by techniques derived from neutron and X-ray scattering and absorption spectroscopy.

  6. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

    2002-06-30

    Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. Two types of sheet filter elements were procured and assessed in the program

  7. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  8. Hybrid mesh generation using advancing reduction technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study presents an extension of the application of the advancing reduction technique to the hybrid mesh generation. The proposed algorithm is based on a pre-generated rectangle mesh (RM) with a certain orientation. The intersection points between the two sets of perpendicular mesh lines in RM an...

  9. Advanced Cogeneration Technology Economic Optimization Study (ACTEOS)

    NASA Technical Reports Server (NTRS)

    Nanda, P.; Ansu, Y.; Manuel, E. H., Jr.; Price, W. G., Jr.

    1980-01-01

    The advanced cogeneration technology economic optimization study (ACTEOS) was undertaken to extend the results of the cogeneration technology alternatives study (CTAS). Cost comparisons were made between designs involving advanced cogeneration technologies and designs involving either conventional cogeneration technologies or no cogeneration. For the specific equipment cost and fuel price assumptions made, it was found that: (1) coal-based cogeneration systems offered appreciable cost savings over the no-cogeneration case, while systems using coal-derived liquids offered no cost savings; and (2) the advanced cogeneration systems provided somewhat larger cost savings than the conventional systems. The issues considered in the study included: (1) temporal variations in steam and electric demands; (2) requirements for reliability/standby capacity; (3) availability of discrete equipment sizes; (4) regional variations in fuel and electricity prices; (5) off-design system performance; and (6) separate demand and energy charges for purchased electricity.

  10. A technique for optimizing grid blocks

    NASA Technical Reports Server (NTRS)

    Dannenhoffer, John F., III

    1995-01-01

    A new technique for automatically combining grid blocks of a given block-structured grid into logically-rectangular clusters which are 'optimal' is presented. This technique uses the simulated annealing optimization method to reorganize the blocks into an optimum configuration, that is, one which minimizes a user-defined objective function such as the number of clusters or the differential in the sizes of all the clusters. The clusters which result from applying the technique to two different two-dimensional configurations are presented for a variety of objective function definitions. In all cases, the automatically-generated clusters are significantly better than the original clusters. While this new technique can be applied to block-structured grids generated from any source, it is particularly useful for operating on block-structured grids containing many blocks, such as those produced by the emerging automatic block-structured grid generators.
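    The simulated annealing step described above can be sketched generically: propose a neighbouring configuration and accept worse ones with probability exp(-ΔE/T) under a cooling schedule. The block-clustering objective below is a toy stand-in (balancing cluster sizes), not the paper's actual objective function.

```python
import math
import random

def anneal(state, energy, neighbor, t0=1.0, cooling=0.995, steps=2000, seed=1):
    """Generic simulated annealing: accept a worse neighbour with
    probability exp(-dE/T) while the temperature T cools geometrically."""
    rng = random.Random(seed)
    e = energy(state)
    best, best_e, t = state, e, t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        ce = energy(cand)
        if ce < e or rng.random() < math.exp(-(ce - e) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e

# Toy stand-in for the clustering objective (assumption): assign 12 blocks to
# 3 clusters so that the differential in cluster sizes is minimised.
def energy(assign):
    sizes = [assign.count(c) for c in range(3)]
    mean = sum(sizes) / 3.0
    return sum((s - mean) ** 2 for s in sizes)

def neighbor(assign, rng):
    a = list(assign)
    a[rng.randrange(len(a))] = rng.randrange(3)  # reassign one block
    return tuple(a)

best, best_e = anneal(tuple(0 for _ in range(12)), energy, neighbor)
```

Swapping in a different `energy` function reproduces the paper's observation that the same annealing machinery serves several user-defined objectives (cluster count, size differential, etc.).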

  11. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    Multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.
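    One of the simpler methods compared by Rao, the global criterion method, can be sketched in a few lines: each objective is normalised by its individual optimum and the summed squared deviations are minimised. The two objectives below are illustrative stand-ins, not the paper's beam and isolator models.

```python
import numpy as np

# Two illustrative objectives with conflicting optima (assumption):
f1 = lambda x: (x - 1.0) ** 2 + 1.0   # e.g. structural weight, best at x = 1
f2 = lambda x: (x - 3.0) ** 2 + 2.0   # e.g. tip deflection,   best at x = 3

xs = np.linspace(0.0, 4.0, 4001)
f1_star, f2_star = f1(xs).min(), f2(xs).min()   # individual optima

# Global criterion: minimise summed squared fractional deviations from the
# individual optima; the minimiser is the compromise design.
criterion = ((f1(xs) - f1_star) / f1_star) ** 2 \
          + ((f2(xs) - f2_star) / f2_star) ** 2
x_comp = float(xs[np.argmin(criterion)])
```

The compromise design lands between the two single-objective optima, which is the "proper balance" of objectives the abstract contrasts across methods.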

  12. Recent advancement of turbulent flow measurement techniques

    NASA Technical Reports Server (NTRS)

    Battle, T.; Wang, P.; Cheng, D. Y.

    1974-01-01

    Advancements of the fluctuating density gradient cross beam laser Schlieren technique, the fluctuating line-reversal temperature measurement and the development of the two-dimensional drag-sensing probe to a three-dimensional drag-sensing probe are discussed. The three-dimensionality of the instantaneous momentum vector can shed some light on the nature of turbulence especially with swirling flow. All three measured fluctuating quantities (density, temperature, and momentum) can provide valuable information for theoreticians.

  13. Optimization techniques for integrating spatial data

    USGS Publications Warehouse

    Herzfeld, U.C.; Merriam, D.F.

    1995-01-01

    Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
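    The first approach can be sketched as fitting the weights of a linear map combination with the Nelder-Mead downhill simplex, started from an initial guess (standing in for the geologic first guess). The gridded variables below are synthetic; this is a minimal sketch, not the authors' map-comparison function.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
# Synthetic gridded variables standing in for structure, isopachous, and
# petrophysical maps (assumption):
maps = rng.normal(size=(3, 20, 20))
true_w = np.array([0.6, 0.3, 0.1])
target = np.tensordot(true_w, maps, axes=1)   # synthetic "productivity" map

def misfit(w):
    """Squared error of the weighted map combination against the target."""
    return float(((np.tensordot(w, maps, axes=1) - target) ** 2).sum())

# An equal-weight first guess (standing in for geologic knowledge) starts the
# downhill simplex:
res = minimize(misfit, x0=[1 / 3, 1 / 3, 1 / 3], method="Nelder-Mead")
w_opt = res.x
```

As the abstract notes, the solution is not unique in general, so the recovered weights depend on the starting simplex; here the synthetic problem is well posed and the true weights are recovered.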

  14. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum-variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
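    The minimum-variance principle behind the optimal weighting can be shown in a scalar sketch: two unbiased estimates combined with inverse-variance weights give an unbiased estimate whose variance is lower than either input's. The numbers are illustrative, not from the report.

```python
def combine(est_a, var_a, est_b, var_b):
    """Minimum-variance unbiased combination of two unbiased estimates via
    inverse-variance weighting -- the scalar analogue of the optimal
    weighting matrix linking direct and ratio-model acreage estimates."""
    wa = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    est = wa * est_a + (1.0 - wa) * est_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return est, var

# Illustrative numbers (assumption): a satellite-based estimate and a
# ratio-model estimate of the same acreage total.
est, var = combine(100.0, 4.0, 110.0, 16.0)   # est = 102.0, var = 3.2
```

The combined variance (3.2) is below both inputs (4.0 and 16.0), which is the sense in which the aggregation is "optimal".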

  15. Language abstractions for low level optimization techniques

    NASA Astrophysics Data System (ADS)

    Dévai, Gergely; Gera, Zoltán; Kelemen, Zoltán

    2012-09-01

    In the case of performance-critical applications, programmers are often forced to write code at a low abstraction level. This leads to programs that are hard to develop and maintain because the program text is mixed up with low-level optimization tricks and is far from the algorithm it implements. Even though compilers are smart nowadays and provide many automatically applied optimizations, practice shows that in some cases it is hopeless to optimize the program automatically without the programmer's knowledge. A complementary approach is to allow the programmer to fine-tune the program while providing language features that make the optimization easier. These are language abstractions that make optimization techniques explicit without adding too much syntactic noise to the program text. This paper presents such language abstractions for two well-known optimizations: bitvectors and SIMD (Single Instruction Multiple Data). The language features are implemented in the embedded domain-specific language Feldspar, which is specifically tailored for digital signal processing applications. While we present these language elements as part of Feldspar, the ideas behind them are general enough to be applied in other language definition projects as well.
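    A bitvector abstraction of the kind described can be sketched in a few lines: a boolean vector packed into one integer, so that elementwise logic maps to single bitwise operations instead of a loop. The class below is illustrative only and is not Feldspar's API.

```python
class BitVec:
    """A fixed-width boolean vector packed into one integer (index 0 = LSB),
    so elementwise logic compiles down to single machine-word operations."""
    def __init__(self, bits):
        self.n = len(bits)
        self.v = sum(b << i for i, b in enumerate(bits))

    def _wrap(self, v):
        out = BitVec([])
        out.n, out.v = self.n, v & ((1 << self.n) - 1)  # mask to width
        return out

    def __and__(self, other): return self._wrap(self.v & other.v)
    def __or__(self, other):  return self._wrap(self.v | other.v)
    def __xor__(self, other): return self._wrap(self.v ^ other.v)

    def tolist(self):
        return [(self.v >> i) & 1 for i in range(self.n)]

a = BitVec([1, 0, 1, 1])
b = BitVec([1, 1, 0, 1])
c = (a & b) ^ a   # one bitwise op per operator, not a per-element loop
```

This is the optimization made "explicit without syntactic noise": the source reads as vector algebra while the representation guarantees word-level execution.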

  16. Advanced Tools and Techniques for Formal Techniques in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    2005-01-01

    This is the final technical report for grant number NAG-1-02101. The title of this grant was "Advanced Tools and Techniques for Formal Techniques In Aerospace Systems". The principal investigator on this grant was Dr. John C. Knight of the Computer Science Department, University of Virginia, Charlottesville, Virginia 22904-4740. This report summarizes activities under the grant during the period 7/01/2002 to 9/30/2004. This report is organized as follows. In section 2, the technical background of the grant is summarized. Section 3 lists accomplishments and section 4 lists students funded under the grant. In section 5, we present a list of presentations given at various academic and research institutions about the research conducted. Finally, a list of publications generated under this grant is included in section 6.

  17. Advanced flow MRI: emerging techniques and applications.

    PubMed

    Markl, M; Schnell, S; Wu, C; Bollache, E; Jarvis, K; Barker, A J; Robinson, J D; Rigsby, C K

    2016-08-01

    Magnetic resonance imaging (MRI) techniques provide non-invasive and non-ionising methods for the highly accurate anatomical depiction of the heart and vessels throughout the cardiac cycle. In addition, the intrinsic sensitivity of MRI to motion offers the unique ability to acquire spatially registered blood flow simultaneously with the morphological data, within a single measurement. In clinical routine, flow MRI is typically accomplished using methods that resolve two spatial dimensions in individual planes and encode the time-resolved velocity in one principal direction, typically oriented perpendicular to the two-dimensional (2D) section. This review describes recently developed advanced MRI flow techniques, which allow for more comprehensive evaluation of blood flow characteristics, such as real-time flow imaging, 2D multiple-venc phase contrast MRI, four-dimensional (4D) flow MRI, quantification of complex haemodynamic properties, and highly accelerated flow imaging. Emerging techniques and novel applications are explored. In addition, applications of these new techniques for the improved evaluation of cardiovascular (aorta, pulmonary arteries, congenital heart disease, atrial fibrillation, coronary arteries) as well as cerebrovascular disease (intra-cranial arteries and veins) are presented. PMID:26944696

  18. Advanced Bode Plot Techniques for Ultrasonic Transducers

    NASA Astrophysics Data System (ADS)

    DeAngelis, D. A.; Schulze, G. W.

    The Bode plot, displayed as either impedance or admittance versus frequency, is the most basic test used by ultrasonic transducer designers. With simplicity and ease of use, Bode plots are ideal for baseline comparisons such as spacing of parasitic modes or impedance, but quite often the subtleties that manifest as poor process control are hard to interpret or are nonexistent. In-process testing of transducers is time-consuming for quantifying statistical aberrations, and assessments made indirectly via the workpiece are difficult. This research investigates the use of advanced Bode plot techniques to compare ultrasonic transducers with known "good" and known "bad" process performance, with the goal of a priori process assessment. These advanced techniques expand from the basic constant voltage versus frequency sweep to include constant current and constant velocity interrogated locally on the transducer or tool; they also include up and down directional frequency sweeps to quantify hysteresis effects like jumping and dropping phenomena. The investigation focuses solely on the common PZT8 piezoelectric material used with welding transducers for semiconductor wire bonding. Several metrics are investigated such as impedance, displacement/current gain, velocity/current gain, displacement/voltage gain and velocity/voltage gain. The experimental and theoretical research methods include Bode plots, admittance loops, laser vibrometry and coupled-field finite element analysis.

  19. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members E1, E2, E3, and E4 that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by deriving inductively selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution

  20. Cache Energy Optimization Techniques For Modern Processors

    SciTech Connect

    Mittal, Sparsh

    2013-01-01

    and veterans in the field of cache power management. It will help graduate students, CAD tool developers and designers in understanding the need of energy efficiency in modern computing systems. Further, it will be useful for researchers in gaining insights into algorithms and techniques for micro-architectural and system-level energy optimization using dynamic cache reconfiguration. We sincerely believe that the ``food for thought'' presented in this book will inspire the readers to develop even better ideas for designing ``green'' processors of tomorrow.

  1. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The data bases obtained from the rotorcraft flight simulation program C81 and Myklestad mode shape program are analytically determined as a function of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized to ascertain the effect of a particular cost function which is composed of several objective functions with different weighting factors for various mission requirements without any additional effort.

  2. Full Endoscopic Spinal Surgery Techniques: Advancements, Indications, and Outcomes

    PubMed Central

    Yue, James J.; Long, William

    2015-01-01

    Advancements in both surgical instrumentation and full endoscopic spine techniques have resulted in positive clinical outcomes in the treatment of cervical, thoracic, and lumbar spine pathologies. Endoscopic techniques impart minimal approach-related disruption of non-pathologic spinal anatomy and function while concurrently maximizing functional visualization and correction of pathological tissues. An advanced understanding of the applicable functional neuroanatomy, in particular the neuroforamen, is essential for successful outcomes. Additionally, an understanding of the varying types of disc prolapse pathology in relation to the neuroforamen will result in more optimal surgical outcomes. Indications for lumbar endoscopic spine surgery include disc herniations, spinal stenosis, infections, medial branch rhizotomy, and interbody fusion. Limitations are based on both non-spine and spine-related findings. A high-riding iliac wing, a more posteriorly located retroperitoneal cavity, and an overly distal or proximally migrated herniated disc are all relative contraindications to lumbar endoscopic spinal surgery techniques. Modifications in scope size and visual field-of-view angulation have enabled both anterior and posterior cervical decompression. Endoscopic burrs, electrocautery, and focused laser technology allow for the least invasive spinal surgical techniques in all age groups and across varying body habitus. Complications include, among others, dural tears, dysesthesia, nerve injury, and infection. PMID:26114086

  3. Global Optimization Techniques for Fluid Flow and Propulsion Devices

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Raj; Tucker, Kevin; Griffin, Lisa; Dorney, Dan; Huber, Frank; Tran, Ken; Turner, James E. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of global optimization techniques for fluid flow and propulsion devices. Details are given on the need, characteristics, and techniques for global optimization. The techniques include response surface methodology (RSM), neural networks and back-propagation neural networks, design of experiments, face centered composite design (FCCD), orthogonal arrays, outlier analysis, and design optimization.
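    Response surface methodology, the first technique listed, can be shown in miniature: sample an expensive simulation at a few design points, fit a cheap polynomial surrogate, and optimise the surrogate instead of the simulation. The objective below is a stand-in for a costly flow solve (assumption).

```python
import numpy as np

def expensive_sim(x):
    """Stand-in for a costly simulation run (assumption); in practice this
    would be a CFD or propulsion-device analysis."""
    return (x - 0.7) ** 2 + 0.5

x_design = np.linspace(0.0, 1.0, 5)              # design-of-experiments points
y = np.array([expensive_sim(x) for x in x_design])
coeffs = np.polyfit(x_design, y, deg=2)          # quadratic response surface
x_opt = float(-coeffs[1] / (2.0 * coeffs[0]))    # vertex of fitted parabola
```

Only five "simulation" calls are needed; all subsequent optimisation work is done on the fitted surface, which is the central economy of RSM.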

  4. Techniques for optimizing inerting in electron processors

    NASA Astrophysics Data System (ADS)

    Rangwalla, I. J.; Korn, D. J.; Nablo, S. V.

    1993-07-01

    The design of an "inert gas" distribution system in an electron processor must satisfy a number of requirements. The first of these is the elimination or control of beam-produced ozone and NOx, which can be transported from the process zone by the product into the work area. Since the tolerable levels for O3 in occupied areas around the processor are <0.1 ppm, good control techniques are required, involving either recombination of the O3 in the beam-heated process zone, or exhausting and dilution of the gas at the processor exit. The second requirement of the inerting system is to provide a suitable environment for completing efficient, free-radical-initiated addition polymerization. In this case, the competition between radical loss through de-excitation and that from O2 quenching must be understood. This group has used gas chromatographic analysis of electron-cured coatings to study the trade-offs of delivered dose, dose rate, and O2 concentration in the process zone, so that the tolerable ranges of parameter excursions can be determined for production quality control purposes. These techniques are described for an ink:coating system on paperboard, where a broad range of process parameters has been studied (D, Ġ, O2). It is then shown how the technique is used to optimize the use of higher-purity (10-100 ppm O2) nitrogen gas for inerting, in combination with lower-purity (2-20,000 ppm O2) non-cryogenically produced gas, as from membrane or pressure swing adsorption generators.

  5. Utilizing object-oriented design to build advanced optimization strategies with generic implementation

    SciTech Connect

    Eldred, M.S.; Hart, W.E.; Bohnhoff, W.J.; Romero, V.J.; Hutchinson, S.A.; Salinger, A.G.

    1996-08-01

    The benefits of applying optimization to computational models are well known, but their range of widespread application to date has been limited. This effort attempts to extend the disciplinary areas to which optimization algorithms may be readily applied through the development and application of advanced optimization strategies capable of handling the computational difficulties associated with complex simulation codes. Towards this goal, a flexible software framework is under continued development for the application of optimization techniques to broad classes of engineering applications, including those with high computational expense and nonsmooth, nonconvex design space features. Object-oriented software design with C++ has been employed as a tool in providing a flexible, extensible, and robust multidisciplinary toolkit for use with computationally intensive simulations. In this paper, demonstrations of advanced optimization strategies using the software are presented in the hybridization and parallel processing research areas. Performance of the advanced strategies is compared with a benchmark nonlinear programming optimization.

  6. Optimizing correlation techniques for improved earthquake location

    USGS Publications Warehouse

    Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.

    2004-01-01

    Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
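    The core cross-correlation measurement can be sketched with synthetic waveforms: the relative arrival time is the lag that maximises the normalised cross-correlation, refined to subsample precision by parabolic interpolation around the peak. The sample interval and pulse shapes below are assumptions, not the network's data.

```python
import numpy as np

dt = 0.01                                   # sample interval, s (assumption)
t = np.arange(0.0, 2.0, dt)
# Ricker-like test pulses; the second "arrival" is delayed by 0.13 s:
pulse = lambda t0: (1 - ((t - t0) / 0.05) ** 2) * np.exp(-((t - t0) / 0.05) ** 2)
a, b = pulse(0.50), pulse(0.63)

# Normalised cross-correlation over all lags:
da, db = a - a.mean(), b - b.mean()
cc = np.correlate(db, da, mode="full")
cc /= np.sqrt((da ** 2).sum() * (db ** 2).sum())

k = int(cc.argmax())
lag = (k - (len(a) - 1)) * dt               # integer-sample relative arrival time
# Parabolic interpolation around the peak gives subsample precision:
num = cc[k - 1] - cc[k + 1]
den = cc[k - 1] - 2.0 * cc[k] + cc[k + 1]
lag += 0.5 * num / den * dt
```

The peak correlation value also serves as the similarity coefficient against which a quality threshold (such as the ~70% cited above) can be applied.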

  7. Advances in procedural techniques--antegrade.

    PubMed

    Wilson, William; Spratt, James C

    2014-05-01

    There have been many technological advances in antegrade CTO PCI, but perhaps the most important has been the evolution of the "hybrid" approach, where ideally there exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited "interventional" collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure, contrast use as well as procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter whilst re-entry will be achieved in the most reproducible and reliable fashion by the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms, which have been developed to guide CTO operators, allow for a better transfer of knowledge and skills to increase uptake and acceptance of CTO PCI as a whole. PMID:24694104

  8. Hybrid inverse lithography techniques for advanced hierarchical memories

    NASA Astrophysics Data System (ADS)

    Xiao, Guangming; Hooker, Kevin; Irby, Dave; Zhang, Yunqiang; Ward, Brian; Cecil, Tom; Hall, Brett; Lee, Mindy; Kim, Dave; Lucas, Kevin

    2014-03-01

    Traditional segment-based model-based OPC methods have been the mainstream mask layout optimization techniques in volume production for memory and embedded memory devices for many device generations. These techniques have been continually optimized over time to meet the ever increasing difficulties of memory and memory periphery patterning. There are a range of difficult issues for patterning embedded memories successfully. These difficulties include the need for a very high level of symmetry and consistency (both within memory cells themselves and between cells) due to circuit effects such as noise margin requirements in SRAMs. Memory cells and access structures consume a large percentage of area in embedded devices so there is a very high return from shrinking the cell area as much as possible. This aggressive scaling leads to very difficult resolution, 2D CD control and process window requirements. Additionally, the range of interactions between mask synthesis corrections of neighboring areas can extend well beyond the size of the memory cell, making it difficult to fully take advantage of the inherent designed cell hierarchy in mask pattern optimization. This is especially true for non-traditional (i.e., less dependent on geometric rule) OPC/RET methods such as inverse lithography techniques (ILT) which inherently have more model-based decisions in their optimizations. New inverse methods such as model-based SRAF placement and ILT are, however, well known to have considerable benefits in finding flexible mask pattern solutions to improve process window, improve 2D CD control, and improve resolution in ultra-dense memory patterns. They also are known to reduce recipe complexity and provide native MRC compliant mask pattern solutions. Unfortunately, ILT is also known to be several times slower than traditional OPC methods due to the increased computational lithographic optimizations it performs. 
In this paper, we describe and present results for a methodology to
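    The inverse-lithography idea described above can be illustrated with a toy model. The following is a minimal sketch, not the authors' production flow: it assumes a simple Gaussian-blur stand-in for the optical model and a sigmoid pixel parameterization of the mask, and optimizes the mask by gradient descent on the aerial-image error.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    # separable Gaussian convolution: a crude stand-in for the optical model
    size = int(6 * sigma) | 1
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def ilt_optimize(target, iters=200, lr=5.0):
    # parameterize mask pixels through a sigmoid so values stay in (0, 1)
    theta = np.zeros_like(target, dtype=float)
    for _ in range(iters):
        mask = 1.0 / (1.0 + np.exp(-theta))
        err = gaussian_blur(mask) - target          # aerial-image error
        # gradient of 0.5 * ||blur(sigmoid(theta)) - target||^2 w.r.t. theta;
        # the blur is self-adjoint because the kernel is symmetric
        grad = gaussian_blur(err) * mask * (1.0 - mask)
        theta -= lr * grad
    return 1.0 / (1.0 + np.exp(-theta))

target = np.zeros((32, 32))
target[12:20, 8:24] = 1.0                           # rectangular contact shape
mask = ilt_optimize(target)
print(np.abs(gaussian_blur(mask) - target).mean())
```

    In a real ILT flow the optical model, mask constraints (MRC), and objective are far richer; the point here is only that the mask is treated as a free pixel field and improved by model-based gradient steps rather than by segment rules.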

  9. Aerodynamic optimization studies on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana

    1995-01-01

    The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.

  10. Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions

    NASA Astrophysics Data System (ADS)

    Carlsen, Robert W.

    Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the use of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices. Comparing results between them is often difficult. Cyclus' flexibility allows comparing effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition, among other effects, are compared in four cases, each using combinations of fleet-based or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases. The largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors.

  11. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

    Recent advances in computed tomographic (CT) scanning techniques such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  12. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  13. Reading Comprehension: Techniques for Assessment and Optimization.

    ERIC Educational Resources Information Center

    Tyler, Sherman W.; And Others

    Three studies examined the nature of individual differences and the role of advance information in reading comprehension. The subjects, 116 college students, read short passages--in some cases preceded by a given type of advance organizer--recalled the information therein, and finally sorted ideas from the passage into groups of similar ideas.…

  14. Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Volpi, Michele; Copa, Loris

    2010-05-01

    The problem of environmental monitoring network optimization (MNO) is one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as the design or redesign of a monitoring network by applying some optimization criterion. The most developed and widespread methods are based on geostatistics (the family of kriging models, conditional stochastic simulations). In geostatistics the variance is mainly used as the optimization criterion, which has some advantages and drawbacks. In the present research we study the application of advanced techniques following from statistical learning theory (SLT) - support vector machines (SVM) - and consider the optimization of monitoring networks for a classification problem (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high-dimensional spaces. The SVM solution maximizes the margin between classes and has good generalization properties for noisy data. The sparse SVM solution is based on support vectors - data which contribute to the solution with nonzero weights. Fundamentally, MNO for classification problems can be considered as the task of selecting new measurement points which increase the quality of spatial classification and reduce the testing error (the error on new, independent measurements). In SLT this is a typical problem of active learning - the selection of new unlabelled points which efficiently reduce the testing error. A classical approach to active learning (margin sampling) is to sample the points closest to the classification boundary. This solution is suboptimal when points (or, generally, the dataset) are redundant for the same class.
In the present research we propose and study two new advanced methods of active learning adapted to the solution of
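    The margin-sampling baseline described above can be sketched in a few lines. This is an illustrative toy, not the authors' method: a simple logistic-regression surrogate stands in for the SVM, and the two-class spatial field is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic two-class spatial field (e.g. two hydrogeological units)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def fit_logistic(Xl, yl, iters=500, lr=0.5):
    # simple logistic-regression surrogate for the SVM classifier
    Xb = np.hstack([Xl, np.ones((len(Xl), 1))])
    w = np.zeros(3)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - yl) / len(yl)
    return w

def decision(Xq, w):
    return np.hstack([Xq, np.ones((len(Xq), 1))]) @ w

# start from 5 labeled measurement points per class, then add 15 by sampling
i0 = np.where(y == 0)[0][:5]
i1 = np.where(y == 1)[0][:5]
labeled = [int(i) for i in np.concatenate([i0, i1])]
unlabeled = [i for i in range(400) if i not in set(labeled)]

for _ in range(15):
    w = fit_logistic(X[labeled], y[labeled])
    # margin sampling: query the unlabeled point closest to the boundary
    pick = unlabeled[int(np.argmin(np.abs(decision(X[unlabeled], w))))]
    labeled.append(pick)
    unlabeled.remove(pick)

acc = ((decision(X, w) > 0) == (y > 0.5)).mean()
print(len(labeled), round(acc, 3))
```

    Each round retrains on the current network and queries the candidate location where the classifier is least confident, which is exactly the behavior the abstract notes becomes suboptimal when many candidates are redundant for the same class.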

  15. Recent advances in multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.

    1992-01-01

    A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.

  16. Robust Neighboring Optimal Guidance for the Advanced Launch System

    NASA Technical Reports Server (NTRS)

    Hull, David G.

    1993-01-01

    In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise-linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise-linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
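    The conversion from optimal control to parameter optimization described above can be sketched on a toy problem. This is an illustrative example, not the launch-vehicle model: a double-integrator rest-to-rest maneuver, with SciPy's BFGS optimizer and a quadratic terminal-state penalty assumed in place of hard constraints.

```python
import numpy as np
from scipy.optimize import minimize

T, N, M = 1.0, 8, 200                 # horizon, control nodes, integration steps
t_nodes = np.linspace(0, T, N + 1)    # junction points of the linear segments
t_fine = np.linspace(0, T, M + 1)

def simulate(u_nodes):
    # piecewise-linear control interpolated onto a fine integration grid
    u = np.interp(t_fine, t_nodes, u_nodes)
    x = v = 0.0
    dt = T / M
    for k in range(M):                # double integrator x'' = u, midpoint control
        um = 0.5 * (u[k] + u[k + 1])
        x += v * dt + 0.5 * um * dt * dt
        v += um * dt
    effort = 0.5 * dt * np.sum(u[:-1] ** 2 + u[1:] ** 2)  # trapezoid of u^2
    return x, v, effort

def objective(u_nodes):
    x, v, effort = simulate(u_nodes)
    # soft terminal constraint: reach x = 1 with v = 0 at t = T
    return effort + 1e4 * ((x - 1.0) ** 2 + v ** 2)

res = minimize(objective, np.zeros(N + 1), method="BFGS")
x, v, effort = simulate(res.x)
print(round(x, 3), round(v, 3), round(effort, 2))
```

    For this toy the known minimum-effort control is the linear profile u(t) = 6 - 12t, which the piecewise-linear parameterization can represent exactly, so the nonlinear programming solution should approach an effort of about 12.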

  17. Recent Advances in Beam Diagnostic Techniques

    NASA Astrophysics Data System (ADS)

    Fiorito, R. B.

    2002-12-01

    We describe recent advances in diagnostics of the transverse phase space of charged particle beams. The emphasis of this paper is on the utilization of beam-based optical radiation for the precise measurement of the spatial distribution, divergence and emittance of relativistic charged particle beams. The properties and uses of incoherent as well as coherent optical transition, diffraction and synchrotron radiation for beam diagnosis are discussed.

  18. Numerical optimization design of advanced transonic wing configurations

    NASA Technical Reports Server (NTRS)

    Cosentino, G. B.; Holst, T. L.

    1984-01-01

    A computationally efficient and versatile technique for use in the design of advanced transonic wing configurations has been developed. A reliable and fast transonic wing flow-field analysis program, TWING, has been coupled with a modified quasi-Newton method, unconstrained optimization algorithm, QNMDIF, to create a new design tool. Fully three-dimensional wing designs utilizing both specified wing pressure distributions and drag-to-lift ratio minimization as design objectives are demonstrated. Because of the high computational efficiency of each of the components of the design code, in particular the vectorization of TWING and the high speed of the Cray X-MP vector computer, the computer time required for a typical wing design is reduced by approximately an order of magnitude over previous methods. In the results presented here, the computed wave drag has been used as the quantity to be optimized (minimized) with great success, yielding wing designs with nearly shock-free (zero wave drag) pressure distributions and very reasonable wing section shapes.

  19. A mesh gradient technique for numerical optimization

    NASA Technical Reports Server (NTRS)

    Willis, E. A., Jr.

    1973-01-01

    A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory are considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.

  20. Optimal control techniques for active noise suppression

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Keeling, S. L.; Silcox, R. J.

    1988-01-01

    Active suppression of noise in a bounded enclosure is considered within the framework of optimal control theory. A sinusoidal pressure field due to exterior offending noise sources is assumed to be known in a neighborhood of interior sensors. The pressure field due to interior controlling sources is assumed to be governed by a nonhomogeneous wave equation within the enclosure and by a special boundary condition designed to accommodate frequency-dependent reflection properties of the enclosure boundary. The form of the controlling sources is determined by considering the steady-state behavior of the system, and it is established that the control strategy proposed is stable and asymptotically optimal.

  1. Advanced rotorcraft control using parameter optimization

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1991-01-01

    A reliable algorithm for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters is presented. The algorithm is part of a design algorithm for an optimal linear dynamic output feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. Through the use of an accurate Padé series approximation, this approach does not require the closed-loop system matrix to be diagonalizable. The algorithm has been included in a control design package for optimal robust low-order controllers. The usefulness of the proposed numerical algorithm has been demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.

  2. Advances in optimal routing through computer networks

    NASA Technical Reports Server (NTRS)

    Paz, I. M.

    1977-01-01

    The optimal routing problem is defined. Progress in solving the problem during the previous decade is reviewed, with special emphasis on technical developments made during the last few years. The relationships between the routing, the throughput, and the switching technology used are discussed and their future trends are reviewed. Economic aspects are also briefly considered. Modern technical approaches for handling the routing problems and, more generally, the flow control problems are reviewed.

  3. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore

  4. Optimization Techniques for College Financial Aid Managers

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.

    2010-01-01

    In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…

  5. Advances in laparoscopic urologic surgery techniques

    PubMed Central

    Abdul-Muhsin, Haidar M.; Humphreys, Mitchell R.

    2016-01-01

    The last two decades witnessed the inception and exponential implementation of key technological advancements in laparoscopic urology. While some of these technologies thrived and became part of daily practice, others are still hindered by major challenges. This review was conducted through a comprehensive literature search in order to highlight some of the most promising technologies in laparoscopic visualization, augmented reality, and insufflation. Additionally, this review will provide an update regarding the current status of single-site and natural orifice surgery in urology. PMID:27134743

  6. Advances in laparoscopic urologic surgery techniques.

    PubMed

    Abdul-Muhsin, Haidar M; Humphreys, Mitchell R

    2016-01-01

    The last two decades witnessed the inception and exponential implementation of key technological advancements in laparoscopic urology. While some of these technologies thrived and became part of daily practice, others are still hindered by major challenges. This review was conducted through a comprehensive literature search in order to highlight some of the most promising technologies in laparoscopic visualization, augmented reality, and insufflation. Additionally, this review will provide an update regarding the current status of single-site and natural orifice surgery in urology. PMID:27134743

  7. Neural network training with global optimization techniques.

    PubMed

    Yamazaki, Akio; Ludermir, Teresa B

    2003-04-01

    This paper presents an approach using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has shown to be very suitable for generating compact and efficient networks. PMID:12923920
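    The simulated-annealing half of the approach can be sketched on a toy network. This is an illustrative example, not the paper's odor-recognition setup: a fixed 2-2-1 network learning XOR, with an assumed geometric cooling schedule and Metropolis acceptance over the weight vector.

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])        # XOR targets

def forward(w, X):
    # unpack a 9-vector into a 2-2-1 network (tanh hidden, sigmoid output)
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# simulated annealing over the weights of a fixed architecture
w = rng.normal(0.0, 1.0, 9)
best, best_loss = w.copy(), loss(w)
temp = 1.0
for _ in range(5000):
    cand = w + rng.normal(0.0, 0.3, 9)    # random neighbor in weight space
    dE = loss(cand) - loss(w)
    # Metropolis rule: always accept improvements, sometimes accept worse
    if dE < 0 or rng.random() < np.exp(-dE / temp):
        w = cand
        if loss(w) < best_loss:
            best, best_loss = w.copy(), loss(w)
    temp *= 0.999                          # geometric cooling

print(round(best_loss, 3))
```

    Tabu Search would add a memory of recently visited moves to forbid cycling; only the annealing component is shown here, and in the paper's flow the annealed solution would then be fine-tuned with backpropagation.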

  8. Advance crew procedures development techniques: Procedures generation program requirements document

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Hawk, M. L.

    1974-01-01

    The Procedures Generation Program (PGP) is described as an automated crew procedures generation and performance monitoring system. Computer software requirements to be implemented in PGP for the Advanced Crew Procedures Development Techniques are outlined.

  9. Advanced airfoil design empirically based transonic aircraft drag buildup technique

    NASA Technical Reports Server (NTRS)

    Morrison, W. D., Jr.

    1976-01-01

    To systematically investigate the potential of advanced airfoils in advanced preliminary design studies, empirical relationships were derived, based on available wind tunnel test data, through which total drag is determined recognizing all major aircraft geometric variables. This technique recognizes a single design lift coefficient and Mach number for each aircraft. Using this technique, drag polars are derived for all Mach numbers up to M_design + 0.05 and for lift coefficients from C_L,design - 0.40 to C_L,design + 0.20.

  10. Advanced Optical Imaging Techniques for Neurodevelopment

    PubMed Central

    Wu, Yicong; Christensen, Ryan; Colón-Ramos, Daniel; Shroff, Hari

    2013-01-01

    Over the past decade, developmental neuroscience has been transformed by the widespread application of confocal and two-photon fluorescence microscopy. Even greater progress is imminent, as recent innovations in microscopy now enable imaging with increased depth, speed, and spatial resolution; reduced phototoxicity; and in some cases without external fluorescent probes. We discuss these new techniques and emphasize their dramatic impact on neurobiology, including the ability to image neurons at depths exceeding 1 mm, to observe neurodevelopment noninvasively throughout embryogenesis, and to visualize neuronal processes or structures that were previously too small or too difficult to target with conventional microscopy. PMID:23831260

  11. Advanced analysis techniques for uranium assay

    SciTech Connect

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.; Beard, C. A.

    2001-01-01

    Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. Active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.

  12. Recent advances in DNA sequencing techniques

    NASA Astrophysics Data System (ADS)

    Singh, Rama Shankar

    2013-06-01

    Successful mapping of the draft human genome in 2001 and the more recent mapping of the human microbiome genome in 2012 have relied heavily on the parallel processing of second-generation/Next Generation Sequencing (NGS) DNA machines, at a cost of several million dollars and long computer processing times. These have been mainly biochemical approaches. Here a systems analysis approach is used to review these techniques by identifying the requirements, specifications, test methods, error estimates, repeatability, reliability, and trends in cost reduction. The first-generation, NGS, and third-generation Single Molecule Real Time (SMRT) detection sequencing methods are reviewed. Based on National Human Genome Research Institute (NHGRI) data, the achieved cost reductions of 1.5 times per year from Sep. 2001 to July 2007, 7 times per year from Oct. 2007 to Apr. 2010, and 2.5 times per year from July 2010 to Jan. 2012 are discussed.

  13. Optimization of Lamb wave inspection techniques

    NASA Astrophysics Data System (ADS)

    Alleyne, David N.; Cawley, Peter

    Some problems associated with Lamb wave inspection techniques are briefly reviewed, and factors to be considered when selecting a practical Lamb wave inspection regime and ways to minimize possible problems are discussed. Tests on a butt-welded steel plate with simulated weld defects of different depths demonstrate that, operating below the a1 cut-off frequency with judicious selection of the testing technique, the presence of defects with depths around 30 percent of the plate thickness can be detected reliably from changes in the shape of the received waveform. The 2D Fourier transform method makes it possible to determine the amplitudes of the different propagating Lamb modes over the full frequency range of the input, yielding information which can be used for defect sizing.

  14. Optimizing ECM techniques against monopulse acquisition and tracking radars

    NASA Astrophysics Data System (ADS)

    Kwon, Ki Hoon

    1989-09-01

    Electronic countermeasure (ECM) techniques against monopulse radars, which are generally employed in surface-to-air missile targeting systems, are presented and analyzed. These ECM techniques are classified into five categories: denial jamming, deception jamming, passive countermeasures, decoys, and destructive countermeasures. The techniques are fully discussed. It was found difficult to quantify the jamming effectiveness of individual techniques, because ECM techniques involve several complex parameters that are usually entangled together. Therefore, the methodological approach for optimizing ECM techniques is based on a purely conceptual analysis of the techniques.

  15. Recent Advances in Multidisciplinary Analysis and Optimization, part 3

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  16. Recent Advances in Multidisciplinary Analysis and Optimization, part 1

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  17. Recent Advances in Multidisciplinary Analysis and Optimization, part 2

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  18. Diagnostics of nonlocal plasmas: advanced techniques

    NASA Astrophysics Data System (ADS)

    Mustafaev, Alexander; Grabovskiy, Artiom; Strakhova, Anastasiya; Soukhomlinov, Vladimir

    2014-10-01

    This talk generalizes our recent results, obtained in different directions of plasma diagnostics. First: the method of the flat single-sided probe, based on expansion of the electron velocity distribution function (EVDF) in a series of Legendre polynomials. It will be demonstrated that a flat probe, oriented at different angles with respect to the discharge axis, allows the full EVDF to be determined in nonlocal plasmas. It is also shown that a cylindrical probe is unable to determine the full EVDF. We propose a solution to this problem through the combined use of the kinetic Boltzmann equation and experimental probe data. Second: magnetic diagnostics. This method is implemented in a Knudsen diode with surface ionization of atoms (KDSI) and is based on measurements of the magnetic characteristics of the KDSI in the presence of a transverse magnetic field. Using magnetic diagnostics we can investigate a wide range of plasma processes: from the scattering cross-sections of electrons to plasma-surface interactions. Third: a noncontact diagnostic method for direct measurement of the EVDF in remote plasma objects by a combination of the flat single-sided probe technique and the magnetic polarization Hanley method.

  19. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
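    The regression-through-optimization idea can be sketched as follows. This is an illustrative example with synthetic two-term Prony data; the SciPy solver and variable names are assumptions, not the PRONY/VMA tool itself. The time constants are log-parameterized so they stay positive while being optimized jointly with the coefficients, instead of being fixed in advance.

```python
import numpy as np
from scipy.optimize import least_squares

# synthetic stress-relaxation data from a known two-term Prony series
t = np.logspace(-2, 2, 60)
G_data = 1.0 + 3.0 * np.exp(-t / 0.1) + 1.5 * np.exp(-t / 5.0)

def residuals(p):
    # p = [Ginf, g1, g2, log_tau1, log_tau2]; the log keeps tau positive
    Ginf, g1, g2, lt1, lt2 = p
    model = (Ginf
             + g1 * np.exp(-t / np.exp(lt1))
             + g2 * np.exp(-t / np.exp(lt2)))
    return model - G_data

# both the coefficients AND the exponential time constants are free unknowns
fit = least_squares(residuals, x0=[0.5, 1.0, 1.0, np.log(0.05), np.log(1.0)])
Ginf, g1, g2, lt1, lt2 = fit.x
print(round(Ginf, 3), sorted([round(np.exp(lt1), 3), round(np.exp(lt2), 3)]))
```

    A traditional fit would freeze tau1 and tau2 at guessed values and solve only the linear coefficients; letting the optimizer adjust the exponents as well is what yields the order-of-magnitude improvement in correlation the report describes.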

  20. Evaluation of Advanced Retrieval Techniques in an Experimental Online Catalog.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1992-01-01

    Discusses subject searching problems in online library catalogs; explains advanced information retrieval (IR) techniques; and describes experiments conducted on a test collection database, CHESHIRE (California Hybrid Extended SMART for Hypertext and Information Retrieval Experimentation), which was created to evaluate IR techniques in online…

  1. Innovative Tools Advance Revolutionary Weld Technique

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The iconic, orange external tank of the space shuttle launch system not only contains the fuel used by the shuttle's main engines during liftoff but also comprises the shuttle's backbone, supporting the space shuttle orbiter and solid rocket boosters. Given the tank's structural importance and the extreme forces (7.8 million pounds of thrust load) and temperatures it encounters during launch, the welds used to construct the tank must be highly reliable. Variable polarity plasma arc welding, developed for manufacturing the external tank and later employed for building the International Space Station, was until 1994 the best process for joining the aluminum alloys used during construction. That year, Marshall Space Flight Center engineers began experimenting with a relatively new welding technique called friction stir welding (FSW), developed in 1991 by The Welding Institute, of Cambridge, England. FSW differs from traditional fusion welding in that it is a solid-state welding technique, using frictional heat and motion to join structural components without actually melting any of the material. The weld is created by a shouldered pin tool that is plunged into the seam of the materials to be joined. The tool traverses the line while rotating at high speeds, generating friction that heats and softens but does not melt the metal. (The heat produced approaches about 80 percent of the metal's melting temperature.) The pin tool's rotation crushes and stirs the plasticized metal, extruding it along the seam as the tool moves forward. The material cools and consolidates, resulting in a weld with superior mechanical properties compared to those of fusion welds. The innovative FSW technology promises a number of attractive benefits. Because the welded materials are not melted, many of the undesirables associated with fusion welding (porosity, cracking, shrinkage, and distortion of the weld) are minimized or avoided.
The process is more energy efficient, safe

  2. Optimal and suboptimal control technique for aircraft spin recovery

    NASA Technical Reports Server (NTRS)

    Young, J. W.

    1974-01-01

    An analytic investigation has been made of procedures for effecting recovery from equilibrium spin conditions for three assumed aircraft configurations. Three approaches which utilize conventional aerodynamic controls are investigated. Included are a constant control recovery mode, optimal recoveries, and a suboptimal control logic patterned after optimal recovery results. The optimal and suboptimal techniques are shown to yield a significant improvement in recovery performance over that attained by using a constant control recovery procedure.

  3. An investigation of optimization techniques for drawing computer graphics displays

    NASA Technical Reports Server (NTRS)

    Stocker, F. R.

    1979-01-01

    Techniques for reducing vector data plotting time are studied. The choice of tolerances in optimization and the application of optimization to plots produced on real time interactive display devices are discussed. All results are developed relative to plotting packages and support hardware so that results are useful in real world situations.

  4. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    SciTech Connect

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
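    The core CADIS arithmetic described above can be illustrated on a toy mesh. This is a simplified sketch, not MAVRIC or ADVANTG: the ten-cell geometry and the exponential importance profile are invented for the example.

```python
import numpy as np

# Toy 1-D shielding mesh: an isotropic source in cell 0, detector beyond
# cell 9. The adjoint flux (importance) grows toward the detector.
adjoint_flux = np.exp(0.5 * np.arange(10))   # importance of each mesh cell
source = np.zeros(10)
source[0] = 1.0                              # forward source strength per cell

# CADIS: the biased source pdf is proportional to q * phi_adjoint, and the
# weight-window centers are R / phi_adjoint, so particles sampled from the
# biased source are born exactly at their window center.
R = np.sum(source * adjoint_flux)            # adjoint estimate of the response
biased_source = source * adjoint_flux / R    # normalized biased source pdf
ww_center = R / adjoint_flux                 # target particle weight per cell

# FW-CADIS would additionally weight the *adjoint* source by the reciprocal
# of a forward flux estimate, flattening the relative uncertainty over the
# whole mesh tally rather than optimizing a single detector response.
```

    The consistency between the biased source and the weight windows (birth weight equals window center) is the property that makes CADIS avoid immediate splitting or rouletting at birth.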

  5. Optimal Advanced Credit Releases in Ecosystem Service Markets

    NASA Astrophysics Data System (ADS)

    BenDor, Todd K.; Guo, Tianshu; Yates, Andrew J.

    2014-03-01

    Ecosystem service markets are popular policy tools for ecosystem protection. Advanced credit releases are an important factor affecting the supply side of ecosystem markets. Under an advanced credit release policy, regulators give ecosystem suppliers a fraction of the total ecosystem credits generated by a restoration project before it is verified that the project actually achieves the required ecological thresholds. In spite of their prominent role in ecosystem markets, there is virtually no regulatory or research literature on the proper design of advanced credit release policies. Using U.S. aquatic ecosystem markets as an example, we develop a principal-agent model of the behavior of regulators and wetland/stream mitigation bankers to determine and explore the optimal degree of advance credit release. The model highlights the tension between regulators' desire to induce market participation, while at the same time ensuring that bankers successfully complete ecological restoration. Our findings suggest several simple guidelines for strengthening advanced credit release policy.

  6. Advances in Poly(4-aminodiphenylaniline) Nanofibers Preparation by Electrospinning Technique.

    PubMed

    Della Pina, C; Busacca, C; Frontera, P; Antonucci, P L; Scarpino, L A; Sironi, A; Falletta, E

    2016-05-01

    Polyaniline (PANI) nanofibers are drawing a great deal of interest from academia and industry due to their multiple applications, especially in the biomedical field. PANI nanofibers were successfully electrospun for the first time by MacDiarmid and co-workers at the beginning of the millennium, and since then many efforts have been made to improve their quality. However, traditional PANI prepared from the aniline monomer shows some drawbacks, such as the presence of toxic (e.g., benzidine) and inorganic (salts and metals) co-products that complicate polymer post-treatment, and low solubility in common organic solvents, which makes processing by the electrospinning technique difficult. Some industrial sectors, such as the medical and biomedical, need to employ materials free from toxic and polluting species. In this regard, the oxidative polymerization of N-(4-aminophenyl)aniline, the aniline dimer, to produce poly(4-aminodiphenylaniline), P4ADA, a kind of PANI, represents an innovative alternative to the traditional synthesis, because the obtained polymer is free from carcinogenic and/or polluting co-products and, moreover, is more soluble than traditional PANI. This latter feature can be exploited to obtain P4ADA nanofibers by the electrospinning technique. In this paper we report the advances obtained in the electrospinning of P4ADA nanofibers. A comparison among polyethylene oxide (PEO), polymethyl methacrylate (PMMA), and polystyrene (PS) as the second polymer used to facilitate the electrospinning process is shown. In order to increase the conductivity of P4ADA nanofibers, two strategies were adopted and compared: selective removal of the insulating binder from the electrospun nanofibers by a rinsing treatment, and optimizing the minimum amount of binder necessary for the electrospinning process. Moreover, the effect of the PEO/P4ADA weight ratio on fiber morphology and conductivity is highlighted. PMID:27483933

  7. Advances in gamma titanium aluminides and their manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Kothari, Kunal; Radhakrishnan, Ramachandran; Wereley, Norman M.

    2012-11-01

    Gamma titanium aluminides display attractive properties for high temperature applications. For over a decade in the 1990s, the attractive properties of titanium aluminides were outweighed by difficulties encountered in processing and machining at room temperature. But advances in manufacturing technologies, a deeper understanding of titanium aluminide microstructure and deformation mechanisms, and advances in micro-alloying have led to the production of gamma titanium aluminide sheets. An in-depth review of key advances in gamma titanium aluminides is presented, including microstructure, deformation mechanisms, and alloy development. Traditional manufacturing techniques such as ingot metallurgy and investment casting are reviewed, and advances via powder metallurgy based manufacturing techniques are discussed. Finally, manufacturing challenges facing gamma titanium aluminides, as well as avenues to overcome them, are discussed.

  8. The application of advanced analytical techniques to direct coal liquefaction

    SciTech Connect

    Brandes, S.D.; Winschel, R.A.; Burke, F.P.; Robbins, G.A.

    1991-12-31

    Consol is coordinating a program designed to bridge the gap between the advanced, modern techniques of the analytical chemist and the application of those techniques by the direct coal liquefaction process developer, and to advance our knowledge of the process chemistry of direct coal liquefaction. The program is designed to provide well-documented samples to researchers who are utilizing techniques potentially useful for the analysis of coal-derived samples. The choice of samples and techniques was based on an extensive survey made by Consol of the present status of analytical methodology associated with direct coal liquefaction technology. Sources of information included process developers and analytical chemists. The survey identified a number of broadly characterizable needs, including: a better understanding of the nature of the high molecular weight, non-distillable residual materials (both soluble and insoluble) in the process streams; improved techniques for molecular characterization, heteroatom and hydrogen speciation, and a knowledge of the hydrocarbon structural changes across coal liquefaction systems; better methods for sample separation; application of advanced data analysis methods; the use of more advanced predictive models; on-line analytical techniques; and better methods for catalyst monitoring.

  9. Advanced liner-cooling techniques for gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Norgren, C. T.; Riddlebaugh, S. M.

    1985-01-01

    Component research for advanced small gas turbine engines is currently underway at the NASA Lewis Research Center. As part of this program, a basic reverse-flow combustor geometry is maintained while different advanced liner wall cooling techniques are investigated. Performance and liner cooling effectiveness of the experimental combustor configuration featuring counter-flow film-cooled panels are presented and compared with two previously reported combustors featuring splash film-cooled liner walls and transpiration-cooled liner walls (Lamilloy).

  10. [Advanced online search techniques and dedicated search engines for physicians].

    PubMed

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines. PMID:18357673

  11. 75 FR 44015 - Certain Semiconductor Products Made by Advanced Lithography Techniques and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... COMMISSION Certain Semiconductor Products Made by Advanced Lithography Techniques and Products Containing... importation of certain semiconductor products made by advanced lithography techniques and products containing... certain semiconductor products made by advanced lithography techniques or products containing same...

  12. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models.

    PubMed

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
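    The generic, unconstrained PSO loop that such methods build on can be sketched as follows. This is a minimal illustration, not ProjPSO: the swarm parameters are conventional constriction-style choices and the sphere test function is invented for the example.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal particle swarm optimizer (illustrative sketch, not ProjPSO)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))       # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # per-particle best positions
    pval = np.apply_along_axis(f, 1, x)        # per-particle best values
    g = pbest[pval.argmin()].copy()            # swarm (global) best
    w, c1, c2 = 0.72, 1.49, 1.49               # inertia and acceleration
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                    # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()        # update global best
    return g, pval.min()

best, fmin = pso(lambda z: np.sum(z ** 2), dim=3)
```

    ProjPSO's contribution is, roughly, projecting each candidate back onto the feasible simplex of mixture weights after the position update; the loop structure itself is unchanged.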

  14. Process sequence optimization for digital microfluidic integration using EWOD technique

    NASA Astrophysics Data System (ADS)

    Yadav, Supriya; Joyce, Robin; Sharma, Akash Kumar; Sharma, Himani; Sharma, Niti Nipun; Varghese, Soney; Akhtar, Jamil

    2016-04-01

    Micro/nano-fluidic MEMS biosensors are devices that detect biomolecules. The emerging micro/nano-fluidic devices provide high throughput and high repeatability with very low response time and reduced device cost compared to traditional devices. This article presents the experimental details for process sequence optimization of digital microfluidics (DMF) using electrowetting-on-dielectric (EWOD). Stress-free thick film deposition of silicon dioxide using PECVD and the subsequent process steps for the EWOD technique have been optimized in this work.

  15. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

    The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from rare mention of the methods in the 1970s to an increased effort in the early 1980s. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle, where they are most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This limitation on the analysis side will continue to be a major factor restricting the application of optimization to vehicle design.

  16. Advanced Marketing Core Curriculum. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; And Others

    This document contains duties and tasks, multiple-choice test items, and other assessment techniques for Missouri's advanced marketing core curriculum. The core curriculum begins with a list of 13 suggested textbook resources. Next, nine duties with their associated tasks are given. Under each task appears one or more citations to appropriate…

  17. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained, multiple objective function problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced for each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure and make it suitable for design applications in an industrial setting.
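    The K-S aggregation with weight factors and a BFGS solve can be sketched in a few lines. This is an illustrative toy: the two quadratic objectives standing in for the aerodynamic and sonic boom metrics, the weights, and the aggregation parameter are all invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def ks(values, rho=10.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative max()."""
    m = np.max(values)
    return m + np.log(np.sum(np.exp(rho * (np.asarray(values) - m)))) / rho

# Designer-chosen weight factors emphasizing the second objective.
weights = np.array([1.0, 2.0])

def composite(x):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2       # stand-in "aerodynamic" objective
    f2 = x[0] ** 2 + (x[1] - 1.0) ** 2       # stand-in "sonic boom" objective
    return ks(weights * np.array([f1, f2]))  # single unconstrained objective

# The transformed problem is unconstrained, so a quasi-Newton method applies.
res = minimize(composite, x0=np.zeros(2), method="BFGS")
```

    Raising the weight on an objective pulls the K-S envelope, and hence the optimum, toward that objective's minimizer, which is the "emphasis" mechanism the abstract describes.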

  18. Stochastic optimization techniques for NDE of bridges using vibration signatures

    NASA Astrophysics Data System (ADS)

    Yi, Jin-Hak; Feng, Maria Q.

    2003-08-01

    Baseline model updating is the first step in model-based nondestructive evaluation of civil infrastructure, and much research has been devoted to obtaining a more reliable baseline model. In this study, heuristic optimization techniques (also called stochastic optimization techniques), including the genetic algorithm, simulated annealing, and tabu search, were investigated for constructing a reliable baseline model of an instrumented new highway bridge, and the results were compared with those of the conventional sensitivity method. The preliminary finite element model of the bridge was successfully updated to a baseline model based on measured vibration data.
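    One of the cited heuristics, simulated annealing, can be sketched for this kind of frequency-matching model update. This is a toy: the two "measured" frequencies and the single stiffness-scaling parameter are invented, not the bridge model from the study.

```python
import math
import random

random.seed(1)
measured = [1.2, 3.3]  # "measured" natural frequencies (Hz), invented

def model_freqs(k):
    """Toy model: both frequencies scale with the sqrt of a stiffness factor."""
    return [1.0 * math.sqrt(k), 2.75 * math.sqrt(k)]

def misfit(k):
    """Squared error between model and measured frequencies."""
    return sum((f - g) ** 2 for f, g in zip(model_freqs(k), measured))

# Simulated annealing: accept worse moves with prob exp(-delta/T) while
# the temperature T cools, so the search can escape local minima early on.
k, best, T = 1.0, 1.0, 1.0
for _ in range(2000):
    cand = k + random.gauss(0.0, 0.05)
    if cand > 0.0:
        delta = misfit(cand) - misfit(k)
        if delta < 0.0 or random.random() < math.exp(-delta / T):
            k = cand
            if misfit(k) < misfit(best):
                best = k
    T *= 0.997
```

    The genetic algorithm and tabu search differ in how they propose and filter candidate models, but all three share this derivative-free structure, which is why they suit FE updating where gradients of the misfit are awkward to obtain.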

  19. Advanced launch system trajectory optimization using suboptimal control

    NASA Technical Reports Server (NTRS)

    Shaver, Douglas A.; Hull, David G.

    1993-01-01

    The maximum-final mass trajectory of a proposed configuration of the Advanced Launch System is presented. A model for the two-stage rocket is given; the optimal control problem is formulated as a parameter optimization problem; and the optimal trajectory is computed using a nonlinear programming code called VF02AD. Numerical results are presented for the controls (angle of attack and velocity roll angle) and the states. After the initial rotation, the angle of attack goes to a positive value to keep the trajectory as high as possible, returns to near zero to pass through the transonic regime and satisfy the dynamic pressure constraint, returns to a positive value to keep the trajectory high and to take advantage of minimum drag at positive angle of attack due to aerodynamic shading of the booster, and then rolls off to negative values to satisfy the constraints. Because the engines cannot be throttled, the maximum dynamic pressure occurs at a single point; there is no maximum dynamic pressure subarc. To test approximations for obtaining analytical solutions for guidance, two additional optimal trajectories are computed: one using untrimmed aerodynamics and one using no atmospheric effects except for the dynamic pressure constraint. It is concluded that untrimmed aerodynamics has a negligible effect on the optimal trajectory and that approximate optimal controls should be able to be obtained by treating atmospheric effects as perturbations.

  20. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously

  1. Techniques for trajectory optimization using a hybrid computer

    NASA Technical Reports Server (NTRS)

    Neely, P. L.

    1975-01-01

    The use of a hybrid computer in the solution of trajectory optimization problems is described. The solution technique utilizes the indirect method and requires iterative computation of the initial condition vector of the co-state variables. Convergence of the iteration is assisted by feedback switching and contour modification. A simulation of the method in an on-line updating scheme is presented.

  2. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  3. Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique

    NASA Astrophysics Data System (ADS)

    Oonsivilai, Anant; Marungsri, Boonruang

    2008-10-01

    An application of an intelligent search technique to find optimal parameters of a power system stabilizer (PSS) with a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of intelligent search techniques in tuning the PID-PSS parameters. Damping of system oscillations is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.
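    The Ziegler-Nichols closed-loop rules used as the comparison baseline are simple to state in code; the ultimate gain and period below are example values, not figures from the paper.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols closed-loop tuning rules for a PID controller.

    Ku: ultimate gain at which the closed loop sustains oscillation.
    Tu: period of that sustained oscillation.
    Returns (Kp, Ki, Kd) for u = Kp*e + Ki*integral(e) + Kd*de/dt.
    """
    Kp = 0.6 * Ku
    Ti = 0.5 * Tu        # integral time
    Td = 0.125 * Tu      # derivative time
    return Kp, Kp / Ti, Kp * Td

Kp, Ki, Kd = ziegler_nichols_pid(Ku=4.0, Tu=2.0)
```

    Because these rules fix the gains from just two measured quantities, a search method like APSO, which tunes all three gains against a damping objective directly, has room to outperform them.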

  4. Satellite tracking by combined optimal estimation and control techniques.

    NASA Technical Reports Server (NTRS)

    Dressler, R. M.; Tabak, D.

    1971-01-01

    Combined optimal estimation and control techniques are applied for the first time to satellite tracking systems. Both radio antenna and optical tracking systems of NASA are considered. The optimal estimation is accomplished using an extended Kalman filter resulting in an estimated state of the satellite and of the tracking system. This estimated state constitutes an input to the optimal controller. The optimal controller treats a linearized system with a quadratic performance index. The maximum principle is applied and a steady-state approximation to the resulting Riccati equation is obtained. A computer program, RATS, implementing this algorithm is described. A feasibility study of real-time implementation, tracking simulations, and parameter sensitivity studies are also reported.
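    The "steady-state approximation to the resulting Riccati equation" can be sketched for a scalar discrete-time system by iterating the Riccati recursion to a fixed point. This is an illustrative reduction: the tracking system in the paper is multivariable and continuous, and the RATS program is not reproduced here; the system and cost weights below are invented.

```python
# Scalar linear-quadratic regulator: x[k+1] = a*x[k] + b*u[k],
# stage cost q*x^2 + r*u^2, solved by iterating the Riccati recursion.
a, b = 0.95, 0.1      # plant dynamics (illustrative values)
q, r = 1.0, 0.01      # state and control cost weights

P = q
for _ in range(1000):
    K = (a * b * P) / (r + b * b * P)   # optimal feedback gain, u = -K*x
    P = q + a * a * P - a * b * P * K   # Riccati update toward steady state

# At the fixed point, P satisfies the algebraic Riccati equation.
residual = abs(P - (q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)))
```

    The extended Kalman filter gain in the estimation half is obtained from the dual of this same recursion, with the covariance playing the role of P, which is why the combined estimation-and-control scheme reuses so much machinery.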

  5. Advanced Packaging Materials and Techniques for High Power TR Module: Standard Flight vs. Advanced Packaging

    NASA Technical Reports Server (NTRS)

    Hoffman, James Patrick; Del Castillo, Linda; Miller, Jennifer; Jenabi, Masud; Hunter, Donald; Birur, Gajanana

    2011-01-01

    The higher output power densities required of modern radar architectures, such as the proposed DESDynI [Deformation, Ecosystem Structure, and Dynamics of Ice] SAR [Synthetic Aperture Radar] Instrument (or DSI) require increasingly dense high power electronics. To enable these higher power densities, while maintaining or even improving hardware reliability, requires advances in integrating advanced thermal packaging technologies into radar transmit/receive (TR) modules. New materials and techniques have been studied and compared to standard technologies.

  6. Advancing Techniques of Radiation Therapy for Rectal Cancer.

    PubMed

    Patel, Sagar A; Wo, Jennifer Y; Hong, Theodore S

    2016-07-01

    Since the advent of radiation therapy for rectal cancer, there has been continual investigation of advancing technologies and techniques that allow for improved dose conformality to target structures while limiting irradiation of surrounding normal tissue. For locally advanced disease, intensity modulated and proton beam radiation therapy both provide more highly conformal treatment volumes that reduce dose to organs at risk, though the clinical benefit in terms of toxicity reduction is unclear. For early stage disease, endorectal contact therapy and high-dose rate brachytherapy may be a definitive treatment option for patients who are poor operative candidates or those with low-lying tumors that desire sphincter-preservation. Finally, there has been growing evidence that supports stereotactic body radiotherapy as a safe and effective salvage treatment for the minority of patients that locally recur following trimodality therapy for locally advanced disease. This review addresses these topics that remain areas of active clinical investigation. PMID:27238474

  7. An Advanced Time Averaging Modelling Technique for Power Electronic Circuits

    NASA Astrophysics Data System (ADS)

    Jankuloski, Goce

    For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.
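    For reference, the SSA baseline that the ATAM improves upon can be sketched for an ideal buck converter: the switched dynamics are replaced by their duty-cycle-weighted average. This is a standard textbook model; the component values are illustrative and unrelated to the thesis.

```python
# State-space averaged model of an ideal buck converter:
#   L di/dt = d*Vin - v,   C dv/dt = i - v/R,   d = duty cycle.
Vin, Lval, Cval, R, d = 12.0, 1e-3, 100e-6, 10.0, 0.5

i = v = 0.0          # inductor current, capacitor voltage
dt = 1e-6            # forward-Euler step (s)
for _ in range(200_000):          # simulate 0.2 s, well past the transient
    di = (d * Vin - v) / Lval     # averaged inductor equation
    dv = (i - v / R) / Cval       # averaged capacitor equation
    i += di * dt
    v += dv * dt
```

    In steady state the averaged model predicts v = d*Vin and i = v/R; what it discards is the intra-cycle ripple, which is one of the averaging effects the ATAM is designed to capture.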

  8. Technology development of fabrication techniques for advanced solar dynamic concentrators

    NASA Technical Reports Server (NTRS)

    Richter, Scott W.

    1991-01-01

    The objective of the advanced concentrator program is to develop the technology that will lead to lightweight, highly reflective, accurate, scaleable, and long lived space solar dynamic concentrators. The advanced concentrator program encompasses new and innovative concepts, fabrication techniques, materials selection, and simulated space environmental testing. Fabrication techniques include methods of fabricating the substrates and coating substrate surfaces to produce high quality optical surfaces, acceptable for further coating with vapor deposited optical films. The selected materials to obtain a high quality optical surface include microsheet glass and Eccocoat EP-3 epoxy, with DC-93-500 selected as a candidate silicone adhesive and levelizing layer. The following procedures are defined: cutting, cleaning, forming, and bonding microsheet glass. Procedures are also defined for surface cleaning, and EP-3 epoxy application. The results and analyses from atomic oxygen and thermal cycling tests are used to determine the effects of orbital conditions in a space environment.

  10. Advanced techniques for monitoring human tolerance to positive Gz accelerations

    NASA Technical Reports Server (NTRS)

    Pelligra, R.; Sandler, H.; Rositano, S.; Skrettingland, K.; Mancini, R.

    1973-01-01

    Tolerance to positive g accelerations was measured in ten normal male subjects using both standard and advanced techniques. In addition to routine electrocardiogram, heart rate, respiratory rate, and infrared television, monitoring techniques during acceleration exposure included measurement of peripheral vision loss, noninvasive temporal, brachial, and/or radial arterial blood flow, and automatic measurement of indirect systolic and diastolic blood pressure at 60-sec intervals. Although brachial and radial arterial flow measurements reflected significant cardiovascular changes during and after acceleration, they were inconsistent indices of the onset of grayout or blackout. Temporal arterial blood flow, however, showed a high correlation with subjective peripheral light loss.

  11. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots, and security applications. The cost of the measurement system is extremely high; therefore, a simulation tool was designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
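The geometric core of an AABB-based LRF simulation is a ray/box intersection test. A minimal slab-method sketch follows; the function name and structure are illustrative, not taken from the paper.

```python
# Slab-method ray/AABB intersection: the beam of a simulated laser range
# finder is a ray; the nearest positive hit distance is the simulated range.

def ray_aabb(origin, direction, box_min, box_max):
    """Return the entry distance along the ray, or None if the box is missed."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this slab
            if o < lo or o > hi:
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far or t_far < 0:
                return None
    return max(t_near, 0.0)

# A beam along +x from the origin hits a unit box whose near face is 2 m away.
hit = ray_aabb((0, 0, 0), (1, 0, 0), (2, -0.5, -0.5), (3, 0.5, 0.5))
miss = ray_aabb((0, 0, 0), (0, 1, 0), (2, -0.5, -0.5), (3, 0.5, 0.5))
```

In a full simulator this test is run per beam against a hierarchy of boxes (or offloaded to CUDA, as the paper's alternative technique does) before any exact geometry is intersected.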

  12. Data Compression Techniques for Advanced Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Bradley, William G.

    1998-01-01

    Advanced space transportation systems, including vehicle state-of-health systems, will produce large amounts of data which must be stored on board the vehicle and/or transmitted to the ground and stored. The cost of storage or transmission of the data could be reduced if the number of bits required to represent the data is reduced by the use of data compression techniques. Most of the work done in this study was rather generic and could apply to many data compression systems, but the first application area to be considered was launch vehicle state-of-health telemetry systems. Both lossless and lossy compression techniques were considered in this study.
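As a concrete example of the lossless end of the spectrum, run-length encoding collapses repeated samples, which telemetry streams with long quiescent stretches often contain. This toy encoder is only an illustration of bit-count reduction, not the study's chosen method.

```python
# Toy lossless run-length encoder/decoder: runs of repeated bytes become
# (value, count) pairs, and decoding reproduces the input exactly.

def rle_encode(data):
    """Collapse runs of repeated bytes into [value, count] pairs."""
    out = []
    for b in data:
        if out and out[-1][0] == b and out[-1][1] < 255:
            out[-1][1] += 1
        else:
            out.append([b, 1])
    return out

def rle_decode(pairs):
    return bytes(b for b, n in pairs for _ in range(n))

# A quiescent sensor channel: long constant runs with a brief transient.
raw = b"\x00" * 40 + b"\x07\x07\x09" + b"\xff" * 20
packed = rle_encode(raw)
assert rle_decode(packed) == raw           # lossless round trip
ratio = len(raw) / (2 * len(packed))       # 2 bytes per (value, count) pair
```

Lossy techniques trade this perfect reconstruction for higher ratios, which is why the study considered both families.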

  13. The Advanced Space Plant Culture Device with Live Imaging Technique

    NASA Astrophysics Data System (ADS)

    Zheng, Weibo; Zhang, Tao; Tong, Guanghui

    Live imaging techniques, including color and fluorescent imaging, are very important and useful for space life science. The Advanced Space Plant Culture Device (ASPCD) with live imaging technique, developed for the Chinese Spacecraft, is introduced in this paper. The ASPCD had two plant experimental chambers. Three cameras (two color cameras and one fluorescent camera) were installed in the two chambers. The fluorescent camera could observe flowering genes labeled by GFP. The lighting, nutrient supply, temperature control, and water recycling were all independent in each chamber. The ASPCD will be applied to investigate the growth and development of higher plants under microgravity conditions on board the Chinese Spacecraft.

  14. Three-dimensional hybrid grid generation using advancing front techniques

    NASA Technical Reports Server (NTRS)

    Steinbrenner, John P.; Noack, Ralph W.

    1995-01-01

    A new 3-dimensional hybrid grid generation technique has been developed, based on ideas of advancing fronts for both structured and unstructured grids. In this approach, structured grids are first generated independently around individual components of the geometry. Fronts are initialized on these structured grids and advanced outward so that new cells are extracted directly from the structured grids. Employing typical advancing front techniques, cells are rejected if they intersect the existing front or fail other criteria. When no more viable structured cells exist, further cells are advanced in an unstructured manner to close off the overall domain, resulting in a grid of 'hybrid' form. There are two primary advantages to the hybrid formulation. First, generating blocks with limited regard to topology eliminates the bottleneck encountered when a multiple-block system is used to fully encapsulate a domain. Individual blocks may be generated free of external constraints, which will significantly reduce the generation time. Second, grid points near the body (presumably with high aspect ratio) will still maintain a structured (non-triangular or tetrahedral) character, thereby maximizing grid quality and solution accuracy near the surface.

  15. The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

  16. A method to objectively optimize coral bleaching prediction techniques

    NASA Astrophysics Data System (ADS)

    van Hooidonk, R. J.; Huber, M.

    2007-12-01

    Thermally induced coral bleaching is a global threat to coral reef health. Methodologies, e.g. the Degree Heating Week technique, have been developed to predict bleaching induced by thermal stress by utilizing remotely sensed sea surface temperature (SST) observations. These techniques can be used as a management tool for Marine Protected Areas (MPA). Predictions are valuable to decision makers and stakeholders on weekly to monthly time scales and can be employed to build public awareness and support for mitigation. The bleaching problem is only expected to worsen because global warming poses a major threat to coral reef health. Indeed, predictive bleaching methods combined with climate model output have been used to forecast the global demise of coral reef ecosystems within coming decades due to climate change. Accuracy of these predictive techniques has not been quantitatively characterized despite the critical role they play. Assessments have typically been limited, qualitative or anecdotal, or more frequently they are simply unpublished. Quantitative accuracy assessment, using well established methods and skill scores often used in meteorology and medical sciences, will enable objective optimization of existing predictive techniques. To accomplish this, we will use existing remotely sensed data sets of sea surface temperature (AVHRR and TMI), and predictive values from techniques such as the Degree Heating Week method. We will compare these predictive values with observations of coral reef health and calculate applicable skill scores (Peirce Skill Score, Hit Rate and False Alarm Rate). We will (a) quantitatively evaluate the accuracy of existing coral reef bleaching predictive methods against state-of-the-art reef health databases, and (b) present a technique that will objectively optimize the predictive method for any given location. We will illustrate this optimization technique for reefs located in Puerto Rico and the US Virgin Islands.
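The skill scores named above reduce to simple ratios over a 2x2 contingency table of forecasts versus observations. A sketch with hypothetical counts, following standard forecast-verification definitions:

```python
# Skill scores from a 2x2 contingency table of bleaching forecasts vs.
# observations. Counts below are invented for illustration.

def scores(hits, misses, false_alarms, correct_negatives):
    hit_rate = hits / (hits + misses)                      # a.k.a. POD
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    peirce = hit_rate - false_alarm_rate                   # Peirce Skill Score
    return hit_rate, false_alarm_rate, peirce

# Hypothetical verification of a Degree-Heating-Week-style predictor
# against a reef health database.
h, f, p = scores(hits=30, misses=10, false_alarms=5, correct_negatives=55)
```

A Peirce Skill Score of 0 means no skill over chance and 1 means perfect discrimination, so objective optimization amounts to tuning the predictor's threshold to maximize this score at each location.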

  17. Added Value of Assessing Adnexal Masses with Advanced MRI Techniques

    PubMed Central

    Thomassin-Naggara, I.; Balvay, D.; Rockall, A.; Carette, M. F.; Ballester, M.; Darai, E.; Bazot, M.

    2015-01-01

    This review will present the added value of perfusion and diffusion MR sequences to characterize adnexal masses. These two functional MR techniques are readily available in routine clinical practice. We will describe the acquisition parameters and a method of analysis to optimize their added value compared with conventional images. We will then propose a model of interpretation that combines the anatomical and morphological information from conventional MRI sequences with the functional information provided by perfusion and diffusion weighted sequences. PMID:26413542

  18. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
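The third engine's approach, Levenberg-Marquardt, can be sketched compactly: damp the Gauss-Newton normal equations and adapt the damping as steps succeed or fail. This is a generic sketch of the algorithm fitting an invented two-parameter model to synthetic data, not MULTIVAR's actual code.

```python
import math

# Levenberg-Marquardt fit of y = a * exp(b * x), minimizing the sum of
# squared residuals as MULTIVAR's engines do. Model and data are invented.

def residuals(a, b, xs, ys):
    return [a * math.exp(b * x) - y for x, y in zip(xs, ys)]

def sse(r):
    return sum(v * v for v in r)

def lm_fit(xs, ys, a=1.0, b=0.1, lam=1e-3, iters=100):
    for _ in range(iters):
        r = residuals(a, b, xs, ys)
        # Jacobian of each residual with respect to (a, b).
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        g = [sum(Ji[k] * ri for Ji, ri in zip(J, r)) for k in range(2)]
        H = [[sum(Ji[i] * Ji[j] for Ji in J) for j in range(2)] for i in range(2)]
        H[0][0] *= 1 + lam                     # LM damping on the diagonal
        H[1][1] *= 1 + lam
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        da = (-g[0] * H[1][1] + g[1] * H[0][1]) / det
        db = (-H[0][0] * g[1] + H[1][0] * g[0]) / det
        try:
            new_sse = sse(residuals(a + da, b + db, xs, ys))
        except OverflowError:
            new_sse = float("inf")
        if new_sse < sse(r):
            a, b, lam = a + da, b + db, lam / 10   # accept, trust the model more
        else:
            lam *= 10                              # reject, damp harder
    return a, b

xs = [i * 0.25 for i in range(9)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]         # noise-free synthetic data
a_fit, b_fit = lm_fit(xs, ys)                      # should recover (2.0, 0.5)
```

With small damping the step is Gauss-Newton-like; with large damping it shrinks toward gradient descent, which is what gives LM its robustness relative to plain BFGS line searches on ill-conditioned residuals.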

  19. An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Watts, Stephen R.; Garg, Sanjay

    1995-01-01

    This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H(exp infinity) based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.
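For context on the problem being solved, the simplest memoryless anti-windup scheme is conditional integration: freeze the integrator while the actuator is saturated and the error would push it further into the limit. This generic sketch (invented gains and a first-order plant) illustrates the windup the paper's optimized synthesis addresses; it is not the paper's method.

```python
# Conditional-integration anti-windup on a PI loop driving a saturated
# actuator and a first-order plant dx/dt = -x + u. With the setpoint above
# what the saturated actuator can achieve, an unprotected integrator winds
# up without bound while the protected one stays small.

def simulate(kp=2.0, ki=1.0, u_max=1.0, setpoint=2.0, dt=0.01, steps=2000,
             anti_windup=True):
    x, integ = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        u_unsat = kp * e + ki * integ
        u = max(-u_max, min(u_max, u_unsat))   # actuator limit
        saturated = u != u_unsat
        if not (anti_windup and saturated and e * integ > 0):
            integ += e * dt                    # integrate only when safe
        x += (-x + u) * dt                     # Euler step of the plant
    return x, integ

x_aw, integ_aw = simulate(anti_windup=True)
x_no, integ_no = simulate(anti_windup=False)   # integrator grows with time
```

The plant output is the same either way here (the actuator is pinned at its limit), but the wound-up integrator state would cause a large transient the moment the setpoint drops, which is exactly what windup protection prevents.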

  20. Model reduction using new optimal Routh approximant technique

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Guo, Tong-Yi; Sheih, Leang-San

    1992-01-01

    An optimal Routh approximant of a single-input single-output dynamic system is a reduced-order transfer function of which the denominator is obtained by the Routh approximation method while the numerator is determined by minimizing a time-response integral-squared-error (ISE) criterion. In this paper, a new elegant approach is presented for obtaining the optimal Routh approximants for linear time-invariant continuous-time systems. The approach is based on the Routh canonical expansion, which is a finite-term orthogonal series of rational basis functions, and minimization of the ISE criterion. A procedure for combining the above approach with the bilinear transformation is also presented in order to obtain the optimal bilinear Routh approximants of linear time-invariant discrete-time systems. The proposed technique is simple in formulation and is amenable to practical implementation.

  1. A technique for noise measurement optimization with spectrum analyzers

    NASA Astrophysics Data System (ADS)

    Carniti, P.; Cassina, L.; Gotti, C.; Maino, M.; Pessina, G.

    2015-08-01

    Measuring the low noise of electronic devices with a spectrum analyzer requires particular care, as the instrument could add significant contributions. A low-noise amplifier (LNA) must therefore be connected between the source to be measured and the instrument, so that the instrument's contribution, referred to the LNA input, is mitigated. In the present work we suggest a technique for the implementation of the LNA that allows optimization of both low-frequency noise and white noise, obtaining outstanding performance over a very broad frequency range.

  2. Approximate optimal guidance for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Feeley, T. S.; Speyer, J. L.

    1993-01-01

    A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme to alternative numerical iterative optimization schemes because of the unreliable convergence properties of these iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.

  3. Approximate optimal guidance for the advanced launch system

    NASA Astrophysics Data System (ADS)

    Feeley, T. S.; Speyer, J. L.

    1993-12-01

    A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme to alternative numerical iterative optimization schemes because of the unreliable convergence properties of these iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.

  4. Co-Simulation for Advanced Process Design and Optimization

    SciTech Connect

    Stephen E. Zitney

    2009-01-01

    Meeting the increasing demand for clean, affordable, and secure energy is arguably the most important challenge facing the world today. Fossil fuels can play a central role in a portfolio of carbon-neutral energy options provided CO2 emissions can be dramatically reduced by capturing CO2 and storing it safely and effectively. The fossil energy industry faces the challenge of meeting aggressive design goals for next-generation power plants with carbon capture and storage (CCS). Process designs will involve large, highly integrated, and multipurpose systems with advanced equipment items with complex geometries and multiphysics. APECS is enabling software to facilitate effective integration, solution, and analysis of high-fidelity process/equipment (CFD) co-simulations. APECS helps to optimize fluid flow and related phenomena that impact overall power plant performance. APECS offers many advanced capabilities including reduced-order models (ROMs), design optimization, parallel execution, stochastic analysis, and virtual plant co-simulations. NETL and its collaborative R&D partners are using APECS to reduce the time, cost, and technical risk of developing high-efficiency, zero-emission power plants with CCS.

  5. Optimization of backward giant circle technique on the asymmetric bars.

    PubMed

    Hiley, Michael J; Yeadon, Maurice R

    2007-11-01

    The release window for a given dismount from the asymmetric bars is the period of time within which release results in a successful dismount. Larger release windows are likely to be associated with more consistent performance because they allow a greater margin for error in timing the release. A computer simulation model was used to investigate optimum technique for maximizing release windows in asymmetric bars dismounts. The model comprised four rigid segments with the elastic properties of the gymnast and bar modeled using damped linear springs. Model parameters were optimized to obtain a close match between simulated and actual performances of three gymnasts in terms of rotation angle (1.5°), bar displacement (0.014 m), and release velocities (<1%). Three optimizations to maximize the release window were carried out for each gymnast involving no perturbations, 10-ms perturbations, and 20-ms perturbations in the timing of the shoulder and hip joint movements preceding release. It was found that the optimizations robust to 20-ms perturbations produced release windows similar to those of the actual performances whereas the windows for the unperturbed optimizations were up to twice as large. It is concluded that robustness considerations must be included in optimization studies in order to obtain realistic results and that elite performances are likely to be robust to timing perturbations of the order of 20 ms. PMID:18089928

  6. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  7. Advanced aeroservoelastic stabilization techniques for hypersonic flight vehicles

    NASA Technical Reports Server (NTRS)

    Chan, Samuel Y.; Cheng, Peter Y.; Myers, Thomas T.; Klyde, David H.; Magdaleno, Raymond E.; Mcruer, Duane T.

    1992-01-01

    Advanced high-performance vehicles, including Single-Stage-To-Orbit (SSTO) hypersonic flight vehicles, that are statically unstable require higher-bandwidth flight control systems to compensate for the instability, resulting in interactions between the flight control system, the engine/propulsion dynamics, and the low frequency structural modes. Military specifications, such as MIL-F-9490D and MIL-F-87242, tend to limit treatment of structural modes to conventional gain stabilization techniques. The conventional gain stabilization techniques, however, introduce low frequency effective time delays which can be troublesome from a flying qualities standpoint. These time delays can be alleviated by appropriate blending of gain and phase stabilization techniques (referred to as Hybrid Phase Stabilization or HPS) for the low frequency structural modes. The potential of using HPS for compensating structural mode interaction was previously explored. It was shown that effective time delay was significantly reduced with the use of HPS; however, the HPS design was seen to have greater residual response than a conventional gain-stabilized design. Additional work performed to advance and refine the HPS design procedure, to further develop residual response metrics as a basis for alternative structural stability specifications, and to develop strategies for validating HPS design and specification concepts in manned simulation is presented. Stabilization design sensitivity to structural uncertainties and aircraft-centered requirements are also assessed.

  8. Technique Developed for Optimizing Traveling-Wave Tubes

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    1999-01-01

    A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT, from 7.1 to 13.5 percent (ref. 1).
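The simulated-annealing idea referenced above fits in a few lines: accept downhill moves always and uphill moves with a temperature-dependent probability, cooling as the search proceeds. This is a generic 1-D sketch with an invented toy objective; in the NASA work the objective is the TWT model's computed power efficiency.

```python
import math
import random

# Minimal simulated annealing: random proposals, Boltzmann acceptance of
# uphill moves, geometric cooling. Objective and parameters are illustrative.

def anneal(f, x0, lo, hi, t0=1.0, cooling=0.999, steps=5000, seed=42):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = min(hi, max(lo, x + rng.uniform(-1.0, 1.0)))  # local proposal
        fc = f(cand)
        # Accept downhill always; accept uphill with probability exp(-df/t).
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

best_x, best_f = anneal(lambda x: x * x, x0=8.0, lo=-10.0, hi=10.0)
```

The occasional uphill acceptance is what lets the method escape local optima, which is the advantage over conventional descent methods cited in the abstract.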

  9. Testing aspects of advanced coherent electron cooling technique

    SciTech Connect

    Litvinenko, V.; Jing, Y.; Pinayev, I.; Wang, G.; Samulyak, R.; Ratner, D.

    2015-05-03

    An advanced version of the Coherent-electron Cooling (CeC) based on the micro-bunching instability was proposed. This approach promises significant increase in the bandwidth of the CeC system and, therefore, significant shortening of cooling time in high-energy hadron colliders. In this paper we present our plans of simulating and testing the key aspects of this proposed technique using the set-up of the coherent-electron-cooling proof-of-principle experiment at BNL.

  10. [The role of electronic techniques for advanced neuroelectrophysiology].

    PubMed

    Wang, Min; Zhang, Lijun; Cao, Maoyong

    2008-12-01

    The rapid developments in the fields of electronic science, computer science, and biomedical engineering are propelling electrophysiological techniques. Recent technological advances have made it possible to simultaneously record the activity of large numbers of neurons in awake and behaving animals using implanted extracellular electrodes. Several laboratories use chronically implanted electrode arrays in freely moving animals because they allow stable recordings of discriminated single neurons and/or field potentials from up to hundreds of electrodes over long time periods. In this review, we focus on the new technologies for neuroelectrophysiology. PMID:19166233

  11. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
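The simplest of the compared algorithms, pure random search, is a useful baseline: sample the parameter box uniformly and keep the best point. A sketch with a toy quadratic objective standing in for a force-field objective, which in practice is a noisy, expensive simulation:

```python
import random

# Pure random search over a box: the weakest of the global optimizers
# compared in the paper, shown here on an invented smooth objective.

def pure_random_search(f, bounds, n_samples=10000, seed=1):
    rng = random.Random(seed)
    best, fbest = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Quadratic bowl with its minimum at (1, -2), inside the search box.
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best_p, best_f = pure_random_search(obj, [(-5.0, 5.0), (-5.0, 5.0)])
```

Because every sample is a full simulation in the force-field setting, the paper's point is precisely that surrogate-guided approaches like CoSMoS need far fewer of these evaluations than undirected sampling.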

  12. Techniques for developing reliability-oriented optimal microgrid architectures

    NASA Astrophysics Data System (ADS)

    Patra, Shashi B.

    2007-12-01

    Alternative generation technologies such as fuel cells, micro-turbines, solar etc. have been the focus of active research in the past decade. These energy sources are small and modular. Because of these advantages, these sources can be deployed effectively at or near locations where they are actually needed, i.e. in the distribution network. This is in contrast to the traditional electricity generation which has been "centralized" in nature. The new technologies can be deployed in a "distributed" manner. Therefore, they are also known as Distributed Energy Resources (DER). It is expected that the use of DER, will grow significantly in the future. Hence, it is prudent to interconnect the energy resources in a meshed or grid-like structure, so as to exploit the reliability and economic benefits of distributed deployment. These grids, which are smaller in scale but similar to the electric transmission grid, are known as "microgrids". This dissertation presents rational methods of building microgrids optimized for cost and subject to system-wide and locational reliability guarantees. The first method is based on dynamic programming and consists of determining the optimal interconnection between microsources and load points, given their locations and the rights of way for possible interconnections. The second method is based on particle swarm optimization. This dissertation describes the formulation of the optimization problem and the solution methods. The applicability of the techniques is demonstrated in two possible situations---design of a microgrid from scratch and expansion of an existing distribution system.

  13. Some advanced testing techniques for concentrator photovoltaic cells and lenses

    SciTech Connect

    Wiczer, J.J.; Chaffin, R.J.; Hibray, R.E.

    1982-09-01

    The authors describe two separate test techniques for evaluating concentrator photovoltaic components. For convenient characterization of concentrator solar cells, they have developed a method for measuring the entire illuminated I-V curve of a photovoltaic cell with a single flash of intense simulated sunlight. This method reduces the heat input to the cell and the time required to test a cell, thus making possible quick indoor measurements of photovoltaic conversion efficiency at concentrated illumination levels without the use of elaborate cell mounting fixtures or heat sink attachments. The other test method provides a technique to analyze the spatially dependent, spectral distribution of intense sunlight collected and focused by lenses designed for use in photovoltaic concentrator systems. This information is important in the design of multijunction photovoltaic receivers, secondary concentrators, and in optimizing the performance of conventional silicon cell concentrator systems.

  14. Optimization Techniques for 3D Graphics Deployment on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Koskela, Timo; Vatjus-Anttila, Jarkko

    2015-03-01

    3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.

  15. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; Marconcini, Mattia; Tilton, James C.; Trianni, Giovanna

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  16. On combining Laplacian and optimization-based mesh smoothing techniques

    SciTech Connect

    Freitag, L.A.

    1997-07-01

    Local mesh smoothing algorithms have been shown to be effective in repairing distorted elements in automatically generated meshes. The simplest such algorithm is Laplacian smoothing, which moves grid points to the geometric center of incident vertices. Unfortunately, this method operates heuristically and can create invalid meshes or elements of worse quality than those contained in the original mesh. In contrast, optimization-based methods are designed to maximize some measure of mesh quality and are very effective at eliminating extremal angles in the mesh. These improvements come at a higher computational cost, however. In this article the author proposes three smoothing techniques that combine a smart variant of Laplacian smoothing with an optimization-based approach. Several numerical experiments are performed that compare the mesh quality and computational cost for each of the methods in two and three dimensions. The author finds that the combined approaches are very cost effective and yield high-quality meshes.
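The "smart" variant of Laplacian smoothing that the abstract describes can be sketched as a quality-gated centroid move: propose the geometric center of the incident vertices, but accept it only if the worst incident element improves. A minimal 2D illustration (using minimum interior angle as the quality measure; this is a sketch of the idea, not the author's implementation):

```python
import math

def min_angle(tri):
    """Smallest interior angle (radians) of a 2D triangle given as three
    (x, y) vertices -- used here as the element quality measure."""
    angles = []
    for k in range(3):
        a, b, c = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
        angles.append(math.acos(max(-1.0, min(1.0, cos_t))))
    return min(angles)

def smart_laplacian(vertex, neighbours, incident_triangles):
    """Quality-gated Laplacian step: propose the centroid of the incident
    vertices; keep the old position if the worst element would worsen."""
    cx = sum(p[0] for p in neighbours) / len(neighbours)
    cy = sum(p[1] for p in neighbours) / len(neighbours)
    old_q = min(min_angle(t) for t in incident_triangles(vertex))
    new_q = min(min_angle(t) for t in incident_triangles((cx, cy)))
    return (cx, cy) if new_q > old_q else vertex

# Toy patch: a distorted interior vertex inside a square ring of vertices.
ring = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
def incident(p):
    return [(p, ring[i], ring[(i + 1) % 4]) for i in range(4)]
moved = smart_laplacian((1.8, 1.8), ring, incident)
```

Here the distorted vertex is relocated to the ring centroid because the minimum incident angle improves; a vertex already at the centroid would be left in place.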

  17. Surgical techniques for advanced stage pelvic organ prolapse.

    PubMed

    Brown, Douglas N; Strauchon, Christopher; Gonzalez, Hector; Gruber, Daniel

    2016-02-01

    Pelvic organ prolapse is an extremely common condition, with approximately 12% of women requiring surgical correction over their lifetime. This manuscript reviews the most recent literature regarding the comparative efficacy of various surgical repair techniques in the treatment of advanced stage pelvic organ prolapse. Uterosacral ligament suspension has similar anatomic and subjective outcomes when compared to sacrospinous ligament fixation at 12 months and is considered to be equally effective. The use of transvaginal mesh has been shown to be superior to native tissue vaginal repairs with respect to anatomic outcomes, but at the cost of a higher complication rate. Minimally invasive sacrocolpopexy appears to be equivalent to abdominal sacrocolpopexy (ASC). Robot-assisted sacrocolpopexy (RSC) and laparoscopic sacrocolpopexy (LSC) appear as effective as abdominal sacrocolpopexy; however, prospective studies comparing the long-term outcomes of ASC, LSC, and RSC in relation to health care costs are paramount in the near future. Surgical correction of advanced pelvic organ prolapse can be accomplished via a variety of proven techniques. Selection of the correct surgical approach is a complex decision process and involves a multitude of factors. When deciding on the most suitable surgical intervention, the chosen route must be individualized for each patient, taking into account the specific risks and benefits of each procedure. PMID:26448444

  18. Advanced IMCW Lidar Techniques for ASCENDS CO2 Column Measurements

    NASA Astrophysics Data System (ADS)

    Campbell, Joel; Lin, Bing; Nehrir, Amin; Harrison, Fenton; Obland, Michael

    2015-04-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation.
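The ranging principle rests on the auto-correlation property the abstract mentions: a matched filter slid over the return signal peaks sharply at the true delay. A hedged sketch using a Barker-13 sequence as the BPSK code (the code choice and the delay are illustrative; the mission uses its own waveforms):

```python
def cross_correlate(code, signal):
    """Slide the BPSK code over the received signal and return the
    matched-filter output at every lag."""
    n = len(signal) - len(code) + 1
    return [sum(c * signal[lag + i] for i, c in enumerate(code))
            for lag in range(n)]

# Barker-13: aperiodic autocorrelation sidelobes of magnitude at most 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

delay = 7                                   # unknown range bin to recover
received = [0] * delay + barker13 + [0] * 10
lags = cross_correlate(barker13, received)
estimated_delay = max(range(len(lags)), key=lags.__getitem__)
```

The peak value equals the code length (13) and all sidelobes stay near zero, which is why BPSK codes suppress the range-profile sidelobes that thin clouds would otherwise bias.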

  19. Evolutionary techniques for sensor networks energy optimization in marine environmental monitoring

    NASA Astrophysics Data System (ADS)

    Grimaccia, Francesco; Johnstone, Ron; Mussetta, Marco; Pirisi, Andrea; Zich, Riccardo E.

    2012-10-01

    The sustainable management of coastal and offshore ecosystems, such as for example coral reef environments, requires the collection of accurate data across various temporal and spatial scales. Accordingly, monitoring systems are seen as central tools for ecosystem-based environmental management, helping on one hand to accurately describe the water column and substrate biophysical properties, and on the other hand to correctly steer sustainability policies by providing timely and useful information to decision-makers. A robust and intelligent sensor network that can adjust and be adapted to different and changing environmental or management demands would revolutionize our capacity to accurately model, predict, and manage human impacts on our coastal, marine, and other similar environments. In this paper advanced evolutionary techniques are applied to optimize the design of an innovative energy harvesting device for marine applications. The authors implement an enhanced technique in order to exploit in the most effective way the uniqueness and peculiarities of two classical optimization approaches, Particle Swarm Optimization and Genetic Algorithms. Here, this hybrid procedure is applied to a power buoy designed for marine environmental monitoring applications in order to optimize the energy recovered from sea waves, by selecting the optimal device configuration.
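A generic way to hybridize the two approaches is to run standard PSO updates and, each iteration, replace the worst particle with a GA-style crossover-plus-mutation child of the two best. The sketch below illustrates that idea on a toy objective; the coefficients and the recombination rule are illustrative assumptions, not the authors' enhanced algorithm:

```python
import random

def hybrid_pso_ga(fitness, dim, n_particles=20, iters=100, seed=1):
    """Toy PSO+GA hybrid: inertia-weighted PSO updates, plus one GA-style
    crossover/mutation step per iteration replacing the worst particle."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        # GA step: child of the two best personal bests replaces the worst.
        order = sorted(range(n_particles), key=lambda k: fitness(pbest[k]))
        a, b = pbest[order[0]], pbest[order[1]]
        child = [rng.choice(pair) + rng.gauss(0.0, 0.01) for pair in zip(a, b)]
        worst = order[-1]
        pos[worst] = child[:]
        if fitness(child) < fitness(pbest[worst]):
            pbest[worst] = child[:]
        gbest = min(pbest, key=fitness)[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)    # stand-in for buoy energy model
best = hybrid_pso_ga(sphere, dim=3)
```

In the buoy application, the fitness function would instead evaluate recovered wave energy for a candidate device configuration.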

  20. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration with applications in various manufacturing processes. The methodologies have proven to be useful for virtual design and virtual training to provide solutions addressing issues on energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  1. Machine learning techniques for energy optimization in mobile embedded systems

    NASA Astrophysics Data System (ADS)

    Donohoo, Brad Kyoshi

    Mobile smartphones and other portable battery-operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
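Of the classifiers listed, k-nearest neighbor is the simplest to sketch: vote among the k historical context samples nearest the current context. The context features and usage log below are hypothetical stand-ins for the thesis's spatiotemporal features:

```python
from collections import Counter

def knn_predict(samples, query, k=3):
    """k-nearest-neighbour vote: samples are ((features...), label) pairs;
    distance is squared Euclidean over the context features."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(samples, key=lambda s: dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical usage log: (hour of day, location-cluster id) -> is WiFi used?
log = [((9, 0), "wifi_on"), ((10, 0), "wifi_on"), ((11, 0), "wifi_on"),
       ((19, 2), "wifi_off"), ((20, 2), "wifi_off"), ((21, 2), "wifi_off")]
decision = knn_predict(log, (10, 0))
```

At runtime the predicted label would gate whether the radio interface is powered, trading a small risk of misprediction for energy savings.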

  2. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to outperform them.
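The selection rule, choosing the grey level whose two-class partition is least fuzzy, can be sketched as follows. The index below uses a Gaussian membership around each class mean with an assumed bandwidth; it is one plausible reading of the paper's criterion, not its exact formula:

```python
import math

def fuzzy_gaussian_index(pixels, t, bandwidth=50.0):
    """Fuzziness of the two-class partition at threshold t: each pixel's
    membership in its class follows a Gaussian of its distance to the
    class mean, and the index sums (1 - membership). Crisp, well-separated
    partitions score low."""
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return float("inf")
    total = 0.0
    for cls in (lo, hi):
        mean = sum(cls) / len(cls)
        for p in cls:
            total += 1.0 - math.exp(-((p - mean) ** 2) / (2 * bandwidth ** 2))
    return total

def optimal_threshold(pixels):
    """Grey level minimizing the fuzzy index over all candidate splits."""
    candidates = sorted(set(pixels))[:-1]
    return min(candidates, key=lambda t: fuzzy_gaussian_index(pixels, t))

# Toy bimodal "image": dark blob interior vs bright background.
image = [10, 12, 11, 13, 200, 205, 198, 202]
t = optimal_threshold(image)
```

On this toy histogram the minimum lands between the two modes, as intended.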

  3. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2011-12-01

    A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to outperform them.

  4. Advanced Techniques for Removal of Retrievable Inferior Vena Cava Filters

    SciTech Connect

    Iliescu, Bogdan; Haskal, Ziv J.

    2012-08-15

    Inferior vena cava (IVC) filters have proven valuable for the prevention of primary or recurrent pulmonary embolism in selected patients with or at high risk for venous thromboembolic disease. Their use has become commonplace, and the numbers implanted increase annually. During the last 3 years, in the United States, the percentage of annually placed optional filters, i.e., filters that can remain as permanent filters or potentially be retrieved, has consistently exceeded that of permanent filters. In parallel, the complications of long- or short-term filtration have become increasingly evident to physicians, regulatory agencies, and the public. Most filter removals are uneventful, with a high degree of success. When routine filter-retrieval techniques prove unsuccessful, progressively more advanced tools and skill sets must be used to enhance filter-retrieval success. These techniques should be used with caution to avoid damage to the filter or the IVC during retrieval. This review describes the complex techniques for filter retrieval, including use of additional snares, guidewires, angioplasty balloons, and mechanical and thermal approaches as well as illustrates their specific application.

  5. High-level power analysis and optimization techniques

    NASA Astrophysics Data System (ADS)

    Raghunathan, Anand

    1997-12-01

    This thesis combines two ubiquitous trends in the VLSI design world--the move towards designing at higher levels of design abstraction, and the increasing importance of power consumption as a design metric. Power estimation and optimization tools are becoming an increasingly important part of design flows, driven by a variety of requirements such as prolonging battery life in portable computing and communication devices, thermal considerations and system cooling and packaging costs, reliability issues (e.g. electromigration, ground bounce, and I-R drops in the power network), and environmental concerns. This thesis presents a suite of techniques to automatically perform power analysis and optimization for designs at the architecture or register-transfer, and behavior or algorithm levels of the design hierarchy. High-level synthesis refers to the process of synthesizing, from an abstract behavioral description, a register-transfer implementation that satisfies the desired constraints. High-level synthesis tools typically perform one or more of the following tasks: transformations, module selection, clock selection, scheduling, and resource allocation and assignment (also called resource sharing or hardware sharing). High-level synthesis techniques for minimizing the area, maximizing the performance, and enhancing the testability of the synthesized designs have been investigated. This thesis presents high-level synthesis techniques that minimize power consumption in the synthesized data paths. This thesis investigates the effects of resource sharing on the power consumption in the data path, provides techniques to efficiently estimate power consumption during resource sharing, and resource sharing algorithms to minimize power consumption. The RTL circuit that is obtained from the high-level synthesis process can be further optimized for power by applying power-reducing RTL transformations. This thesis presents macro-modeling and estimation techniques for switching
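RT-level power macro-models of the kind described rest on the classic dynamic-power relation P = alpha * C * Vdd^2 * f, summed over modules; resource sharing changes each module's switching activity alpha. A minimal sketch with hypothetical module parameters (not values from the thesis):

```python
def switching_power(activity, capacitance_f, vdd, freq_hz):
    """Average dynamic power of a node: P = alpha * C * Vdd^2 * f."""
    return activity * capacitance_f * vdd ** 2 * freq_hz

# Hypothetical RT-level estimate: sum per-module macro-model outputs.
modules = [
    {"alpha": 0.3, "C": 2e-12, "Vdd": 1.2, "f": 200e6},   # shared adder
    {"alpha": 0.1, "C": 5e-12, "Vdd": 1.2, "f": 200e6},   # register file
]
total = sum(switching_power(m["alpha"], m["C"], m["Vdd"], m["f"])
            for m in modules)
```

A resource-sharing decision that raises an adder's alpha (by interleaving uncorrelated operand streams) shows up directly as a larger first term, which is why sharing choices must be evaluated against power, not just area.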

  6. A Deep-Cutting-Plane Technique for Reverse Convex Optimization.

    PubMed

    Moshirvaziri, K; Amouzegar, M A

    2011-08-01

    A large number of problems in engineering design and in many areas of social and physical sciences and technology lend themselves to particular instances of problems studied in this paper. Cutting-plane methods have traditionally been used as an effective tool in devising exact algorithms for solving convex and large-scale combinatorial optimization problems. Their utilization in nonconvex optimization has also been promising. A cutting plane, essentially a hyperplane defined by a linear inequality, can be used to effectively reduce the computational effort in the search for a global solution. Each cut is generated in order to eliminate a large portion of the search domain. Thus, a deep cut is intuitively superior in that it excludes a larger set of extraneous points from consideration. This paper is concerned with the development of deep-cutting-plane techniques applied to reverse-convex programs. An upper bound and a lower bound for the optimal value are found, updated, and improved at each iteration. The algorithm terminates when the two bounds collapse or all the generated subdivisions have been fathomed. Finally, computational considerations and numerical results on a set of test problems are discussed. An illustrative example, walking through the steps of the algorithm and explaining the computational process, is presented. PMID:21296710

  7. An optimal merging technique for high-resolution precipitation products

    SciTech Connect

    Houser, Paul

    2011-01-01

    Precipitation products are currently available from various sources at higher spatial and temporal resolution than at any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve the information content by minimizing these issues. However, precipitation data merging poses challenges of scale mismatch and of accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying better data sources and allocating higher priority to them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.
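The core of any such scheme is a weighted combination in which better-performing products receive larger weights. A minimal stand-in for the Bayesian performance-tracing weights is the inverse-error-variance rule sketched below; the co-located estimates and error variances are hypothetical:

```python
def merge_precipitation(estimates, error_variances):
    """Variance-weighted merge: each product's weight is its inverse error
    variance, normalized to sum to one (a simplified stand-in for the
    paper's performance-traced weights)."""
    weights = [1.0 / v for v in error_variances]
    s = sum(weights)
    weights = [w / s for w in weights]
    merged = sum(w * e for w, e in zip(weights, estimates))
    return merged, weights

# Hypothetical co-located rain-rate estimates (mm/h) from three products
# (e.g. NLDAS, CMORPH, Stage-IV) with assumed error variances:
merged, w = merge_precipitation([2.0, 3.0, 2.5], [1.0, 4.0, 0.25])
```

The third product, with the smallest error variance, dominates the merged value, mirroring how OMP allocates higher priority to the better source.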

  8. Optimized evaporation technique for leachate treatment: Small scale implementation.

    PubMed

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose, and in order to study the feasibility and measure the effectiveness of forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tubs, the models under investigation, were installed at the ground level (equipped-tub 1) and above the ground level (equipped-tub 2) respectively, and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate increased from 0.37 mm/day to 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times, increasing from 3.06 mm/day to 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively under either electric or solar energy supply, and will accelerate the evaporation rate three to five times regardless of seasonal temperature. PMID:26826455

  9. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Helba, Michael J.; Hill, Janeil B.

    1992-01-01

    The purpose of this research is to provide Space Station Freedom protective structures design insight through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. The goals of the research are: (1) to develop a Monte Carlo simulation tool which will provide top level insight for Space Station protective structures designers; (2) to develop advanced shielding concepts relevant to Space Station Freedom using unique multiple bumper approaches; and (3) to investigate projectile shape effects on protective structures design.

  10. Advanced Techniques for Power System Identification from Measured Data

    SciTech Connect

    Pierre, John W.; Wies, Richard; Trudnowski, Daniel

    2008-11-25

    Time-synchronized measurements provide rich information for estimating a power-system's electromechanical modal properties via advanced signal processing. This information is becoming critical for the improved operational reliability of interconnected grids. A given mode's properties are described by its frequency, damping, and shape. Modal frequencies and damping are useful indicators of power-system stress, usually declining with increased load or reduced grid capacity. Mode shape provides critical information for operational control actions. This project investigated many advanced techniques for power system identification from measured data focusing on mode frequency and damping ratio estimation. Investigators from the three universities coordinated their effort with Pacific Northwest National Laboratory (PNNL). Significant progress was made on developing appropriate techniques for system identification with confidence intervals and testing those techniques on field measured data and through simulation. Experimental data from the western area power system was provided by PNNL and Bonneville Power Administration (BPA) for both ambient conditions and for signal injection tests. Three large-scale tests were conducted for the western area in 2005 and 2006. Measured field PMU (Phasor Measurement Unit) data was provided to the three universities. A 19-machine simulation model was enhanced for testing the system identification algorithms. Extensive simulations were run with this model to test the performance of the algorithms. University of Wyoming researchers participated in four primary activities: (1) Block and adaptive processing techniques for mode estimation from ambient signals and probing signals, (2) confidence interval estimation, (3) probing signal design and injection method analysis, and (4) performance assessment and validation from simulated and field measured data. 
Subspace-based methods have been used to improve previous results from block processing.
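For a single dominant electromechanical mode, the frequency and damping the project estimates can be illustrated with a simple ringdown analysis: locate successive positive peaks, take frequency from their spacing and damping from the log-decrement of their amplitudes. This is a minimal stand-in, not one of the block or subspace estimators studied; the 0.5 Hz mode and 0.1 1/s damping are synthetic:

```python
import math

def ringdown_modal_estimate(samples, dt):
    """Estimate (frequency_hz, damping_1_per_s) of one mode from a
    ringdown by peak spacing and log-decrement of peak amplitudes."""
    peaks = [i for i in range(1, len(samples) - 1)
             if samples[i] > samples[i - 1] and samples[i] > samples[i + 1]
             and samples[i] > 0]
    spacings = [(peaks[k + 1] - peaks[k]) * dt for k in range(len(peaks) - 1)]
    period = sum(spacings) / len(spacings)
    decs = [math.log(samples[peaks[k]] / samples[peaks[k + 1]]) / period
            for k in range(len(peaks) - 1)]
    return 1.0 / period, sum(decs) / len(decs)

# Synthetic 0.5 Hz inter-area mode with damping coefficient 0.1 1/s,
# sampled at 50 Hz for 30 s:
dt = 0.02
y = [math.exp(-0.1 * n * dt) * math.cos(2 * math.pi * 0.5 * n * dt)
     for n in range(1500)]
freq, sigma = ringdown_modal_estimate(y, dt)
```

Field PMU data is ambient and noisy, which is precisely why the project's block, adaptive, and subspace estimators, with confidence intervals, are needed instead of this idealized sketch.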

  11. COAL AND CHAR STUDIES BY ADVANCED EMR TECHNIQUES

    SciTech Connect

    R. Linn Belford; Robert B. Clarkson; Mark J. Nilges; Boris M. Odintsov; Alex I. Smirnov

    2001-04-30

    Advanced electron magnetic resonance (EMR) as well as nuclear magnetic resonance (NMR) methods have been used to examine properties of coals, chars, and molecular species related to constituents of coal. During the span of this grant, progress was made on the construction and application to coals and chars of two high-frequency EMR systems particularly appropriate for such studies (48 GHz and 95 GHz electron magnetic resonance spectrometers), on new low-frequency dynamic nuclear polarization (DNP) experiments to examine the interaction between water and the surfaces of suspended char particulates in slurries, and on a variety of proton nuclear magnetic resonance (NMR) techniques to measure characteristics of the water directly in contact with the surfaces and pore spaces of carbonaceous particulates.

  12. Advanced Fibre Bragg Grating and Microfibre Bragg Grating Fabrication Techniques

    NASA Astrophysics Data System (ADS)

    Chung, Kit Man

    Fibre Bragg gratings (FBGs) have become a very important technology for communication systems and fibre optic sensing. Typically, FBGs are less than 10-mm long and are fabricated using fused silica uniform phase masks, which become more expensive for longer lengths or non-uniform pitch. Generally, interfering UV laser beams are employed to make long or complex FBGs, and this technique introduces critical precision and control issues. In this work, we demonstrate an advanced FBG fabrication system that enables the writing of long and complex gratings in optical fibres with virtually any apodisation profile, local phase and Bragg wavelength, using a novel optical design in which the incident angles of two UV beams onto an optical fibre can be adjusted simultaneously by moving just one optical component, instead of the two optics employed in earlier configurations, to vary the grating pitch. The key advantage of the grating fabrication system is that complex gratings can be fabricated by controlling the linear movements of two translation stages. In addition to the study of this advanced grating fabrication technique, we also focus on the inscription of FBGs written in optical fibres with a cladding diameter of several tens of microns. Fabrication of microfibres was investigated using a sophisticated tapering method. We also proposed a simple but practical technique to filter out the higher order modes reflected from the FBG written in microfibres via a linear taper region, while the fundamental mode re-couples to the core. By using this technique, reflection from the microfibre Bragg grating (MFBG) can be effectively single mode, simplifying the demultiplexing and demodulation processes. MFBG exhibits high sensitivity to contact force, and an MFBG-based force sensor was also constructed and tested to investigate its suitability for use as an invasive surgery device. 
Performance of the contact force sensor packaged in a conforming elastomer material compares favourably to one

  13. Advanced imaging techniques for the detection of breast cancer.

    PubMed

    Jochelson, Maxine

    2012-01-01

    Mammography is the only breast imaging examination that has been shown to reduce breast cancer mortality. Population-based sensitivity is 75% to 80%, but sensitivity in high-risk women with dense breasts is only in the range of 50%. Breast ultrasound and contrast-enhanced breast magnetic resonance imaging (MRI) have become additional standard modalities used in the diagnosis of breast cancer. In high-risk women, ultrasound is known to detect approximately four additional cancers per 1,000 women. MRI is exquisitely sensitive for the detection of breast cancer. In high-risk women, it finds an additional four to five cancers per 100 women. However, both ultrasound and MRI are also known to lead to a large number of additional benign biopsies and short-term follow-up examinations. Many new breast imaging tools are being improved or newly developed to enhance our current ability to diagnose early-stage breast cancer. These can be divided into two groups. The first group comprises advances in current techniques, which include digital breast tomosynthesis and contrast-enhanced mammography and ultrasound with elastography or microbubbles. The other group includes new breast imaging platforms such as breast computed tomography (CT) scanning and radionuclide breast imaging. These are exciting advances. However, in this era of cost and radiation containment, it is imperative to look at all of them objectively to see which will provide clinically relevant additional information. PMID:24451711

  14. Advanced techniques for constrained internal coordinate molecular dynamics.

    PubMed

    Wagner, Jeffrey R; Balaraman, Gouthaman S; Niesen, Michiel J M; Larsen, Adrien B; Jain, Abhinandan; Vaidehi, Nagarajan

    2013-04-30

    Internal coordinate molecular dynamics (ICMD) methods provide a more natural description of a protein by using bond, angle, and torsional coordinates instead of a Cartesian coordinate representation. Freezing high-frequency bonds and angles in the ICMD model gives rise to constrained ICMD (CICMD) models. There are several theoretical aspects that need to be developed to make the CICMD method robust and widely usable. In this article, we have designed a new framework for (1) initializing velocities for nonindependent CICMD coordinates, (2) efficient computation of center of mass velocity during CICMD simulations, (3) using advanced integrators such as Runge-Kutta, Lobatto, and adaptive CVODE for CICMD simulations, and (4) cancelling out the "flying ice cube effect" that sometimes arises in Nosé-Hoover dynamics. The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an implementation of a CICMD method that we have developed to study protein dynamics. GNEIMO allows for a hierarchy of coarse-grained simulation models based on the ability to rigidly constrain any group of atoms. In this article, we perform tests on the Lobatto and Runge-Kutta integrators to determine optimal simulation parameters. We also implement an adaptive coarse-graining tool using the GNEIMO Python interface. This tool enables the secondary structure-guided "freezing and thawing" of degrees of freedom in the molecule on the fly during molecular dynamics simulations and is shown to fold four proteins to their native topologies. With these advancements, we envision the use of the GNEIMO method in protein structure prediction, structure refinement, and in studying domain motion. PMID:23345138

  15. Advanced Techniques for Constrained Internal Coordinate Molecular Dynamics

    PubMed Central

    Wagner, Jeffrey R.; Balaraman, Gouthaman S.; Niesen, Michiel J. M.; Larsen, Adrien B.; Jain, Abhinandan; Vaidehi, Nagarajan

    2013-01-01

    Internal coordinate molecular dynamics (ICMD) methods provide a more natural description of a protein by using bond, angle and torsional coordinates instead of a Cartesian coordinate representation. Freezing high frequency bonds and angles in the ICMD model gives rise to constrained ICMD (CICMD) models. There are several theoretical aspects that need to be developed in order to make the CICMD method robust and widely usable. In this paper we have designed a new framework for 1) initializing velocities for non-independent CICMD coordinates, 2) efficient computation of center of mass velocity during CICMD simulations, 3) using advanced integrators such as Runge-Kutta, Lobatto and adaptive CVODE for CICMD simulations, and 4) cancelling out the “flying ice cube effect” that sometimes arises in Nosé-Hoover dynamics. The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an implementation of a CICMD method that we have developed to study protein dynamics. GNEIMO allows for a hierarchy of coarse-grained simulation models based on the ability to rigidly constrain any group of atoms. In this paper, we perform tests on the Lobatto and Runge-Kutta integrators to determine optimal simulation parameters. We also implement an adaptive coarse graining tool using the GNEIMO Python interface. This tool enables the secondary structure-guided “freezing and thawing” of degrees of freedom in the molecule on the fly during MD simulations, and is shown to fold four proteins to their native topologies. With these advancements we envision the use of the GNEIMO method in protein structure prediction, structure refinement, and in studying domain motion. PMID:23345138

  16. Advances in the Rising Bubble Technique for discharge measurement

    NASA Astrophysics Data System (ADS)

    Hilgersom, Koen; Luxemburg, Willem; Willemsen, Geert; Bussmann, Luuk

    2014-05-01

    Already in the 19th century, d'Auria described a discharge measurement technique that applies floats to find the depth-integrated velocity (d'Auria, 1882). The basis of this technique was that the horizontal distance that the float travels on its way to the surface is the image of the integrated velocity profile over depth. Viol and Semenov (1964) improved this method by using air bubbles as floats, but distances were still measured manually until Sargent (1981) introduced a technique that could derive the distances from two photographs taken simultaneously from each side of the river bank. Recently, modern image processing techniques proved to further improve the applicability of the method (Hilgersom and Luxemburg, 2012). In the 2012 article, controlling and determining the rising velocity of an air bubble still appeared a major challenge for the application of this method. Since then, laboratory experiments with different nozzle and tube sizes have led to advances in our self-made equipment, enabling us to produce individual air bubbles with a more constant rising velocity. We also introduced an underwater camera to determine the rising velocity on site, which depends on the water temperature and contamination and is therefore site-specific. Camera measurements of the rising velocity proved successful in laboratory and field settings, although some improvements to the setup are necessary to capture the air bubbles at depths where little daylight penetrates. References D'Auria, L.: Velocity of streams; A new method to determine correctly the mean velocity of any perpendicular in rivers and canals, (The) American Engineers, 3, 1882. Hilgersom, K.P. and Luxemburg, W.M.J.: Technical Note: How image processing facilitates the rising bubble technique for discharge measurement, Hydrology and Earth System Sciences, 16(2), 345-356, 2012. Sargent, D.: Development of a viable method of stream flow measurement using the integrating float technique, Proceedings of
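
    The underlying principle is simple to state: a bubble rising at a constant velocity v_b through a horizontal velocity profile u(z) drifts a horizontal distance L = (1/v_b) ∫ u(z) dz, so the depth-integrated velocity at one vertical is q = v_b·L. A minimal sketch of the resulting discharge computation (the survey numbers below are hypothetical, not data from the article):

```python
def unit_discharge(drift_distance_m, rise_velocity_ms):
    """Depth-integrated velocity (m^2/s) at one vertical:
    q = v_b * L, where L is the horizontal drift of the bubble."""
    return rise_velocity_ms * drift_distance_m

def total_discharge(positions_m, drifts_m, rise_velocity_ms):
    """Trapezoidal integration of unit discharge across the channel width."""
    q = [unit_discharge(L, rise_velocity_ms) for L in drifts_m]
    total = 0.0
    for i in range(len(positions_m) - 1):
        dx = positions_m[i + 1] - positions_m[i]
        total += 0.5 * (q[i] + q[i + 1]) * dx
    return total

# Hypothetical survey: bubbles released at 5 verticals across a 4 m channel,
# rise velocity 0.22 m/s (site-specific, e.g. calibrated with the camera).
xs     = [0.0, 1.0, 2.0, 3.0, 4.0]   # distance across channel (m)
drifts = [0.0, 0.8, 1.1, 0.8, 0.0]   # horizontal bubble drift (m)
Q = total_discharge(xs, drifts, 0.22)   # total discharge in m^3/s
```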

  17. Advanced Multi-Junction Photovoltaic Device Optimization For High Temperature Space Applications

    NASA Astrophysics Data System (ADS)

    Sherif, Michael

    2011-10-01

    Almost all solar cells available today for space or terrestrial applications are optimized for low-temperature or "room temperature" operation, where cell performance demonstrates favourable efficiency figures. In many space applications, however, as well as when using solar concentrators, operating cell temperatures are typically highly elevated, and cell outputs are severely degraded. In this paper, a novel approach for the optimization of multi-junction photovoltaic devices at such high expected operating temperatures is presented. The device optimization is carried out on the novel cell physical model previously developed at the Naval Postgraduate School using the SILVACO software tools [1]. Taking into account the high cost of research and experimentation involved with the development of advanced cells, this successful modelling technique was introduced and detailed results were previously presented by the author [2]. The flexibility of the proposed methodology is demonstrated and example results are shown throughout the whole process. The research demonstrated the capability of developing a realistic model of any type of solar cell, as well as of thermo-photovoltaic devices. An example model of an InGaP/GaAs/Ge multi-junction cell was prepared and fully simulated. The major stages of the process are explained and the simulation results are compared to published experimental data. An example of cell parameter optimization for high operating temperature is also presented. Individual junction layer optimization was accomplished through the use of a genetic search algorithm implemented in Matlab.
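
    A genetic search of the kind mentioned for junction-layer optimization can be sketched in a few lines. The toy fitness function below is a stand-in for the simulated cell model (which cannot be reproduced here); the population size, mutation width, and peak location are illustrative assumptions.

```python
import random

random.seed(1)

def fitness(x):
    # Toy stand-in for a simulated cell figure of merit: peak at x = 0.7
    return 1.0 - (x - 0.7) ** 2

def genetic_search(pop_size=30, generations=60, mut_sigma=0.05):
    """Minimal real-coded genetic algorithm on a single design parameter."""
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                  # arithmetic crossover
            child += random.gauss(0.0, mut_sigma)  # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = children
    return max(pop, key=fitness)

best = genetic_search()   # should land near the fitness peak at 0.7
```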

  18. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
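
    An exterior penalty function approach of the kind used above can be sketched on a toy problem: infeasible designs pay a quadratic penalty that is driven upward over an outer loop, while a simple elemental-perturbation (coordinate) search handles each unconstrained subproblem. The objective, constraint, and schedule below are illustrative, not the stiffened-cylinder model.

```python
def objective(x):
    # Stand-in for structural weight; unconstrained minimum (2, 2) is infeasible
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

def violation(x):
    # Design constraint x1 + x2 <= 2 (a stand-in geometric limit)
    return max(0.0, x[0] + x[1] - 2.0)

def penalized(x, r):
    # Exterior penalty: feasible points are unpenalized; infeasible points
    # pay r * violation^2, with r increased over the outer loop.
    return objective(x) + r * violation(x) ** 2

def coordinate_search(x, r, step, iters=200):
    """Elemental-perturbation search: try +/- step on each variable."""
    x = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if penalized(trial, r) < penalized(x, r):
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5
    return x

x = [0.0, 0.0]
for r in (1.0, 10.0, 100.0, 1000.0):   # increasing penalty weight
    x = coordinate_search(x, r, step=0.25)
# Constrained optimum is x1 = x2 = 1; the penalty iterates approach it
# from the infeasible side as r grows.
```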

  19. A technique for integrating engine cycle and aircraft configuration optimization

    NASA Technical Reports Server (NTRS)

    Geiselhart, Karl A.

    1994-01-01

    A method for conceptual aircraft design that incorporates the optimization of major engine design variables for a variety of cycle types was developed. The methodology should improve the lengthy screening process currently involved in selecting an appropriate engine cycle for a given application or mission. The new capability will allow environmental concerns such as airport noise and emissions to be addressed early in the design process. The ability to rapidly perform optimization and parametric variations using both engine cycle and aircraft design variables, and to see the impact on the aircraft, should provide insight and guidance for more detailed studies. A brief description of the aircraft performance and mission analysis program and the engine cycle analysis program that were used is given. A new method of predicting propulsion system weight and dimensions using thermodynamic cycle data, preliminary design, and semi-empirical techniques is introduced. Propulsion system performance and weights data generated by the program are compared with industry data and data generated using well established codes. The ability of the optimization techniques to locate an optimum is demonstrated and some of the problems that had to be solved to accomplish this are illustrated. Results from the application of the program to the analysis of three supersonic transport concepts installed with mixed flow turbofans are presented. The results from the application to a Mach 2.4, 5000 n.mi. transport indicate that the optimum bypass ratio is near 0.45 with less than 1 percent variation in minimum gross weight for bypass ratios ranging from 0.3 to 0.6. In the final application of the program, a low sonic boom, fixed takeoff gross weight concept that would fly at Mach 2.0 overwater and at Mach 1.6 overland is compared with a baseline concept of the same takeoff gross weight that would fly Mach 2.4 overwater and subsonically overland. The results indicate that for the design mission

  20. Advanced metaheuristic algorithms for laser optimization in optical accelerator technologies

    NASA Astrophysics Data System (ADS)

    Tomizawa, Hiromitsu

    2011-10-01

    Lasers are among the most important experimental tools for user facilities, including synchrotron radiation and free electron lasers (FEL). In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, lasers play important roles as seed light sources or photocathode-illuminating light sources to generate a high-brightness electron bunch. For future accelerators, laser-based technologies such as electro-optic (EO) sampling to measure ultra-short electron bunches and optical-fiber-based femtosecond timing systems have been intensively developed in the last decade. Therefore, control and optimization of laser pulse characteristics are strongly required for many kinds of experiments and for improvement of accelerator systems. However, lasers are commonly assumed to require manual tuning and customization by experts for each requirement. This makes it difficult for laser systems to be part of the common accelerator infrastructure. Automatic laser tuning requires sophisticated algorithms, and the metaheuristic algorithm is one of the best solutions. The metaheuristic laser tuning system is expected to reduce the human effort and time required for laser preparations. I have shown successful results with a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and with a hill-climbing method extended with fuzzy set theory to choose one of the best laser alignments automatically for each machine requirement.

  1. Design of vibration isolation systems using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
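
    The simplest scalarization in the family of methods listed above is a weighted sum of the objectives; sweeping the weight traces out Pareto-optimal designs. The two quadratic objectives below are illustrative stand-ins for the transmitted-force and relative-displacement indices, not the three degrees-of-freedom model of the paper.

```python
def f1(x): return (x - 1.0) ** 2   # stand-in: transmitted force index
def f2(x): return (x - 3.0) ** 2   # stand-in: relative displacement index

def argmin_scalarized(w, lo=0.0, hi=4.0, n=4001):
    """Grid minimization of the weighted sum w*f1 + (1-w)*f2 on [lo, hi]."""
    best_x, best_v = lo, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        v = w * f1(x) + (1.0 - w) * f2(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# Sweeping the weight from 0 to 1 moves the compromise design from the
# f2-optimum (x = 3) to the f1-optimum (x = 1) along the Pareto set.
pareto = [argmin_scalarized(w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
```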

  2. On improving storm surge forecasting using an adjoint optimal technique

    NASA Astrophysics Data System (ADS)

    Li, Yineng; Peng, Shiqiu; Yan, Jing; Xie, Lian

    2013-12-01

    A three-dimensional ocean model and its adjoint model are used to simultaneously optimize the initial conditions (IC) and the wind stress drag coefficient (Cd) for improving storm surge forecasting. To demonstrate the effect of this proposed method, a number of identical twin experiments (ITEs) with a prescription of different error sources and two real data assimilation experiments are performed. Results from both the idealized and real data assimilation experiments show that adjusting IC and Cd simultaneously achieves much greater improvement in storm surge forecasting than adjusting IC or Cd alone. A diagnosis of the dynamical balance indicates that adjusting IC only may introduce unrealistic oscillations outside the assimilation window, which can be suppressed by the adjustment of the wind stress when IC and Cd are adjusted simultaneously. Therefore, it is recommended to simultaneously adjust IC and Cd to improve storm surge forecasting using an adjoint technique.

  3. Optimal technique for maximal forward rotating vaults in men's gymnastics.

    PubMed

    Hiley, Michael J; Jackson, Monique I; Yeadon, Maurice R

    2015-08-01

    In vaulting, a gymnast must generate sufficient linear and angular momentum during the approach and table contact to complete the rotational requirements in the post-flight phase. This study investigated the optimization of table touchdown conditions and table contact technique for the maximization of rotation potential for forwards rotating vaults. A planar seven-segment torque-driven computer simulation model of the contact phase in vaulting was evaluated by varying joint torque activation time histories to match three performances of a handspring double somersault vault by an elite gymnast. The closest matching simulation was used as a starting point to maximize post-flight rotation potential (the product of angular momentum and flight time) for a forwards rotating vault. It was found that the maximized rotation potential was sufficient to produce a handspring double piked somersault vault. The corresponding optimal touchdown configuration exhibited hip flexion in contrast to the hyperextended configuration required for maximal height. Increasing touchdown velocity and angular momentum led to additional post-flight rotation potential. By increasing the horizontal velocity at table touchdown, within limits obtained from recorded performances, the handspring double somersault tucked with one and a half twists, and the handspring triple somersault tucked became theoretically possible. PMID:26026290

  4. Optimal exposure techniques for iodinated contrast enhanced breast CT

    NASA Astrophysics Data System (ADS)

    Glick, Stephen J.; Makeev, Andrey

    2016-03-01

    Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need of improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task performance based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thickness. Results indicated many kVp spectra/filter combinations can improve performance over currently used x-ray spectra.
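
    For a linear, shift-invariant system and a signal-known-exactly task, the prewhitening ideal-observer SNR takes the standard discrete form d'² = Σ_f |ΔS(f)|² / NPS(f), where ΔS is the expected difference-signal spectrum and NPS the noise power spectrum. A minimal sketch with hypothetical bin values (not the parallel-cascade detector model of the study):

```python
import math

def ideal_observer_dprime(delta_signal_ft, nps):
    """Prewhitening ideal-observer SNR:
    d'^2 = sum over frequency bins of |dS(f)|^2 / NPS(f)."""
    d2 = sum(abs(s) ** 2 / n for s, n in zip(delta_signal_ft, nps))
    return math.sqrt(d2)

# Hypothetical 4-bin example: difference-signal spectrum and noise power
delta_s = [4.0, 2.0, 1.0, 0.5]
nps     = [1.0, 0.5, 0.25, 0.25]
dprime = ideal_observer_dprime(delta_s, nps)   # sqrt(16 + 8 + 4 + 1)
```

    Scanning `dprime` over candidate spectra/filter settings (with the corresponding ΔS and NPS for each) is the task-based comparison the abstract describes.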

  5. A review of hemorheology: Measuring techniques and recent advances

    NASA Astrophysics Data System (ADS)

    Sousa, Patrícia C.; Pinho, Fernando T.; Alves, Manuel A.; Oliveira, Mónica S. N.

    2016-02-01

    Significant progress has been made over the years on the topic of hemorheology, not only in terms of the development of more accurate and sophisticated techniques, but also in terms of understanding the phenomena associated with blood components, their interactions and impact upon blood properties. The rheological properties of blood are strongly dependent on the interactions and mechanical properties of red blood cells, and a variation of these properties can bring further insight into the human health state and can be an important parameter in clinical diagnosis. In this article, we provide both a reference for hemorheological research and a resource regarding the fundamental concepts in hemorheology. This review is aimed at those starting in the field of hemodynamics, where blood rheology plays a significant role, but also at those in search of the most up-to-date findings (both qualitative and quantitative) in hemorheological measurements and novel techniques used in this context, including technical advances under more extreme conditions such as in large amplitude oscillatory shear flow or under extensional flow, which impose large deformations comparable to those found in the microcirculatory system and in diseased vessels. Given the impressive rate of increase in the available knowledge on blood flow, this review is also intended to identify areas where current knowledge is still incomplete, and which have the potential for new, exciting and useful research. We also discuss the most important parameters that can lead to an alteration of blood rheology, and which as a consequence can have a significant impact on the normal physiological behavior of blood.

  6. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    SciTech Connect

    Sorge, J.N.; Menzies, B.; Smouse, S.M.; Stallings, J.W.

    1995-09-01

    This technology project demonstrates advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control/optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.

  7. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, global warming and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters and state-of-the-art numerical analysis tools.
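
    A scalar Kalman filter tracking a slowly varying forecast bias is the classic post-processing use hinted at above; the sketch below is that standard textbook filter, not the information-geometry extensions of the paper. The random-walk state model, noise variances, and synthetic error series are illustrative assumptions.

```python
import random

random.seed(0)

def kalman_bias_filter(errors, q=0.01, r=1.0):
    """Scalar Kalman filter tracking a slowly varying forecast bias.
    State model:  b_k = b_{k-1} + w,  w ~ N(0, q)
    Observation:  e_k = b_k + v,      v ~ N(0, r)
    where e_k = forecast - observation at step k."""
    b, p = 0.0, 1.0          # initial bias estimate and its variance
    estimates = []
    for e in errors:
        p += q                       # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        b += k * (e - b)             # update with the innovation
        p *= (1.0 - k)
        estimates.append(b)
    return estimates

# Synthetic forecast errors: true bias 1.5 plus unit-variance noise
true_bias = 1.5
errors = [true_bias + random.gauss(0.0, 1.0) for _ in range(400)]
estimate = kalman_bias_filter(errors)[-1]   # settles near the true bias
```

    Subtracting the running bias estimate from each new forecast is the bias-elimination step; the same recursion generalizes to vector states for the more elaborate filters mentioned in the abstract.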

  8. Removing baseline flame's spectrum by using advanced recovering spectrum techniques.

    PubMed

    Arias, Luis; Sbarbaro, Daniel; Torres, Sergio

    2012-09-01

    In this paper, a novel automated algorithm to estimate and remove the continuous baseline from measured flame spectra is proposed. The algorithm estimates the continuous background based on previous information obtained from a learning database of continuous flame spectra. Then, the discontinuous flame emission is calculated by subtracting the estimated continuous baseline from the measured spectrum. The key assumption underlying the learning database is that continuous flame emission is predominant in the sooty regions, in the absence of discontinuous radiation. The proposed algorithm was tested using natural gas and bio-oil flame spectra at different combustion conditions, and the goodness-of-fit coefficient (GFC) quality metric was used to quantify the performance in the estimation process. Additionally, the commonly used first derivative method (FDM) for baseline removal was applied to the same testing spectra in order to compare and evaluate the proposed technique. The achieved results show that the proposed method is a very attractive tool for designing advanced combustion monitoring strategies of discontinuous emissions. PMID:22945158
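
    One simple way to realize the idea (a hedged sketch, not the authors' algorithm) is to least-squares fit a learned continuous reference spectrum to the measured spectrum over baseline-dominated bins and subtract it; the 8-bin spectrum and line position below are invented for illustration.

```python
def estimate_baseline_scale(measured, reference, baseline_idx):
    """Least-squares scale a such that measured ~ a * reference over the
    bins where emission is purely continuous (known from the database)."""
    num = sum(measured[i] * reference[i] for i in baseline_idx)
    den = sum(reference[i] ** 2 for i in baseline_idx)
    return num / den

def remove_baseline(measured, reference, baseline_idx):
    a = estimate_baseline_scale(measured, reference, baseline_idx)
    return [m - a * r for m, r in zip(measured, reference)]

# Hypothetical 8-bin spectrum: a continuous shape from the learning
# database, plus one discontinuous emission line at bin 4
reference = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
line      = [0.0, 0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
measured  = [2.0 * r + l for r, l in zip(reference, line)]

# Fit the scale only on bins free of discontinuous radiation
clean = remove_baseline(measured, reference, baseline_idx=[0, 1, 2, 3, 6, 7])
```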

  9. Nanocrystalline materials: recent advances in crystallographic characterization techniques

    PubMed Central

    Ringe, Emilie

    2014-01-01

    Most properties of nanocrystalline materials are shape-dependent, providing their exquisite tunability in optical, mechanical, electronic and catalytic properties. An example of the former is localized surface plasmon resonance (LSPR), the coherent oscillation of conduction electrons in metals that can be excited by the electric field of light; this resonance frequency is highly dependent on both the size and shape of a nanocrystal. An example of the latter is the marked difference in catalytic activity observed for different Pd nanoparticles. Such examples highlight the importance of particle shape in nanocrystalline materials and their practical applications. However, one may ask ‘how are nanoshapes created?’, ‘how does the shape relate to the atomic packing and crystallography of the material?’, ‘how can we control and characterize the external shape and crystal structure of such small nanocrystals?’. This feature article aims to give the reader an overview of important techniques, concepts and recent advances related to these questions. Nucleation, growth and how seed crystallography influences the final synthesis product are discussed, followed by shape prediction models based on seed crystallography and thermodynamic or kinetic parameters. The crystallographic implications of epitaxy and orientation in multilayered, core-shell nanoparticles are overviewed, and, finally, the development and implications of novel, spatially resolved analysis tools are discussed. PMID:25485133

  10. Pediatric Cardiopulmonary Resuscitation: Advances in Science, Techniques, and Outcomes

    PubMed Central

    Topjian, Alexis A.; Berg, Robert A.; Nadkarni, Vinay M.

    2009-01-01

    More than 25% of children survive to hospital discharge after in-hospital cardiac arrests, and 5% to 10% survive after out-of-hospital cardiac arrests. This review of pediatric cardiopulmonary resuscitation addresses the epidemiology of pediatric cardiac arrests, mechanisms of coronary blood flow during cardiopulmonary resuscitation, the 4 phases of cardiac arrest resuscitation, appropriate interventions during each phase, special resuscitation circumstances, extracorporeal membrane oxygenation cardiopulmonary resuscitation, and quality of cardiopulmonary resuscitation. The key elements of pathophysiology that impact and match the timing, intensity, duration, and variability of the hypoxic-ischemic insult to evidence-based interventions are reviewed. Exciting discoveries in basic and applied-science laboratories are now relevant for specific subpopulations of pediatric cardiac arrest victims and circumstances (eg, ventricular fibrillation, neonates, congenital heart disease, extracorporeal cardiopulmonary resuscitation). Improving the quality of interventions is increasingly recognized as a key factor for improving outcomes. Evolving training strategies include simulation training, just-in-time and just-in-place training, and crisis-team training. The difficult issue of when to discontinue resuscitative efforts is addressed. Outcomes from pediatric cardiac arrests are improving. Advances in resuscitation science and state-of-the-art implementation techniques provide the opportunity for further improvement in outcomes among children after cardiac arrest. PMID:18977991

  11. Development of advanced strain diagnostic techniques for reactor environments.

    SciTech Connect

    Fleming, Darryn D.; Holschuh, Thomas Vernon,; Miller, Timothy J.; Hall, Aaron Christopher; Urrea, David Anthony,; Parma, Edward J.,

    2013-02-01

    The following research is operated as a Laboratory Directed Research and Development (LDRD) initiative at Sandia National Laboratories. The long-term goals of the program include sophisticated diagnostics of advanced fuels testing for nuclear reactors for the Department of Energy (DOE) Gen IV program, with the future capability to provide real-time measurement of strain in fuel rod cladding during operation in situ at any research or power reactor in the United States. By quantifying the stress and strain in fuel rods, it is possible to significantly improve fuel rod design, and consequently, to improve the performance and lifetime of the cladding. During the past year of this program, two sets of experiments were performed: small-scale tests to ensure reliability of the gages, and reactor pulse experiments involving the most viable samples in the Annular Core Research Reactor (ACRR), located onsite at Sandia. Strain measurement techniques that can provide useful data in the extreme environment of a nuclear reactor core are needed to characterize nuclear fuel rods. This report documents the progression of solutions to this issue that were explored for feasibility in FY12 at Sandia National Laboratories, Albuquerque, NM.

  12. System Design Techniques for Reducing the Power Requirements of Advanced life Support Systems

    NASA Technical Reports Server (NTRS)

    Finn, Cory; Levri, Julie; Pawlowski, Chris; Crawford, Sekou; Luna, Bernadette (Technical Monitor)

    2000-01-01

    The high power requirement associated with overall operation of regenerative life support systems is a critical technological challenge. Optimization of individual processors alone will not be sufficient to produce an optimized system. System studies must be used in order to improve the overall efficiency of life support systems. Current research efforts at NASA Ames Research Center are aimed at developing approaches for reducing system power and energy usage in advanced life support systems. System energy integration and energy reuse techniques are being applied to advanced life support, in addition to advanced control methods for efficient distribution of power and thermal resources. An overview of current results of this work will be presented. The development of integrated system designs that reuse waste heat from sources such as crop lighting and solid waste processing systems will reduce overall power and cooling requirements. Using an energy integration technique known as Pinch analysis, system heat exchange designs are being developed that match hot and cold streams according to specific design principles. For various designs, the potential savings for power, heating and cooling are being identified and quantified. The use of state-of-the-art control methods for distribution of resources, such as system cooling water or electrical power, will also reduce overall power and cooling requirements. Control algorithms are being developed which dynamically adjust the use of system resources by the various subsystems and components in order to achieve an overall goal, such as smoothing of power usage and/or heat rejection profiles, while maintaining adequate reserves of food, water, oxygen, and other consumables, and preventing excessive build-up of waste materials. Reductions in the peak loading of the power and thermal systems will lead to lower overall requirements. Computer simulation models are being used to test various control system designs.

  13. Technique to optimize magnetic response of gelatin coated magnetic nanoparticles.

    PubMed

    Parikh, Nidhi; Parekh, Kinnari

    2015-07-01

    The paper describes the optimization of magnetic response for a highly stable, bio-functionalized magnetic nanoparticle dispersion. The concentration of gelatin during in situ co-precipitation synthesis was varied over 8, 23 and 48 mg/mL to optimize magnetic properties. This variation results in a change in crystallite size from 10.3 to 7.8 ± 0.1 nm. TEM measurement of the G3 sample shows highly crystalline spherical nanoparticles with a mean diameter of 7.2 ± 0.2 nm and a diameter distribution (σ) of 0.27. FTIR spectra show a shift of 22 cm(-1) at the C=O stretch, with absence of N-H stretching, confirming the chemical binding of gelatin on the magnetic nanoparticles. The lone-pair electrons of the amide group explain the mechanism of binding. TGA shows 32.8-25.2% weight loss at 350 °C, substantiating decomposition of the chemically bound gelatin. The magnetic response shows that at 8 mg/mL gelatin, the initial susceptibility and saturation magnetization are highest. The cytotoxicity of the G3 sample was assessed in Normal Rat Kidney Epithelial Cells (NRK line) by MTT assay. Results show an increase in viability at all concentrations, indicating a possible stimulating action of these particles in the nontoxic range. This shows the potential of the technique for biological applications, as the coated particles are (i) superparamagnetic, (ii) highly stable in physiological media, (iii) able to carry other drugs via the free functional groups of gelatin, and (iv) non-toxic. PMID:26152511

  14. Advanced techniques and technology for efficient data storage, access, and transfer

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Miller, Warner

    1991-01-01

    Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second-stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
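
    The two-stage structure described above, a predictive preprocessor feeding an adaptive entropy coder, can be illustrated with a delta predictor, a zigzag map to the non-negative-integer "standard form", and a Rice code. The fixed parameter k below is an illustrative simplification of the adaptive stage, which would select k per block; coding the large first residual directly is also a simplification.

```python
def zigzag(d):
    # Map signed prediction residuals to non-negative integers
    return 2 * d if d >= 0 else -2 * d - 1

def unzigzag(m):
    return m // 2 if m % 2 == 0 else -(m + 1) // 2

def rice_encode(values, k):
    """Rice code with parameter k: quotient in unary, remainder in k bits."""
    bits = []
    prev = 0
    for v in values:
        m = zigzag(v - prev)          # predictive preprocessing: delta
        prev = v
        q, r = m >> k, m & ((1 << k) - 1)
        bits.extend([1] * q + [0])                     # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))  # remainder
    return bits

def rice_decode(bits, k, count):
    out, prev, i = [], 0, 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q, i = q + 1, i + 1
        i += 1                        # skip the terminating 0
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        prev += unzigzag((q << k) | r)
        out.append(prev)
    return out

samples = [100, 102, 101, 105, 110, 108, 108, 107]
encoded = rice_encode(samples, k=2)
decoded = rice_decode(encoded, k=2, count=len(samples))   # lossless round trip
```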

  15. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    A general technique uses a modified version of an existing method termed the pattern search technique. A new procedure called the parallel move strategy permits the pattern search technique to be used with problems involving a constraint.
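
    A minimal sketch of the classic Hooke-Jeeves pattern search that the abstract builds on (the parallel move strategy itself is not reproduced here; the toy objective is an illustrative unconstrained quadratic):

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Hooke-Jeeves pattern search: exploratory moves on each coordinate,
    then a pattern move that repeats the last successful direction."""
    def explore(base, s):
        pt = list(base)
        for i in range(len(pt)):
            for d in (s, -s):
                trial = list(pt)
                trial[i] += d
                if f(trial) < f(pt):
                    pt = trial
                    break
        return pt

    best = list(x)
    while step > tol:
        new = explore(best, step)
        if f(new) < f(best):
            # Pattern move: jump along (new - best) and re-explore there
            pattern = [2 * n - b for n, b in zip(new, best)]
            candidate = explore(pattern, step)
            best = candidate if f(candidate) < f(new) else new
        else:
            step *= 0.5   # no improvement: refine the mesh
    return best

# Toy problem: minimize a shifted quadratic with optimum at (3, -2)
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
sol = pattern_search(f, [0.0, 0.0])
```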

  16. An optimal merging technique for high-resolution precipitation products: OPTIMAL MERGING OF PRECIPITATION METHOD

    SciTech Connect

    Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.

    2011-04-01

    Precipitation products are currently available from various sources at higher spatial and temporal resolution than at any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve the information content by minimizing these issues. However, precipitation data merging poses challenges of scale mismatch and of accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging-weight optimization, involving performance tracing based on Bayesian statistics and trend analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation System (NLDAS), the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying better data sources and allocating higher priority to them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.
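
    A stripped-down version of weighted merging can be sketched with inverse-error-variance weights (an illustrative stand-in for OMP's Bayesian performance-traced weights); the estimates and variances below are hypothetical numbers, not results from the paper.

```python
def merge_weights(error_variances):
    """Inverse-error-variance weights: w_i proportional to 1/var_i,
    normalized to sum to 1, so better sources get higher priority."""
    inv = [1.0 / v for v in error_variances]
    s = sum(inv)
    return [w / s for w in inv]

def merge(estimates, weights):
    return sum(e * w for e, w in zip(estimates, weights))

# Hypothetical precipitation estimates (mm/h) from three products with
# previously assessed error variances
estimates = [4.0, 5.0, 6.5]
variances = [1.0, 0.25, 4.0]
w = merge_weights(variances)
merged = merge(estimates, w)

# For independent errors, the merged error variance 1/sum(1/var_i) is
# smaller than that of the best single source.
merged_variance = 1.0 / sum(1.0 / v for v in variances)
```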

  17. Optimization of fast dissolving etoricoxib tablets prepared by sublimation technique.

    PubMed

    Patel, D M; Patel, M M

    2008-01-01

    The purpose of this investigation was to develop fast dissolving tablets of etoricoxib. Granules containing etoricoxib, menthol, crospovidone, aspartame and mannitol were prepared by a wet granulation technique. Menthol was sublimed from the granules by exposing the granules to vacuum. The porous granules were then compressed into tablets. Alternatively, tablets were first prepared and later exposed to vacuum. The tablets were evaluated for percentage friability and disintegration time. A 3(2) full factorial design was applied to investigate the combined effect of 2 formulation variables: amount of menthol and amount of crospovidone. The results of multiple regression analysis indicated that, for obtaining fast dissolving tablets, an optimum amount of menthol and a higher percentage of crospovidone should be used. Surface response plots are also presented to graphically represent the effect of the independent variables on the percentage friability and disintegration time. The validity of the generated mathematical model was tested by preparing a checkpoint batch. Sublimation of menthol from tablets resulted in rapid disintegration as compared with the tablets prepared from granules that were exposed to vacuum. The optimized tablet formulation was compared with conventional marketed tablets for percentage drug dissolved in 30 min (Q(30)) and dissolution efficiency after 30 min (DE(30)). From the results, it was concluded that fast dissolving tablets with improved etoricoxib dissolution could be prepared by sublimation of tablets containing a suitable subliming agent. PMID:20390084

  18. Optimization of Fast Dissolving Etoricoxib Tablets Prepared by Sublimation Technique

    PubMed Central

    Patel, D. M.; Patel, M. M.

    2008-01-01

    The purpose of this investigation was to develop fast dissolving tablets of etoricoxib. Granules containing etoricoxib, menthol, crospovidone, aspartame and mannitol were prepared by a wet granulation technique. Menthol was sublimed from the granules by exposing the granules to vacuum. The porous granules were then compressed into tablets. Alternatively, tablets were first prepared and later exposed to vacuum. The tablets were evaluated for percentage friability and disintegration time. A 3(2) full factorial design was applied to investigate the combined effect of 2 formulation variables: amount of menthol and amount of crospovidone. The results of multiple regression analysis indicated that, for obtaining fast dissolving tablets, an optimum amount of menthol and a higher percentage of crospovidone should be used. Surface response plots are also presented to graphically represent the effect of the independent variables on the percentage friability and disintegration time. The validity of the generated mathematical model was tested by preparing a checkpoint batch. Sublimation of menthol from tablets resulted in rapid disintegration as compared with the tablets prepared from granules that were exposed to vacuum. The optimized tablet formulation was compared with conventional marketed tablets for percentage drug dissolved in 30 min (Q30) and dissolution efficiency after 30 min (DE30). From the results, it was concluded that fast dissolving tablets with improved etoricoxib dissolution could be prepared by sublimation of tablets containing a suitable subliming agent. PMID:20390084
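    The factorial analysis above amounts to fitting a quadratic response surface by least squares over the nine runs of a two-factor, three-level design. A sketch with invented responses (the paper's data are not in the abstract):

```python
import numpy as np
from itertools import product

# Coded levels -1, 0, +1 for X1 = amount of menthol and
# X2 = amount of crospovidone: the 3(2) = 9 runs of the full factorial.
runs = np.array(list(product([-1, 0, 1], repeat=2)), dtype=float)
X1, X2 = runs.T

# Illustrative disintegration times in seconds (NOT the paper's data).
y = np.array([60, 48, 40, 50, 38, 30, 44, 32, 24], dtype=float)

# Quadratic response surface:
# y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1**2 + b22*X2**2
A = np.column_stack([np.ones_like(y), X1, X2, X1 * X2, X1**2, X2**2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)
# b[1] and b[2] are the main effects of menthol and crospovidone.
```

    Because the coded design is orthogonal, each main effect is estimated independently of the other model terms.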

  19. Weldability and joining techniques for advanced fossil energy system alloys

    SciTech Connect

    Lundin, C.D.; Qiao, C.Y.P.; Liu, W.; Yang, D.; Zhou, G.; Morrison, M.

    1998-05-01

    The efforts address the basic understanding of the weldability and fabricability of the advanced high temperature alloys necessary to effect increases in the efficiency of the next generation of Fossil Energy Power Plants. The effort was divided into three tasks: Task 1 dealt with the welding and fabrication behavior of 310HCbN (HR3C); Task 2 detailed studies aimed at understanding the weldability of a newly developed 310TaN high temperature stainless steel (a modification of 310 stainless); and Task 3 addressed the cladding of austenitic tubing with iron aluminide using the GTAW process. Task 1 consisted of microstructural studies on 310HCbN and the development of a Tube Weldability test, which has applications to production welding techniques as well as laboratory weldability assessments. In addition, ex-service 310HCbN which showed fireside erosion and cracking at the attachment weld locations was evaluated. Task 2 addressed the behavior of the newly developed 310TaN modification of standard 310 stainless steel and showed that the weldability was excellent and that the sensitization potential was minimal for normal welding and fabrication conditions. The microstructural evolution during elevated temperature testing was characterized and the second-phase particles evolved upon aging were identified. Task 3 detailed the investigation undertaken to clad 310HCbN tubing with iron aluminide and developed the welding conditions necessary to provide a crack-free cladding. The work showed that both a preheat and a post-heat were necessary for crack-free deposits, and the effect of a third element on the cracking potential was defined together with the effect of the aluminum level for optimum weldability.

  20. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, new challenges arise for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high-order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher-order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance tradeoff. A high-order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance tradeoff to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
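    The bias-variance tradeoff described above can be demonstrated with a small simulation (illustrative numbers, not the paper's): a higher polynomial model order shrinks the per-wafer fit residual (bias) but inflates the wafer-to-wafer spread of the fitted terms (variance).

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 41)          # measurement positions across a wafer (a.u.)
true = 2.0 + 1.5 * x                # underlying low-order overlay signature

def residual_and_spread(order, n_wafers=50, noise=0.3):
    """Mean single-wafer fit residual (bias proxy) and wafer-to-wafer
    spread of the leading model term (variance proxy)."""
    resid, lead = [], []
    for _ in range(n_wafers):
        y = true + noise * rng.standard_normal(x.size)   # noisy wafer
        c = np.polyfit(x, y, order)
        resid.append(np.sqrt(np.mean((np.polyval(c, x) - y) ** 2)))
        lead.append(c[0])           # leading coefficient of the model
    return np.mean(resid), np.std(lead)

r_low, s_low = residual_and_spread(1)    # low-order model
r_high, s_high = residual_and_spread(9)  # high-order model
# High order: smaller residual on each wafer, larger term spread across wafers.
```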

  1. Investigation of joining techniques for advanced austenitic alloys

    SciTech Connect

    Lundin, C.D.; Qiao, C.Y.P.; Kikuchi, Y.; Shi, C.; Gill, T.P.S.

    1991-05-01

    Modified Alloys 316 and 800H, designed for high temperature service, have been developed at Oak Ridge National Laboratory. Assessment of the weldability of the advanced austenitic alloys has been conducted at the University of Tennessee. Four aspects of weldability of the advanced austenitic alloys were included in the investigation.

  2. Advancement of an Infra-Red Technique for Whole-Field Concentration Measurements in Fluidized Beds

    PubMed Central

    Medrano, Jose A.; de Nooijer, Niek C. A.; Gallucci, Fausto; van Sint Annaland, Martin

    2016-01-01

    For a better understanding and description of the mass transport phenomena in dense multiphase gas-solids systems such as fluidized bed reactors, detailed and quantitative experimental data on the concentration profiles are required, which demands advanced non-invasive concentration monitoring techniques with a high spatial and temporal resolution. A novel technique based on the selective detection of a gas component in a gas mixture using infra-red properties has been further developed. The first-stage development was carried out using a very small sapphire reactor and CO2 as tracer gas. Although the measuring principle was demonstrated, real application was hindered by the small reactor dimensions, related to the high cost and difficult handling of large sapphire plates. In this study, a new system has been developed that allows working at much larger scales and yet with higher resolution. In the new system, propane is used as tracer gas and quartz as reactor material. A thorough optimization and calibration of the technique is presented, which is subsequently applied for whole-field measurements with high temporal resolution. The developed technique allows the use of a relatively inexpensive configuration for the measurement of detailed concentration fields and can be applied to a large variety of important chemical engineering topics. PMID:26927127

  3. Advancement of an Infra-Red Technique for Whole-Field Concentration Measurements in Fluidized Beds.

    PubMed

    Medrano, Jose A; de Nooijer, Niek C A; Gallucci, Fausto; van Sint Annaland, Martin

    2016-01-01

    For a better understanding and description of the mass transport phenomena in dense multiphase gas-solids systems such as fluidized bed reactors, detailed and quantitative experimental data on the concentration profiles are required, which demands advanced non-invasive concentration monitoring techniques with a high spatial and temporal resolution. A novel technique based on the selective detection of a gas component in a gas mixture using infra-red properties has been further developed. The first-stage development was carried out using a very small sapphire reactor and CO₂ as tracer gas. Although the measuring principle was demonstrated, real application was hindered by the small reactor dimensions, related to the high cost and difficult handling of large sapphire plates. In this study, a new system has been developed that allows working at much larger scales and yet with higher resolution. In the new system, propane is used as tracer gas and quartz as reactor material. A thorough optimization and calibration of the technique is presented, which is subsequently applied for whole-field measurements with high temporal resolution. The developed technique allows the use of a relatively inexpensive configuration for the measurement of detailed concentration fields and can be applied to a large variety of important chemical engineering topics. PMID:26927127

  4. Integrating advanced materials simulation techniques into an automated data analysis workflow at the Spallation Neutron Source

    SciTech Connect

    Borreguero Calvo, Jose M; Campbell, Stuart I; Delaire, Olivier A; Doucet, Mathieu; Goswami, Monojoy; Hagen, Mark E; Lynch, Vickie E; Proffen, Thomas E; Ren, Shelly; Savici, Andrei T; Sumpter, Bobby G

    2014-01-01

    This presentation will review developments on the integration of advanced modeling and simulation techniques into the analysis step of experimental data obtained at the Spallation Neutron Source. A workflow framework for the purpose of refining molecular mechanics force-fields against quasi-elastic neutron scattering data is presented. The workflow combines software components to submit model simulations to remote high performance computers, a message broker interface for communications between the optimizer engine and the simulation production step, and tools to convolve the simulated data with the experimental resolution. A test application shows the correction to a popular fixed-charge water model in order to account for polarization effects due to the presence of solvated ions. Future enhancements to the refinement workflow are discussed. This work is funded through the DOE Center for Accelerating Materials Modeling.

  5. Advanced Communication System Time domain Modeling Techniques Study (ASYSTD)

    NASA Technical Reports Server (NTRS)

    Fashano, M.; Gagliardi, R. M.; Sullivan, J. A.

    1972-01-01

    ASYSTD activities are presented dealing with signal to noise ratio and bit error rate measurement, distortion measurement, optimization feasibility, and the definition and systems design implications of mean square error for nonideal, orthogonally encoded channels.

  6. Dynamic modeling and optimal joint torque coordination of advanced robotic systems

    NASA Astrophysics Data System (ADS)

    Kang, Hee-Jun

    The development of an efficient dynamic modeling algorithm and the subsequent optimal joint input load coordination of advanced robotic systems for industrial application is documented. A closed-form dynamic modeling algorithm for general closed-chain robotic linkage systems is presented. The algorithm is based on the transfer of system dependence from a set of open-chain Lagrangian coordinates to any desired system generalized coordinate set of the closed chain. Three different techniques for evaluation of the kinematic closed-chain constraints allow the representation of the dynamic modeling parameters in terms of system generalized coordinates and place no restriction on kinematic redundancy. The total computational requirement of the closed-chain system model is largely dependent on the computation required for the dynamic model of an open kinematic chain. In order to improve computational efficiency, an existing open-chain KIC-based dynamic formulation is modified by the introduction of the generalized augmented body concept. This algorithm allows a 44 percent computational saving over the current optimized one (O(N^4), 5995 when N = 6). As a means of resolving redundancies in advanced robotic systems, local joint torque optimization is applied to use actuator power effectively while avoiding joint torque limits. The stability problem in local joint torque optimization schemes is eliminated by using fictitious dissipating forces which act in the necessary null space. The performance index representing the global torque norm is shown to be satisfactory. In addition, the resulting joint motion trajectory becomes conservative, after a transient stage, for repetitive cyclic end-effector trajectories. The effectiveness of the null space damping method is shown. The modular robot, which is built of well defined structural modules from a finite-size inventory and is controlled by one general computer system, is another class of evolving
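    A common minimal form of such null-space schemes (not necessarily the thesis's exact formulation) adds a damping term in the null space of the task Jacobian, so self-motion decays without disturbing the end-effector task:

```python
import numpy as np

def resolve_rates(J, xdot, qdot, damping=1.0):
    """Minimum-norm joint rates for the task plus a null-space damping
    term that dissipates self-motion; a sketch of the 'fictitious
    dissipating forces acting in the null space' idea.
    J: m x n task Jacobian with n > m (kinematically redundant arm)."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J     # null-space projector
    return J_pinv @ xdot - damping * (N @ qdot)

# 3-joint arm, 2-D task: the third joint carries pure self-motion,
# which the damping term drives toward zero.
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
xdot = np.array([1.0, 2.0])
qdot = np.array([0.0, 0.0, 5.0])
qdot_new = resolve_rates(J, xdot, qdot)
# J @ qdot_new still equals xdot: the task is unaffected by the damping.
```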

  7. Optimization techniques in molecular structure and function elucidation.

    PubMed

    Sahinidis, Nikolaos V

    2009-12-01

    This paper discusses recent optimization approaches to the protein side-chain prediction problem, protein structural alignment, and molecular structure determination from X-ray diffraction measurements. The machinery employed to solve these problems has included algorithms from linear programming, dynamic programming, combinatorial optimization, and mixed-integer nonlinear programming. Many of these problems are purely continuous in nature. Yet, to this date, they have been approached mostly via combinatorial optimization algorithms that are applied to discrete approximations. The main purpose of the paper is to offer an introduction and motivate further systems approaches to these problems. PMID:20160866

  8. Recent advances in sample preparation techniques for effective bioanalytical methods.

    PubMed

    Kole, Prashant Laxman; Venkatesh, Gantala; Kotecha, Jignesh; Sheshala, Ravi

    2011-01-01

    This paper reviews the recent developments in bioanalysis sample preparation techniques and gives an update on basic principles, theory, applications and possibilities for automation, and a comparative discussion on the advantages and limitations of each technique. Conventional liquid-liquid extraction (LLE), protein precipitation (PP) and solid-phase extraction (SPE) techniques are now considered methods of the past. The last decade has witnessed a rapid development of novel sample preparation techniques in bioanalysis. Developments in SPE techniques such as selective sorbents and in the overall approach to SPE, such as hybrid SPE and molecularly imprinted polymer SPE, have been addressed. Considerable literature has been published in the area of solid-phase micro-extraction and its different versions, e.g. stir bar sorptive extraction, and their application in the development of selective and sensitive bioanalytical methods. Techniques such as dispersive solid-phase extraction, disposable pipette extraction and micro-extraction by packed sorbent offer a variety of extraction phases and provide unique advantages to bioanalytical methods. On-line SPE utilizing column-switching techniques is rapidly gaining acceptance in bioanalytical applications. PP sample preparation techniques such as PP filter plates/tubes offer many advantages, like removal of phospholipids and proteins in plasma/serum. Newer approaches to conventional LLE techniques (salting-out LLE) are also covered in this review article. PMID:21154887

  9. Recent advances in integrated multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.

    1992-01-01

    A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.

  10. Recent advances in microscopic techniques for visualizing leukocytes in vivo

    PubMed Central

    Jain, Rohit; Tikoo, Shweta; Weninger, Wolfgang

    2016-01-01

    Leukocytes are inherently motile and interactive cells. Recent advances in intravital microscopy approaches have enabled a new vista of their behavior within intact tissues in real time. This brief review summarizes the developments enabling the tracking of immune responses in vivo. PMID:27239292

  11. Recent advances in microscopic techniques for visualizing leukocytes in vivo.

    PubMed

    Jain, Rohit; Tikoo, Shweta; Weninger, Wolfgang

    2016-01-01

    Leukocytes are inherently motile and interactive cells. Recent advances in intravital microscopy approaches have enabled a new vista of their behavior within intact tissues in real time. This brief review summarizes the developments enabling the tracking of immune responses in vivo. PMID:27239292

  12. Bricklaying Curriculum: Advanced Bricklaying Techniques. Instructional Materials. Revised.

    ERIC Educational Resources Information Center

    Turcotte, Raymond J.; Hendrix, Laborn J.

    This curriculum guide is designed to assist bricklaying instructors in providing performance-based instruction in advanced bricklaying. Included in the first section of the guide are units on customized or architectural masonry units; glass block; sills, lintels, and copings; and control (expansion) joints. The next two units deal with cut,…

  13. Advanced NDE techniques for quantitative characterization of aircraft

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S.; Winfree, William P.

    1990-01-01

    Recent advances in nondestructive evaluation (NDE) at NASA Langley Research Center and their applications that have resulted in quantitative assessment of material properties based on thermal and ultrasonic measurements are reviewed. Specific applications include ultrasonic determination of bolt tension, ultrasonic and thermal characterization of bonded layered structures, characterization of composite materials, and disbonds in aircraft skins.

  14. Principled negotiation and distributed optimization for advanced air traffic management

    NASA Astrophysics Data System (ADS)

    Wangermann, John Paul

    Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents with different, competing interests but a shared interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated, leading to iterative optimization of the system. Principled negotiation is well suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check that the proposal maintains required aircraft separation. If it does, the proposal is either accepted or passed for approval to agents whose trajectories change as part of the proposal. Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring

  15. Backscattered Electron Microscopy as an Advanced Technique in Petrography.

    ERIC Educational Resources Information Center

    Krinsley, David Henry; Manley, Curtis Robert

    1989-01-01

    Three uses of this method with sandstone, desert varnish, and granite weathering are described. Background information on this technique is provided. Advantages of this type of microscopy are stressed. (CW)

  16. A Secure Test Technique for Pipelined Advanced Encryption Standard

    NASA Astrophysics Data System (ADS)

    Shi, Youhua; Togawa, Nozomu; Yanagisawa, Masao; Ohtsuki, Tatsuo

    In this paper, we present a Design-for-Secure-Test (DFST) technique for pipelined AES to guarantee both the security and the test quality during testing. Unlike previous works, the proposed method can keep all the secrets inside and provide high test quality and fault diagnosis ability as well. Furthermore, the proposed DFST technique can significantly reduce test application time, test data volume, and test generation effort as additional benefits.

  17. Toward Optimized Bioclogging and Biocementation Through Combining Advanced Geophysical Monitoring and Reactive Transport Modeling Approaches

    NASA Astrophysics Data System (ADS)

    Hubbard, C. G.; Hubbard, S. S.; Wu, Y.; Surasani, V.; Ajo Franklin, J. B.; Commer, M.; Dou, S.; Kwon, T.; Li, L.; Fouke, B. W.; Coates, J. D.

    2012-12-01

    Bioclogging and biocementation offer exciting opportunities for solutions to diverse problems ranging from soil stabilization to microbially enhanced hydrocarbon recovery. The effectiveness of bioclogging and biocementation strategies is governed by processes and properties ranging from microbial metabolism at the submicron scale, to changes in pore geometry at the pore scale, to geological heterogeneities at the field scale. Optimization of these strategies requires advances in mechanistic reactive transport modeling and geophysical monitoring methodologies. Our research focuses on (i) performing laboratory experiments to refine understanding of reaction networks and to quantify changes in hydrological properties (e.g. permeability), the evolution of biominerals and geophysical responses (focusing on seismic and electrical techniques); (ii) developing and using a reactive transport simulator capable of predicting the induced metabolic processes to numerically explore how to optimize the desired effect; and (iii) using loosely coupled reactive transport and geophysical simulators to explore detectability and resolvability of induced bioclogging and biocementation processes at the field scale using time-lapse geophysical methods. Here we present examples of our research focused on three different microbially mediated methods to enhance hydrocarbon recovery through selective clogging of reservoir thief zones, including: (a) biopolymer clogging through dextran production; (b) biomineral clogging through iron oxide precipitation; and (c) biomineral clogging through carbonate precipitation. We will compare the utility of these approaches for enhancing hydrocarbon recovery and will describe the utility of geophysical methods to remotely monitor associated field treatments.

  18. Optimism, Social Support, and Mental Health Outcomes in Patients with Advanced Cancer

    PubMed Central

    Applebaum, Allison J.; Stein, Emma M.; Lord-Bessen, Jennifer; Pessin, Hayley; Rosenfeld, Barry; Breitbart, William

    2014-01-01

    Objective: Optimism and social support serve as protective factors against distress in medically ill patients. Very few studies have specifically explored the ways in which these variables interact to impact quality of life (QOL), particularly among patients with advanced cancer. The present study examined the role of optimism as a moderator of the relationship between social support and anxiety, depression, hopelessness, and QOL among patients with advanced cancer. Methods: Participants (N = 168) completed self-report assessments of psychosocial, spiritual, and physical well-being, including social support, optimism, hopelessness, depressive and anxious symptoms, and QOL. Hierarchical multiple regression analyses were conducted to determine the extent to which social support and optimism were associated with depressive and anxious symptomatology, hopelessness and QOL, and the potential role of optimism as a moderator of the relationship between social support and these variables. Results: Higher levels of optimism were significantly associated with fewer anxious and depressive symptoms, less hopelessness and better QOL. Higher levels of perceived social support were also significantly associated with better QOL. Additionally, optimism moderated the relationship between social support and anxiety, such that there was a strong negative association between social support and anxiety for participants with low optimism. Conclusions: This study highlights the importance of optimism and social support in the QOL of patients with advanced cancer. As such, interventions that attend to patients’ expectations for positive experiences and the expansion of social support should be the focus of future clinical and research endeavors. PMID:24123339
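    The moderation analysis described above comes down to an interaction term in a regression. A sketch on simulated data that mimics the reported pattern (support lowers anxiety mainly when optimism is low); the numbers are invented, not the study's:

```python
import numpy as np

def moderation_fit(support, optimism, outcome):
    """OLS fit of outcome ~ support + optimism + support*optimism;
    the interaction coefficient is the moderation test. Predictors
    are mean-centered so the main effects stay interpretable."""
    s = support - support.mean()
    o = optimism - optimism.mean()
    X = np.column_stack([np.ones_like(s), s, o, s * o])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta        # [intercept, b_support, b_optimism, b_interaction]

rng = np.random.default_rng(1)
support = rng.standard_normal(200)
optimism = rng.standard_normal(200)
# Support reduces anxiety only for low-optimism participants.
anxiety = (5 - support * (optimism < 0) - 0.5 * optimism
           + 0.3 * rng.standard_normal(200))
beta = moderation_fit(support, optimism, anxiety)
# A positive interaction term: support matters less as optimism rises.
```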

  19. Coal and Coal Constituent Studies by Advanced EMR Techniques.

    SciTech Connect

    Belford, R.L.; Clarkson, R.B.; Odintsov, B.; Ceroke, P.J.

    1997-09-30

    Advanced electronic magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, progress was made on a high frequency EMR system particularly appropriate for such studies and on low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles.

  20. Coal and char studies by advanced EMR techniques

    SciTech Connect

    Belford, R.L.; Clarkson, R.B.; Odintsov, B.M.

    1998-09-30

    Advanced magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, further progress was made on proton NMR and low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles. Effects of char particle size on water nuclear spin relaxation, T2, were measured.

  1. COAL AND COAL CONSTITUENT STUDIES BY ADVANCED EMR TECHNIQUES

    SciTech Connect

    R. Linn Belford; Robert B. Clarkson

    1997-03-28

    Advanced electronic magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, progress was made on setting up a separate high frequency EMR system particularly appropriate for such studies and exploring the use of low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles.

  2. Coal and char studies by advanced EMR techniques

    SciTech Connect

    Belford, R.L.; Clarkson, R.B.; Odintsov, B.M.

    1999-03-31

    Advanced magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, further progress was made on proton NMR and low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles. Effects of char particle size and type on water nuclear spin relaxation, T2, were measured and modeled.

  3. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1987-01-01

    "Optimization Techniques Applied to Passive Measures for In-Orbit Spacecraft Survivability" is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impacts at the lowest weight and cost.
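    The GP machinery can be sketched in miniature: after the substitution y = log x, a posynomial objective becomes convex, and even plain gradient descent finds its minimum. This toy unconstrained version only hints at the full constrained GP shield design the study performed.

```python
import numpy as np

def gp_minimize(c, A, iters=500, lr=0.05):
    """Minimize a posynomial sum_k c_k * prod_j x_j**a_kj over x > 0.
    The log substitution y = log x makes the objective convex in y,
    so normalized gradient descent suffices for this toy sketch."""
    c = np.asarray(c, dtype=float)
    A = np.asarray(A, dtype=float)         # A[k, j] = exponent a_kj
    y = np.full(A.shape[1], 0.5)           # start away from the optimum
    for _ in range(iters):
        terms = c * np.exp(A @ y)          # value of each posynomial term
        grad = A.T @ terms                 # gradient in y-space
        y -= lr * grad / terms.sum()       # normalized descent step
    return np.exp(y), (c * np.exp(A @ y)).sum()

# Classic 1-D example: minimize x + 1/x; the optimum is x = 1, value 2.
xopt, val = gp_minimize([1.0, 1.0], [[1.0], [-1.0]])
```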

  4. Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.

    2000-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements, including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. The costs of the direct and adjoint methods were compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize the topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995), which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local
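
    The cost advantage of the adjoint method mentioned above is that one extra linear solve yields the sensitivity of a response with respect to all design variables at once. A minimal sketch on a toy system (a diagonal "stiffness" matrix, so the solves are one-liners; not the EAL implementation):

```python
# Toy adjoint sensitivity: K(x) u = f with K = diag(x), response g = c . u.
# The adjoint method needs ONE extra solve (K^T lam = c) regardless of the
# number of design variables, versus one solve per variable for the direct
# method.
x = [2.0, 3.0, 5.0]          # design variables (diagonal stiffnesses)
f = [1.0, 1.0, 1.0]          # load vector
c = [1.0, 2.0, 0.5]          # response weights, g = sum(c_i * u_i)

u   = [fi / xi for fi, xi in zip(f, x)]   # forward solve: diag(x) u = f
lam = [ci / xi for ci, xi in zip(c, x)]   # adjoint solve: diag(x) lam = c

# dg/dx_i = -lam^T (dK/dx_i) u = -lam_i * u_i  (dK/dx_i = e_i e_i^T here)
adjoint_grad = [-li * ui for li, ui in zip(lam, u)]

# Finite-difference check of the adjoint gradient
h = 1e-6
fd_grad = []
g0 = sum(ci * ui for ci, ui in zip(c, u))
for i in range(len(x)):
    xp = list(x); xp[i] += h
    up = [fi / xi for fi, xi in zip(f, xp)]
    g1 = sum(ci * ui for ci, ui in zip(c, up))
    fd_grad.append((g1 - g0) / h)

print(adjoint_grad)
print(fd_grad)
```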

  5. Design of high speed proprotors using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Mccarthy, Thomas R.; Chattopadhyay, Aditi

    1992-01-01

    An integrated, multiobjective optimization procedure is developed for the design of high speed proprotors with the coupling of aerodynamic, dynamic, aeroelastic, and structural criteria. The objectives are to maximize propulsive efficiency in high speed cruise and rotor figure of merit in hover. Constraints are imposed on rotor blade aeroelastic stability in cruise and on total blade weight. Two different multiobjective formulation procedures, the minimum sum of beta (Min Σβ) and the Kreisselmeier-Steinhauser (K-S) function approaches, are used to formulate the two-objective optimization problems.
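
    The K-S function named above is a standard smooth envelope that folds several objectives or constraints into one differentiable function. A minimal sketch of its usual form (the draw-down factor rho and the sample values are illustrative):

```python
import math

# Kreisselmeier-Steinhauser (K-S) envelope over functions g_i, bounded by
#   max(g) <= KS(g) <= max(g) + ln(m)/rho
# so larger rho hugs the max more tightly while staying smooth in every g_i.
def ks(gs, rho=50.0):
    m = max(gs)  # factor out the max for numerical stability
    return m + math.log(sum(math.exp(rho * (g - m)) for g in gs)) / rho

gs = [0.3, -1.2, 0.28, -0.5]
print(ks(gs))    # slightly above max(gs) = 0.3, and differentiable
```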

  6. Optimizing Basic French Skills Utilizing Multiple Teaching Techniques.

    ERIC Educational Resources Information Center

    Skala, Carol

    This action research project examined the impact of foreign language teaching techniques on the language acquisition and retention of 19 secondary level French I students, focusing on student perceptions of the effectiveness and ease of four teaching techniques: total physical response, total physical response storytelling, literature approach,…

  7. OPTIMA: advanced methods for the analysis, integration, and optimization of PRISMA mission products

    NASA Astrophysics Data System (ADS)

    Guzzi, Donatella; Pippi, Ivan; Aiazzi, Bruno; Baronti, Stefano; Carlà, Roberto; Lastri, Cinzia; Nardino, Vanni; Raimondi, Valentina; Santurri, Leonardo; Selva, Massimo; Alparone, Luciano; Garzelli, Andrea; Lopinto, Ettore; Ananasso, Cristina; Barducci, Alessandro

    2015-10-01

    PRISMA is an Earth observation system that combines a hyperspectral sensor with a panchromatic, medium-resolution camera. OPTIMA is one of the five independent scientific research projects funded by the Italian Space Agency in the framework of the PRISMA mission for the development of added-value algorithms and advanced applications. The main goal of OPTIMA is to increase and to strengthen the applications of PRISMA through the implementation of advanced methodologies for the analysis, integration and optimization of level 1 and 2 products. The project comprises several work packages: data simulation, data quality, data optimization, data processing and integration and, finally, evaluation of some applications related to natural hazards. Several algorithms implemented during the project employ high-speed autonomous procedures for processing the forthcoming images acquired by PRISMA. To assess the performance of the developed algorithms and products, an end-to-end simulator of the instrument has been implemented. Data quality analysis has been completed by introducing noise modeling. Stand-alone procedures for radiometric and atmospheric corrections have been developed, allowing the retrieval of at-ground spectral reflectance maps. Specific studies on image enhancement, restoration and pan-sharpening have been carried out to provide added-value data. Regarding the mission's capability of monitoring environmental processes and disasters, different techniques for estimating surface humidity and for analyzing burned areas have been investigated. Finally, calibration and validation activities utilizing the CAL/VAL test site managed by CNR-IFAC and located inside the Regional Park of San Rossore (Pisa), Italy, have been considered.

  8. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.; Leung, Martin S.; Bless, Robert R.

    1991-01-01

    The proposed investigation of a Matched Asymptotic Expansion (MAE) method was carried out. It was concluded that the method of MAE is not applicable to launch vehicle ascent trajectory optimization due to the lack of a suitable stretched variable. More work was done on the earlier regular perturbation approach, using a piecewise analytic zeroth order solution to generate a more accurate approximation. In the meantime, a singular perturbation approach using manifold theory is also under investigation. Work on a general computational environment based on the use of MACSYMA and the weak Hamiltonian finite element method continued during this period. This methodology is capable of solving a large class of optimal control problems.

  9. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, magnetic imperfection of each magnet piece causes field error. This field error can be reduced or compensated by sorting the magnet pieces in proper order. We showed that a genetic algorithm has good properties for this sorting scheme. In this paper, the optimization scheme is applied to the case of permanent magnets that have errors in the direction of the field. The results show that the genetic algorithm is superior to other algorithms.
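
    A sorting GA of the kind described above searches directly over permutations of the magnet pieces. The sketch below is a toy version: the figure of merit (worst cumulative field error along the wiggler, a crude proxy for trajectory wander), the error model, and all parameters are illustrative assumptions, not the paper's.

```python
import random

random.seed(1)

# Each magnet piece carries a signed field error; cost of an ordering is the
# worst running sum of errors along the wiggler (toy figure of merit).
errors = [random.gauss(0.0, 1.0) for _ in range(20)]

def cost(order):
    s, worst = 0.0, 0.0
    for i in order:
        s += errors[i]
        worst = max(worst, abs(s))
    return worst

def mutate(order):
    a, b = random.sample(range(len(order)), 2)   # swap two pieces
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return child

# Mutation-only GA: binary tournament selection, swap mutation, elitism.
pop = [random.sample(range(20), 20) for _ in range(30)]
init_best = min(cost(p) for p in pop)
for _ in range(200):
    new_pop = []
    for _ in range(len(pop)):
        a, b = random.sample(pop, 2)             # binary tournament
        parent = a if cost(a) < cost(b) else b
        new_pop.append(mutate(parent))
    pop = sorted(new_pop + pop, key=cost)[:30]   # elitist survival

baseline = cost(list(range(20)))
best = cost(pop[0])
print(f"as-delivered order: {baseline:.3f}, GA-sorted: {best:.3f}")
```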

  10. Automatic optimization of metrology sampling scheme for advanced process control

    NASA Astrophysics Data System (ADS)

    Chue, Chuei-Fu; Huang, Chun-Yen; Shih, Chiang-Lin

    2011-03-01

    In order to ensure long-term profitability, driving the operational costs down and improving the yield of a DRAM manufacturing process are continuous efforts. This includes optimal utilization of the capital equipment. The costs of metrology needed to ensure yield contribute to the overall costs. As the shrinking of device dimensions continues, the costs of metrology are increasing because the associated tightening of the on-product specifications requires more metrology effort. The cost-of-ownership reduction is tackled by increasing the throughput and availability of metrology systems. However, this is not the only way to reduce metrology effort. In this paper, we discuss how the costs of metrology can be improved by optimizing the recipes in terms of the sampling layout, thereby eliminating metrology that does not contribute to yield. We discuss results of sampling scheme optimization for on-product overlay control of two DRAM manufacturing processes at Nanya Technology Corporation. For a 6x DRAM production process, we show that the reduction of metrology waste can be as high as 27% and overlay can be improved by 36%, compared with a baseline sampling scheme. For a 4x DRAM process, having tighter overlay specs, a gain of ca. 0.5 nm on-product overlay could be achieved without increasing the metrology effort relative to the original sampling plan.

  11. Nondestructive Evaluation of Thick Concrete Using Advanced Signal Processing Techniques

    SciTech Connect

    Clayton, Dwight A; Barker, Alan M; Santos-Villalobos, Hector J; Albright, Austin P; Hoegh, Kyle; Khazanovich, Lev

    2015-09-01

    The purpose of the U.S. Department of Energy Office of Nuclear Energy’s Light Water Reactor Sustainability (LWRS) Program is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the operating lifetimes of nuclear power plants (NPPs) beyond 60 years [1]. Since many important safety structures in an NPP are constructed of concrete, inspection techniques must be developed and tested to evaluate the internal condition. In-service containment structures generally do not allow for the destructive measures necessary to validate the accuracy of these inspection techniques. This creates a need for comparative testing of the various nondestructive evaluation (NDE) measurement techniques on concrete specimens with known material properties, voids, internal microstructure flaws, and reinforcement locations.

  12. Advanced implementations of the iterative multi region technique

    NASA Astrophysics Data System (ADS)

    Kaburcuk, Fatih

    The integration of the finite-difference time-domain (FDTD) method into the iterative multi-region (IMR) technique, an iterative approach used to solve large-scale electromagnetic scattering and radiation problems, is presented in this dissertation. The idea of the IMR technique is to divide a large problem domain into smaller subregions, solve each subregion separately, and combine the solutions of subregions after introducing the effect of interaction to obtain solutions at multiple frequencies for the large domain. Solution of the subregions using the frequency domain solvers has been the preferred approach as such solutions using time domain solvers require computationally expensive bookkeeping of time signals between subregions. In this contribution we present an algorithm that makes it feasible to use the FDTD method, a time domain numerical technique, in the IMR technique to obtain solutions at a pre-specified number of frequencies in a single simulation. As a result, a considerable reduction in memory storage requirements and computation time is achieved. A hybrid method integrated into the IMR technique is also presented in this work. This hybrid method combines the desirable features of the method of moments (MoM) and the FDTD method to solve large-scale radiation problems more efficiently. The idea of this hybrid method based on the IMR technique is to divide an original problem domain into unconnected subregions and use the more appropriate method in each domain. The most prominent feature of this proposed method is to obtain solutions at multiple frequencies in a single IMR simulation by constructing time-limited waveforms. The performance of the proposed method is investigated numerically using different configurations composed of two, three, and four objects.

  13. Updates in advanced diffusion-weighted magnetic resonance imaging techniques in the evaluation of prostate cancer

    PubMed Central

    Vargas, Hebert Alberto; Lawrence, Edward Malnor; Mazaheri, Yousef; Sala, Evis

    2015-01-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is considered part of the standard imaging protocol for the evaluation of patients with prostate cancer. It has been proven valuable as a functional tool for qualitative and quantitative analysis of prostate cancer beyond anatomical MRI sequences such as T2-weighted imaging. This review discusses ongoing controversies in DW-MRI acquisition, including the optimal number of b-values to be used for prostate DWI, and summarizes the current literature on the use of advanced DW-MRI techniques. These include intravoxel incoherent motion imaging, which better accounts for the non-mono-exponential behavior of the apparent diffusion coefficient as a function of b-value and the influence of perfusion at low b-values. Another technique is diffusion kurtosis imaging (DKI). Metrics from DKI reflect excess kurtosis of tissues, representing its deviation from Gaussian diffusion behavior. Preliminary results suggest that DKI findings may have more value than findings from conventional DW-MRI for the assessment of prostate cancer. PMID:26339460
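
    The two advanced models named above have compact standard forms; in their usual published notation (S_0 is the signal at b = 0):

```latex
% IVIM bi-exponential model: f = perfusion fraction, D* = pseudo-diffusion
S(b)/S_0 = f\,e^{-b D^{*}} + (1-f)\,e^{-b D}
% Diffusion kurtosis model: K_{app} = apparent excess kurtosis
S(b)/S_0 = \exp\!\left(-b D_{app} + \tfrac{1}{6}\, b^{2} D_{app}^{2} K_{app}\right)
```

    Setting K_app = 0 in the kurtosis expression recovers the mono-exponential ADC model, which is why DKI is read as a measure of deviation from Gaussian diffusion.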

  14. Application of advanced coating techniques to rocket engine components

    NASA Technical Reports Server (NTRS)

    Verma, S. K.

    1988-01-01

    The materials problem in the space shuttle main engine (SSME) is reviewed. Potential coatings and the method of their application for improved life of SSME components are discussed. A number of advanced coatings for turbine blade components and disks are being developed and tested in a multispecimen thermal fatigue fluidized bed facility at IIT Research Institute. This facility is capable of producing severe strains of the degree present in blades and disk components of the SSME. The potential coating systems and current efforts at IITRI being taken for life extension of the SSME components are summarized.

  15. Transcranial Doppler: Techniques and advanced applications: Part 2

    PubMed Central

    Sharma, Arvind K.; Bathala, Lokesh; Batra, Amit; Mehndiratta, Man Mohan; Sharma, Vijay K.

    2016-01-01

    Transcranial Doppler (TCD) is the only diagnostic tool that can provide continuous information about cerebral hemodynamics in real time and over extended periods. In the previous paper (Part 1), we have already presented the basic ultrasound physics pertaining to TCD, insonation methods, and various flow patterns. This article describes various advanced applications of TCD such as detection of right-to-left shunt, emboli monitoring, vasomotor reactivity (VMR), monitoring of vasospasm in subarachnoid hemorrhage (SAH), monitoring of intracranial pressure, its role in stroke prevention in sickle cell disease, and as a supplementary test for confirmation of brain death. PMID:27011639

  16. Transcranial Doppler: Techniques and advanced applications: Part 2.

    PubMed

    Sharma, Arvind K; Bathala, Lokesh; Batra, Amit; Mehndiratta, Man Mohan; Sharma, Vijay K

    2016-01-01

    Transcranial Doppler (TCD) is the only diagnostic tool that can provide continuous information about cerebral hemodynamics in real time and over extended periods. In the previous paper (Part 1), we have already presented the basic ultrasound physics pertaining to TCD, insonation methods, and various flow patterns. This article describes various advanced applications of TCD such as detection of right-to-left shunt, emboli monitoring, vasomotor reactivity (VMR), monitoring of vasospasm in subarachnoid hemorrhage (SAH), monitoring of intracranial pressure, its role in stroke prevention in sickle cell disease, and as a supplementary test for confirmation of brain death. PMID:27011639

  17. Advances in Direct Detection Doppler Lidar Technology and Techniques

    NASA Technical Reports Server (NTRS)

    Gentry, Bruce; Einaudi, Franco (Technical Monitor)

    2001-01-01

    In this paper we will describe the ground-based Doppler lidar system which is mounted in a modified delivery van to allow field deployment and operations. The system includes an aerosol double edge receiver optimized for aerosol backscatter Doppler measurements at 1064 nm and a molecular double edge receiver which operates at 355 nm. The lidar system will be described including details of the injection seeded diode pumped laser transmitter and the piezoelectrically tunable high spectral resolution Fabry-Perot etalon which is used to measure the Doppler shift. Examples of tropospheric wind profiles obtained with the system will also be presented to demonstrate its capabilities.

  18. In Situ Techniques for Monitoring Electrochromism: An Advanced Laboratory Experiment

    ERIC Educational Resources Information Center

    Saricayir, Hakan; Uce, Musa; Koca, Atif

    2010-01-01

    This experiment employs current technology to enhance and extend existing lab content. The basic principles of spectroscopic and electroanalytical techniques and their use in determining material properties are covered in some detail in many undergraduate chemistry programs. However, there are limited examples of laboratory experiments with in…

  19. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  20. Benefits of advanced software techniques for mission planning systems

    NASA Technical Reports Server (NTRS)

    Gasquet, A.; Parrod, Y.; Desaintvincent, A.

    1994-01-01

    The increasing complexity of modern spacecraft, and the stringent requirement for maximizing their mission return, call for a new generation of Mission Planning Systems (MPS). In this paper, we discuss the requirements for the Space Mission Planning and the benefits which can be expected from Artificial Intelligence techniques through examples of applications developed by Matra Marconi Space.

  1. Advances in Optimizing Weather Driven Electric Power Systems.

    NASA Astrophysics Data System (ADS)

    Clack, C.; MacDonald, A. E.; Alexander, A.; Dunbar, A. D.; Xie, Y.; Wilczak, J. M.

    2014-12-01

    The importance of weather-driven renewable energies for the United States (and global) energy portfolio is growing. The main perceived problems with weather-driven renewable energies are their intermittent nature, low power density, and high costs. The National Energy with Weather System Simulator (NEWS) is a mathematical optimization tool that allows the construction of weather-driven energy sources that will work in harmony with the needs of the system. For example, it will match the electric load, reduce variability, decrease costs, and abate carbon emissions. One important test run included existing US carbon-free power sources, natural gas power when needed, and a High Voltage Direct Current power transmission network. This study shows that the costs and carbon emissions from an optimally designed national system decrease with geographic size. It shows that, with achievable estimates of wind and solar generation costs, the US could decrease its carbon emissions by up to 80% by the early 2030s without an increase in electric costs. The key requirement would be a 48-state network of HVDC transmission, creating a national market for electricity not possible in the current AC grid. These results were found without the need for storage. Further, we tested the effect of changing natural gas fuel prices on the optimal configuration of the national electric power system. Another test that was carried out was an extension to global regions. The extension study shows that the same properties found in the US study extend to the most populous regions of the planet. The global extension is a simplified version of the US study and an area where much more research can be carried out. We compare our results to other model results.

  2. Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.
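
    Each finite-dimensional approximation in such a framework reduces to a standard LQR computation. A scalar sketch of that end calculation (a toy stand-in, not the paper's infinite-dimensional theory): for x' = a x + b u with cost integrand q x^2 + r u^2, the algebraic Riccati equation 2 a p - (b^2/r) p^2 + q = 0 yields the stabilizing feedback u = -(b p / r) x.

```python
import math

# Scalar LQR: solve the algebraic Riccati equation for its stabilizing root
# and form the feedback gain. Parameter values are arbitrary illustrations.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)  # stabilizing root
k = b * p / r                                             # feedback gain
closed_loop = a - b * k                                   # closed-loop pole
print(f"p = {p:.4f}, gain k = {k:.4f}, closed-loop pole = {closed_loop:.4f}")
```

    The closed-loop pole a - b k = -sqrt(a^2 + b^2 q / r) is always negative, which is the scalar shadow of the stabilizability conditions the abstract discusses preserving under approximation.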

  3. Asynchronous global optimization techniques for medium and large inversion problems

    SciTech Connect

    Pereyra, V.; Koshy, M.; Meza, J.C.

    1995-04-01

    We discuss global optimization procedures adequate for seismic inversion problems. We explain how to save function evaluations (which may involve large scale ray tracing or other expensive operations) by creating a data base of information on what parts of parameter space have already been inspected. It is also shown how a correct parallel implementation using PVM speeds up the process almost linearly with respect to the number of processors, provided that the function evaluations are expensive enough to offset the communication overhead.
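
    The "data base of already-inspected regions" idea above can be sketched as a cache keyed by coordinates rounded to a resolution below which re-evaluation is considered wasted work. All names here are illustrative, not from the paper:

```python
# Cache expensive misfit evaluations so nearby points in parameter space are
# served from the data base instead of re-running the forward model.
evaluations = 0

def expensive_misfit(x):
    """Stand-in for a large-scale ray-tracing misfit evaluation."""
    global evaluations
    evaluations += 1
    return sum((xi - 1.0) ** 2 for xi in x)

cache = {}
def cached_misfit(x, resolution=0.01):
    key = tuple(round(xi / resolution) for xi in x)
    if key not in cache:
        cache[key] = expensive_misfit(x)
    return cache[key]

cached_misfit([0.500, 2.000])
cached_misfit([0.501, 2.001])   # within resolution: served from the cache
cached_misfit([0.700, 2.000])   # genuinely new point: evaluated
print(evaluations)               # 2
```

    In the asynchronous parallel setting the paper describes, a shared structure like this is also what keeps workers from duplicating each other's function evaluations.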

  4. Optimization techniques for OpenCL-based linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Fox, Paul; Humphrey, John; Kuller, Aryeh; Kelmelis, Eric; Prather, Dennis W.

    2014-06-01

    The OpenCL standard for general-purpose parallel programming allows a developer to target highly parallel computations towards graphics processing units (GPUs), CPUs, co-processing devices, and field programmable gate arrays (FPGAs). The computationally intense domains of linear algebra and image processing have shown significant speedups when implemented in the OpenCL environment. A major benefit of OpenCL is that a routine written for one device can be run across many different devices and architectures; however, a kernel optimized for one device may not exhibit high performance when executed on a different device. For this reason, kernels must typically be hand-optimized for every target device family. Due to the large number of parameters that can affect performance, hand tuning for every possible device is impractical and often produces suboptimal results. For this work, we focused on optimizing the general matrix multiplication routine. General matrix multiplication is used as a building block for many linear algebra routines and often comprises a large portion of the run-time. Prior work has shown this routine to be a good candidate for high-performance implementation in OpenCL. We selected several candidate algorithms from the literature that are suitable for parameterization. We then developed parameterized kernels implementing these algorithms using only portable OpenCL features. Our implementation queries device information supplied by the OpenCL runtime and utilizes this as well as user input to generate a search space that satisfies device and algorithmic constraints. Preliminary results from our work confirm that optimizations are not portable from one device to the next, and show the benefits of automatic tuning. Using a standard set of tuning parameters seen in the literature for the NVIDIA Fermi architecture achieves a performance of 1.6 TFLOPS on an AMD 7970 device, while automatic tuning achieves a peak of 2.7 TFLOPS.
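
    The tuning loop described above has three parts: enumerate a parameter space, prune configurations that violate device limits, then time the survivors and keep the fastest. A runnable sketch with a pure-Python blocked matrix multiply standing in for the OpenCL kernel (a real tuner would query clGetDeviceInfo and launch kernels; the local-memory limit here is mocked):

```python
import time

N = 64
A = [[(i * j) % 7 - 3.0 for j in range(N)] for i in range(N)]
B = [[(i + j) % 5 - 2.0 for j in range(N)] for i in range(N)]

def blocked_matmul(A, B, tile):
    """Tiled matrix multiply; `tile` plays the role of the kernel tile size."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for i in range(ii, min(ii + tile, n)):
                Ai, Ci = A[i], C[i]
                for k in range(kk, min(kk + tile, n)):
                    a, Bk = Ai[k], B[k]
                    for j in range(n):
                        Ci[j] += a * Bk[j]
    return C

LOCAL_MEM_BYTES = 4096                      # mocked device limit
candidates = [t for t in (4, 8, 16, 32, 64)
              if 2 * t * t * 4 <= LOCAL_MEM_BYTES]  # two float tiles must fit

timings = {}
for t in candidates:                         # time each surviving config
    t0 = time.perf_counter()
    blocked_matmul(A, B, t)
    timings[t] = time.perf_counter() - t0

best_tile = min(timings, key=timings.get)
print(f"surviving tiles: {candidates}, fastest: {best_tile}")
```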

  5. Multi-objective optimization of combined Brayton and inverse Brayton cycles using advanced optimization algorithms

    NASA Astrophysics Data System (ADS)

    Venkata Rao, R.; Patel, Vivek

    2012-08-01

    This study explores the use of teaching-learning-based optimization (TLBO) and artificial bee colony (ABC) algorithms for determining the optimum operating conditions of combined Brayton and inverse Brayton cycles. Maximization of thermal efficiency and specific work of the system are considered as the objective functions and are treated simultaneously for multi-objective optimization. Upper cycle pressure ratio and bottom cycle expansion pressure of the system are considered as design variables for the multi-objective optimization. An application example is presented to demonstrate the effectiveness and accuracy of the proposed algorithms. The results of optimization using the proposed algorithms are validated by comparing with those obtained by using the genetic algorithm (GA) and particle swarm optimization (PSO) on the same example. Improvement in the results is obtained by the proposed algorithms. The effects of varying the algorithm parameters on the convergence and the fitness values of the objective functions are also reported.
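
    TLBO's two phases are simple enough to sketch in full. Below is a minimal single-objective version on a 2-D sphere function (an illustrative stand-in; the paper's problem is two-objective with thermodynamic design variables). One attraction of TLBO is that it needs no algorithm-specific tuning parameters beyond population size and iteration count:

```python
import random

random.seed(0)

def f(x):                                   # toy objective to minimize
    return sum(xi * xi for xi in x)

dim, pop_size, iters = 2, 20, 100
lo, hi = -5.0, 5.0
pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
init_best = f(min(pop, key=f))

def clamp(x):
    return [min(hi, max(lo, xi)) for xi in x]

for _ in range(iters):
    # Teacher phase: move each learner toward the best, away from the mean.
    teacher = min(pop, key=f)
    mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
    for i, x in enumerate(pop):
        TF = random.choice((1, 2))          # teaching factor
        cand = clamp([x[d] + random.random() * (teacher[d] - TF * mean[d])
                      for d in range(dim)])
        if f(cand) < f(x):                  # greedy acceptance
            pop[i] = cand
    # Learner phase: each learner moves toward (or away from) a random peer.
    for i, x in enumerate(pop):
        j = random.randrange(pop_size)
        if j == i:
            continue
        sign = 1.0 if f(pop[j]) < f(x) else -1.0
        cand = clamp([x[d] + sign * random.random() * (pop[j][d] - x[d])
                      for d in range(dim)])
        if f(cand) < f(x):
            pop[i] = cand

best_val = f(min(pop, key=f))
print(f"best value after TLBO: {best_val:.6f}")
```

    A multi-objective variant, as in the study, replaces the greedy scalar acceptance with a dominance or aggregation criterion over both objectives.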

  6. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.

    1990-01-01

    A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.

  7. An Optimal Cell Detection Technique for Automated Patch Clamping

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    While there are several hardware techniques for the automated patch clamping of cells that describe the equipment apparatus used for patch clamping, very few explain the science behind the actual technique of locating the ideal cell for a patch clamping procedure. We present a machine vision approach to patch clamping cell selection by developing an intelligent algorithm technique that gives the user the ability to determine a good cell to patch clamp in an image within one second. This technique will aid the user in determining the best candidates for patch clamping and will ultimately save time, increase efficiency and reduce cost. The ultimate goal is to combine intelligent processing with instrumentation and controls in order to produce a complete turnkey automated patch clamping system capable of accurately and reliably patch clamping cells with a minimum amount of human intervention. We present a unique technique that identifies good patch clamping cell candidates based on feature metrics of a cell's (x, y) position, major axis length, minor axis length, area, elongation, roundness, smoothness, angle of orientation, thinness and whether or not the cell is only partially in the field of view. A patent is pending for this research.
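
    A sketch of scoring cells by shape metrics like those listed above. The feature names follow the abstract; the scoring rule, weights, and sample measurements are illustrative assumptions, not the authors' method:

```python
import math

def roundness(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, smaller for ragged shapes."""
    return 4.0 * math.pi * area / (perimeter * perimeter)

def score_cell(cell):
    if cell["partially_out_of_view"]:
        return 0.0                      # never patch a clipped cell
    elong = cell["major_axis"] / cell["minor_axis"]
    # Favor round, compact, fully visible cells.
    return roundness(cell["area"], cell["perimeter"]) / elong

cells = [  # synthetic region-property measurements
    {"x": 40, "y": 55, "area": 314.0, "perimeter": 62.8,
     "major_axis": 20.0, "minor_axis": 20.0, "partially_out_of_view": False},
    {"x": 90, "y": 12, "area": 157.0, "perimeter": 70.0,
     "major_axis": 40.0, "minor_axis": 5.0, "partially_out_of_view": False},
    {"x": 5,  "y": 99, "area": 300.0, "perimeter": 61.0,
     "major_axis": 19.0, "minor_axis": 19.0, "partially_out_of_view": True},
]
best = max(cells, key=score_cell)
print(f"best candidate at ({best['x']}, {best['y']})")
```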

  8. Characterization of PTFE Using Advanced Thermal Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Blumm, J.; Lindemann, A.; Meyer, M.; Strasser, C.

    2010-10-01

    Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer used in numerous industrial applications. It is often referred to by its trademark name, Teflon. Thermal characterization of a PTFE material was carried out using various thermal analysis and thermophysical properties test techniques. The transformation energetics and specific heat were measured employing differential scanning calorimetry. The thermal expansion and the density changes were determined employing pushrod dilatometry. The viscoelastic properties (storage and loss modulus) were analyzed using dynamic mechanical analysis. The thermal diffusivity was measured using the laser flash technique. Combining thermal diffusivity data with specific heat and density allows calculation of the thermal conductivity of the polymer. Measurements were carried out from -125 °C up to 150 °C. Additionally, measurements of the mechanical properties were carried out down to -170 °C. The specific heat tests were conducted into the fully molten regions up to 370 °C.

  9. Developments and advances concerning the hyperpolarisation technique SABRE.

    PubMed

    Mewis, Ryan E

    2015-10-01

    To overcome the inherent sensitivity issue in NMR and MRI, hyperpolarisation techniques are used. Signal Amplification By Reversible Exchange (SABRE) is a hyperpolarisation technique that utilises parahydrogen, a molecule that possesses a nuclear singlet state, as the source of polarisation. A metal complex is required to break the singlet order of parahydrogen and, by doing so, facilitates polarisation transfer to analyte molecules ligated to the same complex through the J-coupled network that exists. The increased signal intensities that the analyte molecules possess as a result of this process have led to investigations whereby their potential as MRI contrast agents has been probed and to understand the fundamental processes underpinning the polarisation transfer mechanism. As well as discussing literature relevant to both of these areas, the chemical structure of the complex, the physical constraints of the polarisation transfer process and the successes of implementing SABRE at low and high magnetic fields are discussed. PMID:26264565

  10. Advance techniques for monitoring human tolerance to +Gz accelerations.

    NASA Technical Reports Server (NTRS)

    Pelligra, R.; Sandler, H.; Rositano, S.; Skrettingland, K.; Mancini, R.

    1972-01-01

    Standard techniques for monitoring the acceleration-stressed human subject have been augmented by measuring (1) temporal, brachial and/or radial arterial blood flow, and (2) indirect systolic and diastolic blood pressure at 60-sec intervals. Results show that the response of blood pressure to positive accelerations is complex and dependent on an interplay of hydrostatic forces, diminishing venous return, redistribution of blood, and other poorly defined compensatory reflexes.

  11. Development of processing techniques for advanced thermal protection materials

    NASA Technical Reports Server (NTRS)

    Selvaduray, Guna S.

    1994-01-01

    The effort, which was focused on the research and development of advanced materials for use in Thermal Protection Systems (TPS), has involved chemical and physical testing of refractory ceramic tiles, fabrics, threads and fibers. This testing has included determination of the optical properties, thermal shock resistance, high temperature dimensional stability, and tolerance to environmental stresses. Materials have also been tested in the Arc Jet 2 x 9 Turbulent Duct Facility (TDF), the 1 atmosphere Radiant Heat Cycler, and the Mini-Wind Tunnel Facility (MWTF). A significant part of the effort hitherto has gone towards modifying and upgrading the test facilities so that meaningful tests can be carried out. Another important effort during this period has been the creation of a materials database. Computer systems administration and support have also been provided. These are described in greater detail below.

  12. Advanced materials and techniques for fibre-optic sensing

    NASA Astrophysics Data System (ADS)

    Henderson, Philip J.

    2014-06-01

    Fibre-optic monitoring systems came of age in about 1999 upon the emergence of the world's first significant commercialising company - a spin-out from the UK's collaborative MAST project. By using embedded fibre-optic technology, the MAST project successfully measured transient strain within high-performance composite yacht masts. Since then, applications have extended from smart composites into civil engineering, energy, military, aerospace, medicine and other sectors. Fibre-optic sensors come in various forms, and may be subject to embedment, retrofitting, and remote interrogation. The unique challenges presented by each implementation require careful scrutiny before widespread adoption can take place. Accordingly, various aspects of design and reliability are discussed spanning a range of representative technologies that include resonant microsilicon structures, MEMS, Bragg gratings, advanced forms of spectroscopy, and modern trends in nanotechnology. Keywords: Fibre-optic sensors, fibre Bragg gratings, MEMS, MOEMS, nanotechnology, plasmon.

  13. Advanced techniques for characterization of ion beam modified materials

    DOE PAGES Beta

    Zhang, Yanwen; Debelle, Aurélien; Boulle, Alexandre; Kluth, Patrick; Tuomisto, Filip

    2014-10-30

    Understanding the mechanisms of damage formation in materials irradiated with energetic ions is essential for the field of ion-beam materials modification and engineering. Utilizing incident ions, electrons, photons, and positrons, various analysis techniques, including Rutherford backscattering spectrometry (RBS), electron RBS, Raman spectroscopy, high-resolution X-ray diffraction, small-angle X-ray scattering, and positron annihilation spectroscopy, are routinely used or gaining increasing attention in characterizing ion beam modified materials. The distinctive information, recent developments, and some perspectives in these techniques are reviewed in this paper. Applications of these techniques are discussed to demonstrate their unique ability for studying ion-solid interactions and the corresponding radiation effects in modified depths ranging from a few nm to a few tens of μm, and to provide information on electronic and atomic structure of the materials, defect configuration and concentration, as well as phase stability, amorphization and recrystallization processes. Finally, such knowledge contributes to our fundamental understanding over a wide range of extreme conditions essential for enhancing material performance and also for design and synthesis of new materials to address a broad variety of future energy applications.

  15. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years a highly demanding framework has been set for the environmental sciences and applied mathematics by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources and natural hazards can be listed among them. The research community follows two main directions today in order to address the above problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least-squares methods based on classical Euclidean geometry tools. In the present work new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized in order to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.
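
A concrete instance of the geometric machinery this abstract invokes is the Fisher-Rao distance between two univariate Gaussian distributions, for which a closed form is known (the Gaussian family with the Fisher metric maps onto a hyperbolic half-plane). The sketch below is our illustration of that general fact, not code from the cited work:

```python
import math

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).

    Closed form from the hyperbolic (Poincare half-plane) geometry induced
    by the Fisher information metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2:
        d = sqrt(2) * arccosh(1 + ((mu1-mu2)^2 + 2 (sigma1-sigma2)^2) / (4 sigma1 sigma2))
    """
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (4.0 * sigma1 * sigma2))

# Identical distributions sit at distance zero...
d0 = fisher_rao_gaussian(0.0, 1.0, 0.0, 1.0)
# ...and the distance is symmetric in its arguments.
d12 = fisher_rao_gaussian(0.0, 1.0, 3.0, 2.0)
d21 = fisher_rao_gaussian(3.0, 2.0, 0.0, 1.0)
```

Unlike a Euclidean distance on the (mean, standard deviation) plane, this distance discounts a given mean difference as the distributions broaden, which is exactly the kind of non-Euclidean behaviour the abstract describes.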

  16. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, such as aerodynamic coefficients, can easily be incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer improves significantly.

  17. Application of optimal data assimilation techniques in oceanography

    SciTech Connect

    Miller, R.N.

    1996-12-31

    Application of optimal data assimilation methods in oceanography is, if anything, more important than it is in numerical weather prediction, due to the sparsity of data. Here, a general framework is presented and practical examples taken from the author's work are described, with the purpose of conveying to the reader some idea of the state of the art of data assimilation in oceanography. While no attempt is made to be exhaustive, references to other lines of research are included. Major challenges to the community include design of statistical error models and handling of strong nonlinearity.
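
At the core of the optimal assimilation methods the abstract surveys is a minimum-variance blend of a model background with an observation. A minimal scalar sketch of that analysis update (a generic textbook form, not the author's specific scheme) looks like this:

```python
def analysis_update(x_b, var_b, y_o, var_o):
    """Blend a background estimate x_b (error variance var_b) with an
    observation y_o (error variance var_o) of the same scalar state.

    The gain K = var_b / (var_b + var_o) minimizes the analysis error
    variance; the result lies between background and observation,
    weighted toward the more certain of the two.
    """
    K = var_b / (var_b + var_o)
    x_a = x_b + K * (y_o - x_b)          # analysis state
    var_a = (1.0 - K) * var_b            # analysis error variance
    return x_a, var_a

# Background says 10.0 (variance 4.0); the observation says 12.0 (variance 1.0).
# K = 4/5 = 0.8, so x_a = 11.6 and var_a = 0.8.
x_a, var_a = analysis_update(10.0, 4.0, 12.0, 1.0)
```

The vector generalization replaces the variances with covariance matrices and the gain with K = B Hᵀ (H B Hᵀ + R)⁻¹; the sparsity of ocean data makes the specification of B (the statistical error model mentioned above) the hard part.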

  18. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.

  19. Advances in dental veneers: materials, applications, and techniques

    PubMed Central

    Pini, Núbia Pavesi; Aguiar, Flávio Henrique Baggio; Lima, Débora Alves Nunes Leite; Lovadino, José Roberto; Terada, Raquel Sano Suga; Pascotto, Renata Corrêa

    2012-01-01

    Laminate veneers are a conservative treatment for unaesthetic anterior teeth. The continued development of dental ceramics offers clinicians many options for creating highly aesthetic and functional porcelain veneers. This evolution of materials, ceramics, and adhesive systems permits improvement of the aesthetics of the smile and the self-esteem of the patient. Clinicians should understand the latest ceramic materials in order to be able to recommend them, their applications, and techniques, and to ensure the success of the clinical case. The current literature was reviewed to search for the most important parameters determining the long-term success, correct application, and clinical limitations of porcelain veneers. PMID:23674920

  20. Advances in dental local anesthesia techniques and devices: An update

    PubMed Central

    Saxena, Payal; Gupta, Saurabh K.; Newaskar, Vilas; Chandra, Anil

    2013-01-01

    Although local anesthesia remains the backbone of pain control in dentistry, research continues to seek new and better means of managing pain. Most of this research is focused on improvements in anesthetic agents, delivery devices, and the techniques involved. Newer technologies have been developed that can assist the dentist in providing enhanced pain relief with reduced injection pain and fewer adverse effects. This overview will inform practicing dentists about newer devices and methods of pain control, comparing them with earlier ones on the basis of available research and clinical studies. PMID:24163548

  1. Decomposition technique and optimal trajectories for the aeroassisted flight experiment

    NASA Technical Reports Server (NTRS)

    Miele, A.; Wang, T.; Deaton, A. W.

    1990-01-01

    An actual geosynchronous Earth orbit-to-low Earth orbit (GEO-to-LEO) transfer is considered with reference to the aeroassisted flight experiment (AFE) spacecraft, and optimal trajectories are determined by minimizing the total characteristic velocity. The optimization is performed with respect to the time history of the controls (angle of attack and angle of bank), the entry path inclination and the flight time being free. Two transfer maneuvers are considered: direct ascent (DA) to LEO and indirect ascent (IA) to LEO via parking Earth orbit (PEO). By taking into account certain assumptions, the complete system can be decoupled into two subsystems: one describing the longitudinal motion and one describing the lateral motion. The angle of attack history, the entry path inclination, and the flight time are determined via the longitudinal motion subsystem. In this subsystem, the difference between the instantaneous bank angle and a constant bank angle is minimized in the least square sense subject to the specified orbital inclination requirement. Both the angles of attack and the angle of bank are shown to be constant. This result has considerable importance in the design of nominal trajectories to be used in the guidance of AFE and aeroassisted orbital transfer (AOT) vehicles.

  2. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  3. Advanced terahertz techniques for quality control and counterfeit detection

    NASA Astrophysics Data System (ADS)

    Ahi, Kiarash; Anwar, Mehdi

    2016-04-01

    This paper reports our methods for the detection of counterfeit electronics. These versatile techniques are also useful in quality-control applications. Terahertz pulsed laser systems can characterize materials and thus make it possible to distinguish between the materials used in authentic components and their counterfeit clones. Components with material defects can be identified in the same manner. In this work, different refractive indices and absorption coefficients were observed for counterfeit components compared with their authentic counterparts. The presence of unexpected ingredient materials was detected in counterfeit components by Fourier-transform analysis of the transmitted terahertz pulse. The thicknesses of different layers are obtainable by analyzing the reflected terahertz pulse, and unexpected layers are also detectable in this manner. Recycled, sanded, and blacktopped counterfeit electronic components were detected as a result of these analyses. Counterfeit ICs with die dislocations were detected by depicting the terahertz raster-scanning data in a coordinate plane, which yields terahertz images. In the same manner, raster scanning of the reflected pulse gives terahertz images of the component surfaces, which were used to investigate contaminant materials and sanded points on the surfaces. The results of the latter technique reveal recycled counterfeit components.

  4. Advanced coding techniques for few mode transmission systems.

    PubMed

    Okonkwo, Chigo; van Uden, Roy; Chen, Haoshuo; de Waardt, Huug; Koonen, Ton

    2015-01-26

    We experimentally verify the advantage of employing advanced coding schemes such as space-time coding and 4-dimensional modulation formats to enhance the transmission performance of a 3-mode transmission system. The performance gains of space-time block codes for extending the optical signal-to-noise ratio tolerance in multiple-input multiple-output optical coherent spatial division multiplexing transmission systems are evaluated with respect to single-mode transmission performance. By exploiting the spatial diversity that few-mode fibers offer, significant OSNR gains of 3.2, 4.1, 4.9, and 6.8 dB at the hard-decision forward error correcting limit are demonstrated for DP-QPSK, 8, 16 and 32 QAM, respectively, relative to single-mode fiber back-to-back performance. Furthermore, by employing 4D constellations, 6 × 28 Gbaud 128 set-partitioned quadrature amplitude modulation is shown to outperform conventional 8 QAM transmission performance, whilst carrying an additional 0.5 bit/symbol. PMID:25835899
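
The space-time block codes referred to can be illustrated with the classic two-branch Alamouti scheme. This is a generic textbook sketch under a noise-free, flat-fading, known-channel assumption, not the paper's actual few-mode implementation:

```python
def alamouti_encode(s1, s2):
    """Map two symbols onto two branches over two time slots.

    Slot 1 transmits (s1, s2); slot 2 transmits (-conj(s2), conj(s1)).
    """
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining with known complex channel gains h1, h2.

    Both estimates come out scaled by the total channel energy
    |h1|^2 + |h2|^2, which is the source of the diversity gain.
    """
    energy = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / energy
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / energy
    return s1_hat, s2_hat

# QPSK-like symbols and arbitrary complex channel gains (hypothetical values):
s1, s2 = (1 + 1j), (1 - 1j)
h1, h2 = (0.8 - 0.3j), (0.2 + 0.9j)
(t1a, t1b), (t2a, t2b) = alamouti_encode(s1, s2)
r1 = h1 * t1a + h2 * t1b   # received in slot 1
r2 = h1 * t2a + h2 * t2b   # received in slot 2
est1, est2 = alamouti_decode(r1, r2, h1, h2)
```

In the noise-free case the combiner recovers both symbols exactly; with noise, the |h1|² + |h2|² scaling is what extends the OSNR tolerance relative to a single branch.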

  5. Advanced Cell Culture Techniques for Cancer Drug Discovery

    PubMed Central

    Lovitt, Carrie J.; Shelper, Todd B.; Avery, Vicky M.

    2014-01-01

    Human cancer cell lines are an integral part of drug discovery practices. However, modeling the complexity of cancer utilizing these cell lines on standard plastic substrata does not accurately represent the tumor microenvironment. Research into developing advanced tumor cell culture models in a three-dimensional (3D) architecture that more precisely characterizes the disease state has been undertaken by a number of laboratories around the world. These 3D cell culture models are particularly beneficial for investigating mechanistic processes and drug resistance in tumor cells. In addition, a range of molecular mechanisms deconstructed by studying cancer cells in 3D models suggest that tumor cells cultured in two-dimensional monolayer conditions do not respond to cancer therapeutics/compounds in a similar manner. Recent studies have demonstrated the potential of utilizing 3D cell culture models in drug discovery programs; however, it is evident that further research is required for the development of more complex models that incorporate the majority of the cellular and physical properties of a tumor. PMID:24887773

  6. Recent Advances in Spaceborne Precipitation Radar Measurement Techniques and Technology

    NASA Technical Reports Server (NTRS)

    Im, Eastwood; Durden, Stephen L.; Tanelli, Simone

    2006-01-01

    NASA is currently developing advanced instrument concepts and technologies for future spaceborne atmospheric radars, with an over-arching objective of making such instruments more capable in supporting future science needs and more cost effective. Two such examples are the Second-Generation Precipitation Radar (PR-2) and the Nexrad-In-Space (NIS). PR-2 is a 14/35-GHz dual-frequency rain radar with a deployable 5-meter, wide-swath scanned membrane antenna, a dual-polarized/dual-frequency receiver, and a realtime digital signal processor. It is intended for Low Earth Orbit (LEO) operations to provide greatly enhanced rainfall profile retrieval accuracy while consuming only a fraction of the mass of the current TRMM Precipitation Radar (PR). NIS is designed to be a 35-GHz Geostationary Earth Orbiting (GEO) radar for providing hourly monitoring of the life cycle of hurricanes and tropical storms. It uses a 35-m, spherical, lightweight membrane antenna and Doppler processing to acquire 3-dimensional information on the intensity and vertical motion of hurricane rainfall.

  7. Coal and Coal Constituent Studies by Advanced EMR Techniques

    SciTech Connect

    Alex I. Smirnov; Mark J. Nilges; R. Linn Belford; Robert B. Clarkson

    1998-03-31

    Advanced electronic magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. We have achieved substantial progress on upgrading the high field (HF) EMR (W-band, 95 GHz) spectrometers that are especially advantageous for such studies. In particular, we have built a second W-band instrument (Mark II) in addition to our Mark I. Briefly, Mark II features: (i) an Oxford custom-built 7 T superconducting magnet which is scannable from 0 to 7 T at up to 0.5 T/min; (ii) a water-cooled coaxial solenoid with up to ±550 G scan under digital (15-bit resolution) computer control; (iii) a custom-engineered precision feedback circuit driving this solenoid, based on an Ultrastab 860R sensor with linearity better than 5 ppm and resolution of 0.05 ppm; (iv) an Oxford CF 1200 cryostat for variable temperature studies from 1.8 to 340 K. During this grant period we have completed several key upgrades of both Mark I and Mark II, particularly the microwave bridge, W-band probehead, and computer interfaces. We utilize these improved instruments for HF EMR studies of spin-spin interaction and the existence of different paramagnetic species in carbonaceous solids.

  8. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation in different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. PMID:11241714

  9. A technique for optimizing the design of power semiconductor devices

    NASA Technical Reports Server (NTRS)

    Schlegel, E. S.

    1976-01-01

    A technique is described that provides a basis for predicting whether any device design change will improve or degrade the unavoidable trade-off that must be made between the conduction loss and the turn-off speed of fast-switching high-power thyristors. The technique makes use of a previously reported method by which, for a given design, this trade-off was determined for a wide range of carrier lifetimes. It is shown that by extending this technique, one can predict how other design variables affect this trade-off. The results show that for relatively slow devices the design can be changed to decrease the current gains to improve the turn-off time without significantly degrading the losses. On the other hand, for devices having fast turn-off times design changes can be made to increase the current gain to decrease the losses without a proportionate increase in the turn-off time. Physical explanations for these results are proposed.

  10. Advanced fabrication techniques for hydrogen-cooled engine structures

    NASA Technical Reports Server (NTRS)

    Buchmann, O. A.; Arefian, V. V.; Warren, H. A.; Vuigner, A. A.; Pohlman, M. J.

    1985-01-01

    Described is a program for development of coolant passage geometries, material systems, and joining processes that will produce long-life hydrogen-cooled structures for scramjet applications. Tests were performed to establish basic material properties, and samples constructed and evaluated to substantiate fabrication processes and inspection techniques. Results of the study show that the basic goal of increasing the life of hydrogen-cooled structures two orders of magnitude relative to that of the Hypersonic Research Engine can be reached with available means. Estimated life is 19000 cycles for the channels and 16000 cycles for pin-fin coolant passage configurations using Nickel 201. Additional research is required to establish the fatigue characteristics of dissimilar-metal coolant passages (Nickel 201/Inconel 718) and to investigate the embrittling effects of the hydrogen coolant.

  11. Advanced experimental techniques for transonic wind tunnels - Final lecture

    NASA Technical Reports Server (NTRS)

    Kilgore, Robert A.

    1987-01-01

    A philosophy of experimental techniques is presented, suggesting that in order to be successful, one should like what one does, have the right tools, stick to the job, avoid diversions, work hard, interact with people, be informed, keep it simple, be self sufficient, and strive for perfection. Sources of information, such as bibliographies, newsletters, technical reports, and technical contacts and meetings are recommended. It is pointed out that adaptive-wall test sections eliminate or reduce wall interference effects, and magnetic suspension and balance systems eliminate support-interference effects, while the problem of flow quality remains with all wind tunnels. It is predicted that in the future it will be possible to obtain wind tunnel results at the proper Reynolds number, and the effects of flow unsteadiness, wall interference, and support interference will be eliminated or greatly reduced.

  12. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model order and model complexity reduction have been investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. The second approach, based on regular perturbation analysis, was subsequently investigated. It also fails because the aerodynamic effects (ignored in the zero order solution) are too large to be treated as perturbations. Therefore, the study demonstrates that perturbation methods alone (both regular and singular perturbations) are inadequate for use in developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation and the analytical method of regular perturbations. The concept of choosing intelligent interpolating functions is also introduced. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing the approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  13. Multi-objective direct optimization of dynamic acceptance and lifetime for potential upgrades of the Advanced Photon Source.

    SciTech Connect

    Borland, M.; Sajaev, V.; Emery, L.; Xiao, A.; Accelerator Systems Division

    2010-08-24

    The Advanced Photon Source (APS) is a 7 GeV storage ring light source that has been in operation for well over a decade. In the near future, the ring may be upgraded, including changes to the lattice such as provision of several long straight sections (LSS). Because APS beamlines are nearly fully built out, we have limited freedom to place LSSs in a symmetric fashion. Arbitrarily-placed LSSs will drastically reduce the symmetry of the optics and would typically be considered unworkable. We apply a recently-developed multi-objective direct optimization technique that relies on particle tracking to compute the dynamic aperture and Touschek lifetime. We show that this technique is able to tune sextupole strengths and select the working point in such a way as to recover the dynamic and momentum acceptances. We also show the results of experimental tests of lattices developed using these techniques.
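
The multi-objective character of this optimization, trading dynamic acceptance against Touschek lifetime, can be sketched with a generic Pareto filter over candidate sextupole settings. The candidate tuples below are hypothetical stand-ins for tracking results, not APS data:

```python
def pareto_front(candidates):
    """Return the non-dominated subset of (objective1, objective2) pairs,
    where both objectives are to be maximized.

    A candidate is dominated if some other candidate is at least as good
    in both objectives and strictly better in at least one.
    """
    front = []
    for i, a in enumerate(candidates):
        dominated = any(
            b[0] >= a[0] and b[1] >= a[1] and b != a
            for j, b in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical (dynamic aperture [mm], Touschek lifetime [h]) per candidate:
candidates = [(10.0, 4.0), (12.0, 3.0), (9.0, 3.5), (11.0, 5.0)]
front = pareto_front(candidates)  # keeps only the non-dominated trade-offs
```

In a direct optimization of this kind, each objective evaluation is a full particle-tracking run, so the optimizer's job is to spend those expensive evaluations only on candidates near the front.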

  14. Optimization of segmented alignment marks for advanced semiconductor fabrication processes

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Lu, Zhijian G.; Williams, Gary; Zach, Franz X.; Liegl, Bernhard

    2001-08-01

    The continued downscaling of semiconductor fabrication ground rules has imposed increasingly tight overlay tolerances, which become very challenging at the 100 nm lithographic node. Such tight tolerances require very high alignment performance. Past experience indicates that good alignment depends largely on alignment signal quality, which, however, can be strongly affected by chip design and various fabrication processes. Under some extreme circumstances, signal quality can even be reduced to the non-usable limit. Therefore, a systematic understanding of alignment marks and a method to predict alignment performance based on mark design are necessary. Motivated by this, we have performed a detailed study of bright-field segmented alignment marks that are used in current state-of-the-art fabrication processes. We find that alignment marks at different lithographic levels can be organized into four basic categories: trench mark, metal mark, damascene mark, and combo mark. The basic principles of these four types of marks turn out to be so similar that they can be characterized within the theoretical framework of a simple model based on optical gratings. An analytic expression has been developed for this model and it has been tested using computer simulation with the rigorous time-domain finite-difference (TD-FD) algorithm TEMPEST. Consistent results have been obtained, indicating that the mark signal can be significantly improved through optimization of mark lateral dimensions, such as segment pitch and segment width. We have also compared simulation studies against experimental data for alignment marks at one typical lithographic level, and good agreement is found.

  15. An Enhanced Multi-Objective Optimization Technique for Comprehensive Aerospace Design

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    2000-01-01

    An enhanced multiobjective formulation technique, capable of emphasizing specific objective functions during the optimization process, has been demonstrated on a complex multidisciplinary design application. The Kreisselmeier-Steinhauser (K-S) function approach, which has been used successfully in a variety of multiobjective optimization problems, has been modified using weight factors which enables the designer to emphasize specific design objectives during the optimization process. The technique has been implemented in two distinctively different problems. The first is a classical three bar truss problem and the second is a high-speed aircraft (a doubly swept wing-body configuration) application in which the multiobjective optimization procedure simultaneously minimizes the sonic boom and the drag-to-lift ratio (C_D/C_L) of the aircraft while maintaining the lift coefficient within prescribed limits. The results are compared with those of an equally weighted K-S multiobjective optimization. Results demonstrate the effectiveness of the enhanced multiobjective optimization procedure.
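
The K-S aggregation underlying this approach collapses several objectives into one smooth envelope function. The sketch below shows the standard K-S form with one common way of introducing weight factors; it is a generic illustration under that assumption, not the cited study's exact modification, and the objective and weight values are hypothetical:

```python
import math

def ks_function(values, weights=None, rho=50.0):
    """Weighted Kreisselmeier-Steinhauser envelope of objective values.

    KS = f_max + (1/rho) * ln( sum_i w_i * exp(rho * (f_i - f_max)) )

    The shift by f_max keeps exp() from overflowing; larger rho draws KS
    closer to the true maximum, and a larger w_i emphasizes objective i.
    """
    if weights is None:
        weights = [1.0] * len(values)
    f_max = max(values)
    s = sum(w * math.exp(rho * (f - f_max)) for w, f in zip(weights, values))
    return f_max + math.log(s) / rho

# Hypothetical normalized objectives (e.g. drag, boom, weight):
objectives = [0.3, 0.8, 0.5]
ks_equal = ks_function(objectives)                           # equal emphasis
ks_boom = ks_function(objectives, weights=[1.0, 5.0, 1.0])   # emphasize objective 2
```

With unit weights, KS is a smooth upper bound on max(f_i) that tightens as rho grows, which is why a single gradient-based optimizer can drive all objectives down through it.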

  16. Investigation on the use of optimization techniques for helicopter airframe vibrations design studies

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.

    1992-01-01

    Results of the investigation of formal nonlinear programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.

  17. Advanced Computational and Experimental Techniques for Nacelle Liner Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Jones, Michael G.; Brown, Martha C.; Nark, Douglas

    2009-01-01

    The Curved Duct Test Rig (CDTR) has been developed to investigate sound propagation through a duct of size comparable to the aft bypass duct of typical aircraft engines. The bypass duct is often curved along its axial dimension, and this geometric characteristic is captured in the CDTR. The semiannular bypass duct is simulated by a rectangular test section in which the height corresponds to the circumferential dimension and the width corresponds to the radial dimension. The liner samples are perforate over honeycomb core and are installed on the side walls of the test section. The top and bottom surfaces of the test section are acoustically rigid to simulate a hard-wall bifurcation or pylon. A unique feature of the CDTR is the control system that generates sound incident on the liner test section in specific modes. Uniform air flow, at ambient temperature and flow speed Mach 0.275, is introduced through the duct. Experiments to investigate configuration effects such as curvature along the flow path on the acoustic performance of a sample liner are performed in the CDTR and reported in this paper. Combinations of treated and acoustically rigid side walls are investigated. The scattering of modes of the incident wave, both by the curvature and by the asymmetry of wall treatment, is demonstrated in the experimental results. The effect that mode scattering has on the total acoustic effectiveness of the liner treatment is also shown. Comparisons of measured liner attenuation with numerical results predicted by an analytic model based on the parabolic approximation to the convected Helmholtz equation are reported. The spectra of attenuation produced by the analytic model are similar to experimental results for both walls treated, with straight and curved flow paths, and with plane wave and higher-order modes incident. The numerical model is used to define the optimized resistance and reactance of a liner that significantly improves liner attenuation in the frequency range 1900-2400 Hz.

  18. Simulation of advanced techniques for an ion propulsion rocket system

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, R.

    2016-07-01

    The ion propulsion rocket system is expected to become popular with the development of deuterium and argon gas techniques and the hexagonal-shape magnetohydrodynamic (MHD) technique, because the power is generated indirectly from the ionization chamber; the design thrust is 1.2 N at 40 kW of electric power with high efficiency. The proposed work studies MHD power generation through the ionization of deuterium gas and the combination of two gaseous ion species (deuterium ions plus argon ions) at the acceleration stage. The ion propulsion rocket (IPR) consists of three parts: (1) a hexagonal-shape MHD-based power generator fed by the ionization chamber, (2) an ion accelerator, and (3) an exhaust nozzle. Initially, an energy of about 1312 kJ/mol is required to ionize the deuterium gas. The ionized deuterium gas passes from the RF ionization chamber through the MHD generator to the nozzle with enhanced velocity; a voltage is then generated across the two pairs of electrodes in the MHD section, and thrust is produced by mixing the deuterium and argon ions at the acceleration stage. The simulation of the IPR system has been carried out in MATLAB. Comparison of the simulation results with theoretical and previous results shows that the proposed method achieves the design thrust value at 40 kW of power.
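    The quoted design point (1.2 N at 40 kW) can be sanity-checked with the ideal electric-thruster relations T = mdot * ve and P = 0.5 * mdot * ve^2, ignoring all losses. The helper names below are illustrative, not from the paper:

```python
def exhaust_velocity(thrust_N, jet_power_W):
    """Ideal exhaust velocity: eliminating mdot from T = mdot*ve and
    P = 0.5*mdot*ve**2 gives ve = 2P/T (losses ignored)."""
    return 2.0 * jet_power_W / thrust_N

def mass_flow(thrust_N, jet_power_W):
    """Ideal propellant mass flow from T = mdot * ve."""
    return thrust_N / exhaust_velocity(thrust_N, jet_power_W)

# The abstract's design point: 1.2 N at 40 kW.
ve = exhaust_velocity(1.2, 40e3)   # ~66.7 km/s
mdot = mass_flow(1.2, 40e3)        # ~1.8e-5 kg/s
print(ve, mdot)
```

    The very high exhaust velocity and tiny mass flow are characteristic of ion propulsion: high specific impulse at low thrust.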

  19. Advanced Manufacturing Techniques Demonstrated for Fabricating Developmental Hardware

    NASA Technical Reports Server (NTRS)

    Redding, Chip

    2004-01-01

    NASA Glenn Research Center's Engineering Development Division has been working in support of innovative gas turbine engine systems under development by Glenn's Combustion Branch. These one-of-a-kind components require operation under extreme conditions. High-temperature ceramics were chosen for fabrication because of the hostile operating environment. During the design process, it became apparent that traditional machining techniques would not be adequate to produce the small, intricate features of the conceptual design, which was to be produced by stacking over a dozen thin layers with many small features that would then be aligned and bonded together into a one-piece unit. Instead of using traditional machining, we produced computer models in Pro/ENGINEER (Parametric Technology Corporation (PTC), Needham, MA) to the specifications of the research engineer. The computer models were exported in stereolithography standard (STL) format and used to produce full-size rapid prototype polymer models. These semi-opaque plastic models were used for visualization and design verification. The computer models also were exported in Initial Graphics Exchange Specification (IGES) format and sent to Glenn's Thermal/Fluids Design & Analysis Branch and Applied Structural Mechanics Branch for heat transfer and mechanical strength analysis.

  20. Advances in Current Rating Techniques for Flexible Printed Circuits

    NASA Technical Reports Server (NTRS)

    Hayes, Ron

    2014-01-01

    Twist Capsule Assemblies are power transfer devices commonly used in spacecraft mechanisms that require electrical signals to be passed across a rotating interface. Flexible printed circuits (flex tapes, see Figure 2) are used to carry the electrical signals in these devices. Determining the current rating for a given trace (conductor) size can be challenging. Because of the thermal conditions present in this environment, the most appropriate approach is to assume that the only means by which heat is removed from the trace is through the conductor itself, so that when the flex tape is long the temperature rise in the trace can be extreme. While this technique represents a worst-case thermal situation that yields conservative current ratings, this conservatism may lead to overly cautious designs when not all traces are used at their full rated capacity. A better understanding of how individual traces behave when they are not all in use is the goal of this research. In the testing done in support of this paper, a representative flex tape used for a flight Solar Array Drive Assembly (SADA) application was tested by energizing individual traces (conductors in the tape) in a vacuum chamber and measuring the temperatures of the tape using both fine-gauge thermocouples and infrared thermographic imaging. We find that traditional derating schemes used for bundles of wires do not apply to the configuration tested. We also determine that single active traces located in the center of a flex tape operate at lower temperatures than those on the outside edges.
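    The conduction-only worst case described above has a simple closed form: for a trace heated uniformly by Joule dissipation with heat removed only along the conductor to fixed-temperature ends, the mid-span rise is dT = I^2 * rho * L^2 / (8 * k * A^2). The sketch below uses assumed copper properties and illustrative dimensions; it is not the paper's analysis:

```python
def max_temp_rise(current_A, length_m, area_m2,
                  resistivity=1.7e-8,   # copper, ohm*m (assumed)
                  conductivity=400.0):  # copper, W/(m*K) (assumed)
    """Worst-case mid-span temperature rise of a trace whose only heat
    path is conduction along the conductor to fixed-temperature ends:
    solving k*A*T'' = -q' gives dT = q'*L^2 / (8*k*A) at mid-span,
    with q' = I^2 * rho / A the Joule heating per unit length."""
    q_per_len = current_A ** 2 * resistivity / area_m2   # W/m
    return q_per_len * length_m ** 2 / (8.0 * conductivity * area_m2)

# 1 A through a 5 cm trace of ~0.01 mm^2 cross-section (illustrative):
print(max_temp_rise(1.0, 0.05, 1e-8))
# Doubling the length quadruples the rise, which is why long tapes
# rated this way come out so conservative.
print(max_temp_rise(1.0, 0.10, 1e-8))
```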

  1. Advances in array detectors for X-ray diffraction techniques.

    PubMed

    Hanley, Quentin S; Denton, M Bonner

    2005-09-01

    Improved focal plane array detector systems are described which can provide faster readout speeds, random addressing, and even simultaneous measurement of position, intensity, and energy. This latter capability promises to rekindle interest in Laue techniques. Simulations of three varieties of foil mask spectrometer in both on- and off-axis configurations indicate that systems of stacked silicon detectors can provide energy measurements within 1% of the true value based on the use of single 'foils' and approximately 10000 photons. An eight-detector hybrid design can provide energy coverage from 4 to 60 keV. Energy resolution can be improved by increased integration time or higher-flux experiments. An off-axis spectrometer design in which the angle between the incident beam and the detector system is 45 degrees results in a shift in the optimum energy response of the spectrometer system. In the case of a 200 µm-thick silicon absorber, the energy optimum shifts from 8.7 keV to 10.3 keV as the angle of incidence goes from 0 to 45 degrees. These new designs make better use of incident photons, lower the impact of source flicker through simultaneous rather than sequential collection of intensities, and improve the energy range relative to previously reported systems. PMID:16120985
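    The energy measurement from stacked silicon absorbers follows from Beer-Lambert attenuation: the ratio of signals in successive layers depends only on mu(E)*t, so it encodes the photon energy. The attenuation model below is a toy E^-3 photoelectric-regime scaling with an assumed constant, not tabulated silicon data:

```python
import math

def layer_fractions(mu, t, n_layers):
    """Fraction of a monoenergetic beam absorbed in each of n stacked
    absorbers of thickness t (Beer-Lambert attenuation)."""
    fracs = []
    transmit = math.exp(-mu * t)   # per-layer transmission
    remaining = 1.0
    for _ in range(n_layers):
        fracs.append(remaining * (1.0 - transmit))
        remaining *= transmit
    return fracs

def mu_si(energy_keV):
    """Toy linear attenuation coefficient, mu ~ E^-3 (1/m); the
    prefactor is an assumed illustrative value."""
    return 7.0e6 * energy_keV ** -3

# The second-to-first layer signal ratio rises with energy, which is
# what lets a stacked-silicon design infer photon energy.
f_low = layer_fractions(mu_si(8.0), 200e-6, 2)    # 8 keV, 200 um layers
f_high = layer_fractions(mu_si(20.0), 200e-6, 2)  # 20 keV
print(f_low[1] / f_low[0], f_high[1] / f_high[0])
```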

  2. Recent advances in the surface forces apparatus (SFA) technique

    NASA Astrophysics Data System (ADS)

    Israelachvili, J.; Min, Y.; Akbulut, M.; Alig, A.; Carver, G.; Greene, W.; Kristiansen, K.; Meyer, E.; Pesika, N.; Rosenberg, K.; Zeng, H.

    2010-03-01

    The surface forces apparatus (SFA) has been used for many years to measure the physical forces between surfaces, such as van der Waals (including Casimir) and electrostatic forces in vapors and liquids, adhesion and capillary forces, forces due to surface and liquid structure (e.g. solvation and hydration forces), polymer, steric and hydrophobic interactions, bio-specific interactions as well as friction and lubrication forces. Here we describe recent developments in the SFA technique, specifically the SFA 2000, its simplicity of operation and its extension into new areas of measurement of both static and dynamic forces as well as both normal and lateral (shear and friction) forces. The main reason for the greater simplicity of the SFA 2000 is that it operates on one central simple-cantilever spring to generate both coarse and fine motions over a total range of seven orders of magnitude (from millimeters to ångstroms). In addition, the SFA 2000 is more spacious and modular, so that new attachments and extra parts can easily be fitted for performing more extended types of experiments (e.g. extended-strain friction experiments and higher-rate dynamic experiments) as well as traditionally non-SFA type experiments (e.g. scanning probe microscopy and atomic force microscopy) and for studying different types of systems.

  3. Advanced signal processing technique for damage detection in steel tubes

    NASA Astrophysics Data System (ADS)

    Amjad, Umar; Yadav, Susheel Kumar; Dao, Cac Minh; Dao, Kiet; Kundu, Tribikram

    2016-04-01

    In recent years, ultrasonic guided waves have gained attention for reliable testing and characterization of metals and composites. Guided wave modes are excited and detected by PZT (lead zirconate titanate) transducers in either transmission or reflection mode. In this study, guided waves are excited and detected in the transmission mode and the phase change of the propagating wave modes is recorded. In most other studies reported in the literature, the change in the received signal strength (amplitude) is investigated with varying degrees of damage, while in this study the change in phase is correlated with the extent of damage. Feature extraction techniques are used for extracting phase and time-frequency information. The main advantage of this approach is that the bonding condition between the transducer and the specimen does not affect the phase, while it can affect the strength of the recorded signal. Therefore, if the specimen is not damaged but the transducer-specimen bond has deteriorated, the received signal strength is altered but the phase remains the same, and thus false positive predictions of damage can be avoided.
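    Phase-based features of the kind described can be recovered by quadrature (lock-in) demodulation at the excitation frequency; amplitude changes then leave the estimate untouched. This is an illustrative sketch, not the authors' feature-extraction pipeline:

```python
import math

def lockin_phase(samples, freq_hz, rate_hz):
    """Recover the phase of a narrowband signal by quadrature (lock-in)
    demodulation at a known frequency: correlate with cos and sin
    references and take the angle. Amplitude scaling (e.g. a degraded
    transducer bond) leaves the recovered phase unchanged."""
    i_sum = q_sum = 0.0
    for n, x in enumerate(samples):
        w = 2.0 * math.pi * freq_hz * n / rate_hz
        i_sum += x * math.cos(w)
        q_sum += x * math.sin(w)
    return math.atan2(-q_sum, i_sum)

rate, freq, phi = 1.0e6, 50.0e3, 0.6          # Hz, Hz, rad (toy values)
n_samples = 400                                # 20 full cycles
wave = [math.cos(2 * math.pi * freq * n / rate + phi) for n in range(n_samples)]
weak = [0.3 * x for x in wave]                 # weaker coupling, same phase
print(lockin_phase(wave, freq, rate), lockin_phase(weak, freq, rate))
```

    Both calls return the same phase (0.6 rad here), illustrating why a phase feature is robust to bond deterioration while an amplitude feature is not.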

  4. Plasmon spectroscopy: Theoretical and numerical calculations, and optimization techniques

    NASA Astrophysics Data System (ADS)

    Rodríguez-Oliveros, Rogelio; Paniagua-Domínguez, Ramón; Sánchez-Gil, José A.; Macías, Demetrio

    2016-02-01

    We present an overview of recent advances in plasmonics, mainly concerning theoretical and numerical tools required for the rigorous determination of the spectral properties of complex-shape nanoparticles exhibiting strong localized surface plasmon resonances (LSPRs). Both quasistatic approaches and full electrodynamic methods are described, providing a thorough comparison of their numerical implementations. Special attention is paid to surface integral equation formulations, giving examples of their performance for complicated nanoparticle shapes of interest for their LSPR spectra. In this regard, complex (single) nanoparticle configurations (nanocrosses and nanorods) yield a hierarchy of multiple-order LSPRs with evidence of rich symmetric or asymmetric (Fano-like) LSPR line shapes. In addition, means to address the design of complex geometries to retrieve LSPR spectra are commented on, with special interest in biologically inspired algorithms. The wealth of LSPR-based applications is discussed through two representative examples: single-nanoparticle surface-enhanced Raman scattering (SERS) and optical heating, and multifrequency nanoantennas for fluorescence and nonlinear optics.

  5. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.; Stebbins, J. P.; Smith, A. W.; Pullen, K. E.

    1973-01-01

    A method for the prediction of propellant-material compatibility for periods of time up to ten years is presented. Advanced sensitive measurement techniques used in the prediction method are described. These include: neutron activation analysis, radioactive tracer technique, and atomic absorption spectroscopy with a graphite tube furnace sampler. The results of laboratory tests performed to verify the prediction method are presented.

  6. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  7. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research

    ERIC Educational Resources Information Center

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss "small-group apprenticeships (SGAs)" as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments…

  8. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research.

    ERIC Educational Resources Information Center

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    Discusses small-group apprenticeships (SGAs) as a method for introducing cell culture techniques to high school participants. Teaches cell culture practices and introduces advance imaging techniques to solve various biomedical engineering problems. Clarifies and illuminates the value of small-group laboratory apprenticeships. (Author/KHR)

  9. Optimization of hydrostatic transmissions by means of virtual instrumentation technique

    NASA Astrophysics Data System (ADS)

    Ion Guta, Dragos Daniel; Popescu, Teodor Costinel; Dumitrescu, Catalin

    2010-11-01

    Obtaining mathematical models that are as close as possible to the physical phenomena to be replicated or improved helps us decide how to optimize them. The introduction of computers for monitoring and controlling processes caused changes in technological systems. With support from process identification methods and the power of numerical computing equipment, researchers and designers can shorten the development period of applications in various fields by generating a solution as close as possible to reality from the design stage onward [1]. The paper presents a hybrid modeling/simulation solution for a hydrostatic transmission with mixed adjustment. For simulation and control of the examined process we have used two distinct environments, AMESim and LabVIEW. The proposed solution allows coupling of the system's model to the software control modules developed using virtual instrumentation. The simulation network of the analyzed system was "tuned" and validated against an actual model of the process. This paper highlights some aspects regarding the energy and functional advantages of hydraulic transmissions based on adjustable volumetric machines in their primary and secondary sections [2].

  10. On Advanced Estimation Techniques for Exoplanet Detection and Characterization Using Ground-based Coronagraphs

    PubMed Central

    Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry

    2015-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012. PMID:26347393
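    The ROC evaluation proposed for the blind data challenge can be sketched directly: sweep a threshold over detector scores at true-planet and empty locations, and integrate the resulting curve. The score values below are made up for illustration:

```python
def roc_curve(scores_signal, scores_noise):
    """Empirical ROC: sweep a threshold over all observed scores and
    record (false-positive rate, true-positive rate) pairs."""
    thresholds = sorted(set(scores_signal + scores_noise), reverse=True)
    pts = [(0.0, 0.0)]
    for th in thresholds:
        tpr = sum(s >= th for s in scores_signal) / len(scores_signal)
        fpr = sum(s >= th for s in scores_noise) / len(scores_noise)
        pts.append((fpr, tpr))
    return pts

def auc(points):
    """Trapezoidal area under the ROC curve (1.0 = perfect detector)."""
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

signal = [0.9, 0.8, 0.7, 0.4]   # detector output at true planet locations
noise = [0.6, 0.3, 0.2, 0.1]    # detector output at empty locations
print(auc(roc_curve(signal, noise)))
```

    The LROC curve used in the paper extends this by also scoring whether the detection was localized correctly, not just whether the score exceeded the threshold.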

  11. On Advanced Estimation Techniques for Exoplanet Detection and Characterization using Ground-based Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lawson, Peter; Frazin, Richard

    2012-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.

  12. On Advanced Estimation Techniques for Exoplanet Detection and Characterization using Ground-Based Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Frazin, Richard; Barrett, Harrison; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gladysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jerome; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Perrin, Marshall; Poyneer, Lisa; Pueyo, Laurent; Savransky, Dmitry; Soummer, Remi

    2012-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We provide a formal comparison of techniques through a blind data challenge and evaluate performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.

  13. Endoscopic therapy for early gastric cancer: Standard techniques and recent advances in ESD

    PubMed Central

    Kume, Keiichiro

    2014-01-01

    The technique of endoscopic submucosal dissection (ESD) is now a well-known endoscopic therapy for early gastric cancer. ESD was introduced to resect large specimens of early gastric cancer in a single piece. ESD can provide precision of histologic diagnosis and can also reduce the recurrence rate. However, the drawback of ESD is its technical difficulty, and, consequently, it is associated with a high rate of complications, the need for advanced endoscopic techniques, and a lengthy procedure time. Various advances in the devices and techniques used for ESD have contributed to overcoming these drawbacks. PMID:24914364

  14. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In the conventional tool-positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and an image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge coupled device camera having a resolution of 250 microns, are described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc., in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
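    A minimal particle swarm optimizer of the kind compared in the paper can be sketched as follows: a generic 1-D PSO applied to a toy error model, not the authors' exact setup or parameters:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimizer. Each particle's velocity is
    pulled toward its own best position (c1) and the swarm's best (c2),
    with inertia w; positions are clamped to the bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # per-particle best positions
    gbest = min(pbest, key=objective)   # swarm best position
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
                if objective(xs[i]) < objective(gbest):
                    gbest = xs[i]
    return gbest

# Toy measurement-error model with a known minimum at x = 3.
print(pso(lambda x: (x - 3.0) ** 2, (-10.0, 10.0)))  # close to 3.0
```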

  15. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    SciTech Connect

    Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.

    1997-12-31

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low-NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.

  16. Calculation of free fall trajectories based on numerical optimization techniques

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development of a means of computing free-fall (nonthrusting) trajectories from one specified point in the solar system to another specified point in the solar system in a given amount of time was studied. The problem is that of solving a two-point boundary value problem for which the initial slope is unknown. Two standard methods of attack exist for solving two-point boundary value problems. The first method is known as the initial value or shooting method. The second method of attack for two-point boundary value problems is to approximate the nonlinear differential equations by an appropriate linearized set. Parts of both boundary value problem solution techniques described above are used. A complete velocity history is guessed such that the corresponding position history satisfies the given boundary conditions at the appropriate times. An iterative procedure is then followed until the last guessed velocity history and the velocity history obtained from integrating the acceleration history agree to some specified tolerance everywhere along the trajectory.
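    The iterative procedure described is a shooting method. The sketch below applies the same idea to a deliberately simple 1-D stand-in (constant gravity, unknown initial velocity), adjusting the guessed initial slope by the secant method until the integrated endpoint matches the target:

```python
def integrate(v0, g=9.81, T=2.0, n=1000):
    """Forward-Euler integration of y'' = -g from y(0)=0, y'(0)=v0;
    returns the position at time T."""
    dt = T / n
    y, v = 0.0, v0
    for _ in range(n):
        y += v * dt
        v -= g * dt
    return y

def shoot(target, v_a=0.0, v_b=50.0, tol=1e-9):
    """Shooting method for the two-point boundary value problem:
    adjust the unknown initial slope (velocity) by the secant method
    until the integrated endpoint hits the target position."""
    f_a, f_b = integrate(v_a) - target, integrate(v_b) - target
    while abs(f_b) > tol:
        v_a, v_b = v_b, v_b - f_b * (v_b - v_a) / (f_b - f_a)
        f_a, f_b = f_b, integrate(v_b) - target
    return v_b

v0 = shoot(10.0)   # initial speed that reaches y = 10 m at t = 2 s
print(v0, integrate(v0))
```

    The abstract's actual scheme guesses a whole velocity history rather than a single scalar, but the fixed-point idea (guess, integrate, compare, correct) is the same.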

  17. Optimized digital filtering techniques for radiation detection with HPGe detectors

    NASA Astrophysics Data System (ADS)

    Salathe, Marco; Kihm, Thomas

    2016-02-01

    This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
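    A trapezoidal shaping filter of the kind mentioned can be sketched as the difference of two moving sums separated by a gap: a step input (an idealized preamplifier edge) becomes a trapezoid whose flat-top height estimates the pulse amplitude. This is a generic non-recursive form, not GEANA's implementation:

```python
def trapezoidal_filter(x, rise, flat):
    """Simple non-recursive trapezoidal shaper: at each sample, take the
    moving sum of the last `rise` samples minus the moving sum of the
    `rise` samples that ended `flat` samples earlier, normalized by
    `rise`. A unit step in x becomes a trapezoid of height 1 with a
    flat top of `flat` + 1 samples."""
    out = []
    for n in range(len(x)):
        a = sum(x[max(0, n - rise + 1): n + 1])
        lo = max(0, n - 2 * rise - flat + 1)
        hi = max(0, n - rise - flat + 1)
        b = sum(x[lo:hi])
        out.append((a - b) / rise)
    return out

# A unit step shaped into a trapezoid; the flat top equals the step
# height, which is what the energy estimate reads off.
step = [0.0] * 10 + [1.0] * 40
shaped = trapezoidal_filter(step, rise=5, flat=3)
print(max(shaped))
```

    The ballistic deficit correction discussed in the paper compensates for the fact that real detector pulses are not ideal steps, so a finite flat top undermeasures slowly-collected charge.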

  18. Geospatial Analysis and Optimization of Fleet Logistics to Exploit Alternative Fuels and Advanced Transportation Technologies: Preprint

    SciTech Connect

    Sparks, W.; Singer, M.

    2010-06-01

    This paper describes how the National Renewable Energy Laboratory (NREL) is developing geographical information system (GIS) tools to evaluate alternative fuel availability in relation to garage locations and to perform automated fleet-wide optimization to determine where to deploy alternative fuel and advanced technology vehicles and fueling infrastructure.

  19. Techniques for optimizing human-machine information transfer related to real-time interactive display systems

    NASA Technical Reports Server (NTRS)

    Granaas, Michael M.; Rhea, Donald C.

    1989-01-01

    In recent years the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information have expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA), Western Aeronautical Test Range (WATR), range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet the basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity of that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for development of real-time displays; the psychological aspects of design such as the layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.

  20. A technique for locating function roots and for satisfying equality constraints in optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
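    One way to realize the stated property is to take the K-S envelope over each function and its negative, giving a smooth surrogate for max|f_i(x)| that dips to a local minimum wherever all functions vanish simultaneously. The sketch below is a hedged reading of that idea, not the paper's algorithm:

```python
import math

def ks(values, rho=100.0):
    """Kreisselmeier-Steinhauser envelope: a smooth upper bound on
    max(values), within log(len(values))/rho of it."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def ks_root_objective(funcs, x, rho=100.0):
    """Smooth surrogate for max_i |f_i(x)|: the KS envelope over each
    function and its negative. Simultaneous roots appear as minima."""
    vals = []
    for f in funcs:
        vals.extend((f(x), -f(x)))
    return ks(vals, rho)

funcs = [lambda x: x ** 2 - 4.0, lambda x: x - 2.0]  # simultaneous root at x = 2
samples = [1.5 + 0.01 * i for i in range(101)]       # scan 1.5 .. 2.5
best = min(samples, key=lambda x: ks_root_objective(funcs, x))
print(best)   # near 2.0
```

    Because the surrogate is smooth, a standard gradient-based nonlinear programming method can drive it to the minimum, which is exactly how the technique merges into equality-constrained optimization.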

  1. A technique for locating function roots and for satisfying equality constraints in optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1992-01-01

    A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.

  2. Practical Framework for an Electron Beam Induced Current Technique Based on a Numerical Optimization Approach

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Hideshi; Soeda, Takeshi

    2015-03-01

    A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, through evaluation of EBIC equipment performance and numerical optimization of the equipment settings, consistent acquisition of high-contrast images has become possible, improving reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.

  3. A Novel Microcharacterization Technique in the Measurement of Strain and Orientation Gradient in Advanced Materials

    NASA Technical Reports Server (NTRS)

    Garmestai, H.; Harris, K.; Lourenco, L.

    1997-01-01

    Representation of morphology and evolution of the microstructure during processing and their relation to properties requires proper experimental techniques. Residual strains, lattice distortion, and texture (micro-texture) at the interface and the matrix of a layered structure or a functionally gradient material and their variation are among parameters important in materials characterization but hard to measure with present experimental techniques. Current techniques available to measure changes in internal material parameters (residual stress, micro-texture, microplasticity) produce results which are either qualitative or unreliable. This problem becomes even more complicated in the case of a temperature variation. These parameters affect many of the mechanical properties of advanced materials including stress-strain relation, ductility, creep, and fatigue. A review of some novel experimental techniques using recent advances in electron microscopy is presented here to measure internal stress, (micro)texture, interfacial strength and (sub)grain formation and realignment. Two of these techniques are combined in the chamber of an Environmental Scanning Electron Microscope to measure strain and orientation gradients in advanced materials. These techniques, which include Backscattered Kikuchi Diffractometry (BKD) and Microscopic Strain Field Analysis, are used to characterize metallic and intermetallic matrix composites and superplastic materials. These techniques are compared with the more conventional x-ray diffraction and indentation techniques.

  4. Optimal Number of Endoscopic Biopsies in Diagnosis of Advanced Gastric and Colorectal Cancer

    PubMed Central

    Choi, Yeowon; Choi, Hyo Sun; Jeon, Woo Kyu; Kim, Byung Ik; Park, Dong Il; Cho, Yong Kyun; Kim, Hong Joo; Park, Jung Ho

    2012-01-01

    Endoscopic biopsy is necessary to confirm a histopathologic diagnosis. Currently, 6 to 8 biopsies are recommended for diagnosis of a suspected malignant lesion. However, multiple biopsies may result in several problems, such as an increased risk of bleeding, procedure prolongation, and increased workload to pathologists. The aim of this study was to clarify the optimal number of endoscopic biopsy specimens required in diagnosis of advanced gastrointestinal cancer. Patients who were diagnosed with advanced gastrointestinal cancer during endoscopy were included. Five specimens were obtained sequentially from viable tissue of the cancer margin. Experienced pathologists evaluated each specimen and provided diagnoses. A total of 91 patients were enrolled. Fifty-nine subjects had advanced gastric cancer, and 32 had advanced colon cancer. Positive diagnosis rates of the first, second, and third advanced gastric cancer specimens were 81.3%, 94.9%, and 98.3%, respectively, while positive diagnosis rates of advanced colon cancer specimens were 78.1%, 87.5%, and 93.8%. Further biopsies did not increase positive diagnosis cumulative rates. This study demonstrated that three specimens were sufficient to make correct pathologic diagnoses in advanced gastrointestinal cancer. Therefore, we recommend 3 or 4 biopsies from viable tissue in advanced gastrointestinal cancer to make a pathologic diagnosis during endoscopy. PMID:22219611

  5. A damage identification technique based on embedded sensitivity analysis and optimization processes

    NASA Astrophysics Data System (ADS)

    Yang, Chulho; Adams, Douglas E.

    2014-07-01

    A vibration-based structural damage identification method, using embedded sensitivity functions and optimization algorithms, is discussed in this work. The embedded sensitivity technique requires only measured or calculated frequency response functions to obtain the sensitivity of system responses to each component parameter. Therefore, this sensitivity analysis technique can be effectively used for the damage identification process. Optimization techniques are used to minimize the difference between the measured frequency response functions of the damaged structure and those calculated from the baseline system using embedded sensitivity functions. The amount of damage can be quantified directly in engineering units as changes in stiffness, damping, or mass. Various factors in the optimization process and structural dynamics are studied to enhance the performance and robustness of the damage identification process. This study shows that the proposed technique can improve the accuracy of damage identification, with an estimation error of less than 2 percent.
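    The idea of quantifying damage in engineering units by matching frequency response functions can be sketched on a single-degree-of-freedom toy problem (invented parameters, not the embedded-sensitivity formulation itself):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Single-DOF frequency response function H(w) = 1 / (k - m*w^2 + j*c*w).
def frf(w, k, m=1.0, c=0.5):
    return 1.0 / (k - m * w**2 + 1j * c * w)

w = np.linspace(0.1, 5.0, 200)
k_healthy, k_damaged = 10.0, 8.5   # true damage: 15% stiffness loss
H_meas = frf(w, k_damaged)         # "measured" damaged-structure FRF

# Quantify damage directly as a stiffness change dk (engineering units)
# by minimizing the FRF mismatch against the baseline model.
def misfit(dk):
    return np.sum(np.abs(H_meas - frf(w, k_healthy + dk))**2)

res = minimize_scalar(misfit, bounds=(-5.0, 5.0), method="bounded")
dk_est = res.x                     # should recover roughly -1.5
```

In the paper's method the model FRFs come from embedded sensitivity functions rather than an explicit parametric model, but the optimization structure is the same.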

  6. Modulation/demodulation techniques for satellite communications. Part 2: Advanced techniques. The linear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth efficient modulations suitable for use on the linear satellite channel. The underlying principle used is the development of receiver structures based on the maximum-likelihood decision rule. The performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.

  7. Advanced combustion techniques for controlling NO sub x emissions of high altitude cruise aircraft

    NASA Technical Reports Server (NTRS)

    Rudey, R. A.; Reck, G. M.

    1976-01-01

    An array of experiments designed to explore the potential of advanced combustion techniques for controlling the emissions of aircraft into the upper atmosphere was discussed. Of particular concern are the oxides of nitrogen (NOx) emissions into the stratosphere. The experiments utilize a wide variety of approaches varying from advanced combustor concepts to fundamental flame tube experiments. Results are presented which indicate that substantial reductions in cruise NOx emissions should be achievable in future aircraft engines. A major NASA program is described which focuses the many fundamental experiments into a planned evolution and demonstration of the prevaporized-premixed combustion technique in a full-scale engine.

  8. POC-Scale Testing of an Advanced Fine Coal Dewatering Equipment/Technique

    SciTech Connect

    Karekh, B K; Tao, D; Groppo, J G

    1998-08-28

    The froth flotation technique is an effective and efficient process for recovering ultra-fine (minus 74 micron) clean coal. Economical dewatering of an ultra-fine clean coal product to a 20% moisture level will be an important step in successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20% or lower moisture using either conventional or advanced dewatering techniques. The cost-sharing contract effort is for 45 months beginning September 30, 1994. This report discusses technical progress made during the quarter from January 1 - March 31, 1998.

  9. Optimizing molecular properties using a relative index of thermodynamic stability and global optimization techniques.

    PubMed

    Fournier, René; Mohareb, Amir

    2016-01-14

    We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham Density Functional Theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation. First, we searched the composition and configuration space of CmHnNpOq (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of CumSnn (+) (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each CumSnn (+) species had been optimized separately. Finally, we searched bimetallic clusters AmBn (3 ≤ m + n ≤ 6, A,B= Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and large highest occupied Molecular Orbital (MO) to lowest unoccupied MO energy gap (Eg). We found seven bimetallic clusters with Eg > 1.5 eV. PMID:26772561

  10. Optimizing molecular properties using a relative index of thermodynamic stability and global optimization techniques

    NASA Astrophysics Data System (ADS)

    Fournier, René; Mohareb, Amir

    2016-01-01

    We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham Density Functional Theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation. First, we searched the composition and configuration space of CmHnNpOq (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of CumSnn(+) (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each CumSnn(+) species had been optimized separately. Finally, we searched bimetallic clusters AmBn (3 ≤ m + n ≤ 6, A,B= Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and large highest occupied Molecular Orbital (MO) to lowest unoccupied MO energy gap (Eg). We found seven bimetallic clusters with Eg > 1.5 eV.

  11. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formulation can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.

  12. Modulation/demodulation techniques for satellite communications. Part 3: Advanced techniques. The nonlinear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory for deducing and predicting the performance of transmitter/receivers for bandwidth efficient modulations suitable for use on the nonlinear satellite channel is presented. The underlying principle used throughout is the development of receiver structures based on the maximum likelihood decision rule and approximations to it. The bit error probability transfer function bounds developed in great detail in Part 4 are applied to these modulation/demodulation techniques. The effects of the various degrees of receiver mismatch are considered both theoretically and by numerous illustrative examples.

  13. Parameter Optimization in an atmospheric GCM using the Simultaneous Perturbation Stochastic Approximation (SPSA) technique

    NASA Astrophysics Data System (ADS)

    Agarwal, Reema; Köhl, Armin; Stammer, Detlef

    2013-04-01

    We present an application of a multivariate parameter optimization technique to a global primitive equation atmospheric GCM. The technique is based upon the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm, in which gradients of the objective function are approximated. This technique has some advantages over other optimization procedures (such as Green's function or adjoint methods), including robustness to noise in the objective function and the ability to find the actual minimum in the case of multiple minima. Other useful features of the technique are its simplicity and cost effectiveness. The atmospheric GCM used is the coarse resolution PLAnet SIMulator (PLASIM). In order to identify the parameters to be used in the optimization procedure, a series of sensitivity experiments with 12 different parameters was performed; subsequently, 5 parameters related to cloud radiation parameterization, to which the GCM was highly sensitive, were selected. The optimization technique was applied and the selected parameters were simultaneously tuned and tested over 1-year GCM integrations. The performance of the technique is judged by the behavior of the model's cost function, which includes temperature, precipitation, humidity and flux contributions. The method is found to be useful for reducing the model's cost function against both identical twin data and ECMWF ERA-40 reanalysis data.
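    SPSA's defining feature is that it approximates the gradient with only two objective evaluations per step, regardless of how many parameters are tuned. A minimal sketch on an invented noisy quadratic "tuning" problem (standard gain sequences, not the PLASIM setup):

```python
import numpy as np

def spsa(loss, theta, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
         n_iter=1000, seed=0):
    """Simultaneous Perturbation Stochastic Approximation.

    Two loss evaluations per iteration estimate the full gradient via a
    random simultaneous +/-1 perturbation of all parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1) ** alpha      # step-size gain sequence
        ck = c / (k + 1) ** gamma      # perturbation gain sequence
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g_hat = (loss(theta + ck * delta)
                 - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Toy objective: recover parameters minimizing a noisy quadratic cost,
# standing in for a noisy model-vs-observations cost function.
target = np.array([1.0, -2.0, 0.5])
def cost(p):
    return np.sum((p - target) ** 2) + 1e-3 * np.random.default_rng().normal()

theta_opt = spsa(cost, np.zeros(3))
```

The robustness to objective-function noise claimed in the abstract comes from the stochastic-approximation gain sequences, which average out the noise over iterations.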

  14. Cost-Optimal Design of a 3-Phase Core Type Transformer by Gradient Search Technique

    NASA Astrophysics Data System (ADS)

    Basak, R.; Das, A.; Sensarma, A. K.; Sanyal, A. N.

    2014-04-01

    3-phase core type transformers are extensively used as power and distribution transformers in power systems, and their cost is a sizable proportion of the total system cost. Therefore they should be designed cost-optimally. The design methodology for reaching cost-optimality has been discussed in detail by authors like Ramamoorty, and in brief in some of the textbooks on electrical design. The paper gives a method for optimizing the design, in the presence of constraints specified by the customer and the regulatory authorities, through a gradient search technique. The starting point has been chosen within the allowable parameter space, and the steepest descent path has been followed for convergence. The step length has been judiciously chosen and the program has been maneuvered to avoid local minimum points. The method appears to be the best, as its convergence is the quickest among the different optimizing techniques.
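    A minimal sketch of gradient (steepest-descent) search with a judiciously chosen step length, using an invented two-variable cost and a quadratic penalty standing in for a customer-specified constraint (not a transformer cost model):

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference gradient approximation."""
    e = np.eye(len(x))
    return np.array([(f(x + h * e[i]) - f(x - h * e[i])) / (2 * h)
                     for i in range(len(x))])

def steepest_descent(f, x0, step0=1.0, tol=1e-8, max_iter=2000):
    """Steepest descent with backtracking (Armijo) step-length selection."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(f, x)
        if np.linalg.norm(g) < tol:
            break
        t = step0
        # Halve the step until the sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - 0.5 * t * (g @ g):
            t *= 0.5
        x = x - t * g
    return x

# Toy design cost with a penalty enforcing the constraint x0 + x1 >= 3
# (true constrained optimum is near (5/3, 4/3)).
def cost(x):
    penalty = 100.0 * max(0.0, 3.0 - (x[0] + x[1])) ** 2
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 1.0) ** 2 + penalty

x_opt = steepest_descent(cost, [4.0, 4.0])
```

The backtracking loop plays the role of the "judiciously chosen" step length; restarting from several points in the allowable parameter space is the usual way to avoid local minima.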

  15. Characterization techniques for semiconductors and nanostructures: a review of recent advances

    NASA Astrophysics Data System (ADS)

    Acher, Olivier

    2015-01-01

    Optical spectroscopy techniques are widely used for the characterization of semiconductors and nanostructures. Confocal Raman microscopy is useful to retrieve chemical and molecular information at the ultimate submicrometer resolution of optical microscopy. Fast imaging capabilities, 3D confocal ability, and multiple excitation wavelengths have increased the power of the technique while making it simpler to use for material scientists. Recently, the development of Tip Enhanced Raman Spectroscopy (TERS) has opened the way to the use of Raman information at the nanoscale, by combining the resolution of scanning probe microscopy with the chemical selectivity of Raman spectroscopy. Significant advances have been reported in the field of profiling the atomic composition of multilayers using the Glow Discharge Optical Emission Spectroscopy technique, including real-time determination of etched depth by interferometry. This allows the construction of precise atomic profiles of sophisticated multilayers with a few nm resolution. Ellipsometry is another widely used technique to determine the profile of multilayers, and recent developments have provided enhanced spatial resolution useful for the investigation of patterned materials. In addition to the advances in the different characterization techniques, the capability to observe the same regions at micrometer scale at different stages of material elaboration, or with different instruments, is becoming a critical issue. Several advances have been made to allow precise re-localization and co-localization of observations with different complementary characterization techniques.

  16. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.
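    The selection logic of such a KB system can be sketched as an ordered list of condition/recommendation rules (the rules below are hypothetical, not the paper's knowledge base):

```python
# Minimal rule-based selector: the first rule whose condition matches the
# problem description wins; the final catch-all rule is the fallback.
def select_algorithm(problem):
    rules = [
        (lambda p: p["discrete"], "branch-and-bound"),
        (lambda p: p["linear"] and not p["discrete"], "simplex"),
        (lambda p: p["smooth"], "sequential quadratic programming"),
        (lambda p: True, "genetic algorithm"),  # fallback rule
    ]
    for condition, algorithm in rules:
        if condition(problem):
            return algorithm

# Example planning problem: continuous, nonlinear, but smooth.
plan = {"discrete": False, "linear": False, "smooth": True}
choice = select_algorithm(plan)
```

In a real planning tool the conditions would encode operator and planner experience (problem size, constraint types, past violations) rather than three boolean flags.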

  17. Application of response surface techniques to helicopter rotor blade optimization procedure

    NASA Technical Reports Server (NTRS)

    Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
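    A minimal sketch of the general response-surface idea (invented quadratic test function, not the rotor-blade structural analysis): sample the expensive lower-level analysis, fit a polynomial surface by least squares, and optimize the cheap surrogate instead:

```python
import numpy as np
from scipy.optimize import minimize

# "Expensive" lower-level analysis (stand-in for a structural optimization).
def expensive(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2 + 0.5 * x[0] * x[1]

# Sample the design space and fit a full quadratic response surface.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
y = np.array([expensive(x) for x in X])

def basis(x):
    """Full quadratic polynomial basis in two design variables."""
    x0, x1 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x0), x0, x1, x0**2, x1**2, x0 * x1],
                    axis=-1)

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Optimize the cheap surrogate instead of the expensive analysis.
surrogate = lambda x: basis(np.asarray(x)) @ coef
x_star = minimize(surrogate, x0=[0.0, 0.0]).x
```

Because the toy analysis is itself quadratic, the fitted surface reproduces it exactly and the surrogate optimum matches the true one; with a real analysis the surrogate is only an approximation and would be refitted as the search narrows.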

  18. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence.

    PubMed

    Srinivasan, Thenmozhi; Palanisamy, Balasubramanie

    2015-01-01

    Techniques for clustering high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper clusters data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring knowledge of the cluster number from the user. The PCM becomes similarity-based through use of the mountain method. Although this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. The resulting scalable clustering technique is evaluated on synthetic datasets. PMID:26495413

  19. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    PubMed Central

    Srinivasan, Thenmozhi; Palanisamy, Balasubramanie

    2015-01-01

    Techniques for clustering high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper clusters data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring knowledge of the cluster number from the user. The PCM becomes similarity-based through use of the mountain method. Although this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. The resulting scalable clustering technique is evaluated on synthetic datasets. PMID:26495413

  20. Gradient-based multiobjective optimization using a distance constraint technique and point replacement

    NASA Astrophysics Data System (ADS)

    Sato, Yuki; Izui, Kazuhiro; Yamada, Takayuki; Nishiwaki, Shinji

    2016-07-01

    This paper proposes techniques to improve the diversity of the searching points during the optimization process in an Aggregative Gradient-based Multiobjective Optimization (AGMO) method, so that well-distributed Pareto solutions are obtained. First to be discussed is a distance constraint technique, applied among searching points in the objective space when updating design variables, that maintains a minimum distance between the points. Next, a scheme is introduced that deals with updated points that violate the distance constraint, by deleting the offending points and introducing new points in areas of the objective space where searching points are sparsely distributed. Finally, the proposed method is applied to example problems to illustrate its effectiveness.
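    A minimal sketch of the two ideas together (an invented point set, not the authors' AGMO implementation): enforce a minimum pairwise distance among searching points in the objective space, delete violators, and reinsert replacements near the most isolated survivor, i.e. in a sparsely covered region:

```python
import numpy as np

def enforce_spacing(points, d_min, rng):
    """Delete points closer than d_min to an earlier point, then replace
    each deleted point near the most isolated survivor (a sparse region)."""
    kept, deleted = [], 0
    for p in points:
        if all(np.linalg.norm(p - q) >= d_min for q in kept):
            kept.append(p)
        else:
            deleted += 1
    kept = np.array(kept)
    # Reintroduce points where the survivors are sparsest: next to the
    # point whose nearest neighbour is farthest away.
    for _ in range(deleted):
        dists = np.linalg.norm(kept[:, None] - kept[None, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        isolated = kept[np.argmax(dists.min(axis=1))]
        kept = np.vstack([kept,
                          isolated + rng.normal(scale=d_min,
                                                size=kept.shape[1])])
    return kept

rng = np.random.default_rng(0)
pts = rng.uniform(size=(20, 2))          # searching points in objective space
spaced = enforce_spacing(pts, d_min=0.15, rng=rng)
```

In the paper the constraint is applied while updating design variables and replacements are new searching points, but the diversity-preserving effect is the same: the population size is kept constant while crowded points migrate to sparse regions of the Pareto front.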

  1. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-output drone flight control system.
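    The singular-value gradient at the heart of such methods has a standard closed form: for a simple singular value sigma of M with singular vectors u and v, d(sigma)/dp = Re(u^H (dM/dp) v). A numpy sketch on an invented 2x2 return difference matrix I + kG (not the paper's drone model), checked against a finite difference:

```python
import numpy as np

def min_sv_and_gradient(M, dM_dp):
    """Minimum singular value of M and its gradient with respect to a
    design parameter p, via d(sigma)/dp = Re(u^H (dM/dp) v) for the
    singular triplet (u, sigma, v) (valid when sigma is simple)."""
    U, s, Vh = np.linalg.svd(M)
    i = np.argmin(s)
    u, v = U[:, i], Vh[i, :].conj()
    return s[i], np.real(u.conj() @ dM_dp @ v)

# Return difference matrix I + G(jw)*K for a toy 2x2 plant at w = 1,
# with K = k*I and the scalar gain k as the design parameter.
w = 1.0
G = np.array([[1.0 / (1j * w + 1.0), 0.2],
              [0.0, 1.0 / (1j * w + 2.0)]])
k = 2.0
M = np.eye(2) + k * G
dM_dk = G                      # analytic sensitivity of M w.r.t. k

sigma_min, dsigma_dk = min_sv_and_gradient(M, dM_dk)

# Finite-difference check of the analytic gradient.
h = 1e-6
sigma_h = np.linalg.svd(np.eye(2) + (k + h) * G, compute_uv=False).min()
```

In the design loop this gradient would be evaluated over a frequency grid and fed, through a cumulative robustness measure, to the numerical optimizer.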

  2. 75 FR 81643 - In the Matter of Certain Semiconductor Products Made by Advanced Lithography Techniques and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... certain claims of U.S. Patent No. 6,042,998. 75 FR. 44,015 (July 27, 2010). The complaint named two... COMMISSION In the Matter of Certain Semiconductor Products Made by Advanced Lithography Techniques and... for ] importation, and sale within the United States after importation of certain...

  3. Advanced Diffusion-Weighted Magnetic Resonance Imaging Techniques of the Human Spinal Cord

    PubMed Central

    Andre, Jalal B.; Bammer, Roland

    2012-01-01

    Unlike those of the brain, advances in diffusion-weighted imaging (DWI) of the human spinal cord have been challenged by the more complicated and inhomogeneous anatomy of the spine, the differences in magnetic susceptibility between adjacent air and fluid-filled structures and the surrounding soft tissues, and the inherent limitations of the initially used echo-planar imaging techniques used to image the spine. Interval advances in DWI techniques for imaging the human spinal cord, with the specific aims of improving the diagnostic quality of the images, and the simultaneous reduction in unwanted artifacts have resulted in higher-quality images that are now able to more accurately portray the complicated underlying anatomy and depict pathologic abnormality with improved sensitivity and specificity. Diffusion tensor imaging (DTI) has benefited from the advances in DWI techniques, as DWI images form the foundation for all tractography and DTI. This review provides a synopsis of the many recent advances in DWI of the human spinal cord, as well as some of the more common clinical uses for these techniques, including DTI and tractography. PMID:22158130

  4. Real-time application of advanced three-dimensional graphic techniques for research aircraft simulation

    NASA Technical Reports Server (NTRS)

    Davis, Steven B.

    1990-01-01

    Visual aids are valuable assets to engineers for design, demonstration, and evaluation. Discussed here are a variety of advanced three-dimensional graphic techniques used to enhance the displays of test aircraft dynamics. The new software's capabilities are examined and possible future uses are considered.

  5. Recognizing and Managing Complexity: Teaching Advanced Programming Concepts and Techniques Using the Zebra Puzzle

    ERIC Educational Resources Information Center

    Crabtree, John; Zhang, Xihui

    2015-01-01

    Teaching advanced programming can be a challenge, especially when the students are pursuing different majors with diverse analytical and problem-solving capabilities. The purpose of this paper is to explore the efficacy of using a particular problem as a vehicle for imparting a broad set of programming concepts and problem-solving techniques. We…

  6. Fabrication of advanced electrochemical energy materials using sol-gel processing techniques

    NASA Technical Reports Server (NTRS)

    Chu, C. T.; Chu, Jay; Zheng, Haixing

    1995-01-01

    Advanced materials play an important role in electrochemical energy devices such as batteries, fuel cells, and electrochemical capacitors. They are being used as both electrodes and electrolytes. Sol-gel processing is a versatile solution technique used in fabrication of ceramic materials with tailored stoichiometry, microstructure, and properties. The application of sol-gel processing in the fabrication of advanced electrochemical energy materials will be presented. The potential of sol-gel derived materials for electrochemical energy applications will be discussed along with some examples of successful applications. Sol-gel derived metal oxide electrode materials such as V2O5 cathodes have been demonstrated in solid-state thin film batteries; solid electrolyte materials such as beta-alumina for advanced secondary batteries were prepared by the sol-gel technique long ago; and high surface area transition metal compounds for capacitive energy storage applications can also be synthesized with this method.

  7. Optimizing advanced propeller designs by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.
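    A toy analogue of the simultaneous-update idea (an invented algebraic "flow equation", not the Euler equations): rather than fully converging the flow state for every design candidate, one flow relaxation step and one design update are interleaved per cycle:

```python
# Toy coupled problem: "flow equation" R(q, d) = q - (d^2 + 1) = 0 and
# objective J(q, d) = (q - 2)^2 + 0.1*d^2.  The constrained optimum is at
# d = sqrt(0.95), q = 1.95.
q, d = 1.0, 0.5
for _ in range(2000):
    q += 0.5 * ((d**2 + 1.0) - q)          # one flow-solver relaxation step
    dq_dd = 2.0 * d                        # flow-state sensitivity from R = 0
    grad_d = 2.0 * (q - 2.0) * dq_dd + 0.2 * d
    d -= 0.01 * grad_d                     # simultaneous design update
```

Because the design moves while the flow residual is still nonzero, step sizes must be chosen so the two iterations remain jointly stable; that trade-off is exactly the accuracy/efficiency sensitivity the abstract says is tested.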

  8. Aerodynamic optimization by simultaneously updating flow variables and design parameters with application to advanced propeller designs

    NASA Technical Reports Server (NTRS)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.

  9. Optimization of Heat-Sink Cooling Structure in EAST with Hydraulic Expansion Technique

    NASA Astrophysics Data System (ADS)

    Xu, Tiejun; Huang, Shenghong; Xie, Han; Song, Yuntao; Zhan, Ping; Ji, Xiang; Gao, Daming

    2011-12-01

    Considering utilization of the original chromium-bronze material, two processing techniques, hydraulic expansion and high temperature vacuum welding, were proposed for the optimization of the heat-sink structure in EAST. The heat transfer performance of the heat-sink with or without a cooling tube was calculated, and different types of connection between tube and heat-sink were compared by conducting a special test. It is shown from numerical analysis that the diameter of the heat-sink channel can be reduced from 12 mm to 10 mm. Compared with the original sample, the thermal contact resistance between tube and heat-sink reduces the heat transfer performance by 10% for the welded sample and by 20% for the hydraulically expanded sample. However, the welding technique is more complicated and expensive than the hydraulic expansion technique. Both the processing technique and the heat transfer performance of the heat-sink prototype should be further considered for the optimization of the heat-sink structure in EAST.

  10. Optimizing zonal advection of the Advanced Research WRF (ARW) dynamics for Intel MIC

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecast (WRF) model is the most widely used community weather forecast and research model in the world. There are two distinct varieties of WRF. The Advanced Research WRF (ARW) is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we use the Intel Many Integrated Core (MIC) architecture to substantially increase the performance of a zonal advection subroutine, one of the most time-consuming routines in the ARW dynamics core. Advection advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture. Furthermore, lessons learned from the code optimization process are discussed. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 2.4x.
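    As a rough illustration of the kind of kernel involved (an invented 1D first-order upwind analogue, not the WRF-ARW code), written in vectorized form so a compiler can map the stencil onto wide SIMD units such as those on Xeon Phi:

```python
import numpy as np

def zonal_advection_step(u, wind, dx, dt):
    """First-order upwind advective tendency on a periodic 1D grid.

    The branch-free np.where form mirrors the restructuring typically
    needed so inner stencil loops vectorize well on MIC hardware."""
    dudx_left = (u - np.roll(u, 1)) / dx     # upwind difference, wind >= 0
    dudx_right = (np.roll(u, -1) - u) / dx   # upwind difference, wind < 0
    dudx = np.where(wind >= 0, dudx_left, dudx_right)
    return u - dt * wind * dudx

u = np.zeros(64)
u[10:20] = 1.0                               # square pulse to advect
u_next = zonal_advection_step(u, wind=np.full(64, 1.0), dx=1.0, dt=0.5)
```

The real WRF advection operates on 3D momentum fields with higher-order fluxes, but the performance-critical structure is the same: a tight stencil sweep whose memory layout and branch structure determine how well it vectorizes.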

  11. Optimizing meridional advection of the Advanced Research WRF (ARW) dynamics for Intel Xeon Phi coprocessor

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    The most widely used community weather forecast and research model in the world is the Weather Research and Forecast (WRF) model. Two distinct varieties of WRF exist. The one we are interested in, the Advanced Research WRF (ARW), is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we optimize a meridional (north-south direction) advection subroutine for the Intel Xeon Phi coprocessor. Advection is one of the most time-consuming routines in the ARW dynamics core. It advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture and discuss lessons learned from the code optimization process. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.2x.

  12. A technique for optimally designing engineering structures with manufacturing tolerances accounted for

    NASA Astrophysics Data System (ADS)

    Tabakov, P. Y.; Walker, M.

    2007-01-01

    Accurate optimal design solutions for most engineering structures present considerable difficulties due to the complexity and multi-modality of the functional design space. The situation is made even more complex when potential manufacturing tolerances must be accounted for in the optimizing process. The present study provides an in-depth analysis of the problem, and then a technique for determining the optimal design of engineering structures, with manufacturing tolerances in the design variables accounted for, is proposed and demonstrated. The examples used to demonstrate the technique involve the design optimization of simple fibre-reinforced laminated composite structures. The technique is simple, easy to implement and, at the same time, very efficient. It is assumed that the probability of any tolerance value occurring within the tolerance band, compared with any other, is equal, and thus it is a worst-case scenario approach. In addition, the technique is non-probabilistic. A genetic algorithm with fitness sharing, including a micro-genetic algorithm, has been found to be very suitable to use, and is implemented in the technique. The numerical examples presented in the article deal with buckling load design optimization of a laminated angle-ply plate, and evaluation of the maximum burst pressure in a thick laminated anisotropic pressure vessel. Both examples clearly demonstrate the impact of manufacturing tolerances on the overall performance of a structure and emphasize the importance of accounting for such tolerances in the design optimization phase. This is particularly true of the pressure vessel. The results show that when the example tolerances are accounted for, the maximum design pressure is reduced by 60.2% (in the case of a single-layer vessel), and when five layers are specified, if the nominal fibre orientations are implemented and the example tolerances are incurred during fabrication, the actual design pressure could be 64% less than predicted.
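
    The worst-case treatment of tolerances described above (every value in the band equally likely, so only the band's extremes matter) can be sketched as a fitness wrapper around any design objective; the objective below is a made-up stand-in for a real buckling-load model, not the paper's analysis.

```python
from itertools import product

def worst_case_fitness(theta, objective, tol=2.0):
    """Evaluate a design at every corner of its tolerance band and return
    the worst outcome (worst-case scenario approach).

    theta: nominal ply angles in degrees; tol: +/- manufacturing tolerance.
    'objective' is a hypothetical stand-in for e.g. a buckling-load model.
    """
    corners = product(*[(t - tol, t + tol) for t in theta])
    return min(objective(list(c)) for c in corners)

# toy objective: peaks at 45 degrees per ply (illustrative only)
obj = lambda th: sum(-(t - 45.0) ** 2 for t in th)

nominal = worst_case_fitness([45.0, 45.0], obj, tol=0.0)
toleranced = worst_case_fitness([45.0, 45.0], obj, tol=2.0)
# the toleranced score is strictly worse, which is what drives the
# optimizer away from designs that are only good at their nominal angles
```

    A GA (with fitness sharing, as in the paper) would simply use `worst_case_fitness` in place of the plain objective; note the corner enumeration grows as 2^n in the number of design variables.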

  13. Locally Advanced Lung Cancer: An Optimal Setting for Vaccines and Other Immunotherapies

    PubMed Central

    Iyengar, Puneeth; Gerber, David E.

    2013-01-01

    Lung cancer has traditionally been considered relatively resistant to immunotherapies. However, recent advances in the understanding of tumor-associated antigens, anti-tumor immune responses, and tumor immunosuppression mechanisms have resulted in a number of promising immunomodulatory therapies such as vaccines and checkpoint inhibitors. Locally advanced non-small cell lung cancer (NSCLC) is an optimal setting for these treatments because standard therapies such as surgery, radiation, and chemotherapy may enhance anti-tumor immune effects by debulking the tumor, increasing tumor antigen presentation, and promoting T-cell response and trafficking. Clinical trials incorporating immunomodulatory agents into combined modality therapy of locally advanced NSCLC have shown promising results. Future challenges include identifying biomarkers to predict those patients most likely to benefit from this approach, radiographic assessment of treatment effects, the timing and dosing of combined modality treatment including immunotherapies, and avoidance of potentially overlapping toxicities. PMID:23708072

  14. A technique optimization protocol and the potential for dose reduction in digital mammography

    SciTech Connect

    Ranger, Nicole T.; Lo, Joseph Y.; Samei, Ehsan

    2010-03-15

    Digital mammography requires revisiting techniques that have been optimized for prior screen/film mammography systems. The objective of the study was to determine an optimized radiographic technique for a digital mammography system and demonstrate the potential for dose reduction in comparison to the clinically established techniques based on screen/film. An objective figure of merit (FOM) was employed to evaluate a direct-conversion amorphous selenium (a-Se) FFDM system (Siemens Mammomat NovationDR, Siemens AG Medical Solutions, Erlangen, Germany) and was derived from the quotient of the squared signal-difference-to-noise ratio to mean glandular dose, for various combinations of technique factors and breast phantom configurations including kilovoltage settings (23-35 kVp), target/filter combinations (Mo-Mo and W-Rh), breast-equivalent plastic in various thicknesses (2-8 cm) and densities (100% adipose, 50% adipose/50% glandular, and 100% glandular), and simulated mass and calcification lesions. When using a W-Rh spectrum, the optimized FOM results for the simulated mass and calcification lesions showed highly consistent trends with kVp for each combination of breast density and thickness. The optimized kVp ranged from 26 kVp for 2 cm 100% adipose breasts to 30 kVp for 8 cm 100% glandular breasts. The use of the optimized W-Rh technique compared to standard Mo-Mo techniques provided dose savings ranging from 9% for 2 cm thick, 100% adipose breasts, to 63% for 6 cm thick, 100% glandular breasts, and for breasts with a 50% adipose/50% glandular composition, from 12% for 2 cm thick breasts up to 57% for 8 cm thick breasts.
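
    The figure of merit defined in the abstract, the squared signal-difference-to-noise ratio divided by mean glandular dose, is simple to compute once those quantities are measured; the numbers below are illustrative placeholders, not the paper's measurements.

```python
def figure_of_merit(signal_diff, noise, mgd_mGy):
    """FOM = (signal-difference-to-noise ratio)^2 / mean glandular dose.
    Units are illustrative; the paper's measurement pipeline for the
    signal difference and noise is not reproduced here."""
    sdnr = signal_diff / noise
    return sdnr ** 2 / mgd_mGy

# two hypothetical techniques imaging the same phantom:
fom_mo = figure_of_merit(signal_diff=30.0, noise=5.0, mgd_mGy=2.0)   # Mo-Mo
fom_w = figure_of_merit(signal_diff=30.0, noise=5.0, mgd_mGy=1.2)    # W-Rh
dose_saving = 1.0 - 1.2 / 2.0   # same SDNR delivered at 40% less dose
```

    Maximizing this FOM over kVp and target/filter is exactly a trade of image quality against dose: a technique with equal SDNR at lower dose scores higher, which is how the dose savings quoted above arise.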

  15. A technique optimization protocol and the potential for dose reduction in digital mammography

    PubMed Central

    Ranger, Nicole T.; Lo, Joseph Y.; Samei, Ehsan

    2010-01-01

    Digital mammography requires revisiting techniques that have been optimized for prior screen/film mammography systems. The objective of the study was to determine an optimized radiographic technique for a digital mammography system and demonstrate the potential for dose reduction in comparison to the clinically established techniques based on screen/film. An objective figure of merit (FOM) was employed to evaluate a direct-conversion amorphous selenium (a-Se) FFDM system (Siemens Mammomat NovationDR, Siemens AG Medical Solutions, Erlangen, Germany) and was derived from the quotient of the squared signal-difference-to-noise ratio to mean glandular dose, for various combinations of technique factors and breast phantom configurations including kilovoltage settings (23-35 kVp), target/filter combinations (Mo-Mo and W-Rh), breast-equivalent plastic in various thicknesses (2-8 cm) and densities (100% adipose, 50% adipose/50% glandular, and 100% glandular), and simulated mass and calcification lesions. When using a W-Rh spectrum, the optimized FOM results for the simulated mass and calcification lesions showed highly consistent trends with kVp for each combination of breast density and thickness. The optimized kVp ranged from 26 kVp for 2 cm 100% adipose breasts to 30 kVp for 8 cm 100% glandular breasts. The use of the optimized W-Rh technique compared to standard Mo-Mo techniques provided dose savings ranging from 9% for 2 cm thick, 100% adipose breasts, to 63% for 6 cm thick, 100% glandular breasts, and for breasts with a 50% adipose/50% glandular composition, from 12% for 2 cm thick breasts up to 57% for 8 cm thick breasts. PMID:20384232

  16. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  17. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.

    1972-01-01

    The search for advanced measurement techniques for determining long term compatibility of materials with propellants was conducted in several parts. A comprehensive survey of the existing measurement and testing technology for determining material-propellant interactions was performed. Selections were made from those existing techniques judged able to meet, or be adapted to meet, the requirements. Areas of refinement or change were recommended for the improvement of others. Investigations were also performed to determine the feasibility and advantages of developing and using new techniques to achieve significant improvements over existing ones. The most interesting demonstration was that of a new technique, volatile metal chelate analysis. Rivaling neutron activation analysis in terms of sensitivity and specificity, the volatile metal chelate technique was fully demonstrated.

  18. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    NASA Astrophysics Data System (ADS)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level and qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  19. Comparative Studies of Particle Swarm Optimization Techniques for Reactive Power Allocation Planning in Power Systems

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshikazu

    This paper compares particle swarm optimization (PSO) techniques for a reactive power allocation planning problem in power systems. The problem can be formulated as a mixed-integer nonlinear optimization problem (MINLP). The PSO-based methods determine a reactive power allocation strategy with continuous and discrete state variables such as automatic voltage regulator (AVR) operating values of electric power generators, tap positions of on-load tap changers (OLTC) of transformers, and the number of reactive power compensation equipment. Namely, this paper investigates the applicability of PSO techniques to one of the practical MINLPs in power systems. Four variations of PSO are compared: PSO with the inertia weight approach (IWA), PSO with the constriction factor approach (CFA), hybrid particle swarm optimization (HPSO) with IWA, and HPSO with CFA. The four methods are applied to the standard IEEE 14-bus system and a practical 112-bus system.
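
    A minimal continuous-variable PSO of the kind compared in the paper can be sketched as follows; the coefficients are the commonly cited Clerc-style values (which double as an inertia-weight setting), and the discrete variables of the real MINLP (tap positions, compensator counts) are omitted from this sketch.

```python
import random

def pso_minimize(f, bounds, n=20, iters=100,
                 w=0.729, c1=1.49445, c2=1.49445, seed=0):
    """Minimal PSO: inertia term w plus cognitive (c1) and social (c2)
    attractions toward personal and global bests. Continuous variables
    only; a hedged sketch, not the paper's IWA/CFA/HPSO implementations."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # clamp positions to the feasible box
                xs[i][d] = min(max(xs[i][d] + vs[i][d],
                                   bounds[d][0]), bounds[d][1])
            v = f(xs[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, list(xs[i])
                if v < gval:
                    gval, gbest = v, list(xs[i])
    return gbest, gval

best, val = pso_minimize(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
                         [(-10, 10), (-10, 10)])
```

    Handling the discrete variables of the power-system problem typically adds a rounding or nearest-feasible-value step after each position update.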

  20. Neural network and polynomial-based response surface techniques for supersonic turbine design optimization

    NASA Astrophysics Data System (ADS)

    Papila, Nilay Uzgoren

    Turbine performance directly affects engine specific impulse, thrust-to-weight ratio, and cost in a rocket propulsion system. This dissertation focuses on methodology and application of employing optimization techniques, with the neural network (NN) and polynomial-based response surface method (RSM), for supersonic turbine optimization. The research is relevant to NASA's reusable launch vehicle initiatives. It is demonstrated that accuracy of the response surface (RS) approximations can be improved with combined utilization of the NN and polynomial techniques, and higher emphasis on data in regions of interest. The design of experiment methodology is critical to performing optimization in an efficient and effective manner. In physical applications, both preliminary design and detailed shape design optimization are investigated. For the preliminary design level, single-, two-, and three-stage turbines are considered with the number of design variables increasing from six to 11 and then to 15, in accordance with the number of stages. A major goal of the preliminary optimization effort is to balance the desire of maximizing aerodynamic performance and minimizing weight. To ascertain required predictive capability of the RSM, a two-level domain refinement approach (windowing) has been adopted. The accuracy of the predicted optimal design points based on this strategy is shown to be satisfactory. The results indicate that the two-stage turbine is the optimum configuration, with the higher efficiency corresponding to smaller weights. It is demonstrated that the criteria for selecting the database exhibit significant impact on the efficiency and effectiveness of the construction of the response surface. Based on the optimized preliminary design outcome, shape optimization is performed for vanes and blades of a two-stage supersonic turbine, involving O(10) design variables. It is demonstrated that a major merit of the RS-based optimization approach is that it enables one
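
    The polynomial side of the response-surface methodology reduces, in its simplest one-variable form, to a least-squares fit whose stationary point is read directly off the coefficients; this sketch omits the neural-network component and the multi-variable design space of the dissertation.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit y ~ b0 + b1*x + b2*x^2 by least squares: a one-variable
    polynomial response surface built from 'experiment' samples."""
    A = np.column_stack([np.ones_like(X), X, X ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# noiseless synthetic "experiments" along one design variable
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 0.5 * X - 1.0 * X ** 2
b0, b1, b2 = fit_quadratic_rs(X, y)

# stationary point of the fitted surface (a maximum when b2 < 0)
x_opt = -b1 / (2 * b2)
```

    In the multi-variable setting the design matrix gains cross terms, and "windowing" amounts to refitting on a smaller domain around the predicted optimum.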

  1. Optimizing Performance on Linux Clusters Using Advanced Communication Protocols: Achieving Over 10 Teraflops on a 8.6 Teraflops Linpack-Rated Linux Cluster

    SciTech Connect

    Krishnan, Manoj Kumar; Nieplocha, Jarek

    2005-04-26

    Advancements in high-performance networks (Quadrics, Infiniband or Myrinet) continue to improve the efficiency of modern clusters. However, the average application efficiency remains only a small fraction of the system's peak efficiency. This paper describes techniques for optimizing application performance on Linux clusters using Remote Memory Access communication protocols. The effectiveness of these optimizations is presented in the context of an application kernel, dense matrix multiplication. The result was achieving over 10 teraflops on an HP Linux cluster whose LINPACK performance is measured as 8.6 teraflops.
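
    The blocked, panel-by-panel structure of dense matrix multiplication is what creates the opportunity to overlap one-sided (RMA) communication with computation; the serial sketch below shows only that structure, with a comment marking where a distributed implementation would prefetch, and is not the paper's actual algorithm.

```python
import numpy as np

def panel_matmul(A, B, nb):
    """Block outer-product matrix multiply: accumulate C over rank-nb
    updates, one panel pair at a time. In a distributed run, the panels
    for step p+nb could be fetched (e.g. via one-sided RMA get) while
    the current update computes, hiding communication latency."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for p in range(0, k, nb):
        # <- prefetch of the next panels would be issued here
        C += A[:, p:p + nb] @ B[p:p + nb, :]
    return C

A = np.arange(12.0).reshape(3, 4)
B = np.arange(8.0).reshape(4, 2)
C = panel_matmul(A, B, nb=2)
```

    The panel width `nb` trades communication granularity against how much computation is available to hide each transfer behind.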

  2. Prognostic role of bowel involvement in optimally cytoreduced advanced ovarian cancer: a retrospective study

    PubMed Central

    2014-01-01

    Background Optimal debulking surgery is postulated to improve the survival of ovarian cancer patients. Some studies highlighted the possible role of bowel surgery in this topic. We wanted to evaluate the role of bowel involvement in patients with advanced epithelial ovarian cancer who underwent optimal cytoreduction. Methods Between 1997 and 2004, 301 patients with advanced epithelial cancer underwent surgery at the Department of Gynecological Oncology of Centro di Riferimento Oncologico (CRO) National Cancer Institute Aviano (PN) Italy. All underwent maximal surgical effort, including bowel and upper abdominal procedures, in order to achieve optimal debulking (R < 0.5 cm). PFS and OS were compared with residual disease, grading and surgical procedures. Results Optimal cytoreduction was achieved in 244 patients (81.0%); R0 in 209 women (69.4%) and R < 0.5 in 35 (11.6%). Bowel resection was performed in 116 patients (38.5%): recto-sigmoidectomy alone (69.8%), upper bowel resection only (14.7%) and both recto-sigmoidectomy and other bowel resection (15.5%). Pelvic peritonectomy and upper abdomen procedures were carried out in 202 (67.1%) and 82 (27.2%) patients respectively. Among the 284 patients available for follow-up, PFS and OS were significantly better in patients with R < 0.5. Among the 229 patients with optimal debulking (R < 0.5), 137 patients (59.8%) developed recurrent disease or progression. In the 229 R < 0.5 group, bowel involvement was associated with decreased PFS and OS in G1-2 patients whereas in G3 patients OS, but not PFS, was adversely affected. In the 199 patients with R0, PFS and OS were significantly better (p < 0.01) for G1-2 patients without bowel involvement whereas only significant OS (p < 0.05) was observed in G3 patients without bowel involvement versus G3 patients with bowel involvement. Conclusions Optimal cytoreduction (R < 0.5 cm and R0) is the most important prognostic factor for advanced epithelial ovarian cancer. In the optimally

  3. Probability-based least square support vector regression metamodeling technique for crashworthiness optimization problems

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Li, Enying; Li, G. Y.

    2011-03-01

    This paper presents a crashworthiness design optimization method based on a metamodeling technique. Crashworthiness optimization is a highly nonlinear and large scale problem, which is composed of various nonlinearities, such as geometry, material and contact, and needs a large number of expensive evaluations. In order to obtain a robust approximation efficiently, a probability-based least square support vector regression is suggested to construct metamodels by considering structure risk minimization. Further, to save computational cost, an intelligent sampling strategy is applied to generate sample points at the stage of design of experiment (DOE). In this paper, a cylinder and a full vehicle frontal collision are involved. The results demonstrate that the proposed metamodel-based optimization is efficient and effective in solving crashworthiness design optimization problems.
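
    Least-squares SVR replaces the quadratic program of standard SVR with a single linear (KKT) solve, which is what makes it cheap enough to serve as a metamodel over expensive crash simulations. Below is a sketch in the standard Suykens LS-SVM formulation with guessed hyperparameters; it is not the paper's probability-based variant or its sampling strategy.

```python
import numpy as np

def lssvr_fit_predict(X, y, Xq, gamma=1e4, sigma=0.2):
    """Least-squares SVR with an RBF kernel: training reduces to one
    linear KKT solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    Hyperparameters gamma (regularization) and sigma (kernel width)
    are illustrative guesses."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    n = len(X)
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K(X, X) + np.eye(n) / gamma]])
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return K(Xq, X) @ alpha + b

# toy "simulation" responses over one normalized design variable
X = np.linspace(0, 1, 8).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
pred = lssvr_fit_predict(X, y, X)   # metamodel evaluated at the samples
```

    In a crashworthiness workflow the rows of `X` would be design variables, `y` the crash responses from finite-element runs, and the fitted metamodel would stand in for the solver inside the optimizer.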

  4. Identification of a reflection boundary coefficient in an acoustic wave equation by optimal control techniques

    SciTech Connect

    Lenhart, S. |; Protopopescu, V.; Yong, J.

    1997-12-31

    The authors apply optimal control techniques to find approximate solutions to an inverse problem for the acoustic wave equation. The inverse problem (assumed here to have a solution) is to determine the boundary reflection coefficient from partial measurements of the acoustic signal. The sought reflection coefficient is treated as a control, and the goal, quantified by an approximate functional, is to drive the model solution close to the experimental data by adjusting this coefficient. The problem is solved by finding the optimal control that minimizes the approximate functional. Then, by driving the cost of the control to zero, one proves that the corresponding sequence of optimal controls represents a converging sequence of estimates for the solution of the inverse problem. Compared to classical regularization methods (e.g., Tikhonov coupled with optimization schemes), this approach yields: (i) a systematic procedure to solve inverse problems of identification type and (ii) an explicit expression for the approximations of the solution.
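
    The identification loop, solve the forward model for a candidate coefficient, measure the misfit functional, and keep the minimizing coefficient, can be illustrated on a toy forward model; the exponential decay below is a hypothetical stand-in for an acoustic wave-equation solve, and the grid search stands in for a proper optimal-control solver.

```python
import math

def recover_coefficient(data, times, candidates):
    """Treat the unknown coefficient r as a control and return the
    candidate minimizing the misfit functional
    J(r) = sum_t (model(r, t) - data(t))^2."""
    def J(r):
        return sum((math.exp(-r * t) - d) ** 2 for t, d in zip(times, data))
    return min(candidates, key=J)

# synthesize "measurements" from a known true coefficient
times = [0.1 * i for i in range(10)]
true_r = 0.7
data = [math.exp(-true_r * t) for t in times]

grid = [i * 0.01 for i in range(201)]      # candidate r in [0, 2]
r_hat = recover_coefficient(data, times, grid)
```

    The paper's approach additionally penalizes the control itself and studies what happens as that penalty is driven to zero; here the data are noiseless, so the plain misfit already pins down the coefficient.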

  5. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging techniques and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. PMID:24464989

  6. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
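
    The spreadsheet-style decision analysis described above amounts to a weighted attribute score per crop, ranked to select the species mix; the attributes, values, and weights below are illustrative placeholders, not CELSS data.

```python
def select_crops(crops, weights):
    """Spreadsheet-style decision analysis: score each crop as a
    weighted sum of its normalized attributes and rank descending."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return sorted(crops, key=lambda c: score(c[1]), reverse=True)

# attributes normalized to [0, 1]; higher is better (illustrative values)
crops = [
    ("wheat",   {"o2_output": 0.9, "edible_fraction": 0.5, "low_volume": 0.4}),
    ("lettuce", {"o2_output": 0.3, "edible_fraction": 0.9, "low_volume": 0.8}),
    ("potato",  {"o2_output": 0.6, "edible_fraction": 0.8, "low_volume": 0.5}),
]
weights = {"o2_output": 0.5, "edible_fraction": 0.3, "low_volume": 0.2}
ranking = [name for name, _ in select_crops(crops, weights)]
```

    Changing the weights is how the selected level of life support (air and water regeneration versus food) shifts the ranking; the companion design-optimization method treats the same trade-off with explicit volume and power constraints.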

  7. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1992-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  8. Multicycle Optimization of Advanced Gas-Cooled Reactor Loading Patterns Using Genetic Algorithms

    SciTech Connect

    Ziver, A. Kemal; Carter, Jonathan N.; Pain, Christopher C.; Oliveira, Cassiano R.E. de; Goddard, Antony J. H.; Overton, Richard S.

    2003-02-15

    A genetic algorithm (GA)-based optimizer (GAOPT) has been developed for in-core fuel management of advanced gas-cooled reactors (AGRs) at HINKLEY B and HARTLEPOOL, which employ on-load and off-load refueling, respectively. The optimizer has been linked to the reactor analysis code PANTHER for the automated evaluation of loading patterns in a two-dimensional geometry, which is collapsed from the three-dimensional reactor model. GAOPT uses a directed stochastic (Monte Carlo) algorithm to generate initial population members, within predetermined constraints, for use in GAs, which apply the standard genetic operators: selection by tournament, crossover, and mutation. The GAOPT is able to generate and optimize loading patterns for successive reactor cycles (multicycle) within acceptable CPU times even on single-processor systems. The algorithm allows radial shuffling of fuel assemblies in a multicycle refueling optimization, which is constructed to aid long-term core management planning decisions. This paper presents the application of the GA-based optimization to two AGR stations, which apply different in-core management operational rules. Results obtained from the testing of GAOPT are discussed.
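
    The standard genetic operators named in the abstract, tournament selection, crossover, and mutation, applied to loading-pattern permutations, can be sketched as follows; the cost function is a toy stand-in for a PANTHER evaluation, and the order crossover is one common permutation-preserving choice, not necessarily GAOPT's.

```python
import random

def ga_permutation(cost, n_items, pop=30, gens=60, seed=1):
    """Minimal permutation GA with tournament selection, order
    crossover, swap mutation, and one-elite survival."""
    rng = random.Random(seed)

    def tournament(popn):
        a, b = rng.sample(popn, 2)
        return a if cost(a) < cost(b) else b

    def order_crossover(p1, p2):
        # copy a slice of p1, fill the rest in p2's order
        i, j = sorted(rng.sample(range(n_items), 2))
        child = [None] * n_items
        child[i:j] = p1[i:j]
        rest = [g for g in p2 if g not in child[i:j]]
        k = 0
        for idx in list(range(0, i)) + list(range(j, n_items)):
            child[idx] = rest[k]
            k += 1
        return child

    popn = [rng.sample(range(n_items), n_items) for _ in range(pop)]
    for _ in range(gens):
        nxt = [min(popn, key=cost)]                 # elitism
        while len(nxt) < pop:
            c = order_crossover(tournament(popn), tournament(popn))
            if rng.random() < 0.2:                  # swap mutation
                a, b = rng.sample(range(n_items), 2)
                c[a], c[b] = c[b], c[a]
            nxt.append(c)
        popn = nxt
    return min(popn, key=cost)

# toy cost: distance of the loading pattern from the identity ordering
best = ga_permutation(lambda p: sum(abs(g - i) for i, g in enumerate(p)), 8)
```

    In GAOPT the chromosome encodes fuel-assembly positions and the cost comes from a reactor-physics run, so each generation is far more expensive than this toy; the constrained Monte Carlo seeding mentioned in the abstract replaces the uniform `rng.sample` initialization here.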

  9. Experiences at Langley Research Center in the application of optimization techniques to helicopter airframes for vibration reduction

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.; Kvaternik, Raymond G.

    1991-01-01

    A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.
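
    The constrained formulation DYNOPT solves (minimize an objective subject to strength- and frequency-type constraints) can be illustrated with a toy exterior-penalty search; CONMIN itself is a gradient-based method of feasible directions, which this deliberately simple coordinate-descent sketch does not reproduce.

```python
def constrained_minimize(obj, cons, x0, step=0.01, iters=2000, mu=100.0):
    """Exterior-penalty treatment of: minimize obj(x) subject to
    cons_i(x) <= 0, via coordinate descent on
    obj(x) + mu * sum(max(0, cons_i(x))^2). A toy substitute for
    a real constrained optimizer such as CONMIN."""
    def penalized(x):
        return obj(x) + mu * sum(max(0.0, c(x)) ** 2 for c in cons)
    x = list(x0)
    for _ in range(iters):
        for d in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[d] += delta
                if penalized(trial) < penalized(x):
                    x = trial
    return x

# toy airframe sizing: minimize "mass" x0 + x1
# subject to a "stiffness" requirement x0 * x1 >= 1
mass = lambda x: x[0] + x[1]
cons = [lambda x: 1.0 - x[0] * x[1]]      # rewritten in <= 0 form
x = constrained_minimize(mass, cons, [2.0, 2.0])
```

    The search settles where the mass gradient balances the penalty, i.e. on the constraint boundary near x0 = x1 = 1; a real run would have many more variables and frequency/dynamic-response constraints evaluated through MSC/NASTRAN.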

  10. Performance and operating results from the demonstration of advanced combustion techniques for wall-fired boilers

    SciTech Connect

    Sorge, J.N.; Baldwin, A.L.

    1993-11-01

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term performance of advanced overfire air and low-NOx burners applied in a stepwise fashion to a 500 MW boiler. A 50 percent NOx reduction target has been established for the project. The focus of this paper is to present the effects of excess oxygen level and burner settings on NOx emissions and unburned carbon levels, and recent results from the phase of the project when low-NOx burners were used in conjunction with advanced overfire air.

  11. Advances in the surface modification techniques of bone-related implants for last 10 years

    PubMed Central

    Qiu, Zhi-Ye; Chen, Cen; Wang, Xiu-Mei; Lee, In-Seop

    2014-01-01

    At the time of implanting bone-related implants into the human body, a variety of biological responses to the material surface occur with respect to surface chemistry and physical state. The commonly used biomaterials (e.g. titanium and its alloy, Co-Cr alloy, stainless steel, polyetheretherketone, ultra-high molecular weight polyethylene and various calcium phosphates) have many drawbacks such as lack of biocompatibility and improper mechanical properties. As surface modification is a very promising technology to overcome such problems, a variety of surface modification techniques have been investigated. This review paper covers recent advances in surface modification techniques of bone-related materials including physicochemical coating, radiation grafting, plasma surface engineering, ion beam processing and surface patterning techniques. The contents are organized by technique type and the materials to which each applies, and typical examples are also described. PMID:26816626

  12. Unified Instrumentation: Examining the Simultaneous Application of Advanced Measurement Techniques for Increased Wind Tunnel Testing Capability

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A. (Editor); Bartram, Scott M.; Humphreys, William M., Jr.; Jenkins, Luther N.; Jordan, Jeffrey D.; Lee, Joseph W.; Leighty, Bradley D.; Meyers, James F.; South, Bruce W.; Cavone, Angelo A.; Ingram, JoAnne L.

    2002-01-01

    A Unified Instrumentation Test examining the combined application of Pressure Sensitive Paint, Projection Moire Interferometry, Digital Particle Image Velocimetry, Doppler Global Velocimetry, and Acoustic Microphone Array has been conducted at the NASA Langley Research Center. The fundamental purposes of conducting the test were to: (a) identify and solve compatibility issues among the techniques that would inhibit their simultaneous application in a wind tunnel, and (b) demonstrate that simultaneous use of advanced instrumentation techniques is feasible for increasing tunnel efficiency and identifying control surface actuation / aerodynamic reaction phenomena. This paper provides summary descriptions of each measurement technique used during the Unified Instrumentation Test, their implementation for testing in a unified fashion, and example results identifying areas of instrument compatibility and incompatibility. Conclusions are drawn regarding the conditions under which the measurement techniques can be operated simultaneously on a non-interference basis. Finally, areas requiring improvement for successfully applying unified instrumentation in future wind tunnel tests are addressed.

  13. POC-scale testing of an advanced fine coal dewatering equipment/technique

    SciTech Connect

    1998-09-01

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean-coal product to a 20% moisture level will be an important step in successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20% or lower moisture using either conventional or advanced dewatering techniques. The cost-sharing contract effort is for 36 months beginning September 30, 1994. This report discusses technical progress made during the quarter from July 1 - September 30, 1997.

  14. An example of requirements for Advanced Subsonic Civil Transport (ASCT) flight control system using structured techniques

    NASA Technical Reports Server (NTRS)

    Mclees, Robert E.; Cohen, Gerald C.

    1991-01-01

    The requirements are presented for an Advanced Subsonic Civil Transport (ASCT) flight control system generated using structured techniques. The requirements definition begins with a mission analysis that identifies the high-level control system requirements and functions necessary to satisfy the mission. The result of the study is an example set of control system requirements partially represented using a derivative of Yourdon's structured techniques. Also provided is a research focus for studying structured design methodologies, in particular design-for-validation philosophies.

  15. New test techniques and analytical procedures for understanding the behavior of advanced propellers

    NASA Technical Reports Server (NTRS)

    Stefko, G. L.; Bober, L. J.; Neumann, H. E.

    1983-01-01

    Analytical procedures and experimental techniques were developed to improve the capability to design advanced high speed propellers. Some results from the propeller lifting line and lifting surface aerodynamic analysis codes are compared with propeller force data, probe data and laser velocimeter data. In general, the code comparisons with data indicate good qualitative agreement. A rotating propeller force balance demonstrated good accuracy and reduced test time by 50 percent. Results from three propeller flow visualization techniques are shown which illustrate some of the physical phenomena occurring on these propellers.

  16. The investigation of advanced remote sensing techniques for the measurement of aerosol characteristics

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Becher, J.

    1979-01-01

    Advanced remote sensing techniques and inversion methods for the measurement of characteristics of aerosol and gaseous species in the atmosphere were investigated. Of particular interest were the physical and chemical properties of aerosols, such as their size distribution, number concentration, and complex refractive index, and the vertical distribution of these properties on a local as well as global scale. Remote sensing techniques for monitoring of tropospheric aerosols were developed as well as satellite monitoring of upper tropospheric and stratospheric aerosols. Computer programs were developed for solving multiple scattering and radiative transfer problems, as well as inversion/retrieval problems. A necessary aspect of these efforts was to develop models of aerosol properties.

  17. Combined preputial advancement and phallopexy as a revision technique for treating paraphimosis in a dog.

    PubMed

    Wasik, S M; Wallace, A M

    2014-11-01

    A 7-year-old neutered male Jack Russell terrier-cross was presented for signs of recurrent paraphimosis, despite previous surgical enlargement of the preputial ostium. Revision surgery was performed using a combination of preputial advancement and phallopexy, which resulted in complete and permanent coverage of the glans penis by the prepuce, and at 1 year postoperatively, no recurrence of paraphimosis had been observed. The combined techniques allow preservation of the normal penile anatomy, are relatively simple to perform and provide a cosmetic result. We recommend this combination for the treatment of paraphimosis in the dog, particularly when other techniques have failed. PMID:25348145

  18. Advanced digital modulation: Communication techniques and monolithic GaAs technology

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Oliver, J. D., Jr.; Kot, R. C.; Richards, C. R.

    1983-01-01

    Communications theory and practice are merged with state-of-the-art technology in IC fabrication, especially monolithic GaAs technology, to examine the general feasibility of a number of advanced technology digital transmission systems. Satellite-channel models with (1) superior throughput, perhaps 2 Gbps; (2) attractive weight and cost; and (3) high RF power and spectrum efficiency are discussed. Transmission techniques possessing reasonably simple architectures capable of monolithic fabrication at high speeds were surveyed. This included a review of amplitude/phase shift keying (APSK) techniques and the continuous-phase-modulation (CPM) methods, of which MSK represents the simplest case.

  19. Study of advanced techniques for determining the long term performance of components

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The application of existing and new technology to the problem of determining the long-term performance capability of liquid rocket propulsion feed systems is discussed. The long-term performance of metal-to-metal valve seats in a liquid propellant fuel system is stressed. The approaches taken in conducting the analysis are: (1) advancing the technology of characterizing components through the development of new or more sensitive techniques and (2) improving the understanding of the physical mechanisms of degradation.

  20. Nde of Advanced Automotive Composite Materials that Apply Ultrasound Infrared Thermography Technique

    NASA Astrophysics Data System (ADS)

    Choi, Seung-Hyun; Park, Soo-Keun; Kim, Jae-Yeol

    The infrared thermographic nondestructive inspection technique is a quality inspection and stability assessment method used to diagnose the physical characteristics and defects of an object by detecting the infrared radiation it emits, without destroying it. Recently, nondestructive inspection and assessment using the ultrasound-infrared thermography technique have been widely adopted in diverse areas. The ultrasound-infrared thermography technique exploits the phenomenon that an ultrasound wave incident on an object with cracks or defects on its mating surface generates local heat at the defect. The car industry increasingly uses composite materials for their light weight, strength, and environmental resistance. In this study, a car piston, one such composite-material part, was inspected nondestructively using the ultrasound-infrared thermography technique. The effects of the ultrasound frequency and power were also examined to optimize the nondestructive inspection.

  1. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    SciTech Connect

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    The overall objective of this project is to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e., nuclear gauge densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spout diameter, and fountain height) and radioactive particle tracking (RPT) (for measurements of the 3D solids flow field, velocity, turbulence parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two-fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design, and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  2. Optimization technique for improved microwave transmission from multi-solar power satellites

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Kerwin, E. M.

    1982-01-01

    An optimization technique for generating antenna illumination tapers allows improved microwave transmission efficiencies from proposed solar power satellite (SPS) systems and minimizes sidelobe levels to meet preset environmental standards. The cumulative microwave power density levels from 50 optimized SPS systems are calculated at the centroids of each of the 3073 counties in the continental United States. These cumulative levels are compared with Environmental Protection Agency (EPA) measured levels of electromagnetic radiation in seven eastern cities. Effects of rectenna relocations upon the power levels/population exposure rates are also studied.

  3. Optimization technique for improved microwave transmission from multi-solar power satellites

    SciTech Connect

    Arndt, G.D.; Kerwin, E.M.

    1982-08-01

    An optimization technique for generating antenna illumination tapers allows improved microwave transmission efficiencies from proposed solar power satellite (SPS) systems and minimizes sidelobe levels to meet preset environmental standards. The cumulative microwave power density levels from 50 optimized SPS systems are calculated at the centroids of each of the 3073 counties in the continental United States. These cumulative levels are compared with Environmental Protection Agency (EPA) measured levels of electromagnetic radiation in seven eastern cities. Effects of rectenna relocations upon the power levels/population exposure rates are also studied.

  4. Efficient Fast Stereo Acoustic Echo Cancellation Based on Pairwise Optimal Weight Realization Technique

    NASA Astrophysics Data System (ADS)

    Yukawa, Masahiro; Murakoshi, Noriaki; Yamada, Isao

    2006-12-01

    In the stereophonic acoustic echo cancellation (SAEC) problem, fast and accurate tracking of the echo path is strongly required for stable echo cancellation. In this paper, we propose a class of efficient fast SAEC schemes with linear computational complexity (with respect to filter length). The proposed schemes are based on the pairwise optimal weight realization (POWER) technique, thus realizing a "best" strategy (in the sense of pairwise and worst-case optimization) to use multiple-state information obtained by preprocessing. Numerical examples demonstrate that the proposed schemes significantly improve convergence behavior compared with conventional methods, in terms of system mismatch as well as echo return loss enhancement (ERLE).

  5. Comparison of Two Spatial Optimization Techniques: A Framework to Solve Multiobjective Land Use Distribution Problems

    NASA Astrophysics Data System (ADS)

    Meyer, Burghard Christian; Lescot, Jean-Marie; Laplana, Ramon

    2009-02-01

    Two spatial optimization approaches, developed from the opposing perspectives of ecological economics and landscape planning and aimed at the definition of new distributions of farming systems and of land use elements, are compared and integrated into a general framework. The first approach, applied to a small river catchment in southwestern France, uses SWAT (Soil and Water Assessment Tool) and a weighted goal programming model in combination with a geographical information system (GIS) for the determination of optimal farming system patterns, based on selected objective functions to minimize deviations from the goals of reducing nitrogen and maintaining income. The second approach, demonstrated in a suburban landscape near Leipzig, Germany, defines a GIS-based predictive habitat model for the search of unfragmented regions suitable for hare populations (Lepus europaeus), followed by compromise optimization with the aim of planning a new habitat structure distribution for the hare. The multifunctional problem is solved by the integration of the three landscape functions (“production of cereals,” “resistance to soil erosion by water,” and “landscape water retention”). Through the comparison, we propose a framework for the definition of optimal land use patterns based on optimization techniques. The framework includes the main aspects to solve land use distribution problems with the aim of finding the optimal or best land use decisions. It integrates indicators, goals of spatial developments and stakeholders, including weighting, and model tools for the prediction of objective functions and risk assessments. Methodological limits of the uncertainty of data and model outcomes are stressed. The framework clarifies the use of optimization techniques in spatial planning.
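    Weighted goal programming of the kind used in the first approach minimizes weighted deviations from stated goals rather than the goals themselves. A minimal sketch with SciPy's linear-programming routine; the two land uses and all per-hectare coefficients below are invented for illustration and are not values from the study:

```python
from scipy.optimize import linprog

# Two hypothetical land uses: x0 = intensive cropping, x1 = grassland.
nitrate = [40.0, 10.0]     # kg N leached per ha (illustrative)
income = [900.0, 300.0]    # income per ha (illustrative)
total_area = 100.0         # ha to allocate
nitrate_goal = 2000.0      # desired maximum total leaching
income_goal = 60000.0      # desired minimum total income

# Decision vector: [x0, x1, dN, dI]; dN and dI are the one-sided
# deviations from the nitrate and income goals. Minimize the
# goal-normalized sum of deviations.
c = [0.0, 0.0, 1.0 / nitrate_goal, 1.0 / income_goal]

# 40 x0 + 10 x1 - dN <= 2000    (dN absorbs nitrate overshoot)
# 900 x0 + 300 x1 + dI >= 60000 (dI absorbs income shortfall)
A_ub = [[40.0, 10.0, -1.0, 0.0],
        [-900.0, -300.0, 0.0, -1.0]]
b_ub = [nitrate_goal, -income_goal]
A_eq = [[1.0, 1.0, 0.0, 0.0]]   # allocate the whole area
b_eq = [total_area]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, None)] * 4)
```

    With these made-up numbers the optimum meets the nitrate goal exactly and accepts an income shortfall, which is exactly the compromise behavior the abstract describes.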

  6. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
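    The "numerical derivatives with complex variables" mentioned above refers to the complex-step derivative method: evaluating the function at a tiny imaginary perturbation gives the derivative without the subtractive cancellation that limits finite differences. A self-contained sketch using the classic Squire–Trapp test function as a stand-in for a fuel cell cost function (the function itself is not from this dissertation):

```python
import numpy as np

def f(x):
    # Smooth test function (Squire & Trapp); works for real and
    # complex inputs alike, standing in for a cost function.
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5

# Complex-step derivative: df/dx = Im[f(x + i h)] / h + O(h^2).
# No subtraction occurs, so h can be made tiny without cancellation.
h = 1e-20
d_cs = np.imag(f(x0 + 1j * h)) / h

# Forward finite difference for comparison (limited by cancellation).
hf = 1e-8
d_fd = (f(x0 + hf) - f(x0)) / hf
```

    Both estimates agree to several digits here, but only the complex step retains full accuracy as the step size shrinks.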

  7. POC-scale testing of an advanced fine coal dewatering equipment/technique

    SciTech Connect

    Groppo, J.G.; Parekh, B.K.; Rawls, P.

    1995-11-01

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean coal product to 20 percent moisture will be an important step in the successful implementation of advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20 percent or lower moisture using either conventional or advanced dewatering techniques. As the contract title suggests, the main focus of the program is on proof-of-concept testing of a dewatering technique for a fine clean coal product. The coal industry is reluctant to use the advanced fine coal recovery technology due to the non-availability of an economical dewatering process. In fact, in a recent survey conducted by U.S. DOE and Battelle, dewatering of fine clean coal was identified as the number one priority for the coal industry. This project will attempt to demonstrate an efficient and economical fine clean coal slurry dewatering process.

  8. Influence of robust optimization in intensity-modulated proton therapy with different dose delivery techniques

    SciTech Connect

    Liu Wei; Li Yupeng; Li Xiaoqiang; Cao Wenhua; Zhang Xiaodong

    2012-06-15

    Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced angle beams; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: Without robust optimization, the DET is less robust to uncertainties than the 3D method but offers better normal tissue protection. Robust optimization accounting for range and setup uncertainties can improve the robustness of IMPT plans; however, our findings show the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be reduced by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the
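    Worst-case robust optimization of the kind described here optimizes the most unfavorable uncertainty scenario rather than the nominal one. A toy one-dimensional sketch, assuming an invented Gaussian-beam dose model and discrete setup-shift scenarios (none of these details come from the study):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D dose model: two Gaussian beams deposit dose on a line of
# voxels; the target occupies the centre (all values illustrative).
x = np.linspace(-5.0, 5.0, 101)
target = (np.abs(x) <= 1.5).astype(float)   # prescription = 1 in target

def dose(w, shift):
    # Setup uncertainty modelled as a rigid shift of both beams.
    b1 = np.exp(-0.5 * ((x - 1.0 - shift) / 1.2) ** 2)
    b2 = np.exp(-0.5 * ((x + 1.0 - shift) / 1.2) ** 2)
    return w[0] * b1 + w[1] * b2

shifts = [-0.5, 0.0, 0.5]   # discrete setup-error scenarios

def worst_case(w):
    # Minimax robust objective: the largest squared deviation from
    # the prescription over all scenarios.
    return max(np.sum((dose(w, s) - target) ** 2) for s in shifts)

res = minimize(worst_case, x0=[1.0, 1.0], method="Nelder-Mead")
```

    Optimizing `worst_case` instead of the nominal-scenario objective is what trades a little nominal plan quality for robustness to the scenario set.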

  9. Optimal Time Advance In Terminal Area Arrivals: Throughput vs. Fuel Savings

    NASA Technical Reports Server (NTRS)

    Sadovsky, Alexander V .; Swenson, Harry N.; Haskell, William B.; Rakas, Jasenka

    2011-01-01

    The current operational practice in scheduling air traffic arriving at an airport is to adjust flight schedules by delay, i.e., a postponement of an aircraft's arrival at a scheduled location, to safely manage the FAA-mandated separation constraints between aircraft. To meet the observed and forecast growth in traffic demand, however, the practice of time advance (speeding up an aircraft toward a scheduled location) is envisioned for future operations as a practice additional to delay. Time advance has two potential advantages. The first is the capability to minimize, or at least reduce, the excess separation (the distances between pairs of aircraft immediately in-trail) and thereby to increase the throughput of the arriving traffic. The second is to reduce the total traffic delay when the traffic sample is below saturation density. A cost associated with time advance is the fuel expenditure required by an aircraft to speed up. We present an optimal control model of air traffic arriving in a terminal area and solve it using the Pontryagin Maximum Principle. The admissible controls allow both time advance and delay along portions of the trajectory. The cost function reflects the trade-off between minimizing two competing objectives: excess separation (negatively correlated with throughput) and fuel burn. A number of instances are solved using three different methods to demonstrate consistency of the solutions.
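    The delay/time-advance trade-off can be illustrated as a small static scheduling problem: quadratic fuel penalties for deviating from nominal ETAs versus a penalty on excess separation, subject to minimum-separation constraints. The ETAs, separation requirement, and weights below are invented, and a generic SLSQP solve stands in for the paper's Pontryagin-based solution:

```python
import numpy as np
from scipy.optimize import minimize

# Nominal ETAs (s) of four in-trail arrivals and required separation.
eta = np.array([0.0, 50.0, 70.0, 160.0])
sep = 90.0
w_fuel, w_excess = 1.0, 50.0

def cost(t):
    # Fuel penalty: quadratic in the deviation from the nominal ETA
    # (both time advance and delay burn extra fuel in this toy model).
    fuel = np.sum((t - eta) ** 2)
    # Total excess separation beyond the required minimum.
    excess = np.sum(np.diff(t) - sep)
    return w_fuel * fuel + w_excess * excess

# Enforce the minimum separation between consecutive aircraft.
cons = [{"type": "ineq", "fun": lambda t, i=i: t[i + 1] - t[i] - sep}
        for i in range(len(eta) - 1)]
t0 = eta + np.arange(len(eta)) * sep   # feasible starting schedule
res = minimize(cost, x0=t0, constraints=cons)
```

    Raising `w_excess` compresses the stream (more time advance, higher throughput); raising `w_fuel` keeps aircraft closer to their nominal ETAs.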

  10. Advanced Time-Resolved Fluorescence Microscopy Techniques for the Investigation of Peptide Self-Assembly

    NASA Astrophysics Data System (ADS)

    Anthony, Neil R.

    The ubiquitous cross-beta sheet peptide motif is implicated in numerous neurodegenerative diseases while at the same time offering remarkable potential for constructing isomorphic high-performance bionanomaterials. Despite an emerging understanding of the complex folding landscape of cross-beta structures in determining disease etiology and final structure, we lack knowledge of the critical initial stages of nucleation and growth. In this dissertation, I advance our understanding of these key stages in the cross-beta nucleation and growth pathways using cutting-edge microscopy techniques. In addition, I present a new combined time-resolved fluorescence analysis technique with the potential to advance our current understanding of the subtle molecular-level interactions that play a pivotal role in peptide self-assembly. Using the central nucleating core of Alzheimer's Amyloid-beta protein, Abeta(16-22), as a model system, and employing electron, time-resolved, and non-linear microscopy, I capture the initial and transient nucleation stages of peptide assembly into the cross-beta motif. In addition, I have characterized the nucleation pathway, from monomer to paracrystalline nanotubes, in terms of morphology and fluorescence lifetime, corroborating the predicted desolvation process that occurs prior to cross-beta nucleation. Concurrently, I have identified unique heterogeneous cross-beta domains contained within individual nanotube structures, which have potential bionanomaterials applications. Finally, I describe a combined fluorescence theory and analysis technique that dramatically increases the sensitivity of current time-resolved techniques. Together these studies demonstrate the potential of advanced microscopy techniques for the identification and characterization of the cross-beta folding pathway, which will further our understanding of both amyloidogenesis and bionanomaterials.

  11. Optimization of an intraocular lens for correction of advanced corneal refractive errors.

    PubMed

    Wadbro, Eddie; Hallberg, Per; Schedin, Staffan

    2016-06-01

    Based on numerical 3D ray tracing, we propose a new procedure to optimize personalized intra-ocular lenses (IOLs). The 3D ray tracing was based on measured corneal elevation data from patients who suffered from advanced keratoconus. A mathematical shape description of the posterior IOL surface, by means of a tensor product cubic Hermite spline, was implemented. The optimized lenses provide significantly reduced aberrations. Our results include a trade-off study that suggests that it is possible to considerably reduce the aberrations with only minor perturbations of an ideal spherical lens. The proposed procedure can be applied for correction of aberrations of any optical system by modifying a single surface. PMID:27411190

  12. Optimization of an Advanced Hybrid Wing Body Concept Using HCDstruct Version 1.2

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Gern, Frank H.

    2016-01-01

    Hybrid Wing Body (HWB) aircraft concepts continue to be promising candidates for achieving the simultaneous fuel consumption and noise reduction goals set forth by NASA's Environmentally Responsible Aviation (ERA) project. In order to evaluate the projected benefits, improvements in structural analysis at the conceptual design level were necessary; thus, NASA researchers developed the Hybrid wing body Conceptual Design and structural optimization (HCDstruct) tool to perform aeroservoelastic structural optimizations of advanced HWB concepts. In this paper, the authors present substantial updates to the HCDstruct tool and related analysis, including: the addition of four inboard and eight outboard control surfaces and two all-movable tail/rudder assemblies, providing a full aeroservoelastic analysis capability; the implementation of asymmetric load cases for structural sizing applications; and a methodology for minimizing control surface actuation power using NASTRAN SOL 200 and HCDstruct's aeroservoelastic finite-element model (FEM).

  13. Advanced computational tools for optimization and uncertainty quantification of carbon capture processes

    SciTech Connect

    Miller, David C.; Ng, Brenda; Eslick, John

    2014-01-01

    Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.

  14. Finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.; Calise, Anthony J.; Leung, Martin

    1992-01-01

    A temporal finite element based on a mixed form of Hamilton's weak principle is summarized for optimal control problems. The resulting weak Hamiltonian finite element method is extended to allow for discontinuities in the states and/or discontinuities in the system equations. An extension of the formulation to allow for control inequality constraints is also presented. The formulation does not require element quadrature, and it produces a sparse system of nonlinear algebraic equations. To evaluate its feasibility for real-time guidance applications, this approach is applied to the trajectory optimization of a four-state, two-stage model with inequality constraints for an advanced launch vehicle. Numerical results for this model are presented and compared to results from a multiple-shooting code. The results show the accuracy and computational efficiency of the finite element method.

  15. A weak Hamiltonian finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Calise, Anthony J.; Bless, Robert R.; Leung, Martin

    1989-01-01

    A temporal finite-element method based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables, which are expanded in terms of nodal values and simple shape functions. Time derivatives of the states and costates do not appear in the governing variational equation; the only quantities whose time derivatives appear therein are virtual states and virtual costates. Numerical results are presented for an elementary trajectory optimization problem; they show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The feasibility of this approach for real-time guidance applications is evaluated. A simplified model for an advanced launch vehicle application that is suitable for finite-element solution is presented.
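    Schematically, the mixed weak form described above can be written as follows, in standard optimal-control notation with Hamiltonian H = L + λᵀf. This is an illustrative sketch consistent with the abstract's description (only virtual quantities are differentiated in time); the exact boundary terms depend on the transversality conditions and may differ from the paper's statement:

```latex
% Hamiltonian of the optimal control problem
H(x,\lambda,u,t) = L(x,u,t) + \lambda^{\mathsf{T}} f(x,u,t)

% Mixed weak form: integrating the state and costate equations by
% parts against virtual costates \delta\lambda and virtual states
% \delta x leaves time derivatives only on the virtual quantities.
\int_{0}^{t_f} \Big[ \delta\dot{\lambda}^{\mathsf{T}} x
    + \delta\lambda^{\mathsf{T}} f(x,u,t)
    + \delta\dot{x}^{\mathsf{T}} \lambda
    - \delta x^{\mathsf{T}} H_x
    - \delta u^{\mathsf{T}} H_u \Big]\, dt
  \;-\; \Big[ \delta\lambda^{\mathsf{T}} x
    + \delta x^{\mathsf{T}} \lambda \Big]_{0}^{t_f} = 0
```

    Because no derivatives of x, λ, or u appear, very simple (even piecewise-constant) shape functions suffice, which is what makes the element quadrature-free and the resulting algebraic system sparse.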

  16. Advances in the integration of drug metabolism into the lead optimization paradigm.

    PubMed

    Korfmacher, Walter A

    2009-06-01

    The lead optimization paradigm includes a team of experts that has a multitude of parameters to consider when moving from an initial lead compound through the lead optimization phase to the development phase. While in the past the team may have had only a medicinal chemist and a pharmacologist, the current team would often include experts in the areas of drug metabolism and pharmacokinetics (DMPK) as well as chemical toxicity. This review provides an overview of some of the recent advances in the areas of DMPK screening, plus a discussion of some of the assays that can be used to begin to screen for toxicity issues. The focus of this review is the major potential problem areas: oral bioavailability, half-life, drug-drug interactions, and metabolism and toxicity issues. PMID:19519496

  17. Sensitivity analysis and multidisciplinary optimization for aircraft design - Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  18. Sensitivity analysis and multidisciplinary optimization for aircraft design: Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  19. An approach to optimal guidance of an advanced launch vehicle concept

    NASA Technical Reports Server (NTRS)

    Leung, Martin S.; Calise, Anthony J.

    1990-01-01

    An approximate solution for the maximum payload trajectory of a two-stage launch vehicle using a regular perturbation technique is presented. A zero-order solution for a two-stage vehicle based on a flat-earth approximation and negligible atmospheric effects is obtained in closed form. High-order correction terms are obtained from the solution of nonhomogeneous, first-order linear differential equations by quadrature. This promises the capability for an onboard optimal guidance law implementation.

  20. Advanced Targeting Cost Function Design for Evolutionary Optimization of Control of Logistic Equation

    NASA Astrophysics Data System (ADS)

    Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana

    2010-06-01

    This research deals with the optimization of the control of chaos by means of evolutionary algorithms. This work explains how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) securing very fast and precise stabilization of the desired state for any initial conditions. As a model of a deterministic chaotic system, the one-dimensional logistic equation was used. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the method and the targeting CF.
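    A minimal stand-in for this setup: stabilizing the unstable fixed point of the chaotic logistic map with a delayed-feedback perturbation, where a targeting cost function penalizes late deviations from the desired state. A simple grid search over the feedback gain plays the role of the SOMA evolutionary search; the feedback form and all constants are illustrative assumptions, not the paper's CF:

```python
import numpy as np

r = 3.8                       # chaotic regime of x -> r x (1 - x)
x_fix = 1.0 - 1.0 / r         # unstable fixed point to be stabilized

def cost(K, x0=0.3, n=200, settle=100):
    """Targeting cost: squared deviation from x_fix after a settling
    transient, under delayed-feedback control u = K (x_n - x_{n-1})."""
    x_prev, x = x0, r * x0 * (1.0 - x0)
    total = 0.0
    for i in range(n):
        u = K * (x - x_prev)
        x_prev, x = x, r * x * (1.0 - x) + u
        if not np.isfinite(x):
            return np.inf     # trajectory diverged: maximally bad
        if i >= settle:
            total += (x - x_fix) ** 2
    return total

Ks = np.linspace(-1.0, 1.0, 201)   # candidate feedback gains
best_K = min(Ks, key=cost)
```

    Penalizing only iterations after the settling window rewards gains that stabilize quickly from the given initial condition, which is the "targeting" idea the abstract describes.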

  1. Optimizing spinning time-domain gravitational waveforms for advanced LIGO data analysis

    NASA Astrophysics Data System (ADS)

    Devine, Caleb; Etienne, Zachariah B.; McWilliams, Sean T.

    2016-06-01

    The spinning effective-one-body–numerical relativity (SEOBNR) series of gravitational wave approximants are among the best available for advanced LIGO data analysis. Unfortunately, SEOBNR codes as they currently exist within LALSuite are generally too slow to be directly useful for standard Markov-chain Monte Carlo-based parameter estimation (PE). Reduced-order models (ROMs) of SEOBNR have been developed for this purpose, but there is no known way to make ROMs of the full eight-dimensional intrinsic parameter space more efficient for PE than the SEOBNR codes directly. So as a proof of principle, we have sped up the original LALSuite SEOBNRv2 approximant code, which models waveforms from aligned-spin systems, by nearly 300x. Our optimized code shortens the timescale for conducting PE with this approximant to months, assuming a purely serial analysis, so that even modest parallelization combined with our optimized code will make running the full PE pipeline with SEOBNR codes directly a realistic possibility. A number of our SEOBNRv2 optimizations have already been applied to SEOBNRv3, a new approximant capable of modeling sources with all eight (precessing) intrinsic degrees of freedom. We anticipate that once all of our optimizations have been applied to SEOBNRv3, a similar speed-up may be achieved.

  2. Optimizing spinning time-domain gravitational waveforms for Advanced LIGO data analysis

    NASA Astrophysics Data System (ADS)

    Etienne, Zachariah; Devine, Caleb; McWilliams, Sean

    2016-03-01

    The Spinning Effective One Body--Numerical Relativity (SEOBNR) series of gravitational wave approximants are among the best available for Advanced LIGO data analysis. Unfortunately, SEOBNR codes as they currently exist within LALSuite are generally too slow to be directly useful for standard Markov-Chain Monte Carlo-based parameter estimation (PE). Reduced-Order Models (ROMs) of SEOBNR have been developed for this purpose, but there is no known way to make ROMs of the full eight-dimensional parameter space more efficient for PE than the SEOBNR codes directly. So as a proof of principle, we have sped up the original LALSuite SEOBNRv2 approximant code, which models waveforms from aligned-spin systems, by about 280x. Our optimized code shortens the timescale for conducting PE with this approximant to months, assuming a purely serial analysis, so that even modest parallelization combined with our optimized code will make running the full PE pipeline with SEOBNR codes directly a realistic possibility. A number of our SEOBNRv2 optimizations have already been applied to SEOBNRv3, a new approximant capable of modeling sources with all eight intrinsic degrees of freedom. We anticipate that once all of our optimizations have been applied to SEOBNRv3, a similar speed-up will be achieved.

  3. Optimization of brushless direct current motor design using an intelligent technique.

    PubMed

    Shabanian, Alireza; Tousiwas, Armin Amini Poustchi; Pourmandi, Massoud; Khormali, Aminollah; Ataei, Abdolhay

    2015-07-01

    This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface-mounted magnets using an improved bee algorithm (IBA). The characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. This method is based on the capability of swarm-based algorithms to find the optimal solution. One sample case is used to illustrate the performance of the design approach and optimization technique. The IBA shows better performance and faster convergence than the standard bee algorithm (BA). Simulation results show that the proposed method performs very efficiently. PMID:25841938
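
    As a sketch of the swarm-based search idea (a minimal artificial-bee-colony-style loop, not the paper's IBA), the snippet below minimizes a hypothetical weighted objective. The `objective` is an invented smooth surrogate in two geometry variables, since the real motor characteristics are not given here.

```python
import random

def objective(x, y):
    """Hypothetical surrogate for the motor objective: a weighted sum of
    'losses', 'volume', and 'cost' in two geometry variables (the real
    characteristics are functions of the actual motor geometry)."""
    losses = (x - 1.0) ** 2
    volume = (y - 2.0) ** 2
    cost = (x * y - 2.0) ** 2
    return 0.6 * losses + 0.3 * volume + 0.1 * cost

def bee_search(n_sites=10, iters=200, limit=15, lo=-5.0, hi=5.0):
    """Minimal artificial-bee-colony-style search: each food site is refined
    by a random neighbor move; sites that stall too long are abandoned and
    re-scouted at random. The best solution ever seen is memorized."""
    sites = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n_sites)]
    scores = [objective(*s) for s in sites]
    stalls = [0] * n_sites
    best_i = min(range(n_sites), key=lambda i: scores[i])
    best_xy, best_f = sites[best_i][:], scores[best_i]
    for _ in range(iters):
        for i in range(n_sites):
            j = random.randrange(n_sites)            # random partner site
            d = random.randrange(2)                  # random dimension
            cand = sites[i][:]
            cand[d] += random.uniform(-1.0, 1.0) * (sites[i][d] - sites[j][d])
            cand[d] = min(hi, max(lo, cand[d]))
            f = objective(*cand)
            if f < scores[i]:                        # greedy per-site selection
                sites[i], scores[i], stalls[i] = cand, f, 0
                if f < best_f:
                    best_xy, best_f = cand[:], f
            else:
                stalls[i] += 1
            if stalls[i] > limit:                    # scout phase: restart site
                sites[i] = [random.uniform(lo, hi), random.uniform(lo, hi)]
                scores[i] = objective(*sites[i])
                stalls[i] = 0
    return best_xy, best_f

random.seed(7)
best_xy, best_f = bee_search()
```

    The surrogate's global minimum is at (1, 2) with objective value 0; the search should land in that basin.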

  4. Research on inverse, hybrid and optimization problems in engineering sciences with emphasis on turbomachine aerodynamics: Review of Chinese advances

    NASA Technical Reports Server (NTRS)

    Liu, Gao-Lian

    1991-01-01

    Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.

  5. Eversion-Inversion Labral Repair and Reconstruction Technique for Optimal Suction Seal

    PubMed Central

    Moreira, Brett; Pascual-Garrido, Cecilia; Chadayamurri, Vivek; Mei-Dan, Omer

    2015-01-01

    Labral tears are a significant cause of hip pain and are currently the most common indication for hip arthroscopy. Compared with labral debridement, labral repair has significantly better outcomes in terms of both daily activities and athletic pursuits in the setting of femoroacetabular impingement. The classic techniques described in the literature for labral repair all use loop or pass-through intrasubstance labral sutures to achieve a functional hip seal. This hip seal is important for hip stability and optimal joint biomechanics, as well as in the prevention of long-term osteoarthritis. We describe a novel eversion-inversion intrasubstance suturing technique for labral repair and reconstruction that can assist in restoration of the native labrum position by re-creating an optimal seal around the femoral head. PMID:26870648

  6. Parametric Studies and Optimization of Eddy Current Techniques through Computer Modeling

    SciTech Connect

    Todorov, E. I.

    2007-03-21

    The paper demonstrates the use of computer models for parametric studies and optimization of surface and subsurface eddy current techniques. The study with a high-frequency probe investigates the effect of eddy current frequency and probe shape on the detectability of flaws in the steel substrate. The low-frequency sliding probe study addresses the effect of conductivity between the fastener and the hole, frequency, and coil separation distance on the detectability of flaws in subsurface layers.

  7. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques

    SciTech Connect

    Wroblewski, David; Katrompas, Alexander M.; Parikh, Neel J.

    2009-09-01

    A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.

  8. Applications of Advanced Nondestructive Measurement Techniques to Address Safety of Flight Issues on NASA Spacecraft

    NASA Technical Reports Server (NTRS)

    Prosser, Bill

    2016-01-01

    Advanced nondestructive measurement techniques are critical for ensuring the reliability and safety of NASA spacecraft. Techniques such as infrared thermography, THz imaging, X-ray computed tomography and backscatter X-ray are used to detect indications of damage in spacecraft components and structures. Additionally, sensor and measurement systems are integrated into spacecraft to provide structural health monitoring to detect damaging events that occur during flight, such as debris impacts during launch and ascent or from micrometeoroid and orbital debris, or excessive loading due to anomalous flight conditions. A number of examples will be provided of how these nondestructive measurement techniques have been applied to resolve safety critical inspection concerns for the Space Shuttle, International Space Station (ISS), and a variety of launch vehicles and unmanned spacecraft.

  9. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as a means of accelerated testing in constant stress-rate (dynamic fatigue) testing of two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing of glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates, at which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, a SiC whisker-reinforced silicon nitride composite and 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism associated with such considerable strength increases or decreases.

  10. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as a means of accelerated testing in constant stress-rate ('dynamic fatigue') testing of two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing of glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates, at which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, a SiC whisker-reinforced silicon nitride composite and 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism associated with such considerable strength increases or decreases.

  11. Influence of robust optimization in intensity-modulated proton therapy with different dose delivery techniques

    PubMed Central

    Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong

    2012-01-01

    Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique’s sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced angle beams; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans’ sensitivity to uncertainties. Results: With no uncertainties considered, the DET is less robust to uncertainties than is the 3D method but offers better normal tissue protection. With robust optimization to account for range and setup uncertainties, robust optimization can improve the robustness of IMPT plans to uncertainties; however, our findings show the extent of improvement varies. Conclusions: IMPT’s sensitivity to uncertainties can be improved by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the

  12. Issues and recent advances in optimal experimental design for site investigation (Invited)

    NASA Astrophysics Data System (ADS)

    Nowak, W.

    2013-12-01

    This presentation provides an overview over issues and recent advances in model-based experimental design for site exploration. The addressed issues and advances are (1) how to provide an adequate envelope to prior uncertainty, (2) how to define the information needs in a task-oriented manner, (3) how to measure the expected impact of a data set that is not yet available but only planned to be collected, and (4) how to perform best the optimization of the data collection plan. Among other shortcomings of the state-of-the-art, it is identified that there is a lack of demonstrator studies where exploration schemes based on expert judgment are compared to exploration schemes obtained by optimal experimental design. Such studies will be necessary to address the often-voiced concern that experimental design is an academic exercise with little improvement potential over the well-trained gut feeling of field experts. When addressing this concern, a specific focus has to be given to uncertainty in model structure, parameterizations and parameter values, and to related surprises that data often bring about in field studies, but never in synthetic-data based studies. The background of this concern is that, initially, conceptual uncertainty may be so large that surprises are the rule rather than the exception. In such situations, field experts have a large body of experience in handling the surprises, and expert judgment may be good enough compared to meticulous optimization based on a model that is about to be falsified by the incoming data. In order to meet surprises appropriately and adapt to them, there needs to be a sufficient representation of conceptual uncertainty within the models used. Also, it is useless to optimize an entire design under this initial range of uncertainty. Thus, the goal setting of the optimization should include the objective to reduce conceptual uncertainty.
A possible way out is to upgrade experimental design theory towards real-time interaction

  13. The L_infinity constrained global optimal histogram equalization technique for real time imaging

    NASA Astrophysics Data System (ADS)

    Ren, Qiongwei; Niu, Yi; Liu, Lin; Jiao, Yang; Shi, Guangming

    2015-08-01

    Although current imaging sensors can achieve 12-bit or higher precision, current display devices and commonly used digital image formats are still only 8 bits. This mismatch causes significant waste of the sensor precision and loss of information when storing and displaying the images. To make better use of the precision budget, tone mapping operators have to be used to map the high-precision data into low-precision digital images adaptively. In this paper, the classic histogram equalization tone mapping operator is reexamined in the sense of optimization. We point out that the traditional histogram equalization technique and its variants are fundamentally limited by local-optimum problems. To overcome this drawback, we remodel the histogram equalization tone mapping task based on graph theory, which achieves globally optimal solutions. Another advantage of the graph-based modeling is that tone continuity is also modeled as a vital constraint in our approach, which suppresses the annoying boundary artifacts of the traditional approaches. In addition, we propose a novel dynamic programming technique to solve the histogram equalization problem in real time. Experimental results show that the proposed tone-preserved global optimal histogram equalization technique outperforms the traditional approaches by exhibiting more subtle details in the foreground while preserving the smoothness of the background.
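
    The classic histogram-equalization baseline that the paper reexamines can be sketched in a few lines: build the histogram and CDF of the high-precision input and map it onto the 8-bit output range. This is a minimal illustration of the baseline only, not the paper's graph-based global method.

```python
def histogram_equalize(pixels, in_levels=4096, out_levels=256):
    """Classic histogram-equalization tone mapping: map `in_levels`-level
    data (e.g. 12-bit) to `out_levels`-level output (e.g. 8-bit) so that
    the output levels are used roughly uniformly."""
    n = len(pixels)
    hist = [0] * in_levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function over the input levels.
    cdf, total = [0] * in_levels, 0
    for i, h in enumerate(hist):
        total += h
        cdf[i] = total
    # Monotone lookup table from input level to output level via the CDF.
    lut = [min(out_levels - 1, (cdf[i] * out_levels) // n) for i in range(in_levels)]
    return [lut[p] for p in pixels]

pixels = [100, 2000, 2000, 4000, 4095, 100, 2000, 3000]   # toy 12-bit samples
mapped = histogram_equalize(pixels)
```

    Because the mapping goes through the non-decreasing CDF, equal input levels map to equal outputs and ordering is preserved; the local-optimum critique in the abstract concerns how such mappings quantize dense histogram regions.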

  14. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  15. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
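
    The PSO ingredient can be sketched on its own (plain PSO only, without the cuckoo-search blend): a one-dimensional swarm tunes a single transformation parameter to maximize a contrast score. The gamma transform and standard-deviation fitness below are simplified stand-ins for the paper's incomplete-Beta transform and three-factor criterion.

```python
import random

def gamma_transform(img, g):
    """Stand-in transformation x -> x**g on intensities normalized to [0, 1]
    (the paper uses the incomplete Beta function instead)."""
    return [x ** g for x in img]

def contrast(img):
    """Fitness: standard deviation of the intensities, a simple stand-in
    for the paper's threshold/entropy/probability-density criterion."""
    m = sum(img) / len(img)
    return (sum((x - m) ** 2 for x in img) / len(img)) ** 0.5

def pso(img, n=20, iters=40, lo=0.2, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    """Minimal one-dimensional particle swarm maximizing the contrast score."""
    pos = [random.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    pscore = [contrast(gamma_transform(img, p)) for p in pos]
    g_i = max(range(n), key=lambda i: pscore[i])
    gbest, gscore = pbest[g_i], pscore[g_i]
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            s = contrast(gamma_transform(img, pos[i]))
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i], s
                if s > gscore:
                    gbest, gscore = pos[i], s
    return gbest, gscore

random.seed(3)
low_contrast = [0.40 + 0.02 * i for i in range(11)]   # intensities in [0.40, 0.60]
best_g, best_score = pso(low_contrast)
```

    For mid-gray data like this, the swarm should settle on an exponent that spreads the intensities more than the identity transform does.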

  16. Advanced spatio-temporal filtering techniques for photogrammetric image sequence analysis in civil engineering material testing

    NASA Astrophysics Data System (ADS)

    Liebold, F.; Maas, H.-G.

    2016-01-01

    The paper shows advanced spatial, temporal and spatio-temporal filtering techniques which may be used to reduce noise effects in photogrammetric image sequence analysis tasks and tools. As a practical example, the techniques are validated in a photogrammetric spatio-temporal crack detection and analysis tool applied in load tests in civil engineering material testing. The load test technique is based on monocular image sequences of a test object under varying load conditions. The first image of a sequence is defined as a reference image under zero load, wherein interest points are determined and connected in a triangular irregular network structure. For each epoch, these triangles are compared to the reference image triangles to search for deformations. The result of the feature point tracking and triangle comparison process is a spatio-temporally resolved strain value field, wherein cracks can be detected, located and measured via local discrepancies. The strains can be visualized as a color-coded map. In order to improve the measuring system and to reduce noise, the strain values of each triangle must be treated in a filtering process. The paper shows the results of various filter techniques in the spatial and in the temporal domain as well as spatio-temporal filtering techniques applied to these data. The best results were obtained by a bilateral filter in the spatial domain and by a spatio-temporal EOF (empirical orthogonal function) filtering technique.
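
    The bilateral filter singled out above as the best spatial filter can be illustrated in one dimension: each sample is averaged with neighbors weighted by both spatial closeness and value similarity, so alternating noise is smoothed while a step edge survives. The signal and parameters below are illustrative, not taken from the paper's strain data.

```python
import math

def bilateral_filter(signal, sigma_s=2.0, sigma_r=0.1, radius=4):
    """1-D bilateral filter: weight each neighbor by spatial distance
    (sigma_s) and by value difference (sigma_r), then normalize."""
    out = []
    for i, v in enumerate(signal):
        acc, norm = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))      # spatial
                 * math.exp(-((signal[j] - v) ** 2) / (2 * sigma_r ** 2)))  # range
            acc += w * signal[j]
            norm += w
        out.append(acc / norm)
    return out

# Deterministic alternating noise on a 0 -> 1 step edge.
clean = [0.0] * 20 + [1.0] * 20
noisy = [c + 0.05 * (-1) ** i for i, c in enumerate(clean)]
filtered = bilateral_filter(noisy)
```

    Neighbors across the step differ by about 1.0 in value, so their range weight is essentially zero and the edge is not blurred, while the ±0.05 noise on each flat side is averaged out.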

  17. Strategies and Advancement in Antibody-Drug Conjugate Optimization for Targeted Cancer Therapeutics

    PubMed Central

    Kim, Eunhee G.; Kim, Kristine M.

    2015-01-01

    Antibody-drug conjugates utilize the antibody as a delivery vehicle for highly potent cytotoxic molecules with specificity for tumor-associated antigens for cancer therapy. Critical parameters that govern successful antibody-drug conjugate development for clinical use include the selection of the tumor target antigen, the antibody against the target, the cytotoxic molecule, the linker bridging the cytotoxic molecule and the antibody, and the conjugation chemistry used for the attachment of the cytotoxic molecule to the antibody. Advancements in these core antibody-drug conjugate technologies are reflected by the recent approval of Adcetris® (anti-CD30 drug conjugate) and Kadcyla® (anti-HER2 drug conjugate). The potential approval of an anti-CD22 conjugate and promising new clinical data for anti-CD19 and anti-CD33 conjugates are additional advancements. Enrichment of antibody-drug conjugates with newly developed potent cytotoxic molecules and linkers is also in the pipeline for various tumor targets. However, the complexity of antibody-drug conjugate components, conjugation methods, and off-target toxicities still poses challenges for the strategic design of antibody-drug conjugates to achieve their fullest therapeutic potential. This review will discuss the emergence of clinical antibody-drug conjugates, current trends in optimization strategies, and recent study results for antibody-drug conjugates that have incorporated the latest optimization strategies. Future challenges and perspectives toward making antibody-drug conjugates more amenable for broader disease indications are also discussed. PMID:26535074

  18. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, Do-all or Do-across techniques cannot be applied for parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
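
    The compile-time static scheduling idea can be sketched with a common heuristic, critical-path list scheduling of tasks onto processors respecting data dependencies. The tiny task graph below is a hypothetical stand-in for one simulation time step; OSCAR's actual optimal scheduling algorithm is not specified in the abstract.

```python
def bottom_level(duration, succs):
    """Critical-path priority: longest path from each task to the exit."""
    memo = {}
    def bl(t):
        if t not in memo:
            memo[t] = duration[t] + max((bl(s) for s in succs.get(t, [])), default=0)
        return memo[t]
    return {t: bl(t) for t in duration}

def list_schedule(duration, preds, n_proc=2):
    """Static list scheduling: repeatedly start the highest-priority ready
    task on the processor where it can begin earliest."""
    succs = {}
    for t, ps in preds.items():
        for p in ps:
            succs.setdefault(p, []).append(t)
    prio = bottom_level(duration, succs)
    proc_free = [0] * n_proc
    start, finish = {}, {}
    unscheduled = set(duration)
    while unscheduled:
        ready = [t for t in unscheduled
                 if all(p in finish for p in preds.get(t, []))]
        t = max(ready, key=lambda t: (prio[t], t))   # deterministic tie-break
        ready_at = max((finish[p] for p in preds.get(t, [])), default=0)
        k = min(range(n_proc), key=lambda k: max(proc_free[k], ready_at))
        start[t] = max(proc_free[k], ready_at)
        finish[t] = start[t] + duration[t]
        proc_free[k] = finish[t]
        unscheduled.remove(t)
    return start, finish

# Hypothetical near-fine-grain task graph for one simulation time step.
duration = {'a': 2, 'b': 3, 'c': 2, 'd': 1, 'e': 2}
preds = {'c': ['a'], 'd': ['a', 'b'], 'e': ['c', 'd']}
start, finish = list_schedule(duration, preds)
makespan = max(finish.values())
```

    For this graph the critical path (a-c-e or b-d-e) has length 6, and the two-processor schedule meets that bound, so the result is optimal here.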

  19. Dosimetric characterization and optimization of a customized Stanford total skin electron irradiation (TSEI) technique.

    PubMed

    Luĉić, Felipe; Sánchez-Nieto, Beatriz; Caprile, Paola; Zelada, Gabriel; Goset, Karen

    2013-01-01

    Total skin electron irradiation (TSEI) has been used as a treatment for mycosis fungoides. Our center has implemented a modified Stanford technique with six pairs of 6 MeV adjacent electron beams, incident perpendicularly on the patient who remains lying on a translational platform, at 200 cm from the source. The purpose of this study is to perform a dosimetric characterization of this technique and to investigate its optimization in terms of energy characteristics, extension, and uniformity of the treatment field. In order to improve the homogeneity of the distribution, a custom-made polyester filter of variable thickness and a uniform PMMA degrader plate were used. It was found that the characteristics of a 9 MeV beam with an 8 mm thick degrader were similar to those of the 6 MeV beam without filter, but with an increased surface dose. The combination of the degrader and the polyester filter improved the uniformity of the distribution along the dual field (180 cm long), increasing the dose at the borders of the field by 43%. The optimum angles for the pair of beams were ±27°. This configuration avoided displacement of the patient, and reduced the treatment time and the positioning problems related to the abutting superior and inferior fields. Dose distributions in the transversal plane were measured for the six incidences of the Stanford technique with film dosimetry in an anthropomorphic pelvic phantom. This was performed for the optimized treatment and compared with the previously implemented technique. The comparison showed an increased superficial dose and improved uniformity of the 85% isodose curve coverage for the optimized technique. PMID:24036877

  20. Review of recent advances in analytical techniques for the determination of neurotransmitters

    PubMed Central

    Perry, Maura; Li, Qiang; Kennedy, Robert T.

    2009-01-01

    Methods and advances for monitoring neurotransmitters in vivo or for tissue analysis of neurotransmitters over the last five years are reviewed. The review is organized primarily by neurotransmitter type. Transmitter and related compounds may be monitored by either in vivo sampling coupled to analytical methods or implanted sensors. Sampling is primarily performed using microdialysis, but low-flow push-pull perfusion may offer advantages of spatial resolution while minimizing the tissue disruption associated with higher flow rates. Analytical techniques coupled to these sampling methods include liquid chromatography, capillary electrophoresis, enzyme assays, sensors, and mass spectrometry. Methods for the detection of amino acid, monoamine, neuropeptide, acetylcholine, nucleoside, and soluble gas neurotransmitters have been developed and improved upon. Advances in the speed and sensitivity of these methods have enabled improvements in temporal resolution and increased the number of compounds detectable. Similar advances have enabled improved detection in tissue samples, with a substantial emphasis on single cells and other small samples. Sensors provide excellent temporal and spatial resolution for in vivo monitoring. Advances in application to catecholamines, indoleamines, and amino acids have been prominent. Improvements in stability, sensitivity, and selectivity of the sensors have been of paramount interest. PMID:19800472

  1. Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2015-12-01

    To design a robust swimmer tracking system, we took into account two well-known tracking techniques: the nonlinear joint transform correlation (NL-JTC) and the color histogram. The two techniques perform comparably well, yet they both have substantial limitations. Interestingly, they also seem to show some complementarity. The correlation technique yields accurate detection but is sensitive to rotation, scale and contour deformation, whereas the color histogram technique is robust to rotation and contour deformation but shows low accuracy and is highly sensitive to luminosity and confusing background colors. These observations suggested the possibility of a dynamic fusion of the correlation plane and the color scores map. Before this fusion, two steps are required. First is the extraction of a sub-plane of correlation that describes the similarity between the reference and target images. This sub-plane has the same size as the color scores map, but the two have different value ranges. Thus, a second step is required: normalization of the planes to the same interval so they can be fused. In order to determine the benefits of this fusion technique, first, we tested it on a synthetic image containing different forms with different colors. We thus were able to optimize the correlation plane and color histogram techniques before applying our fusion technique to real videos of swimmers in international competitions. Finally, a comparative study of the dynamic fusion technique and the two classical techniques was carried out to demonstrate the efficacy of the proposed technique. The criteria of comparison were the tracking percentage, the peak-to-correlation energy (PCE), which evaluated the sharpness of the peak (accuracy), and the local standard deviation (Local-STD), which assessed the noise in the planes (robustness).
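
    The normalize-then-fuse step can be sketched directly: min-max normalize both score planes to [0, 1] and combine them. A fixed weighted sum below stands in for the paper's dynamic fusion, and the toy 3x3 planes are invented.

```python
def normalize(plane):
    """Min-max normalize a score plane to [0, 1] so that planes with
    different value ranges become comparable."""
    lo = min(min(row) for row in plane)
    hi = max(max(row) for row in plane)
    span = hi - lo or 1.0          # guard against a constant plane
    return [[(v - lo) / span for v in row] for row in plane]

def fuse(corr_plane, color_map, w=0.5):
    """Weighted sum of the normalized planes (a fixed-weight stand-in for
    the paper's dynamic fusion)."""
    a, b = normalize(corr_plane), normalize(color_map)
    return [[w * x + (1 - w) * y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def argmax2d(plane):
    """Location of the strongest response in the fused plane."""
    return max(((i, j) for i in range(len(plane)) for j in range(len(plane[0]))),
               key=lambda ij: plane[ij[0]][ij[1]])

# Toy planes: correlation peaks sharply at (1, 1); the color scores map
# favors the same location but on a completely different value scale.
corr = [[0.1, 0.2, 0.1], [0.2, 5.0, 0.3], [0.1, 0.2, 0.1]]
color = [[10, 20, 10], [20, 90, 30], [10, 20, 10]]
fused = fuse(corr, color)
```

    Without normalization the color scores (tens) would swamp the correlation values (units); after normalization both planes vote on equal footing.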

  2. Impact of advanced microstructural characterization techniques on modeling and analysis of radiation damage

    SciTech Connect

    Garner, F.A.; Odette, G.R.

    1980-01-01

    The evolution of radiation-induced alterations of dimensional and mechanical properties has been shown to be a direct and often predictable consequence of radiation-induced microstructural changes. Recent advances in understanding of the nature and role of each microstructural component in determining the property of interest have led to a reappraisal of the type and priority of data needed for further model development. This paper presents an overview of the types of modeling and analysis activities in progress, the insights that prompted these activities, and specific examples of successful and ongoing efforts. A review is presented of some problem areas that in the authors' opinion are not yet receiving sufficient attention and which may benefit from the application of advanced techniques of microstructural characterization. Guidelines based on experience gained in previous studies are also provided for acquisition of data in a form most applicable to modeling needs.

  3. System engineering techniques for establishing balanced design and performance guidelines for the advanced telerobotic testbed

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Matijevic, J. R.

    1987-01-01

    Novel system engineering techniques have been developed and applied to establishing structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical data base was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.

  4. Advanced MRI techniques to improve our understanding of experience-induced neuroplasticity.

    PubMed

    Tardif, Christine Lucas; Gauthier, Claudine Joëlle; Steele, Christopher John; Bazin, Pierre-Louis; Schäfer, Andreas; Schaefer, Alexander; Turner, Robert; Villringer, Arno

    2016-05-01

    Over the last two decades, numerous human MRI studies of neuroplasticity have shown compelling evidence for extensive and rapid experience-induced brain plasticity in vivo. To date, most of these studies have consisted of simply detecting a difference in structural or functional images with little concern for their lack of biological specificity. Recent reviews and public debates have stressed the need for advanced imaging techniques to gain a better understanding of the nature of these differences: characterizing their extent in time and space, and their underlying biological and network dynamics. The purpose of this article is to give an overview of advanced imaging techniques for an audience of cognitive neuroscientists that can assist them in the design and interpretation of future MRI studies of neuroplasticity. The review encompasses MRI methods that probe the morphology, microstructure, function, and connectivity of the brain with improved specificity. We underline the possible physiological underpinnings of these techniques and their recent applications within the framework of learning- and experience-induced plasticity in healthy adults. Finally, we discuss the advantages of a multi-modal approach to gain a more nuanced and comprehensive description of the process of learning. PMID:26318050

  5. Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques

    SciTech Connect

    Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.

    2015-04-15

    Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended.
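    The core of the dynalog analysis described above, counting how often recorded leaf positions exceed a user-defined tolerance, can be sketched as follows. This is a minimal illustration with synthetic error values, not the authors' analysis software; real inputs would be planned-minus-actual leaf positions parsed from dynalog files.

```python
import random

random.seed(1)

def fault_fraction(errors_mm, tolerance_mm):
    """Fraction of recorded leaf positions whose planned-vs-actual
    discrepancy exceeds the user-defined tolerance."""
    return sum(1 for e in errors_mm if abs(e) > tolerance_mm) / len(errors_mm)

# Synthetic position errors (mm); real values would be parsed from dynalogs.
errors = [random.gauss(0.0, 0.6) for _ in range(10_000)]

for tol in (1.0, 1.5, 2.0, 2.5):
    print(f"tolerance {tol} mm -> fault fraction {fault_fraction(errors, tol):.4f}")
```

Sweeping the tolerance this way reproduces the qualitative behavior reported above: the fault fraction drops sharply once the tolerance exceeds the bulk of the positioning-error distribution.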

  6. The Importance of Supportive Care in Optimizing Treatment Outcomes of Patients with Advanced Prostate Cancer

    PubMed Central

    2012-01-01

    Optimal oncologic care of older men with prostate cancer, including effective prevention and management of the disease and treatment side effects (so-called best supportive care measures) can prolong survival, improve quality of life, and reduce depressive symptoms. In addition, the proportion of treatment discontinuations can be reduced through early reporting and management of side effects. Pharmacologic care may be offered to manage the side effects of androgen-deprivation therapy and chemotherapy, which may include hot flashes, febrile neutropenia, fatigue, and diarrhea. Nonpharmacologic care (e.g., physical exercise, acupuncture, relaxation) has also been shown to benefit patients. At the Georges Pompidou European Hospital, the Program of Optimization of Chemotherapy Administration has demonstrated that improved outpatient follow-up by supportive care measures can reduce the occurrence of chemotherapy-related side effects, reduce cancellations and modifications of treatment, reduce chemotherapy wastage, and reduce the length of stay in the outpatient unit. The importance of supportive care measures to optimize management and outcomes of older men with advanced prostate cancer should not be overlooked. PMID:23015682

  7. Fusion of Optimized Indicators from Advanced Driver Assistance Systems (ADAS) for Driver Drowsiness Detection

    PubMed Central

    Daza, Iván G.; Bergasa, Luis M.; Bronte, Sebastián; Yebes, J. Javier; Almazán, Javier; Arroyo, Roberto

    2014-01-01

    This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistance Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study. PMID:24412904

  8. Plasma Profile and Shape Optimization for the Advanced Tokamak Power Plant, ARIES-AT

    SciTech Connect

    Kessel, C. E.; Mau, T. K.; Jardin, S. C.; Najmabadi, F.

    2001-06-05

    An advanced tokamak plasma configuration is developed based on equilibrium, ideal-MHD stability, bootstrap current analysis, vertical stability and control, and poloidal-field coil analysis. The plasma boundaries used in the analysis are forced to coincide with the 99% flux surface from the free-boundary equilibrium. Using an accurate bootstrap current model and external current-drive profiles from ray-tracing calculations in combination with optimized pressure profiles, β_N values above 7.0 have been obtained. The minimum current drive requirement is found to lie at a lower β_N of 5.4. The external kink mode is stabilized by a tungsten shell located at 0.33 times the minor radius and a feedback system. Plasma shape optimization has led to an elongation of 2.2 and triangularity of 0.9 at the separatrix. Vertical stability could be achieved by a combination of tungsten shells located at 0.33 times the minor radius and feedback control coils located behind the shield. The poloidal-field coils were optimized in location and current, providing a maximum coil current of 8.6 MA. These developments have led to a simultaneous reduction in the power plant major radius and toroidal field.

  9. Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Guynn, Mark D.

    2012-01-01

    Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight to the ultrahigh bypass engine design process and provide information to NASA program management to help guide its technology development efforts.
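    The screening step at the heart of any such multi-objective study, keeping only designs not dominated in every objective, can be illustrated with a minimal non-dominated filter. The (relative fuel burn, relative noise) values below are hypothetical, not results from the study:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every (minimized) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the non-dominated candidate designs."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical (relative fuel burn, relative noise) pairs for candidate cycles.
candidates = [(1.00, 0.90), (0.95, 0.95), (1.10, 0.85), (1.05, 1.00)]
print(pareto_front(candidates))
```

Here (1.05, 1.00) is removed because (1.00, 0.90) beats it on both objectives; the remaining designs are the trade-off frontier a designer would choose among.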

  10. Optimizing charge breeding techniques for ISOL facilities in Europe: Conclusions from the EMILIE project.

    PubMed

    Delahaye, P; Galatà, A; Angot, J; Cam, J F; Traykov, E; Ban, G; Celona, L; Choinski, J; Gmaj, P; Jardin, P; Koivisto, H; Kolhinen, V; Lamy, T; Maunoury, L; Patti, G; Thuillier, T; Tarvainen, O; Vondrasek, R; Wenander, F

    2016-02-01

    The present paper summarizes the results obtained from the past few years in the framework of the Enhanced Multi-Ionization of short-Lived Isotopes for Eurisol (EMILIE) project. The EMILIE project aims at improving the charge breeding techniques with both Electron Cyclotron Resonance Ion Sources (ECRIS) and Electron Beam Ion Sources (EBISs) for European Radioactive Ion Beam (RIB) facilities. Within EMILIE, an original technique for debunching the beam from EBIS charge breeders is being developed, for making an optimal use of the capabilities of CW post-accelerators of the future facilities. Such a debunching technique should eventually resolve duty cycle and time structure issues which presently complicate the data-acquisition of experiments. The results of the first tests of this technique are reported here. In comparison with charge breeding with an EBIS, the ECRIS technique had lower performance in efficiency and attainable charge state for metallic ion beams and also suffered from issues related to beam contamination. In recent years, improvements have been made which significantly reduce the differences between the two techniques, making ECRIS charge breeding more attractive especially for CW machines producing intense beams. Upgraded versions of the Phoenix charge breeder, originally developed by LPSC, will be used at SPES and GANIL/SPIRAL. These two charge breeders have benefited from studies undertaken within EMILIE, which are also briefly summarized here. PMID:26932063

  11. Optimizing charge breeding techniques for ISOL facilities in Europe: Conclusions from the EMILIE project

    NASA Astrophysics Data System (ADS)

    Delahaye, P.; Galatà, A.; Angot, J.; Cam, J. F.; Traykov, E.; Ban, G.; Celona, L.; Choinski, J.; Gmaj, P.; Jardin, P.; Koivisto, H.; Kolhinen, V.; Lamy, T.; Maunoury, L.; Patti, G.; Thuillier, T.; Tarvainen, O.; Vondrasek, R.; Wenander, F.

    2016-02-01

    The present paper summarizes the results obtained from the past few years in the framework of the Enhanced Multi-Ionization of short-Lived Isotopes for Eurisol (EMILIE) project. The EMILIE project aims at improving the charge breeding techniques with both Electron Cyclotron Resonance Ion Sources (ECRIS) and Electron Beam Ion Sources (EBISs) for European Radioactive Ion Beam (RIB) facilities. Within EMILIE, an original technique for debunching the beam from EBIS charge breeders is being developed, for making an optimal use of the capabilities of CW post-accelerators of the future facilities. Such a debunching technique should eventually resolve duty cycle and time structure issues which presently complicate the data-acquisition of experiments. The results of the first tests of this technique are reported here. In comparison with charge breeding with an EBIS, the ECRIS technique had lower performance in efficiency and attainable charge state for metallic ion beams and also suffered from issues related to beam contamination. In recent years, improvements have been made which significantly reduce the differences between the two techniques, making ECRIS charge breeding more attractive especially for CW machines producing intense beams. Upgraded versions of the Phoenix charge breeder, originally developed by LPSC, will be used at SPES and GANIL/SPIRAL. These two charge breeders have benefited from studies undertaken within EMILIE, which are also briefly summarized here.

  12. Determination of Electromagnetic Properties of Mesh Material Using Advanced Radiometer Techniques

    NASA Technical Reports Server (NTRS)

    Arrington, R. F.; Blume, H. J. C.

    1985-01-01

    The need for a large diameter deployable antenna to map soil moisture with a 10 kilometer or better resolution using a microwave radiometer is discussed. A 6 meter deployable antenna is also needed to map sea surface temperature on the Navy Remote Ocean Sensor System (NROSS). Both of these deployable antennas require a mesh membrane material as the reflecting surface. The determination of the electromagnetic properties of mesh materials is a difficult problem. The Antenna and Microwave Research Branch (AMRB) of Langley Research Center was asked by NRL to measure the material to be used on NROSS. A cooperative program was initiated to measure this mesh material using two advanced radiometer techniques.

  13. Measuring the microbiome: perspectives on advances in DNA-based techniques for exploring microbial life

    PubMed Central

    Bunge, John; Gilbert, Jack A.; Moore, Jason H.

    2012-01-01

    This article reviews recent advances in 'microbiome studies': molecular, statistical and graphical techniques to explore and quantify how microbial organisms affect our environments and ourselves, enabled by recent advances in sequencing technology. Microbiome studies are moving beyond mere inventories of specific ecosystems to quantifications of community diversity and descriptions of their ecological function. We review the last 24 months of progress in this sort of research, and anticipate where the next 2 years will take us. We hope that bioinformaticians will find this a helpful springboard for new collaborations with microbiologists. PMID:22308073

  14. Techniques for measurement of the thermal expansion of advanced composite materials

    NASA Technical Reports Server (NTRS)

    Tompkins, Stephen S.

    1989-01-01

    Techniques available to measure small thermal displacements in flat laminates and structural tubular elements of advanced composite materials are described. Emphasis is placed on laser interferometry and the laser interferometric dilatometer system used at the National Aeronautics and Space Administration (NASA) Langley Research Center. Thermal expansion data are presented for graphite-fiber reinforced 6061 and 2024 aluminum laminates and for graphite fiber reinforced AZ91 C and QH21 A magnesium laminates before and after processing to minimize or eliminate thermal strain hysteresis. Data are also presented on the effects of reinforcement volume content on thermal expansion of silicon-carbide whisker and particulate reinforced aluminum.

  15. [Recent advances in the techniques of protein-protein interaction study].

    PubMed

    Wang, Ming-Qiang; Wu, Jin-Xia; Zhang, Yu-Hong; Han, Ning; Bian, Hong-Wu; Zhu, Mu-Yuan

    2013-11-01

    Protein-protein interactions play key roles in the development of organisms and the response to biotic and abiotic stresses. Several wet-lab methods have been developed to study this challenging area, including the yeast two-hybrid system, tandem affinity purification, co-immunoprecipitation, GST pull-down, bimolecular fluorescence complementation, fluorescence resonance energy transfer, and surface plasmon resonance analysis. In this review, we discuss the theoretical principles and relative advantages and disadvantages of these techniques, with an emphasis on recent advances to compensate for limitations. PMID:24579310

  16. A Constrained Design Approach for NLF Airfoils by Coupling Inverse Design and Optimal Techniques

    NASA Astrophysics Data System (ADS)

    Deng, L.; Gao, Y. W.; Qiao, Z. D.

    2011-09-01

    In the present paper, a design method for natural laminar flow (NLF) airfoils with a substantial amount of natural laminar flow on both surfaces is developed by coupling an inverse design method with an optimization technique. The N-factor method is used to design the target pressure distributions ahead of the pressure recovery region with the desired transition locations while maintaining aerodynamic constraints. The pressure in the recovery region is designed according to the Stratford separation criterion to prevent laminar separation. To improve off-design performance in inverse design, a multi-point inverse design is performed. An optimization technique based on response surface methodology (RSM) is used to calculate the target airfoil shapes from the designed target pressure distributions. The set of design points is selected to satisfy D-optimality, and reduced quadratic polynomial RS models without second-order cross terms are constructed to reduce the computational cost. The design cases indicate that, with the coupling method developed here, the inverse design method can be used in multi-point design to improve off-design performance, and the designed airfoils achieve the desired transition locations and maintain the aerodynamic constraints, although the thickness constraint is difficult to meet in this design procedure.
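    The RSM step above, fitting a reduced quadratic response surface (no second-order cross terms) to sampled design points by least squares, can be sketched as follows. The sample grid and the true response are invented for illustration; a real study would use CFD evaluations at D-optimal design points:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_reduced_quadratic(X, y):
    """Least-squares fit of y ~ b0 + sum(bi*xi) + sum(bii*xi^2),
    i.e. a reduced quadratic response surface without cross terms."""
    rows = [[1.0] + list(x) + [v * v for v in x] for x in X]
    m = len(rows[0])
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    Aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(m)]
    return solve_linear(AtA, Aty)

# Invented response y = 1 + 2*x1 + 0.5*x2 - x1^2, sampled on a small grid.
X = [(a, b) for a in (-1, 0, 1, 2) for b in (-1, 0, 1)]
y = [1 + 2 * x1 + 0.5 * x2 - x1 ** 2 for x1, x2 in X]
coeffs = fit_reduced_quadratic(X, y)
print([round(c, 3) for c in coeffs])
```

Because the synthetic response is itself a reduced quadratic, the fit recovers the generating coefficients exactly, which is a useful sanity check before applying the surrogate to expensive evaluations.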

  17. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  18. Imaging equipment and techniques for optimal intraoperative imaging during endovascular interventions.

    PubMed

    Fillinger, M F; Weaver, J B

    1999-12-01

    Because endovascular procedures represent an ever-increasing portion of many vascular surgery practices, many surgeons are faced with difficult choices. Endovascular procedures often require open surgery, and open surgical techniques increasingly require fluoroscopic imaging. Without good intraoperative imaging, endovascular procedures are difficult and endovascular aneurysm repair is impossible. How does one balance the need for optimal imaging without sacrificing the ability to safely perform open surgical procedures, especially in the early stages of a developing endovascular program? Strategies include the use of a portable c-arm and carbon fiber table in the operating room (OR), adding a fixed imaging platform to an OR, gaining access to an angiography suite that does not meet OR requirements, and modifying it into an interventional suite that does meet operating room standards. Once the optimal equipment and facilities have been chosen, other choices must be considered. Should a radiology technician be hired? Should an interventional radiologist be available to assist or be incorporated as a routine member of the team? How will typical operating room procedures and technique need to be altered in an effort to optimize intraoperative imaging for endovascular procedures? This article gives an overview of the many issues that arise as a vascular surgery practice evolves to incorporate complex endovascular procedures. PMID:10651460

  19. Artificial intelligent techniques for optimizing water allocation in a reservoir watershed

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung

    2014-05-01

    This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence (AI) techniques for reservoir operation under great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm (GA) and an adaptive network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations on long-term hydrological data to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages; then search the optimal reservoir operating histogram using a GA for given demands and hydrological conditions, which serves as the optimal base of input-output training patterns for modelling; and finally build a suitable water allocation scheme by constructing an ANFIS model that learns the mechanism between designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme enables water managers to reliably determine a suitable discount rate on water supply for both the irrigation and public sectors, and thus can reduce the drought risk and the compensation costs induced by restrictions on agricultural water use.
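    The GA search step described above can be sketched with a toy storage-balance model. Everything here is invented for illustration (demands, inflows, capacity, the elitist mutation-only GA); it is not the Shihmen Reservoir model or the authors' GA configuration:

```python
import random

random.seed(7)

DEMAND = [60, 55, 50, 45]   # hypothetical weekly demands (not Shihmen data)
INFLOW = [30, 25, 20, 35]   # hypothetical inflows
STORAGE0, CAPACITY = 120, 200

def penalty(discounts):
    """Squared-shortage penalty of a sequence of supply discount rates,
    run through a simple storage balance with an empty-reservoir penalty."""
    s, cost = STORAGE0, 0.0
    for d, dem, q in zip(discounts, DEMAND, INFLOW):
        release = dem * (1 - d)
        s = min(CAPACITY, max(0.0, s + q - release))
        cost += (dem - release) ** 2 + (50.0 if s == 0.0 else 0.0)
    return cost

def evolve(pop=40, gens=60):
    """Elitist GA: keep the better half, refill with mutated copies."""
    P = [[random.uniform(0, 0.5) for _ in DEMAND] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=penalty)
        elite = P[: pop // 2]
        P = elite + [
            [min(0.5, max(0.0, g + random.gauss(0, 0.05)))
             for g in random.choice(elite)]
            for _ in range(pop - len(elite))
        ]
    return min(P, key=penalty)

best = evolve()
print([round(d, 2) for d in best], round(penalty(best), 1))
```

In the full scheme, the GA-optimized discount sequences for many hydrological scenarios would then become the input-output training patterns for the ANFIS model.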

  20. On large-scale nonlinear programming techniques for solving optimal control problems

    SciTech Connect

    Faco, J.L.D.

    1994-12-31

    The formulation of decision problems by optimal control theory allows the consideration of their dynamic structure and the estimation of their parameters. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon, and a variable initial state vector. Such problems are generally characterized by a large number of variables, especially when arising from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribière conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected-gradient devices with specific linesearches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
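    The projected-gradient device mentioned above can be sketched in its simplest form: take a gradient step, then project each coordinate back into its box. This is a minimal stand-in for the paper's reduced-space machinery, with an invented quadratic objective:

```python
def project(x, lo, hi):
    """Clip each coordinate back into its bounds (the projection step)."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
    """Minimize a smooth function over a box by gradient descent,
    projecting each iterate back onto the feasible box."""
    x = project(x0, lo, hi)
    for _ in range(iters):
        g = grad(x)
        x = project([v - step * gi for v, gi in zip(x, g)], lo, hi)
    return x

# Invented quadratic: f(x) = (x1 - 3)^2 + (x2 + 1)^2 over the box [0, 2]^2;
# the unconstrained minimum (3, -1) lies outside, so the solution sits on bounds.
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x_star = projected_gradient(grad, [1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
print([round(v, 4) for v in x_star])
```

The iterate converges to the box corner (2, 0), the constrained minimizer; methods like those in the paper add curvature information (conjugate gradient or quasi-Newton updates) and proper linesearches on top of this projection idea.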

  1. Integration of ab-initio nuclear calculation with derivative free optimization technique

    SciTech Connect

    Sharda, Anurag

    2008-01-01

    Optimization techniques are making inroads into the field of nuclear physics calculations, where the objective functions are very complex and computationally intensive. A vast parameter space must be searched to obtain a good match between theoretical (computed) and experimental observables, such as energy levels and spectra. Manual calculation defies the scope of such complex computation and is prone to error at the same time. This work formulates and implements a design that integrates the ab initio nuclear physics code MFDn with the VTDIRECT95 code. VTDIRECT95 is a Fortran 95 suite of parallel codes implementing the derivative-free optimization algorithm DIRECT. The proposed design is implemented for serial and parallel versions of the optimization technique. Experiments with the initial implementation of the design show good matches for several single-nucleus cases. Determination and assignment of an appropriate number of processors for the parallel integration code is implemented to increase efficiency and resource utilization in the case of multiple-nuclei parameter searches.
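    DIRECT works by recursively dividing the search domain and sampling at subdomain centers, dividing all "potentially optimal" rectangles at each iteration. A greatly simplified, greedy 1-D sketch of that divide-and-sample idea (not the VTDIRECT95 implementation, which handles the multidimensional, parallel case) looks like this:

```python
def greedy_trisect(f, lo, hi, iters=40):
    """Greedy 1-D sketch of DIRECT's divide step: keep a set of intervals
    and always trisect the one whose midpoint samples lowest. The real
    DIRECT instead divides every 'potentially optimal' rectangle."""
    intervals = [(lo, hi)]
    evals = {}
    def fm(a, b):                      # memoized midpoint evaluation
        m = (a + b) / 2
        if m not in evals:
            evals[m] = f(m)
        return evals[m]
    for _ in range(iters):
        a, b = min(intervals, key=lambda ab: fm(*ab))
        intervals.remove((a, b))
        third = (b - a) / 3
        intervals += [(a, a + third), (a + third, b - third), (b - third, b)]
    best = min(evals, key=evals.get)
    return best, evals[best]

# Invented 1-D objective with a single minimum at x = 1.3.
x, fx = greedy_trisect(lambda x: (x - 1.3) ** 2, 0.0, 4.0)
print(round(x, 3), round(fx, 6))
```

No derivatives are needed anywhere, which is the point: the objective could equally be an expensive black-box run of a nuclear-structure code.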

  2. A review on optimization production and upgrading biogas through CO2 removal using various techniques.

    PubMed

    Andriani, Dian; Wresta, Arini; Atmaja, Tinton Dwi; Saepudin, Aep

    2014-02-01

    Biogas from anaerobic digestion of organic materials is a renewable energy resource that consists mainly of CH4 and CO2. Trace components often present in biogas are water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. Considering that biogas is a clean and renewable form of energy that could well substitute for conventional energy sources (fossil fuels), the optimization of this type of energy becomes substantial. Various optimization techniques for the biogas production process have been developed, including pretreatment, biotechnological approaches, co-digestion, and the use of serial digesters. For some applications, a certain degree of biogas purity is needed. The presence of CO2 and other trace components in biogas can affect engine performance adversely. Reducing the CO2 content will significantly upgrade the quality of biogas and enhance its calorific value. Upgrading is generally performed in order to meet the standards for use as vehicle fuel or for injection into the natural gas grid. Different methods for biogas upgrading are used; they differ in functioning, the required quality of the incoming gas, and efficiency. Biogas can be purified of CO2 using pressure swing adsorption, membrane separation, or physical or chemical CO2 absorption. This paper reviews the various techniques that can be used to optimize biogas production as well as to upgrade biogas quality. PMID:24293277
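    Why CO2 removal raises the calorific value is simple mixture arithmetic: CO2 is inert, so to a first approximation the lower heating value of the gas scales with its CH4 fraction. The sketch below assumes roughly 35.8 MJ per normal cubic meter for pure methane and typical CH4 fractions of 60% (raw) and 97% (upgraded):

```python
LHV_CH4 = 35.8  # MJ per normal m^3 of pure methane (approximate literature value)

def biogas_lhv(ch4_fraction):
    """Lower heating value of a CH4/CO2 mixture; the CO2 contributes no heat."""
    return ch4_fraction * LHV_CH4

raw = biogas_lhv(0.60)       # typical raw biogas, about 60% CH4
upgraded = biogas_lhv(0.97)  # vehicle-fuel grade after CO2 removal
print(f"raw: {raw:.1f} MJ/m3, upgraded: {upgraded:.1f} MJ/m3 "
      f"(+{100 * (upgraded / raw - 1):.0f}%)")
```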

  3. Optimization models and techniques for implementation and pricing of electricity markets

    NASA Astrophysics Data System (ADS)

    Madrigal Martinez, Marcelino

    Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The restructuring of electric power systems has created a need for new optimization tools and for revising those inherited from the vertical-integration era for the market environment. This thesis presents further developments on the use of optimization models and techniques for the implementation and pricing of primary electricity markets. New models, solution approaches, and price-setting alternatives are proposed. Three modeling groups are studied. The first considers simplified continuous and discrete models for power pool auctions driven by central cost minimization. The direct solution of the dual problems, and the use of a branch-and-bound algorithm to solve the primal, allow the effects of disequilibrium and of different price-setting alternatives on the existence of multiple solutions to be identified. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish this conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter-tuning drawback of previous methods. The robustness of the interior-point/cutting-plane method, combined with a non-uniform price-setting alternative, shows that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price-setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing-system model for daily markets for power and spinning reserve. A new model and

  4. Recent advances in latent print visualization techniques at the U.S. Secret Service

    NASA Astrophysics Data System (ADS)

    Ramotowski, Robert S.; Cantu, Antonio A.; Leben, Deborah A.; Joullie, Madeleine M.; Saunders, George C.

    1997-02-01

    The U.S. Secret Service has been conducting and supporting research in several areas of fingerprint visualization. The following topics are discussed: (1) developing ninhydrin analogues for visualizing latent prints on porous surfaces such as paper (with Dr. Madeleine Joullie, University of Pennsylvania); (2) exploring reflective UV imaging techniques as a no-treatment-required method for visualizing latent prints; (3) optimizing 'gun bluing' methods for developing latent prints on metal surfaces (such as spent cartridges); (4) investigating aqueous metal deposition methods for visualizing latent prints on multiple types of surfaces; and (5) studying methods of transferring latent print residues onto membranes.

  5. Development of techniques for advanced optical contamination measurement with internal reflection spectroscopy, phase 1, volume 1

    NASA Technical Reports Server (NTRS)

    Hayes, J. D.

    1972-01-01

    The feasibility of monitoring volatile contaminants in a large space simulation chamber using techniques of internal reflection spectroscopy was demonstrated analytically and experimentally. The infrared spectral region was selected as the operational spectral range in order to provide unique identification of the contaminants along with sufficient sensitivity to detect trace contaminant concentrations. It was determined theoretically that a monolayer of the contaminants could be detected and identified using optimized experimental procedures. This ability was verified experimentally. Procedures were developed to correct the attenuated total reflectance spectra for thick sample distortion. However, by using two different element designs the need for such correction can be avoided.

  6. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch from the traditional methods of designing automotive sensors to the new ones cannot be done overnight, because there are open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
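
    As a concrete illustration of the optimal-filtering side of this survey (a generic sketch, not a design from the paper; the constant-velocity plant model, noise levels, and test signal are all assumed), a minimal Kalman filter for a noisy position-type automotive sensor might look like:

    ```python
    import numpy as np

    def kalman_1d(z, dt=0.01, q=50.0, r=0.5):
        """Minimal constant-velocity Kalman filter for a scalar sensor:
        state = [position, velocity]; returns the filtered position track."""
        F = np.array([[1.0, dt], [0.0, 1.0]])                        # state transition
        H = np.array([[1.0, 0.0]])                                   # we measure position only
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
        R = np.array([[r**2]])                                       # sensor noise
        x, P = np.zeros(2), np.eye(2)
        out = []
        for zk in z:
            x = F @ x                       # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R             # update with the new measurement
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([zk]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 10.0, 0.01)
    truth = np.sin(2 * np.pi * 0.1 * t)           # slowly varying mechanical input
    noisy = truth + rng.normal(0.0, 0.5, t.size)  # raw sensor output
    filtered = kalman_1d(noisy)
    print(np.mean((noisy - truth) ** 2), np.mean((filtered - truth) ** 2))
    ```

    The filtered mean-squared error should come out well below that of the raw signal, which is the basic payoff the survey attributes to optimal filtering over classical fixed filters.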

  7. A comparison of conventional and advanced ultrasonic inspection techniques in the characterization of TMC materials

    NASA Astrophysics Data System (ADS)

    Holland, Mark R.; Handley, Scott M.; Miller, James G.; Reighard, Mark K.

    Results obtained with a conventional ultrasonic inspection technique as well as those obtained with more advanced ultrasonic NDE methods in the characterization of an 8-ply quasi-isotropic titanium matrix composite (TMC) specimen are presented. Images obtained from a conventional ultrasonic inspection of TMC material are compared with those obtained using more sophisticated ultrasonic inspection methods. It is suggested that the latter techniques are able to provide quantitative images of TMC material. They are able to reveal the same potential defect indications while simultaneously providing more quantitative information concerning the material's inherent properties. Band-limited signal loss and slope-of-attenuation images provide quantitative data on the inherent material characteristics and defects in TMC.

  8. A comparison of conventional and advanced ultrasonic inspection techniques in the characterization of TMC materials

    NASA Technical Reports Server (NTRS)

    Holland, Mark R.; Handley, Scott M.; Miller, James G.; Reighard, Mark K.

    1992-01-01

    Results obtained with a conventional ultrasonic inspection technique as well as those obtained with more advanced ultrasonic NDE methods in the characterization of an 8-ply quasi-isotropic titanium matrix composite (TMC) specimen are presented. Images obtained from a conventional ultrasonic inspection of TMC material are compared with those obtained using more sophisticated ultrasonic inspection methods. It is suggested that the latter techniques are able to provide quantitative images of TMC material. They are able to reveal the same potential defect indications while simultaneously providing more quantitative information concerning the material's inherent properties. Band-limited signal loss and slope-of-attenuation images provide quantitative data on the inherent material characteristics and defects in TMC.

  9. Recent advances in coupling capillary electrophoresis based separation techniques to ESI and MALDI MS

    PubMed Central

    Zhong, Xuefei; Zhang, Zichuan; Jiang, Shan; Li, Lingjun

    2014-01-01

    Coupling capillary electrophoresis (CE) based separation techniques to mass spectrometry creates a powerful platform for analysis of a wide range of biomolecules from complex samples because it combines the high separation efficiency of CE and the sensitivity and selectivity of MS detection. ESI and MALDI, as the most common soft ionization techniques employed for CE and MS coupling, offer distinct advantages for biomolecular characterization. This review is focused primarily on technological advances in combining CE and chip-based CE with ESI and MALDI MS detection in the past five years. Selected applications in the analyses of metabolites, peptides, and proteins with the recently developed CE-MS platforms are also highlighted. PMID:24170529

  10. An optimal technique for constraint-based image restoration and reconstruction

    NASA Astrophysics Data System (ADS)

    Leahy, Richard M.; Goutis, Costas E.

    1986-12-01

A new technique for finding an optimal feasible solution to the general image reconstruction and restoration problem is described. This method allows the use of prior knowledge of the properties of both the solution and any noise present on the data. The problem is formulated as the optimization of a cost function over the intersection of a number of convex constraint sets; each set being defined as containing those solutions consistent with a particular constraint. A duality theorem is then applied to yield a dual problem in which the unknown image is replaced by a model defined in terms of a finite dimensional parameter vector and the kernels of the integral equations relating the data and solution. The dual problem may then be solved for the model parameters using a gradient descent algorithm. This method serves as an alternative to the primal constrained optimization and projection onto convex sets (POCS) algorithms. Problems in which this new approach is appropriate are discussed. An example is given for image reconstruction from noisy projection data; applying the dual method results in a fast nonlinear algorithm. Simulation results demonstrate the superiority of the optimal feasible solution over one obtained using a suboptimal approach.
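
    The projection-onto-convex-sets baseline that the dual method is positioned against can be sketched in a few lines: alternately project the estimate onto each convex constraint set until a feasible point is reached. The toy problem below (a random underdetermined measurement matrix with a nonnegativity prior) is illustrative only, not the reconstruction problem of the paper:

    ```python
    import numpy as np

    def pocs(A, b, n_iter=1000):
        """Projection onto convex sets (POCS): alternate between
        C1 = {x : A x = b}  (data consistency, an affine set) and
        C2 = {x : x >= 0}   (prior knowledge: a nonnegative image)."""
        A_pinv = np.linalg.pinv(A)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + A_pinv @ (b - A @ x)  # orthogonal projection onto C1
            x = np.clip(x, 0.0, None)     # orthogonal projection onto C2
        return x

    rng = np.random.default_rng(1)
    x_true = np.abs(rng.normal(size=16))   # a nonnegative "image"
    A = rng.normal(size=(10, 16))          # underdetermined measurements
    b = A @ x_true
    x_hat = pocs(A, b)
    print(np.max(np.abs(A @ x_hat - b)))   # residual shrinks toward feasibility
    ```

    POCS returns *a* feasible point; the paper's contribution is to optimize a cost function over that same intersection, via the dual, rather than accept whichever feasible point the projections converge to.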

  11. Recent Advances and New Techniques in Visualization of Ultra-short Relativistic Electron Bunches

    SciTech Connect

    Xiang, Dao; /SLAC

    2012-06-05

Ultrashort electron bunches with rms length of ~1 femtosecond (fs) can be used to generate ultrashort x-ray pulses in FELs that may open up many new regimes in ultrafast sciences. It is also envisioned that ultrashort electron bunches may excite ~TeV/m wake fields for plasma wakefield acceleration and high-field physics studies. Recent success in using a 20 pC electron beam to drive an x-ray FEL at LCLS has stimulated worldwide interest in using low-charge beams (1-20 pC) to generate ultrashort x-ray pulses (0.1-10 fs) in FELs. Accurate measurement of the length (preferably the temporal profile) of the ultrashort electron bunch is essential for understanding the physics associated with bunch compression and transportation. However, ever shorter electron bunches greatly challenge present beam diagnostic methods. In this paper we review recent advances in the measurement of ultrashort relativistic electron bunches, focusing on several techniques and their variants that provide state-of-the-art temporal resolution and that are capable of breaking the 1 fs time barrier. Methods to further improve the resolution of these techniques are discussed, as are techniques for measuring the beam longitudinal phase space and the x-ray pulse shape in an x-ray FEL.

  12. Individual Particle Analysis of Ambient PM 2.5 Using Advanced Electron Microscopy Techniques

    SciTech Connect

    Gerald J. Keeler; Masako Morishita

    2006-12-31

The overall goal of this project was to demonstrate a combination of advanced electron microscopy techniques that can be effectively used to identify and characterize individual particles and their sources. Specific techniques used include high-angle annular dark field scanning transmission electron microscopy (HAADF-STEM), STEM energy dispersive X-ray spectrometry (EDX), and energy-filtered TEM (EFTEM). A series of ambient PM2.5 samples were collected in communities in southwestern Detroit, MI (close to multiple combustion sources) and Steubenville, OH (close to several coal-fired utility boilers). High-resolution TEM (HRTEM) imaging revealed a series of metal-bearing nano-particles, including transition metals, and showed the elemental composition of individual particles in detail. Submicron and nano-particles with Al, Fe, Ti, Ca, U, V, Cr, Si, Ba, Mn, Ni, K and S were observed and characterized in the samples. Among the identified nano-particles, combinations of Al, Fe, Si, Ca and Ti nano-particles embedded in carbonaceous particles were observed most frequently. These particles showed characteristics very similar to those of ultrafine coal fly ash particles reported previously. By utilizing HAADF-STEM, STEM-EDX, and EF-TEM, this investigation was able to gain information on the size, morphology, structure, and elemental composition of individual nano-particles collected in Detroit and Steubenville. The results showed that the contributions of local combustion sources - including coal-fired utilities - to ultrafine particle levels were significant. Although this combination of advanced electron microscopy techniques cannot by itself identify source categories, these techniques can be utilized as complementary analytical tools capable of providing detailed information on individual particles.

  13. Design and optimization of a total vaporization technique coupled to solid-phase microextraction.

    PubMed

    Rainey, Christina L; Bors, Dana E; Goodpaster, John V

    2014-11-18

    Solid-phase microextraction (SPME) is a popular sampling technique in which chemical compounds are collected with a sorbent-coated fiber and then desorbed into an analytical instrument such as a liquid or gas chromatograph. Typically, this technique is used to sample the headspace above a solid or liquid sample (headspace SPME), or to directly sample a liquid (immersion SPME). However, this work demonstrates an alternative approach where the sample is totally vaporized (total vaporization SPME or TV-SPME) so that analytes partition directly between the vapor phase and the SPME fiber. The implementation of this technique is demonstrated with polydimethylsiloxane-divinylbenzene (PDMS-DVB) and polyacrylate (PA) coated SPME fibers for the collection of nicotine and its metabolite cotinine in chloroform extracts. The most important method parameters were optimized using a central composite design, and this resulted in an optimal extraction temperature (96 °C), extraction time (60 min), and sample volume (120 μL). In this application, large sample volumes up to 210 μL were analyzed using a volatile solvent such as chloroform at elevated temperatures. The sensitivity of TV-SPME is nearly twice that of liquid injection for cotinine and nearly 6 times higher for nicotine. In addition, increased sampling selectivity of TV-SPME permits detection of both nicotine and cotinine in hair as biomarkers of tobacco use where in the past the detection of cotinine has not been achieved by conventional SPME. PMID:25313649
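
    A central composite design of the kind used to optimize the three method parameters combines 2^k factorial corner points, 2k axial "star" points, and replicated center points in coded units. A generic sketch (the factor centers and half-ranges below are illustrative placeholders, not the paper's actual coded levels):

    ```python
    import itertools
    import numpy as np

    def central_composite_design(k=3, alpha=None, n_center=4):
        """Coded-unit central composite design: 2^k factorial corners,
        2k axial (star) points at +/-alpha, and replicated center points."""
        if alpha is None:
            alpha = (2 ** k) ** 0.25   # rotatable-design choice of alpha
        corners = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
        axial = np.zeros((2 * k, k))
        for i in range(k):
            axial[2 * i, i] = -alpha
            axial[2 * i + 1, i] = alpha
        center = np.zeros((n_center, k))
        return np.vstack([corners, axial, center])

    # Hypothetical mapping of coded units onto the three TV-SPME factors
    # (centers and ranges are invented for illustration).
    centers = np.array([80.0, 40.0, 120.0])     # temperature C, time min, volume uL
    half_ranges = np.array([20.0, 20.0, 60.0])
    design = central_composite_design(k=3)
    runs = centers + half_ranges * design
    print(runs.shape)  # 8 corners + 6 axial + 4 center runs
    ```

    Fitting a quadratic response surface to the responses from these runs is what yields an interior optimum such as the reported 96 degrees C, 60 min, 120 uL combination.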

  14. Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink

    PubMed Central

    Park, Sang Kyu

    2014-01-01

Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, high peak-to-average power ratio (PAPR) can cause the degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a ratio of 16 to 20 for the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The number of near-optimal iterations is derived to prevent exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system. PMID:24883376
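
    The core of a DSI scheme - measure the PAPR, fill the null subcarriers with candidate dummy sequences, and keep the best candidate - can be sketched generically. The subcarrier layout, FFT size, and sequence scaling below are assumptions for illustration, not the paper's LTE configuration:

    ```python
    import numpy as np

    def papr_db(x):
        """Peak-to-average power ratio of a complex baseband signal, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    def walsh_hadamard(n):
        """n x n Walsh-Hadamard matrix (n a power of two), entries +/-1."""
        H = np.array([[1.0]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    rng = np.random.default_rng(2)
    n_fft, n_null = 64, 8
    n_data = n_fft - n_null
    data_idx = np.arange(n_data)         # data subcarriers
    null_idx = np.arange(n_data, n_fft)  # null subcarriers (illustrative layout)

    # random QPSK payload on the data subcarriers
    qpsk = (rng.choice([-1, 1], n_data) + 1j * rng.choice([-1, 1], n_data)) / np.sqrt(2)
    X = np.zeros(n_fft, dtype=complex)
    X[data_idx] = qpsk
    baseline = papr_db(np.fft.ifft(X))

    # DSI: try each WHT row as a dummy sequence on the null subcarriers,
    # keep the candidate OFDM symbol with the lowest PAPR
    best = baseline
    for row in walsh_hadamard(n_null):
        Xc = X.copy()
        Xc[null_idx] = row / np.sqrt(2)
        best = min(best, papr_db(np.fft.ifft(Xc)))
    print(f"PAPR without DSI: {baseline:.2f} dB, with DSI: {best:.2f} dB")
    ```

    Because the dummy sequences ride on subcarriers that carry no data, the receiver simply ignores them, which is why the scheme costs no BER, only the search over candidates.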

  15. Parameterized CAD techniques implementation for the fatigue behaviour optimization of a service chamber

    NASA Astrophysics Data System (ADS)

    Sánchez, H. T.; Estrems, M.; Franco, P.; Faura, F.

    2009-11-01

In recent years, the market of heat exchangers is increasingly demanding new products in short cycle time, which means that both the design and manufacturing stages must be extremely reduced. The design stage can be reduced by means of CAD-based parametric design techniques. The methodology presented in this work is based on the optimized control of geometric parameters of a service chamber of a heat exchanger by means of the Application Programming Interface (API) provided by the SolidWorks CAD package. Using this implementation, a set of different design configurations of the service chamber, made of stainless steel AISI 316, is studied by means of the FE method. As a result of this study, a set of knowledge rules based on the fatigue behaviour is constructed and integrated into the design optimization process.

  16. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1991-01-01

    Spacecraft designers have always been concerned about the effects of meteoroid impacts on mission safety. The engineering solution to this problem has generally been to erect a bumper or shield placed outboard from the spacecraft wall to disrupt/deflect the incoming projectiles. Spacecraft designers have a number of tools at their disposal to aid in the design process. These include hypervelocity impact testing, analytic impact predictors, and hydrodynamic codes. Analytic impact predictors generally provide the best quick-look estimate of design tradeoffs. The most complete way to determine the characteristics of an analytic impact predictor is through optimization of the protective structures design problem formulated with the predictor of interest. Space Station Freedom protective structures design insight is provided through the coupling of design/material requirements, hypervelocity impact phenomenology, meteoroid and space debris environment sensitivities, optimization techniques and operations research strategies, and mission scenarios. Major results are presented.

  17. Human motion planning based on recursive dynamics and optimal control techniques

    NASA Technical Reports Server (NTRS)

    Lo, Janzen; Huang, Gang; Metaxas, Dimitris

    2002-01-01

This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural-looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
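
    The structure of such a torque-parameterized optimization can be illustrated on a single degree of freedom. The sketch below deliberately simplifies: linear interpolation stands in for the paper's cubic splines, and plain finite-difference gradient descent stands in for its quasi-Newton solver with analytic gradients; the model, target, and weights are invented for illustration:

    ```python
    import numpy as np

    def simulate(knots, T=1.0, n=100):
        """Integrate a unit-inertia 'joint' driven by a torque trajectory
        given at a few knots (linear interpolation stands in for splines)."""
        t = np.linspace(0.0, T, n)
        tau = np.interp(t, np.linspace(0.0, T, knots.size), knots)
        dt = T / (n - 1)
        q = qd = 0.0
        for u in tau[:-1]:
            qd += u * dt
            q += qd * dt
        return q, qd, tau

    def cost(knots, target=1.0):
        """Mean squared torque plus penalties on missing the goal state
        (reach angle `target` with zero final velocity)."""
        q, qd, tau = simulate(knots)
        return np.mean(tau ** 2) + 100.0 * ((q - target) ** 2 + qd ** 2)

    # finite-difference gradient descent, a crude stand-in for quasi-Newton
    theta = np.zeros(6)
    eps, lr = 1e-4, 0.02
    for _ in range(600):
        g = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            g[i] = (cost(theta + d) - cost(theta - d)) / (2 * eps)
        theta -= lr * g
    q, qd, _ = simulate(theta)
    print(f"final angle {q:.3f}, final velocity {qd:.3f}")
    ```

    The spline (here, knot) parameterization is what makes the search space finite-dimensional; the paper's analytic gradients and quasi-Newton updates replace the slow finite differences used above.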

  18. Advanced Fluid--Structure Interaction Techniques in Application to Horizontal and Vertical Axis Wind Turbines

    NASA Astrophysics Data System (ADS)

    Korobenko, Artem

During the last several decades, engineers and scientists have put significant effort into developing reliable and efficient wind turbines. As wind power production demands grow, wind energy research and development need to be enhanced with high-precision methods and tools. These include time-dependent, full-scale, complex-geometry computational simulations at large scale. Computational analysis of wind turbines, including fluid-structure interaction (FSI) simulations at full scale, is important for accurate and reliable modeling, as well as for blade failure prediction and design optimization. In this dissertation the FSI framework is applied to a most challenging class of problems: large-scale horizontal-axis and vertical-axis wind turbines. The governing equations for aerodynamics and structural mechanics, together with the coupled formulation, are explained in detail. Simulations are performed for different wind turbine designs and operational conditions and validated against field-test and wind-tunnel experimental data.

  19. Application of Advanced Process Control techniques to a pusher type reheating furnace

    NASA Astrophysics Data System (ADS)

    Zanoli, S. M.; Pepe, C.; Barboni, L.

    2015-11-01

In this paper an Advanced Process Control system aimed at controlling and optimizing a pusher type reheating furnace located in an Italian steel plant is proposed. The designed controller replaced the previous control system, based on PID controllers manually conducted by process operators. A two-layer Model Predictive Control architecture has been adopted that, exploiting a chemical, physical and economic model of the process, overcomes the limitations of plant operators’ mental model and knowledge. In addition, an ad hoc decoupling strategy has been implemented, allowing the selection of the manipulated variables to be used for the control of each single process variable. Finally, in order to improve the system flexibility and resilience, the controller has been equipped with a supervision module. A profitable trade-off between conflicting specifications, e.g. safety, quality and production constraints, energy saving and pollution impact, has been guaranteed. Simulation tests and real plant results demonstrated the soundness and the reliability of the proposed system.
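
    The receding-horizon idea at the heart of Model Predictive Control can be sketched on a toy scalar furnace model: predict over a horizon, choose the input sequence minimizing a tracking-plus-effort cost, apply only the first move, and re-solve at the next sample. The plant parameters, horizon, weights, and actuator limit below are invented; the controller described in the paper is a constrained, two-layer, multivariable design:

    ```python
    import numpy as np

    def mpc_step(x, x_ref, a, b, horizon=10, u_max=1.0, lam=0.01):
        """One receding-horizon step for the scalar plant x[k+1] = a*x[k] + b*u[k]."""
        # free response and forced-response matrix: predicted X = M*x + C @ U
        M = np.array([a ** (i + 1) for i in range(horizon)])
        C = np.zeros((horizon, horizon))
        for i in range(horizon):
            for j in range(i + 1):
                C[i, j] = a ** (i - j) * b
        # regularized least squares: min ||C U - (x_ref - M x)||^2 + lam ||U||^2
        A_ls = np.vstack([C, np.sqrt(lam) * np.eye(horizon)])
        b_ls = np.concatenate([x_ref - M * x, np.zeros(horizon)])
        U = np.linalg.lstsq(A_ls, b_ls, rcond=None)[0]
        return float(np.clip(U[0], -u_max, u_max))   # apply only the first move

    a, b = 0.9, 0.5      # illustrative furnace pole and input gain
    x, x_ref = 0.0, 4.0  # start cold, track a temperature setpoint
    traj = []
    for _ in range(60):
        u = mpc_step(x, x_ref, a, b)
        x = a * x + b * u
        traj.append(x)
    print(f"final value ~ {traj[-1]:.3f}")
    ```

    Re-solving at every sample is what gives MPC its feedback character; the paper's two-layer architecture additionally computes economically optimal targets for this lower tracking layer.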

  20. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research

    PubMed Central

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss small-group apprenticeships (SGAs) as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments using both flow cytometry and laser scanning cytometry during the 1-month summer apprenticeship. In addition to effectively and efficiently teaching cell biology laboratory techniques, this course design provided an opportunity for research training, career exploration, and mentoring. Students participated in active research projects, working with a skilled interdisciplinary team of researchers in a large research institution with access to state-of-the-art instrumentation. The instructors, composed of graduate students, laboratory managers, and principal investigators, worked well together to present a real and worthwhile research experience. The students enjoyed learning cell culture techniques while contributing to active research projects. The institution's researchers were equally enthusiastic to instruct and serve as mentors. In this article, we clarify and illuminate the value of small-group laboratory apprenticeships to the institution and the students by presenting the results and experiences of seven middle and high school participants and their instructors. PMID:12587031

  1. Biotechnology apprenticeship for secondary-level students: teaching advanced cell culture techniques for research.

    PubMed

    Lewis, Jennifer R; Kotur, Mark S; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A; Ferrell, Nick; Sullivan, Kathryn D; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss small-group apprenticeships (SGAs) as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments using both flow cytometry and laser scanning cytometry during the 1-month summer apprenticeship. In addition to effectively and efficiently teaching cell biology laboratory techniques, this course design provided an opportunity for research training, career exploration, and mentoring. Students participated in active research projects, working with a skilled interdisciplinary team of researchers in a large research institution with access to state-of-the-art instrumentation. The instructors, composed of graduate students, laboratory managers, and principal investigators, worked well together to present a real and worthwhile research experience. The students enjoyed learning cell culture techniques while contributing to active research projects. The institution's researchers were equally enthusiastic to instruct and serve as mentors. In this article, we clarify and illuminate the value of small-group laboratory apprenticeships to the institution and the students by presenting the results and experiences of seven middle and high school participants and their instructors. PMID:12587031

  2. Charge mitigation techniques using glow and corona discharges for advanced gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Campsie, P.; Cunningham, L.; Hendry, M.; Hough, J.; Reid, S.; Rowan, S.; Hammond, G. D.

    2011-11-01

    Charging of silica test masses in gravitational wave detectors could potentially become a significant low-frequency noise source for advanced detectors. Charging noise has already been observed and confirmed in the GEO600 detector and is thought to have been observed in one of the LIGO detectors. In this paper, two charge mitigation techniques using glow and corona discharges were investigated to create repeatable and robust procedures. The glow discharge procedure was used to mitigate charge under vacuum and would be intended to be used in the instance where an optic has become charged while the detector is in operation. The corona discharge procedure was used to discharge samples at atmospheric pressure and would be intended to be used to discharge the detector optics during the cleaning of the optics. Both techniques were shown to reduce both polarities of surface charge on fused silica to a level that would not limit advanced LIGO. Measurements of the transmission of samples that had undergone the charge mitigation procedures showed no significant variation in transmission, at a sensitivity of ~ 200 ppm, in TiO2-doped Ta2O5/SiO2 multi-layer coated fused silica.

  3. Development of Advanced Nuclide Separation and Recovery Methods using Ion-Exchange Techniques in Nuclear Backend

    NASA Astrophysics Data System (ADS)

    Miura, Hitoshi

    The development of compact separation and recovery methods using selective ion-exchange techniques is very important for reprocessing and the treatment of high-level liquid wastes (HLLWs) in the nuclear backend field. Selective nuclide separation techniques are effective for waste volume reduction and the utilization of valuable nuclides, and are expected to contribute to the construction of an advanced nuclear fuel cycle system and the rationalization of waste treatment. To accomplish selective nuclide separation, the design and synthesis of novel adsorbents are essential for the development of compact and precise separation processes. The present paper deals with the preparation of highly functional and selective hybrid microcapsules enclosing nano-adsorbents in alginate gel polymer matrices by sol-gel methods, their characterization, and the clarification of their selective adsorption properties by batch and column methods. The selective separation of Cs, Pd and Re in real HLLW was further accomplished using the novel microcapsules, and an advanced nuclide separation system is proposed based on the combination of selective processes using microcapsules.
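
    Batch adsorption experiments of the kind used here to clarify selectivity are conventionally summarized by the distribution coefficient Kd. A minimal helper with illustrative numbers (not data from this work):

    ```python
    def distribution_coefficient(c0_mg_l, ce_mg_l, volume_ml, mass_g):
        """Batch-method distribution coefficient Kd (mL/g):
        Kd = ((C0 - Ce) / Ce) * (V / m),
        where C0/Ce are the initial/equilibrium solution concentrations,
        V the solution volume and m the adsorbent mass.
        Larger Kd indicates stronger, more selective uptake."""
        return (c0_mg_l - ce_mg_l) / ce_mg_l * (volume_ml / mass_g)

    # illustrative: 90% of the Cs taken up from 10 mL by 0.1 g of microcapsules
    kd_cs = distribution_coefficient(100.0, 10.0, 10.0, 0.1)
    print(kd_cs)  # 900.0 mL/g
    ```

    Comparing Kd values across nuclides (e.g. Cs vs. Na) is how the selectivity of an adsorbent such as these hybrid microcapsules is quantified.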

  4. Use of Advanced Magnetic Resonance Imaging Techniques in Neuromyelitis Optica Spectrum Disorder

    PubMed Central

    Kremer, Stephane; Renard, Felix; Achard, Sophie; Lana-Peixoto, Marco A.; Palace, Jacqueline; Asgari, Nasrin; Klawiter, Eric C.; Tenembaum, Silvia N.; Banwell, Brenda; Greenberg, Benjamin M.; Bennett, Jeffrey L.; Levy, Michael; Villoslada, Pablo; Saiz, Albert; Fujihara, Kazuo; Chan, Koon Ho; Schippling, Sven; Paul, Friedemann; Kim, Ho Jin; de Seze, Jerome; Wuerfel, Jens T.

    2016-01-01

    Brain parenchymal lesions are frequently observed on conventional magnetic resonance imaging (MRI) scans of patients with neuromyelitis optica (NMO) spectrum disorder, but the specific morphological and temporal patterns distinguishing them unequivocally from lesions caused by other disorders have not been identified. This literature review summarizes the literature on advanced quantitative imaging measures reported for patients with NMO spectrum disorder, including proton MR spectroscopy, diffusion tensor imaging, magnetization transfer imaging, quantitative MR volumetry, and ultrahigh-field strength MRI. It was undertaken to consider the advanced MRI techniques used for patients with NMO by different specialists in the field. Although quantitative measures such as proton MR spectroscopy or magnetization transfer imaging have not reproducibly revealed diffuse brain injury, preliminary data from diffusion-weighted imaging and brain tissue volumetry indicate greater white matter than gray matter degradation. These findings could be confirmed by ultrahigh-field MRI. The use of nonconventional MRI techniques may further our understanding of the pathogenic processes in NMO spectrum disorders and may help us identify the distinct radiographic features corresponding to specific phenotypic manifestations of this disease. PMID:26010909

  5. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    SciTech Connect

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed.

  6. Advanced MRI Techniques in the Evaluation of Complex Cystic Breast Lesions

    PubMed Central

    Popli, Manju Bala; Gupta, Pranav; Arse, Devraj; Kumar, Pawan; Kaur, Prabhjot

    2016-01-01

    OBJECTIVE The purpose of this research work was to evaluate complex cystic breast lesions by advanced MRI techniques and correlating imaging with histologic findings. METHODS AND MATERIALS In a cross-sectional design from September 2013 to August 2015, 50 patients having sonographically detected complex cystic lesions of the breast were included in the study. Morphological characteristics were assessed. Dynamic contrast-enhanced MRI along with diffusion-weighted imaging and MR spectroscopy were used to further classify lesions into benign and malignant categories. All the findings were correlated with histopathology. RESULTS Of the 50 complex cystic lesions, 32 proved to be benign and 18 were malignant on histopathology. MRI features of heterogeneous enhancement on CE-MRI (13/18), Type III kinetic curve (13/18), reduced apparent diffusion coefficient (18/18), and tall choline peak (17/18) were strong predictors of malignancy. Thirteen of the 18 lesions showed a combination of Type III curve, reduced apparent diffusion coefficient value, and tall choline peak. CONCLUSIONS Advanced MRI techniques like dynamic imaging, diffusion-weighted sequences, and MR spectroscopy provide a high level of diagnostic confidence in the characterization of complex cystic breast lesions, thus allowing early diagnosis and significantly reducing patient morbidity and mortality. From our study, lesions showing heterogeneous contrast enhancement, Type III kinetic curve, diffusion restriction, and tall choline peak were significantly associated with malignant complex cystic lesions of the breast. PMID:27330299

  7. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    DOE PAGESBeta

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed.

  8. Use of Advanced Magnetic Resonance Imaging Techniques in Neuromyelitis Optica Spectrum Disorder.

    PubMed

    Kremer, Stephane; Renard, Felix; Achard, Sophie; Lana-Peixoto, Marco A; Palace, Jacqueline; Asgari, Nasrin; Klawiter, Eric C; Tenembaum, Silvia N; Banwell, Brenda; Greenberg, Benjamin M; Bennett, Jeffrey L; Levy, Michael; Villoslada, Pablo; Saiz, Albert; Fujihara, Kazuo; Chan, Koon Ho; Schippling, Sven; Paul, Friedemann; Kim, Ho Jin; de Seze, Jerome; Wuerfel, Jens T; Cabre, Philippe; Marignier, Romain; Tedder, Thomas; van Pelt, Danielle; Broadley, Simon; Chitnis, Tanuja; Wingerchuk, Dean; Pandit, Lekha; Leite, Maria Isabel; Apiwattanakul, Metha; Kleiter, Ingo; Prayoonwiwat, Naraporn; Han, May; Hellwig, Kerstin; van Herle, Katja; John, Gareth; Hooper, D Craig; Nakashima, Ichiro; Sato, Douglas; Yeaman, Michael R; Waubant, Emmanuelle; Zamvil, Scott; Stüve, Olaf; Aktas, Orhan; Smith, Terry J; Jacob, Anu; O'Connor, Kevin

    2015-07-01

    Brain parenchymal lesions are frequently observed on conventional magnetic resonance imaging (MRI) scans of patients with neuromyelitis optica (NMO) spectrum disorder, but the specific morphological and temporal patterns distinguishing them unequivocally from lesions caused by other disorders have not been identified. This literature review summarizes the literature on advanced quantitative imaging measures reported for patients with NMO spectrum disorder, including proton MR spectroscopy, diffusion tensor imaging, magnetization transfer imaging, quantitative MR volumetry, and ultrahigh-field strength MRI. It was undertaken to consider the advanced MRI techniques used for patients with NMO by different specialists in the field. Although quantitative measures such as proton MR spectroscopy or magnetization transfer imaging have not reproducibly revealed diffuse brain injury, preliminary data from diffusion-weighted imaging and brain tissue volumetry indicate greater white matter than gray matter degradation. These findings could be confirmed by ultrahigh-field MRI. The use of nonconventional MRI techniques may further our understanding of the pathogenic processes in NMO spectrum disorders and may help us identify the distinct radiographic features corresponding to specific phenotypic manifestations of this disease. PMID:26010909

  9. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    PubMed Central

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed. PMID:25610632

  10. An optimal control strategy for crop growth in advanced life support systems.

    PubMed

    Fleisher, D H; Baruh, H

    2001-01-01

    A feedback control method for regulating crop growth in advanced life support systems is presented. Two models for crop growth are considered, one developed by the agricultural industry and used by the Ames Research Center, and a mechanistic model, termed the Energy Cascade model. Proportional and pointwise-optimal control laws are applied to both models using wheat as the crop and light intensity as the control input. The control is particularly sensitive to errors in measurement of crop dry mass. However, it is shown that the proposed approach is a potentially viable way of controlling crop growth as it compensates for model errors and problems associated with applying the desired control input due to environmental disturbances. Grant numbers: NGT5-50229. PMID:11725784

  11. A technique for optimal temperature estimation for modeling sunrise/sunset thermal snap disturbance torque

    NASA Technical Reports Server (NTRS)

    Zimbelman, D. F.; Dennehy, C. J.; Welch, R. V.; Born, G. H.

    1990-01-01

    A predictive temperature estimation technique which can be used to drive a model of the Sunrise/Sunset thermal 'snap' disturbance torque experienced by low Earth orbiting spacecraft is described. The twice per orbit impulsive disturbance torque is attributed to vehicle passage in and out of the Earth's shadow cone (umbra), during which large flexible appendages undergo rapidly changing thermal conditions. Flexible members, in particular solar arrays, experience rapid cooling during umbra entrance (Sunset) and rapid heating during exit (Sunrise). The thermal 'snap' phenomenon has been observed during normal on-orbit operations of both the LANDSAT-4 satellite and the Communications Technology Satellite (CTS). Thermal 'snap' has also been predicted to be a dominant source of error for the TOPEX satellite. The fundamental equations used to model the Sunrise/Sunset thermal 'snap' disturbance torque for a typical solar-array-like structure will be described. For this derivation the array is assumed to be a thin, cantilevered beam. The time-varying thermal gradient is shown to be the driving force behind predicting the thermal 'snap' disturbance torque and therefore motivates the need for accurate estimates of temperature. The development of a technique to optimally estimate appendage surface temperature is highlighted. The objective analysis method used is structured on the Gauss-Markov Theorem and provides an optimal temperature estimate at a prescribed location given data from a distributed thermal sensor network. The optimally estimated surface temperatures could then be used to compute the thermal gradient across the body. The estimation technique is demonstrated using a typical satellite solar array.
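
    The Gauss-Markov estimation step described in this record amounts to optimal spatial interpolation: noisy readings from a distributed sensor network are weighted by the field's spatial covariance to estimate the temperature at a prescribed point. The sketch below illustrates the idea; the sensor positions, readings, and covariance parameters are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Sketch of Gauss-Markov (best linear unbiased) temperature estimation at a
    # prescribed point from a distributed sensor network. All positions, readings
    # and covariance parameters below are illustrative assumptions.
    sensor_x = np.array([0.5, 1.5, 2.5, 3.5])          # sensor locations along the array (m)
    readings = np.array([251.0, 249.2, 247.9, 246.5])  # noisy temperatures (K)

    sigma2, length, noise2 = 4.0, 1.0, 0.25  # field variance, correlation length, sensor noise

    def cov(a, b):
        # Assumed exponential spatial covariance of the temperature field.
        return sigma2 * np.exp(-np.abs(a[:, None] - b[None, :]) / length)

    x0 = np.array([2.0])                              # prescribed estimation location
    C = cov(sensor_x, sensor_x) + noise2 * np.eye(len(sensor_x))
    c = cov(x0, sensor_x).ravel()                     # target-to-sensor covariance

    mean = readings.mean()                            # simple background estimate
    weights = np.linalg.solve(C, c)                   # Gauss-Markov weights
    T_hat = mean + weights @ (readings - mean)        # optimal estimate at x0
    var = sigma2 - c @ weights                        # posterior error variance
    print(T_hat, var)
    ```

    The estimated temperatures at several points along the appendage could then be differenced to approximate the thermal gradient driving the snap torque model.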

  12. The Analysis and Design of Low Boom Configurations Using CFD and Numerical Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1999-01-01

    The use of computational fluid dynamics (CFD) for the analysis of sonic booms generated by aircraft has been shown to increase the accuracy and reliability of predictions. CFD takes into account important three-dimensional and nonlinear effects that are generally neglected by modified linear theory (MLT) methods. Up to the present time, CFD methods have been primarily used for analysis or prediction. Some investigators have used CFD to impact the design of low boom configurations using trial and error methods. One investigator developed a hybrid design method using a combination of Modified Linear Theory (e.g. F-functions) and CFD to provide equivalent area due to lift driven by a numerical optimizer to redesign or modify an existing configuration to achieve a shaped sonic boom signature. A three-dimensional design methodology has not yet been developed that completely uses nonlinear methods or CFD. Constrained numerical optimization techniques have existed for some time. Many of these methods use gradients to search for the minimum of a specified objective function subject to a variety of design variable bounds, linear and nonlinear constraints. Gradient based design optimization methods require the determination of the objective function gradients with respect to each of the design variables. These optimization methods are efficient and work well if the gradients can be obtained analytically. If analytical gradients are not available, the objective gradients or derivatives with respect to the design variables must be obtained numerically. To obtain numerical gradients, say, for 10 design variables, might require anywhere from 10 to 20 objective function evaluations. Typically, 5-10 global iterations of the optimizer are required to minimize the objective function. In terms of using CFD as a design optimization tool, the numerical evaluation of gradients can require anywhere from 100 to 200 CFD computations per design for only 10 design variables. If one CFD
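
    The evaluation counts quoted above are easy to verify with a toy objective standing in for one CFD solve: a forward-difference gradient costs one baseline evaluation plus one perturbed evaluation per design variable, so 10 design variables imply 11 objective evaluations per gradient. This is a generic sketch of gradient costing, not the investigators' optimizer.

    ```python
    import numpy as np

    calls = {"n": 0}

    def objective(x):
        # Cheap stand-in for one expensive CFD evaluation of a boom metric.
        calls["n"] += 1
        return float(np.sum((x - 1.0) ** 2))

    def fd_gradient(f, x, h=1e-6):
        # Forward-difference gradient: one baseline evaluation plus one
        # perturbed evaluation per design variable.
        f0 = f(x)
        g = np.zeros_like(x)
        for i in range(len(x)):
            xp = x.copy()
            xp[i] += h
            g[i] = (f(xp) - f0) / h
        return g

    x = np.zeros(10)                 # 10 design variables, as in the text
    g = fd_gradient(objective, x)
    print(calls["n"])                # 11 evaluations for a single gradient
    ```

    Multiplying by the 5-10 global iterations of a typical optimizer reproduces the order of 100-200 evaluations the abstract attributes to a CFD-driven design.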

  13. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for Column CO2 Measurements

    NASA Astrophysics Data System (ADS)

    Campbell, J. F.; Lin, B.; Nehrir, A. R.; Obland, M. D.; Liu, Z.; Browell, E. V.; Chen, S.; Kooi, S. A.; Fan, T. F.

    2015-12-01

    Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and Atmospheric Carbon and Transport (ACT) - America airborne investigation are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are being investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the mission science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of intervening optically thin clouds, thereby minimizing bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the Earth's surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques and provides very high (sub-meter) range resolution. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These techniques are used in a new data processing architecture to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs.
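
    The sidelobe-free ranging property rests on the autocorrelation of the BPSK code. A maximal-length PN sequence, one common choice (the specific codes used by ACES are not given here), has a periodic autocorrelation with a single peak of N at zero lag and a flat floor of -1 everywhere else:

    ```python
    import numpy as np

    def m_sequence(taps, nbits):
        # Fibonacci LFSR producing a maximal-length (PN) code of period 2**nbits - 1.
        state = [1] * nbits
        out = []
        for _ in range(2 ** nbits - 1):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return np.array(out)

    bits = m_sequence(taps=[7, 6], nbits=7)   # period-127 PN code
    chips = 1.0 - 2.0 * bits                  # BPSK mapping: bit 0 -> +1, bit 1 -> -1
    N = len(chips)

    # Periodic (circular) autocorrelation: a single peak of N at zero lag
    # and a flat -1 floor at every other lag, i.e. no range sidelobes.
    acf = np.array([np.dot(chips, np.roll(chips, k)) for k in range(N)])
    print(acf[0], acf[1:].min(), acf[1:].max())
    ```

    Correlating the received signal against this code therefore produces a clean range profile in which thin-cloud and surface returns appear as separate peaks, which is what enables the cloud-bias rejection described above.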

  14. Advanced intensity-modulation continuous-wave lidar techniques for ASCENDS CO2 column measurements

    NASA Astrophysics Data System (ADS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. W.; Obland, Michael D.; Meadows, Byron

    2015-10-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity- Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond that implied by the limit of the bandwidth of the modulation, where it is shown to be useful for making tree canopy measurements.

  15. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for ASCENDS CO2 Column Measurements

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Meadows, Byron

    2015-01-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity- Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond that implied by the limit of the bandwidth of the modulation, where it is shown to be useful for making tree canopy measurements.

  16. Study of solid oxide fuel cell interconnects, protective coatings and advanced physical vapor deposition techniques

    NASA Astrophysics Data System (ADS)

    Gannon, Paul Edward

    High energy conversion efficiency, decreased environmentally-sensitive emissions and fuel flexibility have attracted increasing attention toward solid oxide fuel cell (SOFC) systems for stationary, transportation and portable power generation. Critical durability and cost issues, however, continue to impede wide-spread deployment. Many intermediate temperature (600-800°C) planar SOFC systems employ metallic alloy interconnect components, which physically connect individual fuel cells into electric series, facilitate gas distribution to appropriate SOFC electrode chambers (fuel/anode and oxidant[air]/cathode) and provide SOFC stack mechanical support. These demanding multifunctional requirements challenge commercially-available and inexpensive metallic alloys due to corrosion and related effects. Many ongoing investigations are aimed at enabling inexpensive metallic alloys (via bulk and/or surface modifications) as SOFC interconnects (SOFC(IC)s). In this study, two advanced physical vapor deposition (PVD) techniques: large area filtered vacuum arc deposition (LAFAD), and filtered arc plasma-assisted electron beam PVD (FA-EBPVD) were used to deposit a wide variety of protective nanocomposite (amorphous/nanocrystalline) ceramic thin-film (<5 μm) coatings on commercial and specialty stainless steels with different surface finishes. Both bare and coated steel specimens were subjected to SOFC(IC)-relevant exposures and evaluated using complementary surface analysis techniques. Significant improvements were observed under simulated SOFC(IC) exposures with many coated specimens at ~800°C relative to uncoated specimens: stable surface morphology; low area specific resistance (ASR < 100 mΩ·cm² for >1,000 hours); and dramatically reduced Cr volatility (>30-fold). Analyses and discussions of SOFC(IC) corrosion, advanced PVD processes and protective coating behavior are intended to advance understanding and accelerate the development of durable and commercially-viable SOFC

  17. Dynamic rain fade compensation techniques for the advanced communications technology satellite

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1992-01-01

    The dynamic and composite nature of propagation impairments that are incurred on earth-space communications links at frequencies in and above the 30/20 GHz Ka band necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) project by the implementation of optimal processing schemes derived through the use of the ACTS Rain Attenuation Prediction Model and nonlinear Markov filtering theory. The ACTS Rain Attenuation Prediction Model discerns climatological variations on the order of 0.5 deg in latitude and longitude in the continental U.S. The time-dependent portion of the model gives precise availability predictions for the 'spot beam' links of ACTS. However, the structure of the dynamic portion of the model, which yields performance parameters such as fade duration probabilities, is isomorphic to the state-variable approach of stochastic control theory and is amenable to the design of such statistical fade processing schemes which can be made specific to the particular climatological location at which they are employed.
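
    The state-variable fade dynamics this record refers to can be illustrated with a first-order Markov model of log-attenuation (a Maseng-Bakken-style sketch; the parameters below are illustrative, not values from the ACTS Rain Attenuation Prediction Model). The exponential correlation of the process yields a simple optimal one-step predictor:

    ```python
    import numpy as np

    # First-order Markov sketch of rain attenuation: log-attenuation follows an
    # Ornstein-Uhlenbeck process with exponential correlation, and the one-step
    # predictor is its conditional mean. Parameters are illustrative only.
    rng = np.random.default_rng(0)
    beta, m, s, dt = 2e-3, 0.0, 0.7, 1.0      # dynamics rate (1/s), ln-mean, ln-std, step (s)
    phi = np.exp(-beta * dt)                  # one-step correlation

    n = 20000
    ln_a = np.empty(n)
    ln_a[0] = m
    for k in range(1, n):                     # simulate the fade process
        ln_a[k] = m + phi * (ln_a[k - 1] - m) \
                  + s * np.sqrt(1 - phi ** 2) * rng.standard_normal()

    atten_db = np.exp(ln_a)                   # lognormal attenuation (dB)
    pred = m + phi * (ln_a[:-1] - m)          # optimal one-step prediction
    rmse = np.sqrt(np.mean((ln_a[1:] - pred) ** 2))
    print(rmse)                               # approaches s * sqrt(1 - phi**2)
    ```

    In an operational scheme, the model parameters would themselves be tied to the local climatology, which is the role the ACTS model's 0.5-degree latitude/longitude resolution plays.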

  18. Design and optimization of stepped austempered ductile iron using characterization techniques

    SciTech Connect

    Hernández-Rivera, J.L.; Garay-Reyes, C.G.; Campos-Cambranis, R.E.; Cruz-Rivera, J.J.

    2013-09-15

    Conventional characterization techniques such as dilatometry, X-ray diffraction and metallography were used to select and optimize temperatures and times for conventional and stepped austempering. Austenitization and conventional austempering time was selected when the dilatometry graphs showed a constant expansion value. A special heat color-etching technique was applied to distinguish between the untransformed austenite and high carbon stabilized austenite which had formed during the treatments. Finally, it was found that carbide precipitation was absent during the stepped austempering, in contrast to conventional austempering, in which carbide evidence was found. - Highlights: • Dilatometry helped to establish austenitization and austempering parameters. • Untransformed austenite was present even for longer processing times. • Ausferrite formed during stepped austempering caused an important reinforcement effect. • Carbide precipitation was absent during stepped treatment.

  19. A model based technique for the design of flight directors. [optimal control models

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1973-01-01

    A new technique for designing flight directors is discussed. This technique uses the optimal-control pilot/vehicle model to determine the appropriate control strategy. The dynamics of this control strategy are then incorporated into the director control laws, thereby enabling the pilot to operate at a significantly lower workload. A preliminary design of a control director for maintaining a STOL vehicle on the approach path in the presence of random air turbulence is evaluated. By selecting model parameters in terms of allowable path deviations and pilot workload levels, a set of director laws is achieved which allows improved system performance at reduced workload levels. The pilot acts essentially as a proportional controller with regard to the director signals, and control motions are compatible with those appropriate to status-only displays.

  20. Improved Broadband Liner Optimization Applied to the Advanced Noise Control Fan

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.; Sutliff, Daniel L.; Ayle, Earl; Ichihashi, Fumitaka

    2014-01-01

    The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more desirable. This paper describes improvements to a previously established broadband acoustic liner optimization process using the Advanced Noise Control Fan rig as a demonstrator. Specifically, in-duct attenuation predictions with a statistical source model are used to obtain optimum impedance spectra over the conditions of interest. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Constant-depth, double-degree of freedom and variable-depth, multi-degree of freedom designs are carried through design, fabrication, and testing to validate the efficacy of the design process. Results illustrate the value of the design process in concurrently evaluating the relative costs/benefits of these liner designs. This study also provides an application for demonstrating the integrated use of duct acoustic propagation/radiation and liner modeling tools in the design and evaluation of novel broadband liner concepts for complex engine configurations.

  1. Advanced thermal hydrolysis: optimization of a novel thermochemical process to aid sewage sludge treatment.

    PubMed

    Abelleira, Jose; Pérez-Elvira, Sara I; Portela, Juan R; Sánchez-Oneto, Jezabel; Nebot, Enrique

    2012-06-01

    The aim of this work was to study in depth the behavior and optimization of a novel process, called advanced thermal hydrolysis (ATH), to determine its utility as a pretreatment (sludge solubilization) or post-treatment (organic matter removal) for anaerobic digestion (AD) in the sludge line of wastewater treatment plants (WWTPs). ATH is based on a thermal hydrolysis (TH) process plus hydrogen peroxide (H(2)O(2)) addition and takes advantage of a peroxidation/direct steam injection synergistic effect. On the basis of the response surface methodology (RSM) and a modified Doehlert design, an empirical second-order polynomial model was developed for the total yield of: (a) disintegration degree [DD (%)] (solubilization), (b) filtration constant [F(c) (cm(2)/min)] (dewaterability), and (c) organic matter removal (%). The variables considered were operation time (t), temperature reached after initial heating (T), and oxidant coefficient (n = oxygen(supplied)/oxygen(stoichiometric)). As the model predicts, in the case of the ATH process with high levels of oxidant, it is possible to achieve organic matter removal of up to 92%, but the conditions required are prohibitive on an industrial scale. ATH operated at optimal conditions (oxygen amount 30% of stoichiometric, 115 °C and 24 min) gave promising results as a pretreatment, with similar solubilization and markedly better dewaterability levels in comparison to those obtained with TH at 170 °C. The empirical validation of the model was satisfactory. PMID:22463756
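
    The response-surface step can be sketched as an ordinary least-squares fit of a full second-order polynomial in the three variables (t, T, n). The design points and coefficients below are synthetic and noise-free (so the fit reproduces the responses exactly); real RSM data would include replicates and experimental noise:

    ```python
    import numpy as np

    # Hypothetical design points: (time min, temperature C, oxidant coefficient n).
    # Values are illustrative, not the paper's Doehlert design.
    X = np.array([[10, 110, 0.2], [24, 115, 0.3], [40, 120, 0.4],
                  [10, 120, 0.4], [40, 110, 0.3], [24, 120, 0.2],
                  [40, 120, 0.2], [10, 115, 0.3], [24, 110, 0.4],
                  [24, 115, 0.2], [15, 112, 0.25], [35, 118, 0.35]])

    def quad_features(X):
        # Full second-order model: intercept, linear, squared, and cross terms.
        t, T, n = X.T
        return np.column_stack([np.ones(len(X)), t, T, n,
                                t**2, T**2, n**2, t*T, t*n, T*n])

    # Synthetic noise-free responses generated from a known quadratic, so the
    # least-squares fit should pass through them exactly.
    true_coef = np.array([5.0, 0.4, 0.1, 12.0, -0.01, 0.0, -8.0, 0.002, 0.5, 0.03])
    y = quad_features(X) @ true_coef

    A = quad_features(X)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = np.abs(y - A @ coef).max()
    print(resid)   # ~0: the fitted surface reproduces the synthetic responses
    ```

    The fitted polynomial can then be optimized over the feasible (t, T, n) region to locate conditions such as the optimum the authors report.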

  2. Technology-design-manufacturing co-optimization for advanced mobile SoCs

    NASA Astrophysics Data System (ADS)

    Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey

    2014-03-01

    How to maintain Moore's Law scaling beyond the 193 nm immersion resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase for the 14nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues that new process changes bring. In recent years smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system designs and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.

  3. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

    Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet, the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery to general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.

  4. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year's motto was 'bridging disciplines'. The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  5. Compiler optimization technique for data cache prefetching using a small CAM array

    SciTech Connect

    Chi, C.H.

    1994-12-31

    With advances in compiler optimization and program flow analysis, software assisted cache prefetching schemes using PREFETCH instructions are now possible. Although data can be prefetched accurately into the cache, the runtime overhead associated with these schemes often limits their practical use. In this paper, we propose a new scheme, called the Strike-CAM Data Prefetching (SCP), to prefetch array references with constant strides accurately. Compared to current software assisted data prefetching schemes, the SCP scheme has much lower runtime overhead without sacrificing prefetching accuracy. Our result showed that the SCP scheme is particularly suitable for computing intensive scientific applications where cache misses are mainly due to array references with constant strides and they can be prefetched very accurately by this SCP scheme.
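
    The constant-stride property that SCP exploits can be illustrated with a minimal stride predictor: a table keyed by the id of the load instruction (the role played by the small CAM array) confirms a stride after two matching deltas and then predicts the next address to prefetch. This is a generic sketch of stride-based prefetching, not the SCP design itself:

    ```python
    # Minimal constant-stride predictor. The table maps an instruction id to
    # (last address, last stride, confirmed flag); a prefetch address is issued
    # only after the same stride is observed twice in a row.

    class StridePredictor:
        def __init__(self):
            self.table = {}   # pc -> (last_addr, stride, confirmed)

        def access(self, pc, addr):
            """Record an access; return an address to prefetch once a
            constant stride has been seen twice in a row, else None."""
            if pc not in self.table:
                self.table[pc] = (addr, None, False)
                return None
            last, stride, _ = self.table[pc]
            new_stride = addr - last
            confirmed = stride == new_stride
            self.table[pc] = (addr, new_stride, confirmed)
            return addr + new_stride if confirmed else None

    p = StridePredictor()
    # Array walk with a constant 8-byte stride from one load instruction (pc=1).
    out = [p.access(1, a) for a in [100, 108, 116, 124]]
    print(out)   # [None, None, 124, 132]
    ```

    The low runtime overhead claimed for SCP comes from pushing exactly this stride bookkeeping out of inserted PREFETCH instructions and into a small associative structure.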

  6. Optimizing Single Agent Panitumumab Therapy in Pre-Treated Advanced Colorectal Cancer12

    PubMed Central

    Gasparini, Giampietro; Buttitta, Fiamma; D'Andrea, Mario Rosario; Tumolo, Salvatore; Buonadonna, Angela; Pavese, Ida; Cordio, Stefano; De Tursi, Michele; Mosconi, Stefania; Stumbo, Luciano; Felicioni, Lara; Marchetti, Antonio

    2014-01-01

    PURPOSE: To improve the selection of advanced colorectal cancer patients to panitumumab by optimizing the assessment of RAS (KRAS-NRAS) mutations. EXPERIMENTAL DESIGN: Using a centralized pyrosequencing RAS assay, we analyzed the tumors of 94 patients, wild-type for KRAS mutations (codons 12 to 13) by Sanger sequencing (SS), treated with panitumumab. RESULTS: By SS analysis, 94 (62%) of 152 patients were wild-type and their objective response rate to panitumumab was 17%. We first optimized the KRAS test, by performing an accurate tissue-dissection step followed by pyrosequencing, a more sensitive method, and found further mutations in 12 (12.8%) cases. Secondly, tumors were subjected to RAS extension analysis (KRAS, exons 3 to 4; NRAS exons 2 to 4) by pyrosequencing, which allowed the identification of several rare mutations: KRAS codon 61, 5.3%; codon 146, 5.3%; NRAS, 9.5%. Overall, the RAS mutation rate was 32.9%. All patients with additional RAS mutations had progressive or stable disease, except 3 patients with mutations at codon 61 of KRAS or NRAS who experienced partial (2 cases) or complete response. By excluding from the analysis 11 cases with mutations at codon 61, no patient was responsive to treatment (P = .021). RAS wild-type versus RAS mutated cases had a significantly better time to progression (P = .044), which improved further (P = .004) when codon 61 mutations were excluded. CONCLUSION: This study shows that by optimizing the RAS test it is possible to significantly improve the identification of patients who do not benefit from panitumumab. Prospective studies are warranted to determine the clinical significance of rare mutations. PMID:25246275

  7. Techniques to reduce pain associated with hair transplantation: optimizing anesthesia and analgesia.

    PubMed

    Nusbaum, Bernard P

    2004-01-01

    The importance of pain control in hair transplantation cannot be overemphasized. Adequate preoperative sedation to reduce anxiety, raise pain threshold, and induce amnesia is fundamental to minimizing operative pain. Most of the pain associated with the procedure results from injection of the local anesthetic. Once initial anesthesia is achieved, proper maintenance of anesthesia is of paramount importance especially with the trend toward larger numbers of grafts being performed in one session with prolonged operative times. The choice of local anesthetic agents, infiltration technique, optimal field blocks and nerve blocks, proper hemostasis, timely repetition of anesthesia, and use of analgesics intraoperatively, with the goal of maintaining the patient pain-free during the procedure, are fundamental. In addition, reduced pain on infiltration can be achieved with buffering and warming of the local anesthetic solution as well as techniques to decrease sensation or partially anesthetize the skin prior to injection. Techniques such as bupivacaine donor area field block in the immediate postoperative period and early administration of analgesics can greatly influence postoperative pain. Along with excellent cosmetic results attainable with modern techniques, improving patients' experiences during the surgical process will enhance the public perception of hair transplantation and will encourage prospective patients to seek this treatment modality. PMID:14979739

  8. A comparison of two global optimization algorithms with sequential niche technique for structural model updating

    NASA Astrophysics Data System (ADS)

    Shabbir, Faisal; Omenzetter, Piotr

    2014-04-01

    Much effort is devoted nowadays to deriving accurate finite element (FE) models to be used for structural health monitoring, damage detection and assessment. However, the formation of an FE model representative of the original structure is a difficult task. Model updating is a branch of optimization which calibrates the FE model by comparing the modal properties of the actual structure with those of the FE predictions. As the number of experimental measurements is usually much smaller than the number of uncertain parameters, and, consequently, not all uncertain parameters are selected for model updating, different local minima may exist in the solution space. Experimental noise further exacerbates the problem. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Global optimization algorithms (GOAs) have received interest in the previous decade to solve this problem, but no GOA can ensure the detection of the global minimum either. To counter this problem, a combination of a GOA with the sequential niche technique (SNT), which systematically searches the whole solution space, has been proposed in this research. A dynamically tested full-scale pedestrian bridge is taken as a case study. Two different GOAs, namely particle swarm optimization (PSO) and genetic algorithm (GA), are investigated in combination with SNT. The results of these GOAs are compared in terms of their efficiency in detecting global minima. The systematic search enables finding different solutions in the search space, thus increasing the confidence of finding the global minimum.
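    The sequential niche idea described above — derating the objective around each minimum already found so that repeated runs of a global optimizer are driven into new basins — can be sketched as follows. This is a minimal illustration with a toy one-dimensional objective and a random-search stand-in for the paper's PSO/GA; the derating radius and penalty are arbitrary choices, not values from the study.

    ```python
    import random

    def objective(x):
        # toy multimodal objective with equal minima at x = 1 and x = 3
        return (x - 1.0) ** 2 * (x - 3.0) ** 2

    def derated(x, found, radius=0.5, penalty=10.0):
        # sequential niche technique: inflate the objective near each
        # minimum already found, pushing the next search into another basin
        f = objective(x)
        for m in found:
            if abs(x - m) < radius:
                f += penalty * (radius - abs(x - m))
        return f

    def random_search(f, lo=0.0, hi=4.0, iters=5000, seed=0):
        # stand-in for the paper's GOAs: any global optimizer fits here
        rng = random.Random(seed)
        best_x, best_f = None, float("inf")
        for _ in range(iters):
            x = rng.uniform(lo, hi)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x

    found = []
    for _ in range(2):  # sequentially locate both minima
        found.append(random_search(lambda x: derated(x, found)))
    ```

    After the second pass, `found` holds one point near each of the two basins, which is the behaviour SNT adds on top of a plain GOA.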

  9. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  10. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration frame-work for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) man- age QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
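    The reflective configuration step described above — middleware inspecting declared QoS metadata at deployment time and selecting a matching communication mechanism — can be sketched in a few lines. This is a hypothetical Python illustration, not CCM or CORBA code; the container, the QoS key, and the transport strategies are all invented for the example.

    ```python
    class Container:
        """Toy sketch of reflective middleware: the container examines a
        component's declared QoS properties at deploy time and swaps in a
        matching transport strategy (all names here are invented)."""
        TRANSPORTS = {}

        @classmethod
        def register(cls, name):
            def deco(f):
                cls.TRANSPORTS[name] = f
                return f
            return deco

        def deploy(self, component, qos):
            # reflective step: pick the mechanism from declared QoS metadata
            key = ("low_latency"
                   if qos.get("max_latency_ms", 1000) < 10 else "reliable")
            component.transport = self.TRANSPORTS[key]
            return component

    @Container.register("low_latency")
    def udp_send(msg):
        return ("udp", msg)

    @Container.register("reliable")
    def tcp_send(msg):
        return ("tcp", msg)

    class Component:
        pass

    # a component declaring a tight latency bound gets the fast transport
    c = Container().deploy(Component(), {"max_latency_ms": 5})
    ```

    The point of the sketch is only the shape of the mechanism: configuration decisions move out of application code and into metadata the container interprets.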

  11. Optimization of MKID noise performance via readout technique for astronomical applications

    NASA Astrophysics Data System (ADS)

    Czakon, Nicole G.; Schlaerth, James A.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Glenn, Jason; Golwala, Sunil R.; Hollister, Matt I.; LeDuc, Henry G.; Mazin, Benjamin A.; Maloney, Philip R.; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Siegel, Seth; Vaillancourt, John E.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas

    2010-07-01

    Detectors employing superconducting microwave kinetic inductance detectors (MKIDs) can be read out by measuring changes in either the resonator frequency or dissipation. We will discuss the pros and cons of both methods, in particular the readout strategies being explored for the Multiwavelength Sub/millimeter Inductance Camera (MUSIC) to be commissioned at the CSO in 2010. As predicted theoretically and observed experimentally, the frequency responsivity is larger than the dissipation responsivity, by a factor of 2-4 under typical conditions. In the absence of any other noise contributions, it should be easier to overcome amplifier noise by simply using frequency readout. The resonators, however, exhibit excess frequency noise which has been ascribed to a surface distribution of two-level fluctuators sensitive to specific device geometries and fabrication techniques. Impressive dark noise performance has been achieved using modified resonator geometries employing interdigitated capacitors (IDCs). To date, our noise measurement and modeling efforts have assumed an on-resonance readout, with the carrier power set well below the nonlinear regime. Several experimental indicators suggested to us that the optimal readout technique may in fact require a higher readout power, with the carrier tuned somewhat off resonance, and that a careful systematic study of the optimal readout conditions was needed. We will present the results of such a study, and discuss the optimum readout conditions as well as the performance that can be achieved relative to BLIP.

  12. Reducing the impact of a desalination plant using stochastic modeling and optimization techniques

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe; Mariethoz, Gregoire; Bertone, François

    2009-02-01

    Water is critical for economic growth in coastal areas. In this context, desalination has become an increasingly important technology over the last five decades. It often has environmental side effects, especially when the input water is pumped directly from the sea via intake pipelines. However, it is generally more efficient and cheaper to desalt brackish groundwater from beach wells rather than desalting seawater. Natural attenuation is also gained and hazards due to anthropogenic pollution of seawater are reduced. In order to minimize allocation and operational costs and impacts on groundwater resources, an optimum pumping network is required. Optimization techniques are often applied to this end. Because of aquifer heterogeneity, designing the optimum pumping network demands reliable characterizations of aquifer parameters. An optimum pumping network in a coastal aquifer in Oman, where a desalination plant currently pumps brackish groundwater at a rate of 1200 m3/h for a freshwater production of 504 m3/h (insufficient to satisfy the growing demand in the area), was designed using stochastic inverse modeling together with optimization techniques. A Monte Carlo analysis of 200 simulations of transmissivity and storage coefficient fields, conditioned to the responses to tidal fluctuation and three long-term pumping tests, was performed. These simulations are physically plausible and fit the available data well. Simulated transmissivity fields are used to design the optimum pumping configuration required to increase the current pumping rate to 9000 m3/h, for a freshwater production of 3346 m3/h (more than six times larger than the existing one). For this task, new pumping wells need to be sited and their pumping rates defined. These unknowns are determined by a genetic algorithm that minimizes a function accounting for: (1) drilling, operational and maintenance costs, (2) target discharge and minimum drawdown (i.e., minimum aquifer
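    The optimization step the abstract describes — a genetic algorithm choosing per-well pumping rates to minimize a cost function combining operational costs with a target-discharge term — can be sketched as follows. Only the 9000 m3/h target comes from the abstract; the number of candidate wells, per-well capacity, cost coefficients, and penalty weight are invented for illustration, and the real study additionally penalizes drawdown using simulated transmissivity fields.

    ```python
    import random

    TARGET = 9000.0      # total pumping rate required (m3/h), from the abstract
    CANDIDATES = 8       # hypothetical number of candidate well sites
    MAX_RATE = 2000.0    # hypothetical per-well capacity (m3/h)

    def fitness(rates):
        # drilling/operation cost: fixed cost per active well plus a
        # rate-proportional term (coefficients are illustrative)
        active = [r for r in rates if r > 1e-6]
        cost = 100.0 * len(active) + 0.01 * sum(active)
        # penalty for missing the target discharge
        cost += 0.1 * abs(sum(rates) - TARGET)
        return cost

    def ga(pop_size=40, gens=200, seed=1):
        rng = random.Random(seed)
        pop = [[rng.uniform(0, MAX_RATE) for _ in range(CANDIDATES)]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)                 # elitist selection
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, CANDIDATES)  # one-point crossover
                child = a[:cut] + b[cut:]
                i = rng.randrange(CANDIDATES)       # mutate one gene
                child[i] = min(MAX_RATE, max(0.0, child[i] + rng.gauss(0, 100)))
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    best = ga()
    ```

    The evolved `best` vector assigns a pumping rate to each candidate site, with the total driven toward the target discharge by the penalty term.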

  13. Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes

    NASA Astrophysics Data System (ADS)

    Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd

    2016-04-01

    In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project included 277 full-scale drop tests at three different quarries in Austria, recording key parameters of the rock fall trajectories. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. Two parameters are selected, and advanced calibration techniques, including the Markov chain Monte Carlo technique, maximum likelihood and root mean square error (RMSE), are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
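    The calibration mechanic — minimizing the RMSE between observed and simulated results over a two-parameter space — can be illustrated with a toy model. The `simulate` function below is an invented linear stand-in for the stochastic rock fall code, and the parameter grid is arbitrary; the sketch only shows the fit-by-RMSE step, not MCMC or maximum likelihood.

    ```python
    import math

    def simulate(restitution, friction, drop_heights):
        # hypothetical toy model mapping two parameters to runout
        # (NOT the paper's rock fall physics)
        return [h * restitution - 20.0 * friction + 10.0 for h in drop_heights]

    def rmse(observed, simulated):
        return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated))
                         / len(observed))

    # synthetic "observed" runouts generated with known parameters (0.6, 0.3)
    heights = [5.0, 10.0, 20.0, 40.0]
    observed = simulate(0.6, 0.3, heights)

    # brute-force search over a grid of the two calibration parameters
    grid = [i / 20 for i in range(1, 20)]
    best = min(((r, f) for r in grid for f in grid),
               key=lambda p: rmse(observed, simulate(p[0], p[1], heights)))
    ```

    Because the synthetic observations were generated with parameters that lie on the grid, the RMSE search recovers them exactly; with real field data the minimum RMSE would instead quantify the residual model error.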

  14. Advances of Peripheral Nerve Repair Techniques to Improve Hand Function: A Systematic Review of Literature

    PubMed Central

    P, Mafi; S, Hindocha; M, Dhital; M, Saleh

    2012-01-01

    Concepts of neuronal damage and repair date back to ancient times. Research in this topic has been growing ever since, and numerous nerve repair techniques have evolved throughout the years. Due to our greater understanding of nerve injuries and repair, we now distinguish between the central and peripheral nervous systems. In this review, we have chosen to concentrate on peripheral nerve injuries and in particular those involving the hand. There are no reviews bringing together and summarizing the latest research evidence concerning the most up-to-date techniques used to improve hand function. Therefore, by identifying and evaluating all the published literature in this field, we have summarized all the available information about the advances in peripheral nerve techniques used to improve hand function. The most important ones are the use of resorbable poly[(R)-3-hydroxybutyrate] (PHB), epineural end-to-end suturing, graft repair, nerve transfer, side-to-side neurorrhaphy and end-to-side neurorrhaphy between median, radial and ulnar nerves, nerve transplant, nerve repair, external neurolysis and epineural sutures, adjacent neurotization without nerve suturing, the Agee endoscopic operation, tourniquet-induced anesthesia, toe transfer and meticulous intrinsic repair, free autologous nerve grafting, use of distally based neurocutaneous flaps, and tubulization. At the same time we found that the patient's age, tension of the repair, time of repair, level of injury and scar formation following surgery affect the prognosis. Despite the thorough findings of this systematic review we suggest that further research in this field is needed. PMID:22431951

  15. New advanced surface modification technique: titanium oxide ceramic surface implants: long-term clinical results

    NASA Astrophysics Data System (ADS)

    Szabo, Gyorgy; Kovacs, Lajos; Barabas, Jozsef; Nemeth, Zsolt; Maironna, Carlo

    2001-11-01

    The purpose of this paper is to discuss the background to advanced surface modification technologies and to present a new technique, involving the formation of a titanium oxide ceramic coating, with relatively long-term results of its clinical utilization. Three general techniques are used to modify surfaces: the addition of material, the removal of material, and the change of material already present. Surface properties can also be changed without the addition or removal of material, through laser or electron-beam thermal treatment. The new technique outlined in this paper relates to the production of a corrosion-resistant, 2000-2500 Å thick ceramic oxide layer with a coherent crystalline structure on the surface of titanium implants. The layer is grown electrochemically from the bulk of the metal and is modified by heat treatment. Such oxide ceramic-coated implants have a number of advantageous properties relative to implants covered with various other coatings: a higher external hardness, a greater force of adherence between the titanium and the oxide ceramic coating, a virtually perfect insulation between the organism and the metal (no possibility of metal allergy), etc. The coated implants were subjected to various physical, chemical and electron-microscopic tests for qualitative characterization. Finally, these implants (plates and screws for maxillofacial osteosynthesis, and dental root implants) were applied in surgical practice for a period of 10 years. These tests and the experience acquired demonstrated the good properties of the titanium oxide ceramic-coated implants.

  16. Measurements of the subcriticality using advanced technique of shooting source during operation of NPP reactors

    SciTech Connect

    Lebedev, G. V. Petrov, V. V.; Bobylyov, V. T.; Butov, R. I.; Zhukov, A. M.; Sladkov, A. A.

    2014-12-15

    According to the rules of nuclear safety, measurements of the subcriticality of reactors should be carried out in the process of performing nuclear hazardous operations. An advanced technique using a shooting source of neutrons is proposed to meet this requirement. As such a source, a pulsed neutron source (PNS) is used. In order to realize this technique, it is recommended to operate a PNS at a frequency of 1–20 Hz. The PNS is stopped after achieving a steady-state (on average) number of neutrons in the reactor volume. The change in the number of neutrons in the reactor volume is then measured in time with a discretization interval of ∼0.1 s. The results of these measurements, with the application of a system of point-kinetics equations, are used to calculate the sought subcriticality. The basic idea of the proposed technique is elaborated in a series of experiments on the Kvant assembly. The conditions which should be implemented in order to obtain a positive result of measurements are formulated. A block diagram of the basic version of the experimental setup is presented, whose main element is a pulsed neutron generator.
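    A much-simplified version of the inversion step — fitting the post-shutdown decay of the neutron population and converting the decay constant to a reactivity — might look like the following. This uses a one-group prompt-decay relation rather than the paper's full system of point-kinetics equations, and the kinetics constants are illustrative assumptions, not values from the study.

    ```python
    import math

    # illustrative point-kinetics constants (NOT values from the paper)
    BETA = 0.0065        # delayed neutron fraction
    LAMBDA_GEN = 1e-4    # prompt neutron generation time (s)

    def subcriticality_from_decay(times, counts):
        # least-squares fit of log(counts) vs time gives the prompt decay
        # constant alpha; invert the one-group relation rho = beta - alpha*Lambda
        n = len(times)
        logs = [math.log(c) for c in counts]
        mt = sum(times) / n
        ml = sum(logs) / n
        slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
                 / sum((t - mt) ** 2 for t in times))
        alpha = -slope
        return BETA - alpha * LAMBDA_GEN

    # synthetic decay curve sampled every 0.1 s, as in the abstract
    true_rho = -0.01
    alpha_true = (BETA - true_rho) / LAMBDA_GEN
    times = [0.1 * i for i in range(10)]
    counts = [math.exp(-alpha_true * t) for t in times]
    rho_est = subcriticality_from_decay(times, counts)
    ```

    On this noise-free synthetic decay the fit recovers the assumed reactivity; real measurements would require the full delayed-neutron treatment the paper applies.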

  17. Measurements of the subcriticality using advanced technique of shooting source during operation of NPP reactors

    NASA Astrophysics Data System (ADS)

    Lebedev, G. V.; Petrov, V. V.; Bobylyov, V. T.; Butov, R. I.; Zhukov, A. M.; Sladkov, A. A.

    2014-12-01

    According to the rules of nuclear safety, measurements of the subcriticality of reactors should be carried out in the process of performing nuclear hazardous operations. An advanced technique using a shooting source of neutrons is proposed to meet this requirement. As such a source, a pulsed neutron source (PNS) is used. In order to realize this technique, it is recommended to operate a PNS at a frequency of 1-20 Hz. The PNS is stopped after achieving a steady-state (on average) number of neutrons in the reactor volume. The change in the number of neutrons in the reactor volume is then measured in time with a discretization interval of ˜0.1 s. The results of these measurements, with the application of a system of point-kinetics equations, are used to calculate the sought subcriticality. The basic idea of the proposed technique is elaborated in a series of experiments on the Kvant assembly. The conditions which should be implemented in order to obtain a positive result of measurements are formulated. A block diagram of the basic version of the experimental setup is presented, whose main element is a pulsed neutron generator.

  18. Optimal pharmacotherapeutic strategies for elderly patients with advanced non-small cell lung cancer.

    PubMed

    Quoix, Elisabeth

    2011-11-01

    carboplatin with weekly paclitaxel. While there have been no trials of second-line therapy for NSCLC specifically in elderly patients, exploratory subgroup analyses indicate that docetaxel, pemetrexed and erlotinib may provide outcomes in elderly patients similar to those reported in younger patients. However, specific second-line therapy trials in elderly patients are required, as the elderly patients in trials conducted to date were probably highly selected to fit the inclusion criteria. There is no more room for nihilism in the treatment of elderly patients with advanced NSCLC. Such patients should be evaluated carefully by geriatric indexes and, if they have a PS score of 0-2, may be treated with platinum-based (mostly carboplatin) doublet therapy in the same manner as their younger counterparts. The optimal second-line treatment remains to be determined. PMID:22054229

  19. 2D Flood Modelling Using Advanced Terrain Analysis Techniques And A Fully Continuous DEM-Based Rainfall-Runoff Algorithm

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Grimaldi, S.; Petroselli, A.

    2012-12-01

    Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of fully continuous hydrologic-hydraulic modelling optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a two-dimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is that it characterizes the entire physical process during extreme hydrologic events: channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model obviates the need for synthetic design hyetograph and hydrograph estimation, which constitute the main source of subjective analysis and uncertainty in standard methods for flood mapping. Selected case studies show the results and performance of the proposed procedure with respect to standard event-based approaches.
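    The continuous rainfall-runoff component can be illustrated with the simplest possible model: a single linear reservoir that drains a fixed fraction of its storage each time step, turning a rainfall series into a flow series. This is a toy stand-in, not the DEM-based model of the paper; the storage coefficient is arbitrary.

    ```python
    def linear_reservoir(rainfall, k=0.2, s0=0.0):
        """Toy continuous rainfall-runoff model: one linear reservoir whose
        discharge each step is a fixed fraction k of current storage."""
        storage, flows = s0, []
        for p in rainfall:
            storage += p          # rainfall input adds to storage
            q = k * storage       # discharge proportional to storage
            storage -= q
            flows.append(q)
        return flows

    # a single rainfall pulse produces a rising-then-receding flow series
    flows = linear_reservoir([10.0, 0.0, 0.0, 0.0])
    ```

    Running such a model continuously on a long synthetic rainfall series, rather than on a single design storm, is what lets the procedure dispense with synthetic design hyetographs and hydrographs.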

  20. Planning and scheduling the Hubble Space Telescope: Practical application of advanced techniques

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.

    1994-01-01

    NASA's Hubble Space Telescope (HST) is a major astronomical facility that was launched in April, 1990. In late 1993, the first of several planned servicing missions refurbished the telescope, including corrections for a manufacturing flaw in the primary mirror. Orbiting above the distorting effects of the Earth's atmosphere, the HST provides an unrivaled combination of sensitivity, spectral coverage and angular resolution. The HST is arguably the most complex scientific observatory ever constructed and effective use of this valuable resource required novel approaches to astronomical observation and the development of advanced software systems including techniques to represent scheduling preferences and constraints, a constraint satisfaction problem (CSP) based scheduler and a rule based planning system. This paper presents a discussion of these systems and the lessons learned from operational experience.
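    The constraint satisfaction core of such a scheduler — assigning observations to time slots so that all pairwise constraints hold, backtracking on dead ends — can be sketched as follows. This is a generic CSP backtracking illustration, not the HST planning system; the observations, slots, and ordering constraint are invented.

    ```python
    def schedule(observations, slots, constraints):
        """Backtracking CSP search: assign each observation a distinct slot
        such that every binary constraint (a, b, ok) holds, where ok takes
        the slots of a and b. A toy sketch of CSP-based scheduling."""
        assignment = {}

        def consistent(obs, slot):
            for a, b, ok in constraints:
                if a == obs and b in assignment and not ok(slot, assignment[b]):
                    return False
                if b == obs and a in assignment and not ok(assignment[a], slot):
                    return False
            return True

        def backtrack(i):
            if i == len(observations):
                return True
            obs = observations[i]
            for slot in slots:
                if slot not in assignment.values() and consistent(obs, slot):
                    assignment[obs] = slot
                    if backtrack(i + 1):
                        return True
                    del assignment[obs]   # dead end: undo and try next slot
            return False

        return assignment if backtrack(0) else None

    # example constraint: observation "A" must be scheduled before "B"
    before = lambda s1, s2: s1 < s2
    plan = schedule(["A", "B", "C"], [0, 1, 2, 3], [("A", "B", before)])
    ```

    Real schedulers like the one described add preference weighting and far richer constraints (orbital visibility, instrument state), but the assign-check-backtrack skeleton is the same.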

  1. Planning and scheduling the Hubble Space Telescope: Practical application of advanced techniques

    NASA Astrophysics Data System (ADS)

    Miller, Glenn E.

    1994-10-01

    NASA's Hubble Space Telescope (HST) is a major astronomical facility that was launched in April, 1990. In late 1993, the first of several planned servicing missions refurbished the telescope, including corrections for a manufacturing flaw in the primary mirror. Orbiting above the distorting effects of the Earth's atmosphere, the HST provides an unrivaled combination of sensitivity, spectral coverage and angular resolution. The HST is arguably the most complex scientific observatory ever constructed and effective use of this valuable resource required novel approaches to astronomical observation and the development of advanced software systems including techniques to represent scheduling preferences and constraints, a constraint satisfaction problem (CSP) based scheduler and a rule based planning system. This paper presents a discussion of these systems and the lessons learned from operational experience.

  2. Vibrio parahaemolyticus: a review on the pathogenesis, prevalence, and advance molecular identification techniques

    PubMed Central

    Letchumanan, Vengadesh; Chan, Kok-Gan; Lee, Learn-Han

    2014-01-01

    Vibrio parahaemolyticus is a Gram-negative halophilic bacterium that is found in estuarine, marine and coastal environments. V. parahaemolyticus is the leading causal agent of human acute gastroenteritis following the consumption of raw, undercooked, or mishandled marine products. In rare cases, V. parahaemolyticus causes wound infection, ear infection or septicaemia in individuals with pre-existing medical conditions. V. parahaemolyticus has two hemolysin virulence factors: thermostable direct hemolysin (tdh), a pore-forming protein that contributes to the invasiveness of the bacterium in humans, and TDH-related hemolysin (trh), which plays a role similar to tdh in the disease pathogenesis. In addition, the bacterium also encodes adhesins and type III secretion systems (T3SS1 and T3SS2) to ensure its survival in the environment. This review aims at discussing V. parahaemolyticus growth and characteristics, pathogenesis, prevalence and advances in molecular identification techniques. PMID:25566219

  3. Effects of age, system experience, and navigation technique on driving with an advanced traveler information system.

    PubMed

    Dingus, T A; Hulse, M C; Mollenhauer, M A; Fleischman, R N; McGehee, D V; Manakkal, N

    1997-06-01

    This paper explores the effects of age, system experience, and navigation technique on driving, navigation performance, and safety for drivers who used TravTek, an Advanced Traveler Information System. The first two studies investigated various route guidance configurations on the road in a specially equipped instrumented vehicle with an experimenter present. The third was a naturalistic quasi-experimental field study that collected data unobtrusively from more than 1200 TravTek rental car drivers with no in-vehicle experimenter. The results suggest that with increased experience, drivers become familiar with the system and develop strategies for substantially more efficient and safer use. The results also showed that drivers over age 65 had difficulty driving and navigating concurrently. They compensated by driving slowly and more cautiously. Despite this increased caution, older drivers made more safety-related errors than did younger drivers. The results also showed that older drivers benefited substantially from a well-designed ATIS driver interface. PMID:9302887

  4. Visualisation of Ecohydrological Processes and Relationships for Teaching Using Advanced Techniques

    NASA Astrophysics Data System (ADS)

    Guan, H.; Wang, H.; Gutierrez-Jurado, H. A.; Yang, Y.; Deng, Z.

    2014-12-01

    Ecohydrology is an emerging discipline with rapid research growth. This calls for enhancing ecohydrology education at both the undergraduate and postgraduate levels. In other hydrology disciplines, hydrological processes are commonly observed in the environment (e.g. streamflow, infiltration) or easily demonstrated in labs (e.g. Darcy's column). It is relatively difficult to demonstrate ecohydrological concepts and processes (e.g. the soil-vegetation water relationship) in teaching. In this presentation, we report examples of using advanced techniques to illustrate ecohydrological concepts, relationships, and processes, with measurements based on a native vegetation catchment in South Australia. They include LIDAR images showing the relationship between topography-controlled hydroclimatic conditions and vegetation distribution, electrical resistivity tomography derived images showing stem structures, continuous stem water potential monitoring showing diurnal variations of plant water status, root zone moisture depletion during dry spells and responses to precipitation inputs, and sapflow measurements demonstrating environmental stress on plant stomatal behaviour.

  5. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    NASA Astrophysics Data System (ADS)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    ACAT2011 This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields, in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facilities Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  6. Robotic-assisted laparoscopic anterior pelvic exenteration in patients with advanced ovarian cancer: Farghaly's technique.

    PubMed

    Farghaly, S A

    2010-01-01

    The safety and efficacy of the robotic-assisted laparoscopic approach to anterior pelvic exenteration are evaluated in patients with advanced ovarian cancer undergoing anterior pelvic exenteration for involvement of the urinary bladder during primary cytoreductive surgery. All patients undergo preoperative lab work, imaging studies and bowel preparation prior to surgery. The da Vinci surgical system is used to perform urinary cystectomy, total hysterectomy, bilateral salpingo-oophorectomy, and bilateral pelvic lymphadenectomy (including obturator, hypogastric, external iliac, and common iliac lymph nodes). In addition, debulking to less than 1 cm is performed. The anterior pelvic exenteration procedure involves wide perivesical dissection. The robot is then locked, and an ileal conduit is created via a 6 cm lower midline incision. Operative time can be maintained at 4.6 hours, with a mean blood loss of 215 ml and a hospital stay of five days. Farghaly's technique of robotic-assisted laparoscopic anterior pelvic exenteration in patients with advanced ovarian cancer is safe, feasible, and cost-effective, with acceptable operative, pathological and short- and long-term clinical outcomes. It retains the advantages of minimally invasive surgery. PMID:20882872

  7. Characterization of water movement in a reconstructed slope in Keokuk, Iowa, using advanced geophysical techniques

    NASA Astrophysics Data System (ADS)

    Schettler, Megan Elizabeth

    This project addresses the topic of evaluating water movement inside a hillslope using a combination of conventional and advanced geophysical techniques. While slope dynamics have been widely studied, groundwater movement in hillslopes is still poorly understood. A combination of piezometers, ground-penetrating radar (GPR), and electrical resistivity (ER) surveys were used in an effort to monitor fluctuations in the subsurface water level in a reengineered slope near Keokuk, Iowa. This information, integrated with rainfall data, formed a picture of rainfall-groundwater response dynamics. There were two hypotheses: 1) that the depth and fluctuation of the water table could be accurately sensed using a combination of monitoring wells, ground-penetrating radar and resistivity surveys; and 2) that the integration of data from the instrumentation array and the geophysical surveys would enable the characterization of water movement in the slope in response to rainfall events. This project also sought to evaluate the utility and limitations of using these techniques in landslide and hydrology studies, advance our understanding of hillslope hydrology, and improve our capacity to better determine when slope failure may occur. Results from monitoring wells, stratigraphy, and resistivity surveys at the study site indicated the presence of a buried swale, channelizing subsurface storm flow and creating variations in groundwater. Although there was some success in defining hydrologic characteristics and response of the slope using this integrated approach, it was determined that GPR was ultimately not well suited to this site. However, the use of GPR as part of an integrated approach to study hillslope hydrology still appears to hold potential, and future work to further evaluate the applicability and potential of this approach would be warranted.

  8. Optimal technique of linear accelerator-based stereotactic radiosurgery for tumors adjacent to brainstem.

    PubMed

    Chang, Chiou-Shiung; Hwang, Jing-Min; Tai, Po-An; Chang, You-Kang; Wang, Yu-Nong; Shih, Rompin; Chuang, Keh-Shih

    2016-01-01

    either DCA or IMRS plans, at 9.2 ± 7% and 8.2 ± 6%, respectively. Owing to the multiple-arc or multiple-beam planning designs of IMRS and VMAT, both of these techniques required higher MU delivery than DCA, with the averages being twice as high (p < 0.05). If a linear accelerator is the only modality available for SRS treatment, then on the basis of this retrospective statistical evidence we recommend VMAT as the optimal technique for delivering treatment to tumors adjacent to the brainstem. PMID:27396940

  9. Good techniques optimize control of oil-based mud and solids

    SciTech Connect

    Phelps, J.; Hoopingarner, J.

    1989-02-13

    Effective techniques have been developed from work on dozens of North Sea wells to minimize the amount of oil-based mud discharged to the sea while maintaining acceptable levels of solids. Pressure to reduce pollution during the course of drilling prompted the development of these techniques, which involve both personnel and the optimization of the mud system and procedures. Case histories demonstrate that regulations can be met with economical techniques using existing technology. The benefits of low solids content are widely known and are a key part of any successful mud program. Good solids control should result in lower mud costs and better drilling performance. Operators have specified high-performance shakers to accomplish this and have revised their mud programs with lower and lower allowable drilled-solids percentages, which will pay off in certain areas. But with the U.K. Department of Energy regulation, effective Jan. 1, 1989, requiring cuttings oil discharge content (CODC) to be less than 150 g of oil/kg of dry solids discharged, oil-loss control has a higher profile in the U.K. sector of the North Sea.

  10. Long term volcano monitoring by using advanced Persistent Scatterer SAR Interferometry technique: A case study at Unimak Island, Alaska

    NASA Astrophysics Data System (ADS)

    Gong, W.; Meyer, F. J.; Freymueller, J. T.; Lu, Z.

    2012-12-01

    Unimak Island, the largest island in the eastern Aleutians of Alaska, is home to three major active volcanoes: Shishaldin, Fisher, and Westdahl. Shishaldin and Westdahl erupted within the past two decades, and Fisher has shown persistent hydrothermal activity (Mann and Freymueller, 2003). Therefore, Unimak Island is of particular interest to geoscientists. Surface deformation on Unimak Island has been studied in several previous efforts. Lu et al. (2000, 2003) applied conventional InSAR techniques to study surface inflation at Westdahl between 1991 and 2000. Mann and Freymueller (2003) used GPS measurements to analyze inflation at Westdahl and subsidence at Fisher during 1998-2001. Moran et al. (2006) reported that Shishaldin, the most active volcano on the island, experienced no significant deformation during the 1993 to 2003 period bracketing two eruptions. In this paper, we present deformation measurements at Unimak Island during 2003-2010 using advanced persistent scatterer InSAR (PSI). Due to the non-urban setting in a subarctic environment and the limited data acquisition, the number of images usable for PSI processing is limited to about 1-3 acquisitions per year. The relatively small image stack and the irregular acquisition distribution in time pose challenges for the PSI time-series processing. Therefore, we have developed a modified PSI technique that integrates external atmospheric information from numerical weather prediction models to assist in the removal of atmospheric artifacts [1]. Deformation modeling based on PSI results will also be presented. Our new results will be combined with previous findings to address the magma plumbing system at Unimak Island. 1) W. Gong, F. J. Meyer (2012): Optimized filter design for irregularly acquired data stacks in Persistent Scatterer Synthetic Aperture Radar Interferometry, Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International, Munich, Germany.

  11. Optimization, formulation, and characterization of multiflavonoids-loaded flavanosome by bulk or sequential technique

    PubMed Central

    Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida

    2016-01-01

    This study involves the adaptation of a bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed a “flavonosome”. Three widely established and therapeutically valuable flavonoids, namely quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis – bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against a human hepatoma cell line (HepaRG). Furthermore, the entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, the sequential loading technique was found to be the best method for the synthesis of the QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of −39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward the human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. These results suggest that sequential loading technique may be a
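    The entrapment and loading figures quoted above follow from standard definitions; a minimal sketch (the masses below are illustrative, not the study's measured values):

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """EE% = encapsulated drug / total drug * 100 (standard definition)."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def loading_capacity(encapsulated_drug_mg, carrier_total_mg):
    """DL% = encapsulated drug / total formulation mass * 100."""
    return encapsulated_drug_mg / carrier_total_mg * 100.0

# Illustrative numbers only: 10 mg flavonoid added, 0.15 mg left unentrapped
ee = entrapment_efficiency(total_drug_mg=10.0, free_drug_mg=0.15)      # 98.5%
dl = loading_capacity(encapsulated_drug_mg=9.85, carrier_total_mg=31.0)
```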

  12. Recursive Ant Colony Global Optimization: a new technique for the inversion of geophysical data

    NASA Astrophysics Data System (ADS)

    Gupta, D. K.; Gupta, J. P.; Arora, Y.; Singh, U. K.

    2011-12-01

    We present a new method called the Recursive Ant Colony Global Optimization (RACO) technique, a modified form of general ACO, which can be used to find the best solutions to inversion problems in geophysics. RACO simulates the social behaviour of ants to find the best path between the nest and the food source. A new term, depth, has been introduced, which controls the extent of recursion. A selective number of cities qualify for the successive depth. The results of one depth are used to construct the models for the next depth, and the range of values for each of the parameters is reduced without any change to the number of models. The three additional steps performed after each depth are pheromone tracking, pheromone updating, and city selection. One advantage of RACO over ACO is that if a problem has multiple solutions, then pheromone accumulation will take place at more than one city, thereby leading to the formation of multiple nested ACO loops within the ACO loop of the previous depth. Also, while the convergence of ACO is almost linear, RACO shows exponential convergence and hence is faster than ACO. RACO also proves better than some other global optimization techniques, as it does not require any initial values to be assigned to the model parameters. The method has been tested on some mathematical functions, synthetic self-potential (SP) and synthetic gravity data. The obtained results reveal the efficiency and practicability of the method. The method is found to be efficient enough to solve the problems of SP and gravity anomalies due to a horizontal cylinder, a sphere, an inclined sheet, and multiple idealized bodies buried inside the earth. These anomalies, with and without noise, were inverted using the RACO algorithm. The obtained results were compared with those obtained from the conventional methods, and it was found that the RACO results are more accurate. Finally this optimization technique was applied to real field data collected over the Surda
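    The depth-and-shrink idea described above can be sketched as follows. This is a hedged, simplified reconstruction from the abstract, not the authors' code: the pheromone update rule, elite count, and shrink factor are all illustrative choices.

```python
import random

def raco_minimize(f, bounds, n_cities=20, n_ants=30, n_iters=15,
                  depths=4, shrink=0.3, seed=1):
    """Sketch of Recursive Ant Colony Optimization for minimization.

    At each 'depth' the parameter ranges are discretized into cities,
    a plain ACO loop runs, then each range is narrowed around the best
    value found before recursing to the next depth.
    """
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(depths):
        grids = [[lo + (hi - lo) * i / (n_cities - 1) for i in range(n_cities)]
                 for lo, hi in bounds]
        tau = [[1.0] * n_cities for _ in bounds]          # pheromone per city
        for _ in range(n_iters):
            ants = []
            for _ in range(n_ants):
                idx = [rng.choices(range(n_cities), weights=t)[0] for t in tau]
                x = [g[i] for g, i in zip(grids, idx)]
                ants.append((f(x), idx, x))
            ants.sort(key=lambda a: a[0])
            for t in tau:                                  # evaporation
                for i in range(n_cities):
                    t[i] *= 0.9
            for val, idx, x in ants[:5]:                   # elite deposit
                for d, i in enumerate(idx):
                    tau[d][i] += 1.0 / (1.0 + val)
                if val < best_val:
                    best_val, best_x = val, x
        # next depth: shrink each parameter range around the best value
        bounds = [(max(lo, bx - shrink * (hi - lo)),
                   min(hi, bx + shrink * (hi - lo)))
                  for (lo, hi), bx in zip(bounds, best_x)]
    return best_x, best_val

# Toy check: minimum of the sphere function lies at the origin.
x, v = raco_minimize(lambda p: sum(t * t for t in p), [(-5, 5), (-5, 5)])
```

    The range reduction at each depth is what gives the method its faster-than-ACO convergence on the problems described above.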

  14. Optimal Materials and Deposition Technique Lead to Cost-Effective Solar Cell with Best-Ever Conversion Efficiency (Fact Sheet)

    SciTech Connect

    Not Available

    2012-07-01

    This fact sheet describes how the SJ3 solar cell was invented, explains how the technology works, and why it won an R&D 100 Award. Based on NREL and Solar Junction technology, the commercial SJ3 concentrator solar cell - with 43.5% conversion efficiency at 418 suns - uses a lattice-matched multijunction architecture that has near-term potential for cells with ~50% efficiency. Multijunction solar cells have higher conversion efficiencies than any other type of solar cell. But developers of utility-scale and space applications crave even better efficiencies at lower costs to be both cost-effective and able to meet the demand for power. The SJ3 multijunction cell, developed by Solar Junction with assistance from foundational technological advances by the National Renewable Energy Laboratory, has the highest efficiency to date - almost 2% absolute more than the current industry-standard multijunction cell - yet at a comparable cost. So what did it take to create this cell having 43.5% efficiency at 418-sun concentration? A combination of materials with carefully designed properties, a manufacturing technique allowing precise control, and an optimized device design.

  15. Real-time approximate optimal guidance laws for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Speyer, Jason L.; Feeley, Timothy; Hull, David G.

    1989-01-01

    An approach to optimal ascent guidance for a launch vehicle is developed using an expansion technique. The problem is to maximize the payload put into orbit subject to the equations of motion of a rocket over a rotating spherical earth. It is assumed that the thrust and gravitational forces dominate over the aerodynamic forces. It is shown that these forces can be separated by a small parameter epsilon, where epsilon is the ratio of the atmospheric scale height to the radius of the earth. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in a series where the zeroth-order term (epsilon = 0) can be obtained in closed form. The zeroth-order problem is that of putting maximum payload into orbit subject to the equations of motion of a rocket in a vacuum over a flat earth. The neglected inertial and aerodynamic terms are included in higher order terms of the expansion, which are determined from the solution of first-order linear partial differential equations requiring only quadrature integrations. These quadrature integrations can be performed rapidly, so that real-time approximate optimization can be used to construct the launch guidance law.
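    The expansion described above can be written schematically as follows (a hedged summary in generic notation, not the papers' exact equations: f^(0) collects the dominant thrust/vacuum/flat-earth dynamics and f^(1) the aerodynamic and remaining inertial perturbations):

```latex
% HJB (dynamic programming) equation for the optimal return function V:
0 = \min_{u}\left[ V_t + V_x^{\top} f(x,u;\epsilon) + L(x,u) \right],
\qquad f = f^{(0)}(x,u) + \epsilon\, f^{(1)}(x,u)

% Asymptotic expansion in the small parameter epsilon:
V(x,t;\epsilon) = V^{(0)}(x,t) + \epsilon\, V^{(1)}(x,t)
                + \epsilon^{2} V^{(2)}(x,t) + \cdots

% Zeroth order (epsilon = 0): vacuum, flat-earth problem, closed-form solvable:
0 = \min_{u}\left[ V^{(0)}_t + (V^{(0)}_x)^{\top} f^{(0)}(x,u) + L(x,u) \right]

% First order: a linear PDE for V^{(1)} along the zeroth-order optimal
% trajectories, reducible to quadrature integrations:
0 = V^{(1)}_t + (V^{(1)}_x)^{\top} f^{(0)}(x,u^{(0)})
  + (V^{(0)}_x)^{\top} f^{(1)}(x,u^{(0)})
```

    Since the higher-order corrections require only quadratures along known characteristics, they can be evaluated fast enough for the real-time guidance law described above.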

  16. Advancements in sensing and perception using structured lighting techniques :an LDRD final report.

    SciTech Connect

    Novick, David Keith; Padilla, Denise D.; Davidson, Patrick A. Jr.; Carlson, Jeffrey J.

    2005-09-01

    This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled ''Advancements in Sensing and Perception using Structured Lighting Techniques''. There is an ever-increasing need for robust, autonomous ground vehicles for counterterrorism and defense missions. Although there has been nearly 30 years of government-sponsored research, it is undisputed that significant advancements in sensing and perception are still necessary. We developed an innovative, advanced sensing technology for national security missions serving the Department of Energy, the Department of Defense, and other government agencies. The principal goal of this project was to develop an eye-safe, robust, low-cost, lightweight, 3D structured lighting sensor for use in broad-daylight outdoor applications. The market for this technology is wide open due to the unavailability of such a sensor. Currently available laser scanners are slow, bulky, heavy, expensive, fragile, short-range, sensitive to vibration (highly problematic for moving platforms), and unreliable for outdoor use in bright sunlight. Eye-safety issues are a primary concern for currently available laser-based sensors. Passive stereo-imaging sensors are available for 3D sensing but suffer from several limitations: they are computationally intensive, require a lighted environment (natural or man-made light source), and fail for many scenes or regions lacking texture or with ambiguous texture. Our approach leveraged the advanced capabilities of modern CCD camera technology and Center 6600's expertise in 3D world modeling, mapping, and analysis using structured lighting. We have a diverse customer base for indoor mapping applications, and this research extends our current technology's lifecycle and opens a new market base for outdoor 3D mapping. Applications include precision mapping, autonomous navigation, dexterous manipulation, surveillance and

  17. Statistically Optimal Approximations of Astronomical Signals: Implications to Classification and Advanced Study of Variable Stars

    NASA Astrophysics Data System (ADS)

    Andronov, I. L.; Chinarova, L. L.; Kudashkina, L. S.; Marsakova, V. I.; Tkachenko, M. G.

    2016-06-01

    We have elaborated a set of new algorithms and programs for advanced time series analysis of (generally) multi-component, multi-channel observations with irregularly spaced times of observation, which is a common case for large photometric surveys. These methods for periodogram, scalegram, wavelet, and autocorrelation analysis, as well as for "running" or "sub-interval" local approximations, were previously self-reviewed in 2003ASPC..292..391A. For an approximation of the phase light curves of nearly periodic pulsating stars, we use a Trigonometric Polynomial (TP) fit of the statistically optimal degree with initial period improvement using differential corrections (1994OAP.....7...49A). For the determination of parameters of "characteristic points" (minima, maxima, crossings of some constant value, etc.) we use a set of methods self-reviewed in 2005ASPC..335...37A. Results of the analysis of the catalogs compiled using these programs are presented in 2014AASP....4....3A. For more complicated signals, we use "phenomenological approximations" with "special shapes" based on functions defined on sub-intervals rather than on the complete interval. E.g., for the Algol-type stars we developed the NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv1212.6707A, 2015JASS...32..127A), which was compared to the common methods of Trigonometric Polynomial (TP) fit or local Algebraic Polynomial fit of a fixed or (alternately) statistically optimal degree. The method allows the determination of the minimal set of parameters required for the "General Catalogue of Variable Stars", as well as an extended set of phenomenological and astrophysical parameters which may be used for classification. In total, more than 1900 variable stars were studied in our group using these methods in the frame of the "Inter-Longitude Astronomy" campaign (2010OAP....23....8A) and the "Ukrainian Virtual Observatory" project (2012KPCB...28...85V).
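    For a fixed period and degree, the Trigonometric Polynomial fit mentioned above reduces to linear least squares on irregularly spaced times. A minimal sketch (the statistically optimal degree selection and differential period corrections of the cited papers are not reproduced):

```python
import math

def trig_poly_fit(t, y, period, degree):
    """Least-squares TP fit c0 + sum_k [a_k cos(k w t) + b_k sin(k w t)]
    for irregularly sampled data; returns [c0, a1, b1, a2, b2, ...]."""
    w = 2.0 * math.pi / period
    # Design matrix rows: [1, cos(k w t), sin(k w t)] for k = 1..degree
    rows = [[1.0] + [f(k * w * ti) for k in range(1, degree + 1)
                     for f in (math.cos, math.sin)] for ti in t]
    n = len(rows[0])
    # Normal equations A c = b, solved by Gaussian elimination with pivoting
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coeff = [0.0] * n
    for i in reversed(range(n)):
        coeff[i] = (b[i] - sum(A[i][j] * coeff[j]
                               for j in range(i + 1, n))) / A[i][i]
    return coeff

# Recover a noise-free harmonic from irregularly spaced samples
t = [0.1, 0.7, 1.3, 1.9, 2.2, 2.8, 3.5, 4.1, 5.0, 5.9]
y = [2.0 + 1.5 * math.cos(2 * math.pi * ti / 3.0)
     - 0.5 * math.sin(2 * math.pi * ti / 3.0) for ti in t]
coeff = trig_poly_fit(t, y, period=3.0, degree=1)
```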

  18. Optimal Design of Groundwater Remediation Problems under Uncertainty Using Probabilistic Multi-objective Evolutionary Technique

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Wu, J.

    2011-12-01

    Previous work in the field of multi-objective optimization under uncertainty has been concerned with the probabilistic multi-objective algorithm itself: how to effectively evaluate an estimate of uncertain objectives and identify a set of reliable Pareto-optimal solutions. However, the design of a robust and reliable groundwater remediation system encounters major difficulties owing to the inherent uncertainty of hydrogeological parameters such as hydraulic conductivity (K). Thus, we need to reduce the uncertainty associated with the site characteristics of the contaminated aquifers. In this study, we first use Sequential Gaussian Simulation (SGSIM) to generate 1000 conditional realizations of lnK based on sampled conditioning data acquired by field tests. It is worth noting that the cost of field testing often weighs heavily upon the remediation cost and must thus be taken into account in the tradeoff between solution reliability and remedial cost optimality. In this situation, we perform Monte Carlo simulation to analyze the uncertainty of the lnK realizations associated with different numbers of conditioning data points. The results indicate that the uncertainty of the site characteristics and of the contaminant concentration output from the transport model decreases and then tends toward stabilization as conditioning data are added. This study presents a probabilistic multi-objective evolutionary algorithm (PMOEA) that integrates a noisy genetic algorithm (NGA) and a probabilistic multi-objective genetic algorithm (MOGA). The evident difference between the deterministic MOGA and the probabilistic MOGA is the use of probabilistic Pareto domination ranking and a niche technique to ensure that each solution found is reliable and robust. The proposed algorithm is then evaluated through a synthetic pump-and-treat (PAT) groundwater remediation test case. The 1000 lnK realizations generated by SGSIM with an appropriate number of conditioning data (30
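    The diminishing returns from added conditioning data can be illustrated with a deliberately simplified scalar-Gaussian stand-in for the SGSIM conditioning (not the study's geostatistical machinery):

```python
def posterior_variance(prior_var, noise_var, n_obs):
    """Posterior variance of a Gaussian parameter after n_obs independent
    noisy measurements: 1 / (1/prior_var + n_obs/noise_var).
    A simplified analogue of how conditioning data reduce lnK uncertainty."""
    return 1.0 / (1.0 / prior_var + n_obs / noise_var)

# Uncertainty drops steeply at first, then flattens out: each additional
# conditioning point buys less reduction than the one before it.
variances = [posterior_variance(prior_var=4.0, noise_var=1.0, n_obs=n)
             for n in (0, 5, 10, 20, 30)]
```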

  19. Investigation to advance prediction techniques of the low-speed aerodynamics of V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Maskew, B.; Strash, D.; Nathman, J.; Dvorak, F. A.

    1985-01-01

    A computer program, VSAERO, has been applied to a number of V/STOL configurations with a view to advancing prediction techniques for the low-speed aerodynamic characteristics. The program couples a low-order panel method with surface streamline calculation and integral boundary layer procedures. The panel method--which uses piecewise constant source and doublet panels--includes an iterative procedure for wake shape and models the boundary layer displacement effect using the source transpiration technique. Certain improvements to a basic vortex tube jet model were installed in the code prior to evaluation. Very promising results were obtained for surface pressures near a jet issuing at 90 deg from a flat plate. A solid core model was used in the initial part of the jet with a simple entrainment model. Preliminary representation of the downstream separation zone significantly improved the correlation. The program accurately predicted the pressure distribution inside the inlet on the Grumman 698-411 design at a range of flight conditions. Furthermore, coupled viscous/potential flow calculations gave very close correlation with experimentally determined operational boundaries dictated by the onset of separation inside the inlet. Experimentally observed degradation of these operational boundaries between nacelle-alone tests and tests on the full configuration was also indicated by the calculation. Application of the program to the General Dynamics STOL fighter design was equally encouraging. Very close agreement was observed between experiment and calculation for the effects of power on pressure distribution, lift, and lift curve slope.

  20. Advancing the Frontiers in Nanocatalysis, Biointerfaces, and Renewable Energy Conversion by Innovations of Surface Techniques

    SciTech Connect

    Somorjai, G.A.; Frei, H.; Park, J.Y.

    2009-07-23

    The challenge of chemistry in the 21st century is to achieve 100% selectivity of the desired product molecule in multipath reactions ('green chemistry') and develop renewable energy based processes. Surface chemistry and catalysis play key roles in this enterprise. Development of in situ surface techniques such as high-pressure scanning tunneling microscopy, sum frequency generation (SFG) vibrational spectroscopy, time-resolved Fourier transform infrared methods, and ambient pressure X-ray photoelectron spectroscopy enabled the rapid advancement of three fields: nanocatalysts, biointerfaces, and renewable energy conversion chemistry. In materials nanoscience, synthetic methods have been developed to produce monodisperse metal and oxide nanoparticles (NPs) in the 0.8-10 nm range with controlled shape, oxidation states, and composition; these NPs can be used as selective catalysts since chemical selectivity appears to be dependent on all of these experimental parameters. New spectroscopic and microscopic techniques have been developed that operate under reaction conditions and reveal the dynamic change of molecular structure of catalysts and adsorbed molecules as the reactions proceed with changes in reaction intermediates, catalyst composition, and oxidation states. SFG vibrational spectroscopy detects amino acids, peptides, and proteins adsorbed at hydrophobic and hydrophilic interfaces and monitors the change of surface structure and interactions with coadsorbed water. Exothermic reactions and photons generate hot electrons in metal NPs that may be utilized in chemical energy conversion. The photosplitting of water and carbon dioxide, an important research direction in renewable energy conversion, is discussed.

  1. Advanced Modeling Techniques to Study Anthropogenic Influences on Atmospheric Chemical Budgets

    NASA Technical Reports Server (NTRS)

    Mathur, Rohit

    1997-01-01

    This research work is a collaborative effort between research groups at MCNC and the University of North Carolina at Chapel Hill. The overall objective of this research is to improve the level of understanding of the processes that determine the budgets of chemically and radiatively active compounds in the atmosphere through development and application of advanced methods for calculating the chemical change in atmospheric models. The research performed during the second year of this project focused on four major aspects: (1) The continued development and refinement of multiscale modeling techniques to address the issue of the disparate scales of the physico-chemical processes that govern the fate of atmospheric pollutants; (2) Development and application of analysis methods utilizing process and mass balance techniques to increase the interpretive powers of atmospheric models and to aid in complementary analysis of model predictions and observations; (3) Development of meteorological and emission inputs for initial application of the chemistry/transport model over the north Atlantic region; and, (4) The continued development and implementation of a totally new adaptive chemistry representation that changes the details of what is represented as the underlying conditions change.
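    The mass-balance analysis idea above can be illustrated with a minimal one-box chemical budget (a generic sketch, not the project's chemistry/transport model; the emission and loss rates are illustrative):

```python
def box_model(emission, loss_rate, c0=0.0, dt=0.01, t_end=10.0):
    """One-box mass balance dC/dt = E - k*C, integrated with forward Euler.

    Tracking such source (E) and sink (k*C) terms separately is the core
    of the process/mass-balance analysis used to interpret model output.
    """
    c, t = c0, 0.0
    while t < t_end:
        c += dt * (emission - loss_rate * c)
        t += dt
    return c

# The budget approaches its steady state E/k = 4.0 as sources balance sinks
c = box_model(emission=2.0, loss_rate=0.5)
```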

  2. Development of Advanced In-Situ Techniques for Chemistry Monitoring and Corrosion Mitigation in SCWO Environments

    SciTech Connect

    Macdonald, D. D.; Lvov, S. N.

    2000-03-31

    This project is developing sensing technologies and corrosion monitoring techniques for use in supercritical water oxidation (SCWO) systems, which reduce the volume of mixed low-level nuclear waste by oxidizing organic components in a closed-cycle system where CO2 and other gaseous oxides are produced, leaving the radioactive elements concentrated in ash. The technique uses water at supercritical temperatures under highly oxidizing conditions, maintained by a high fugacity of molecular oxygen in the system, which causes high corrosion rates of even the most corrosion-resistant reactor materials. This project addresses the corrosion shortcoming through development of (a) advanced electrodes and sensors for in situ potentiometric monitoring of pH in high-subcritical and supercritical aqueous solutions; (b) an approach for evaluating the association constants for 1-1 aqueous electrolytes using a flow-through electrochemical thermocell; (c) an electrochemical noise sensor for the in situ measurement of corrosion rate in subcritical and supercritical aqueous systems; and (d) a model for estimating the effect of pressure on reaction rates, including corrosion reactions, in high-subcritical and supercritical aqueous systems. The project achieved all objectives, except for installing some of the sensors in a fully operating SCWO system.

  3. Advanced system identification techniques for wind turbine structures with special emphasis on modal parameters

    SciTech Connect

    Bialasiewicz, J.T.

    1995-06-01

    The goal of this research is to develop advanced system identification techniques that can be used to accurately measure the frequency response functions of a wind-turbine structure immersed in wind noise. To allow for accurate identification, the authors have developed a special test signal called the Pseudo-Random Binary Sequence (PRBS). The Matlab program that generates this signal allows the user to interactively tailor its parameters for the frequency range of interest based on the response of the wind turbine under test. By controlling NREL's Mobile Hydraulic Shaker System, which is attached to the wind turbine structure, the PRBS signal produces the wide-band excitation necessary to perform system identification in the presence of wind noise. The techniques presented here will enable researchers to obtain modal parameters from an operating wind turbine, including frequencies, damping coefficients, and mode shapes. More importantly, the algorithms they have developed and tested (so far using input-output data from a simulated structure) permit state-space representation of the system under test, particularly the modal state-space representation. This is the only system description that reveals the internal behavior of the system, such as the interaction between the physical parameters, and which, in contrast to transfer functions, is valid for non-zero initial conditions.

  4. Advanced 3D-Sonographic Imaging as a Precise Technique to Evaluate Tumor Volume

    PubMed Central

    Pflanzer, R.; Hofmann, M.; Shelke, A.; Habib, A.; Derwich, W.; Schmitz-Rixen, T.; Bernd, A.; Kaufmann, R.; Bereiter-Hahn, J.

    2014-01-01

    Determination of tumor volume in subcutaneously inoculated xenograft models is a standard procedure for clinical and preclinical evaluation of tumor response to treatment. Practitioners frequently use a hands-on caliper method in conjunction with a simplified formula to assess tumor volume. Non-invasive and more precise techniques such as MR or (μ)CT imaging exist but come with various drawbacks in terms of radiation, complex setup, or elevated cost of investigation. Therefore, we propose an advanced three-dimensional sonographic imaging technique to determine small tumor volumes in xenografts with high precision and minimized observer variability. We present a study on xenograft carcinoma tumors from which volumes and shapes were calculated with the standard caliper method as well as with a clinically available three-dimensional ultrasound scanner and subsequent processing software. Statistical analysis reveals the suitability of this non-invasive approach for the purpose of a quick and precise calculation of tumor volume in small rodents. PMID:25500076
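    The caliper method typically uses the simplified formula V = L·W²/2, whereas a 3D reconstruction is closer to an ellipsoid model; a small sketch of the two (the dimensions below are illustrative, and the exact formulas used in the study are not stated in this abstract):

```python
import math

def caliper_volume(length_mm, width_mm):
    """Common simplified caliper formula: V = (L * W^2) / 2."""
    return length_mm * width_mm ** 2 / 2.0

def ellipsoid_volume(length_mm, width_mm, height_mm):
    """Ellipsoid model V = (pi/6) * L * W * H, using all three axes
    as a 3D reconstruction can measure them."""
    return math.pi / 6.0 * length_mm * width_mm * height_mm

# The caliper formula implicitly assumes height equals width, so for a
# flattened tumor it overestimates relative to the three-axis model.
v_caliper = caliper_volume(10.0, 8.0)
v_3d = ellipsoid_volume(10.0, 8.0, 6.0)
```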

  5. Development of heat transfer enhancement techniques for external cooling of an advanced reactor vessel

    NASA Astrophysics Data System (ADS)

    Yang, Jun

    Nucleate boiling is a well-recognized means for passively removing high heat loads (up to ~10^6 W/m^2) generated by a molten reactor core under severe accident conditions while maintaining a relatively low reactor vessel temperature (<800 °C). With the upgrade and development of advanced power reactors, however, enhancing the nucleate boiling rate and its upper limit, the Critical Heat Flux (CHF), becomes the key to the success of external passive cooling of a reactor vessel undergoing a core disruption accident. In the present study, two boiling heat transfer enhancement methods have been proposed, experimentally investigated, and theoretically modelled. The first method involves the use of a suitable surface coating to enhance the downward-facing boiling rate and CHF limit so as to substantially increase the possibility of the reactor vessel surviving a high thermal load attack. The second method involves the use of an enhanced vessel/insulation design to facilitate the process of steam venting through the annular channel formed between the reactor vessel and the insulation structure, which in turn further enhances both the boiling rate and the CHF limit. Among the various available surface coating techniques, metallic micro-porous layer surface coating has been identified as an appropriate coating material for use in External Reactor Vessel Cooling (ERVC), based on the overall consideration of enhanced performance, durability, and the ease of manufacturing and application. Since no previous research work had explored the feasibility of applying such a metallic micro-porous layer surface coating on a large, downward-facing, curved surface such as the bottom head of a reactor vessel, a series of characterization tests and experiments were performed in the present study to determine a suitable coating material composition and application method. Using the optimized metallic micro-porous surface coatings, quenching and steady-state boiling experiments were conducted in the Sub
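    The heat-load scale quoted above is consistent with Zuber's classical pool-boiling CHF correlation; a hedged sketch (this is the flat, upward-facing form, and downward-facing curved ERVC geometry lowers the limit, which is precisely what the coatings and vessel/insulation design aim to counteract):

```python
import math

def zuber_chf(h_fg, rho_v, rho_l, sigma, g=9.81):
    """Zuber pool-boiling CHF (upward-facing surface), W/m^2:
       q'' = 0.131 * h_fg * rho_v^0.5 * (sigma * g * (rho_l - rho_v))^0.25
    Shown only to motivate the order-of-magnitude heat flux; it is not
    the downward-facing model developed in the study."""
    return (0.131 * h_fg * math.sqrt(rho_v)
            * (sigma * g * (rho_l - rho_v)) ** 0.25)

# Approximate properties of saturated water at atmospheric pressure
q = zuber_chf(h_fg=2.257e6, rho_v=0.60, rho_l=958.0, sigma=0.0589)
```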

  6. EPS in Environmental Microbial Biofilms as Examined by Advanced Imaging Techniques

    NASA Astrophysics Data System (ADS)

    Neu, T. R.; Lawrence, J. R.

    2006-12-01

Biofilm communities are highly structured associations of cellular and polymeric components which are involved in biogenic and geogenic environmental processes. Furthermore, biofilms are also important in medical (infection), industrial (biofouling) and technological (biofilm engineering) processes. The interfacial microbial communities in a specific habitat are highly dynamic and change according to the environmental parameters, affecting not only the cellular but also the polymeric constituents of the system. Through their EPS, biofilms interact with dissolved, colloidal and particulate compounds from the bulk water phase. For a long time the focus in biofilm research was on the cellular constituents, while the polymer matrix was rather neglected. The polymer matrix is produced not only by different bacteria and archaea but also by eukaryotic micro-organisms such as algae and fungi. The mostly unidentified mixture of EPS compounds is responsible for many biofilm properties and is involved in biofilm functionality. The chemistry of the EPS matrix represents a mixture of polymers including polysaccharides, proteins, nucleic acids, neutral polymers, charged polymers, amphiphilic polymers and refractory microbial polymers. The analysis of the EPS may be done destructively, by means of extraction and subsequent chemical analysis, or in situ, by means of specific probes in combination with advanced imaging. In the last 15 years laser scanning microscopy (LSM) has become established as an indispensable technique for studying microbial communities. LSM with 1-photon and 2-photon excitation in combination with fluorescence techniques allows 3-dimensional investigation of fully hydrated, living biofilm systems. This approach is able to reveal data on biofilm structural features as well as biofilm processes and interactions. The fluorescent probes available allow the quantitative assessment of cellular as well as polymer distribution. For this purpose

  7. Advanced radiation techniques for inspection of diesel engine combustion chamber materials components. Final report

    SciTech Connect

    1995-10-09

Heavy duty truck engines must meet stringent life cycle cost and regulatory requirements. Meeting these requirements has resulted in convergence on 4-stroke, 6-in-line, turbocharged, and after-cooled engines with direct-injection combustion systems. These engines provide much higher efficiencies (42%, fuel consumption 200 g/kW-hr) than automotive engines (31%, fuel consumption 270 g/kW-hr), but at higher initial cost. Significant near-term diesel engine improvements are necessary, spurred by continuing competition, Middle East oil supply concerns, and Congressional legislation. As a result of these trends and pressures, Caterpillar has been actively pursuing a low-fuel-consumption engine research program with emphasis on product quality through process control and product inspection. The goal of this project is to combine the nondestructive evaluation and computational resources and expertise available at LLNL with the diesel engine and manufacturing expertise of the Caterpillar Corporation to develop in-process monitoring and inspection techniques for diesel engine combustion chamber components and materials. Early development of these techniques will assure the optimization of the manufacturing process through the design/inspection interface. The transition from the development stage to the manufacturing stage requires both a thorough understanding of the processes and a way of verifying conformance to process standards. NDE is one of the essential tools for accomplishing both elements, and in this project it will be integrated with Caterpillar's technological and manufacturing expertise to accomplish the project goals.

  8. Evaluation and optimization of the structural parameter of diesel nozzle basing on synchrotron radiation imaging techniques

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Gao, Y.; Gong, H.; Li, L.

    2016-04-01

Lacking efficient methods, industry currently uses only one parameter, fuel flow rate, to evaluate nozzle quality, which is far from sufficient to satisfy current emission regulations worldwide. By utilizing high-energy synchrotron X-rays at the Shanghai Synchrotron Radiation Facility (SSRF), together with imaging techniques, 3D models of two nozzles with the same design dimensions were established, and the influence of parameter fluctuations in the azimuthal direction was analyzed in detail. Results indicate that, due to orifice misalignment, even with the same design dimension the inlet rounding radius of the orifices differs greatly, and its fluctuation in the azimuthal direction is also large. This difference causes variation in the flow characteristics at the orifice outlet and in turn affects the spray characteristics. The study also indicates that more precise investigation of, and insight into, the evaluation and optimization of diesel nozzle structural parameters is needed.

  9. Optimization of a wood dryer kiln using the mixed integer programming technique: A case study

    SciTech Connect

    Gustafsson, S.I.

    1999-07-01

When wood is to be utilized as a raw material for furniture, buildings, etc., it must be dried from approximately 100% to 6% moisture content. This is achieved at least partly in a drying kiln. Heat for this purpose is provided by electrical means, or by steam from boilers fired with wood chips or oil. By making a close examination of monitored values from an actual drying kiln, it has been possible to optimize the use of steam and electricity using the so-called mixed integer programming technique. Owing to the operating schedule for the drying kiln, it has been necessary to divide the drying process into very short time intervals, i.e., a number of minutes. Since a drying cycle takes about two or three weeks, this presents a considerable mathematical problem to be solved.
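The structure of such a mixed-integer formulation can be sketched as follows. All prices, demands and boiler limits below are invented for illustration, and for a handful of intervals the binary boiler schedule is simply enumerated instead of handed to a MIP solver:

```python
from itertools import product

# Toy kiln scheduling problem: in each time interval the heat demand is met
# by steam (cheap, but the boiler has a minimum load when on) and/or
# electricity (flexible but expensive). The boiler on/off state per interval
# is the binary variable of the MIP.

DEMAND = [50.0, 80.0, 60.0, 90.0]  # heat demand per interval, kWh (assumed)
STEAM_COST = 0.03                  # cost per kWh of steam heat (assumed)
ELEC_COST = 0.10                   # cost per kWh of electric heat (assumed)
BOILER_MIN = 40.0                  # minimum steam output when boiler is on
BOILER_MAX = 100.0                 # maximum steam output

def schedule_cost(on_off):
    """Cost of the cheapest dispatch for a given boiler on/off schedule."""
    total = 0.0
    for on, demand in zip(on_off, DEMAND):
        if on:
            # Steam output must respect the boiler limits; any shortfall
            # (or the whole demand, if the boiler is off) is electric.
            steam = min(max(demand, BOILER_MIN), BOILER_MAX)
            elec = max(0.0, demand - steam)
        else:
            steam, elec = 0.0, demand
        total += steam * STEAM_COST + elec * ELEC_COST
    return total

def optimise():
    best = min(product([0, 1], repeat=len(DEMAND)), key=schedule_cost)
    return best, schedule_cost(best)
```

With the drying process divided into minutes over a multi-week cycle, the number of binary variables makes enumeration hopeless, which is exactly why the paper resorts to a MIP solver.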

  10. Engine Yaw Augmentation for Hybrid-Wing-Body Aircraft via Optimal Control Allocation Techniques

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.; Yoo, Seung-Yeun

    2011-01-01

Asymmetric engine thrust was implemented in a hybrid-wing-body non-linear simulation to reduce the amount of aerodynamic surface deflection required for yaw stability and control. Hybrid-wing-body aircraft are especially susceptible to yaw surface deflection due to their decreased bare-airframe yaw stability, resulting from the lack of a large vertical tail aft of the center of gravity. Reduced surface deflection, especially for trim during cruise flight, could reduce the fuel consumption of future aircraft. Optimal control allocation techniques were used to create an add-on control law that tracks total thrust and yaw moment commands with an emphasis on not degrading the baseline system. Implementation of engine yaw augmentation is shown and feasibility is demonstrated in simulation with a potential drag reduction of 2 to 4 percent. Future flight tests are planned to demonstrate feasibility in a flight environment.
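The core allocation idea can be illustrated with a minimal sketch (the geometry and thrust limits below are invented, not the NASA control law). Two engines at lateral moment arm ±ARM produce total thrust t_l + t_r and yaw moment (t_r - t_l)·ARM; for this square, unconstrained problem the least-squares allocation reduces to a closed form, which is then clipped to the thrust limits:

```python
# Two-effector thrust/yaw allocation sketch (all numbers illustrative).

ARM = 5.0                  # lateral distance from centerline to each engine
T_MIN, T_MAX = 0.0, 100.0  # per-engine thrust limits (assumed)

def allocate(thrust_cmd, yaw_cmd):
    """Split total-thrust and yaw-moment commands between two engines."""
    delta = yaw_cmd / ARM              # required thrust difference t_r - t_l
    t_left = (thrust_cmd - delta) / 2.0
    t_right = (thrust_cmd + delta) / 2.0
    clip = lambda t: min(max(t, T_MIN), T_MAX)
    return clip(t_left), clip(t_right)
```

A production allocator would redistribute any saturation error among the remaining effectors (and the aerodynamic surfaces) rather than clipping each engine independently, which is where the optimal-allocation machinery earns its keep.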

  11. Probabilistic risk assessment techniques help in identifying optimal equipment design for in-situ vitrification

    SciTech Connect

    Lucero, V.; Meale, B.M.; Purser, F.E.

    1990-01-01

The analysis discussed in this paper was performed as part of the buried waste remediation efforts at the Idaho National Engineering Laboratory (INEL). The specific type of remediation discussed herein involves a thermal treatment process for converting contaminated soil and waste into a stable, chemically inert form. Models of the proposed process were developed using probabilistic risk assessment (PRA) fault tree and event tree modeling techniques. The models were used to determine the appropriateness of the conceptual design by identifying potential hazards of system operations. Additional models were developed to represent the reliability aspects of the system components. By performing various sensitivity analyses with the models, optimal design modifications are being identified to substantiate an integrated, cost-effective design representing minimal risk to the environment and/or public with maximum component reliability. 4 figs.
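The arithmetic behind fault-tree evaluation is simple when basic events are assumed independent: an AND gate multiplies probabilities and an OR gate combines them as 1 − ∏(1 − p). The gate layout and probabilities below are invented for illustration, not taken from the INEL models:

```python
# Toy fault-tree evaluation under the independence assumption.

def p_and(*probs):
    """Probability that all input events occur (AND gate)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(*probs):
    """Probability that at least one input event occurs (OR gate)."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

# Hypothetical top event: off-gas release requires filter failure AND
# (power loss OR containment seal failure).
p_top = p_and(0.01, p_or(0.05, 0.02))
```

Sensitivity analysis of the kind described above then amounts to perturbing each basic-event probability and observing the change in the top-event probability.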

  12. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca

In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements demand the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular-dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles, with a capability for local angular refinement, have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular-dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain
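The role of a quadrature set can be illustrated in slab (1-D) geometry, where a standard choice is Gauss-Legendre quadrature: N ordinates mu_n and weights w_n that integrate angular moments up to order 2N−1 exactly. The sketch below constructs them by Newton iteration on the Legendre polynomial, a textbook construction and not PENTRAN's actual quadrature generator:

```python
import math

def legendre(n, x):
    """Return (P_n(x), P_n'(x)) via the three-term recurrence."""
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    dp = n * (x * p1 - p0) / (x * x - 1.0)  # valid for interior points
    return p1, dp

def gauss_legendre(n):
    """Ordinates and weights of n-point Gauss-Legendre quadrature on [-1, 1]."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # standard initial guess
        for _ in range(50):
            p, dp = legendre(n, x)
            dx = p / dp
            x -= dx
            if abs(dx) < 1e-14:
                break
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights
```

In slab-geometry SN transport the weights sum to 2 (the full angular range in mu), and the exactness of low-order moments is what preserves the scattering source.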

  13. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques.

    PubMed

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2016-04-21

We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian On-Board Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals could be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm was as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to extracting markerless breathing signals from CBCT projections for thoracic and abdominal patients. PMID:27008349

  14. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    NASA Astrophysics Data System (ADS)

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E.; Lo, Yeh-Chi

    2016-04-01

We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian On-Board Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals could be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm was as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to extracting markerless breathing signals from CBCT projections for thoracic and abdominal patients.
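The local z-normalization step can be sketched as follows (the window size and test data are invented; the paper's adaptive, robust variant is more elaborate). Each sample is rescaled by the mean and standard deviation of its surrounding window, so a weak oscillation riding on a strong or varying baseline becomes a unit-scale signal:

```python
import math

def local_z_normalize(signal, half_window=10):
    """Rescale each sample by the mean/std of its local window (sketch)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half_window), min(len(signal), i + half_window + 1)
        window = signal[lo:hi]
        mean = sum(window) / len(window)
        var = sum((v - mean) ** 2 for v in window) / len(window)
        std = math.sqrt(var) or 1.0  # guard against a constant window
        out.append((signal[i] - mean) / std)
    return out
```

After this filtering, a small-amplitude breathing oscillation and a large-amplitude one are brought to comparable scale, which is what lets the subsequent optimization track the signal uniformly across the AS image.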

  15. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open source, knowledge sharing and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  16. Recent Advances in Stable Isotope Techniques for N2O Source Partitioning in Soils

    NASA Astrophysics Data System (ADS)

    Baggs, E.; Mair, L.; Mahmood, S.

    2007-12-01

    The use of 13C, 15N and 18O enables us to overcome uncertainties associated with soil C and N processes and to assess the links between species diversity and ecosystem function. Recent advances in stable isotope techniques enable determination of process rates, and are fundamental for examining interactions between C and N cycles. Here we will introduce the 15N-, 18O- and 13C-enrichment techniques we have developed to distinguish between different N2O-producing processes in situ in soils, presenting selected results, and will critically assess their potential, alone and in combination with molecular techniques, to help address key research questions for soil biogeochemistry and microbial ecology. We have developed 15N- 18O-enrichment techniques to distinguish between, and to quantify, N2O production during ammonia oxidation, nitrifier denitrification and denitrification. This provides a great advantage over natural abundance approaches as it enables quantification of N2O from each microbial source, which can be coupled with quantification of N2 production, and used to examine interactions between different processes and cycles. These approaches have also provided new insights into the N cycle and how it interacts with the C cycle. For example, we now know that ammonia oxidising bacteria significantly contribute to N2O emissions from soils, both via the traditionally accepted ammonia oxidation pathway, and also via denitrification (nitrifier denitrification) which can proceed even under aerobic conditions. We are also linking emissions from each source to diversity and activity of relevant microbial functional groups, for example through the development and application of a specific nirK primer for the nitrite reductase in ammonia oxidising bacteria. 
Recently, isotopomers have been proposed as an alternative for source partitioning of N2O at natural abundance levels; this approach offers the potential to investigate N2O production from nitrate ammonification, and overcomes the
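The mass balance underlying two-source isotopic partitioning is worth making explicit. If a sample's isotope signature is a linear mixture of two end-members, delta_sample = f·delta_A + (1 − f)·delta_B, the fraction from source A follows directly; the end-member values below are invented for illustration, not measurements:

```python
# Standard two-end-member isotope mixing calculation.

def source_fraction(delta_sample, delta_a, delta_b):
    """Fraction of the mixture contributed by end-member A."""
    return (delta_sample - delta_b) / (delta_a - delta_b)

# Example: N2O with d15N = -20 permil, between a hypothetical nitrification
# end-member at -45 permil and a denitrification end-member at -10 permil.
f_nitrification = source_fraction(-20.0, -45.0, -10.0)
```

Enrichment approaches like the 15N/18O techniques above sidestep the main weakness of this calculation, namely that natural-abundance end-member values overlap and vary between soils.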

  17. Application of Optimization Techniques to Design of Unconventional Rocket Nozzle Configurations

    NASA Technical Reports Server (NTRS)

    Follett, W.; Ketchum, A.; Darian, A.; Hsu, Y.

    1996-01-01

Several current rocket engine concepts, such as the bell-annular tri-propellant engine and the linear aerospike proposed for the X-33, require unconventional three-dimensional rocket nozzles which must conform to rectangular or sector-shaped envelopes to meet integration constraints. These types of nozzles lie outside the current experience database; therefore, the application of efficient design methods for these propulsion concepts is critical to the success of launch vehicle programs. The objective of this work is to optimize several different nozzle configurations, including two- and three-dimensional geometries. The methodology includes coupling computational fluid dynamics (CFD) analysis to genetic algorithms and Taguchi methods, as well as implementation of a streamline tracing technique. Results are shown for several geometries, including three-dimensional thruster nozzles with round or super-elliptic throats and rectangular exits, two- and three-dimensional thrusters installed within a bell nozzle, and three-dimensional thrusters with round throats and sector-shaped exits. Due to the novel designs considered for this study, there is little experience which can be used to guide the effort and limit the design space. With a nearly infinite parameter space to explore, simple parametric design studies cannot possibly search the entire design space within the time frame required to impact the design cycle. For this reason, robust and efficient optimization methods are required to explore and exploit the design space to achieve high-performance engine designs. Five case studies which examine the application of various techniques in the engineering environment are presented in this paper.
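The genetic-algorithm loop described above can be sketched in miniature. In the real workflow each fitness evaluation is a CFD run; here the expensive objective is replaced by a cheap analytic stand-in over two hypothetical geometry parameters, so everything below is illustrative only:

```python
import random

# Minimal genetic algorithm: selection, averaging crossover, Gaussian mutation.
random.seed(0)

def fitness(genes):
    # Stand-in for a CFD-computed performance metric; peak at (0.3, 0.7).
    x, y = genes
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

def evolve(pop_size=30, generations=60, mutation=0.1):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(ga + gb) / 2 for ga, gb in zip(a, b)]        # crossover
            child = [g + random.gauss(0, mutation) for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the GA only ranks designs, it tolerates the noisy, non-smooth objectives that CFD-in-the-loop optimization produces, which is why it suits a design space with no prior experience to constrain it.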

  18. Development and in vitro evaluation of oral controlled release formulations of celecoxib using optimization techniques.

    PubMed

    Chandran, Sajeev; Ravi, Punnarao; Saha, Ranendra N

    2006-07-01

The objective of this study was to develop controlled release matrix-embedded formulations of celecoxib (CCX) as a candidate drug, using hydroxypropyl methylcellulose (HPMC) and ethyl cellulose (EC), either alone or in combination, with optimization techniques such as the polynomial method and composite design. This enables the development of controlled release formulations with predictable and better release characteristics in a smaller number of trials. Controlled release matrix tablets of CCX were prepared by the wet granulation method. The in vitro release rate studies were carried out in a USP dissolution apparatus (paddle method) in 900 ml of sodium phosphate buffer (pH 7.4) with 1% v/v Tween-80. The in vitro drug release data were suitably transformed and used to develop mathematical models using first-order polynomial equation and composite design techniques of optimization. In the formulations prepared using HPMC alone, the release rate decreased as the polymer proportion in the matrix base was increased, whereas in the formulations prepared using EC alone, only a marginal difference was observed in the release rate upon increasing the polymer proportion. In the formulations containing a combination of HPMC and EC, the release of the drug was found to depend on the relative proportions of HPMC and EC used in the tablet matrix. The release of the drug from these formulations was extended up to 21 h, indicating that they can serve as once-daily controlled release formulations for CCX. Mathematical analysis of the release kinetics indicates a near-Fickian release character for most of the designed formulations. The mathematical equation developed by transforming the in vitro release data using the composite design model showed better correlation between observed and predicted t(50%) (time required for 50% of the drug to be released) than the first-order polynomial equation model.
The equation thus developed can be used to predict the release characteristics of the
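The first-order polynomial modelling step amounts to a least-squares fit of y = b0 + b1·x1 + b2·x2 to formulation/response data, where x1 and x2 could stand for the HPMC and EC proportions and y for a response such as t(50%). The sketch below solves the normal equations by hand; all data points in the test are invented:

```python
# Least-squares fit of a first-order (linear) two-factor polynomial model.

def fit_first_order(xs, ys):
    """Fit y = b0 + b1*x1 + b2*x2 via the normal equations."""
    rows = [[1.0, x1, x2] for x1, x2 in xs]
    # Build X^T X and X^T y.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Forward elimination (no pivoting; X^T X is positive definite here).
    for i in range(3):
        piv = xtx[i][i]
        for j in range(i + 1, 3):
            f = xtx[j][i] / piv
            xtx[j] = [a - f * b for a, b in zip(xtx[j], xtx[i])]
            xty[j] -= f * xty[i]
    # Back substitution.
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, 3))) / xtx[i][i]
    return beta
```

A composite design adds center and axial points so that quadratic and interaction terms can be estimated as well, which is why it predicted t(50%) better than the purely first-order model.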

  19. Development of linear and nonlinear hand-arm vibration models using optimization and linearization techniques.

    PubMed

    Rakheja, S; Gurram, R; Gouw, G J

    1993-10-01

Hand-arm vibration (HAV) models serve as an effective tool to assess the vibration characteristics of the hand-tool system and to evaluate the attenuation performance of vibration isolation mechanisms. This paper describes a methodology to identify the parameters of HAV models, whether linear or nonlinear, using mechanical impedance data and a nonlinear-programming-based optimization technique. Three- and four-degrees-of-freedom (DOF) linear, piecewise-linear and nonlinear HAV models are formulated and analyzed to yield impedance characteristics in the 5-1000 Hz frequency range. A local equivalent linearization algorithm, based upon the principle of energy similarity, is implemented to simulate the nonlinear HAV models. Optimization methods are employed to identify the model parameters such that the magnitude and phase errors between the computed and measured impedance characteristics are minimized over the entire frequency range. The effectiveness of the proposed method is demonstrated through derivations of models that correlate with the measured X-axis impedance characteristics of the hand-arm system proposed by ISO. The results of the study show that a linear model cannot predict the impedance characteristics over the entire frequency range, while a piecewise-linear model yields an accurate estimation. PMID:8253830
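The parameter-identification step can be illustrated with a single-DOF toy: a mass-damper-spring model has driving-point impedance Z(w) = c + j(mw − k/w), and the model parameters are chosen to minimize the impedance error against measurements. Here synthetic "measured" magnitudes generated from known parameters are recovered by a brute-force grid search rather than the paper's nonlinear programming; all numbers are illustrative:

```python
import math

# Single-DOF impedance model fit by exhaustive grid search (sketch).

FREQS = [2 * math.pi * f for f in (5, 10, 20, 50, 100, 200, 500, 1000)]

def impedance_mag(m, c, k, w):
    """|Z(w)| for Z(w) = c + j*(m*w - k/w)."""
    return math.hypot(c, m * w - k / w)

TRUE = (1.5, 40.0, 20000.0)  # m [kg], c [Ns/m], k [N/m] (assumed)
MEASURED = [impedance_mag(*TRUE, w) for w in FREQS]

def identify():
    best, best_err = None, float("inf")
    for m in (0.5, 1.0, 1.5, 2.0):
        for c in (20.0, 40.0, 60.0):
            for k in (10000.0, 20000.0, 30000.0):
                err = sum(abs(impedance_mag(m, c, k, w) - z)
                          for w, z in zip(FREQS, MEASURED))
                if err < best_err:
                    best, best_err = (m, c, k), err
    return best
```

For the 3- and 4-DOF models of the paper the parameter count makes grid search impractical, hence the move to a nonlinear programming formulation with magnitude and phase errors in the objective.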

  20. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. This approach addresses the challenge of better fitting riverine ecosystem requirements alongside existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN, while the human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve operational strategies, producing downstream flows that meet both human and ecosystem needs. The methodology is attractive to water resources managers because the wide spread of Pareto-front (optimal) solutions allows decision makers to easily determine the best compromise through trade-offs between reservoir operational strategies for human and ecosystem needs.
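The non-dominated sorting at the heart of NSGA-II reduces to a simple test: a strategy is on the Pareto front if no other strategy is at least as good in both objectives and strictly better in one. The candidate strategies and their two objective values below are invented for illustration:

```python
# Pareto-front extraction for two maximized objectives (sketch).

def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Each tuple: (fish diversity index, water-supply satisfaction) for one
# hypothetical reservoir operating strategy.
STRATEGIES = [(0.9, 0.4), (0.7, 0.7), (0.4, 0.9), (0.6, 0.6), (0.5, 0.5)]
front = pareto_front(STRATEGIES)
```

The spread of the resulting front is exactly what gives decision makers the trade-off menu described above: each front member is a defensible compromise between ecosystem and human objectives.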