Sample records for conventional design methods

  1. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures, stimulated by increasing size, complexity, and cost, should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises the performance of high-strength materials. A reliability method is proposed which combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the pace of semistatic structural designs.
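
    A minimal numerical sketch of the safety-index step described above, assuming independent, normally distributed strength and stress; the coefficients of variation and target reliability below are illustrative placeholders, not values from the report:

    ```python
    # Sketch: first-order safety index and the reliability-based factor
    # that replaces the conventional safety factor. Assumes independent,
    # normally distributed strength R and stress S (illustrative only).
    from scipy.stats import norm

    def safety_index(mu_R, sigma_R, mu_S, sigma_S):
        """First-order safety index beta for the margin M = R - S."""
        return (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5

    def required_factor(target_reliability, cov_R, cov_S):
        """Central factor k = mu_R / mu_S meeting the specified
        reliability; used in place of the conventional safety factor."""
        beta = norm.ppf(target_reliability)
        # Solve (k - 1) / sqrt((k*cov_R)**2 + cov_S**2) = beta for k,
        # i.e. the quadratic (1 - b^2 cov_R^2) k^2 - 2k + (1 - b^2 cov_S^2) = 0.
        a = 1.0 - beta**2 * cov_R**2
        c = 1.0 - beta**2 * cov_S**2
        return (1.0 + (1.0 - a * c) ** 0.5) / a

    print(required_factor(0.999, cov_R=0.05, cov_S=0.10))  # ~1.37
    ```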

  2. Comparison of performance due to guided hyperlearning, unguided hyperlearning, and conventional learning in mathematics: an empirical study

    NASA Astrophysics Data System (ADS)

    Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.

    2014-07-01

    In this paper, the use of guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics is compared. The design of the research involved a quasi-experiment with a modified single-factor multiple-treatment design comparing the three learning methods: guided hyperlearning, unguided hyperlearning, and conventional learning. The participants were from three first-year university classes, numbering 115 students in total. Each group received the guided, unguided, or conventional learning method in one of three different topics, namely number systems, functions, and graphing. The students' academic performance differed according to the type of learning. Evaluation of the three methods revealed that only guided hyperlearning and conventional learning were appropriate methods for the psychomotor aspects of drawing in the graphing topic. There was no significant difference between the methods when learning the cognitive aspects involved in the number systems topic and the functions topic.

  3. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.

  4. Comparison of Performance Due to Guided Hyperlearning, Unguided Hyperlearning, and Conventional Learning in Mathematics: An Empirical Study

    ERIC Educational Resources Information Center

    Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.

    2014-01-01

    In this paper, the use of guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics is compared. The design of the research involved a quasi-experiment with a modified single-factor multiple-treatment design comparing the three learning methods, guided hyperlearning, unguided hyperlearning, and conventional…

  5. The equivalent magnetizing method applied to the design of gradient coils for MRI.

    PubMed

    Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart

    2008-01-01

    This paper presents a new method for the design of gradient coils for Magnetic Resonance Imaging systems. The method is based on the equivalence between a magnetized volume surrounded by a conducting surface and its representation as a surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose streamlines define the coil current pattern. The method can be applied to coils wound on arbitrarily shaped surfaces. A single-layer unshielded transverse gradient coil is designed and compared with designs obtained using two conventional methods. Through the presented example we demonstrate that the unconventional current patterns obtained using the magnetizing current method produce superior gradient coil performance compared with coils designed by conventional methods.

  6. How to Study Chronic Diseases-Implications of the Convention on the Rights of Persons with Disabilities for Research Designs.

    PubMed

    von Peter, Sebastian; Bieler, Patrick

    2017-01-01

    The Convention on the Rights of Persons with Disabilities (CRPD) has received considerable attention internationally. The Convention's main arguments are conceptually analyzed, and implications for the development of research designs are elaborated upon. The Convention entails both a human rights and a sociopolitical dimension. Advancing a relational notion of disability, it enters terrain rather foreign to the medical sciences. Research designs have to be changed accordingly. Research designs in accordance with the CRPD should employ and further develop context-sensitive research strategies and interdisciplinary collaboration. Complex designs that allow for a relational analysis of personalized effects have to be established and evaluated, thereby systematically integrating qualitative methods.

  7. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters such as the cross-sectional areas of beams or the thicknesses of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  8. Confocal laser induced fluorescence with comparable spatial localization to the conventional method

    NASA Astrophysics Data System (ADS)

    Thompson, Derek S.; Henriquez, Miguel F.; Scime, Earl E.; Good, Timothy N.

    2017-10-01

    We present measurements of ion velocity distributions obtained by laser induced fluorescence (LIF) using a single viewport in an argon plasma. A patent-pending design, which we refer to as the confocal fluorescence telescope, combines large objective lenses with a large central obscuration and a spatial filter to achieve high spatial localization along the laser injection direction. Models of the injection and collection optics of the two assemblies are used to provide a theoretical estimate of the spatial localization of the confocal arrangement, which is taken to be the full width at half maximum of the spatial optical response. The new design achieves approximately 1.4 mm localization at a focal length of 148.7 mm, improving on previously published designs by an order of magnitude and approaching the localization achieved by the conventional method. The confocal method, however, does so without requiring a pair of separated, perpendicular optical paths. The confocal technique therefore eases the two-window access requirement of the conventional method, extending the application of LIF to experiments where conventional LIF measurements have been impossible or difficult, or where multiple viewports are scarce.

  9. If We Build It, They Will Come! Exploring the Role of ICTs in Curriculum Design and Development: The Myths, Miracles and Affordances

    ERIC Educational Resources Information Center

    Naidu, S.

    2007-01-01

    Central to the argument about the influence of media on learning is how this influence is measured or ascertained. Conventional methods which comprise the use of true and quasi-experimental designs are inadequate. Several lessons can be learned from this observation on the media debate. The first is that, conventional methods of ascertaining the…

  10. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    NASA Astrophysics Data System (ADS)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) that were designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and showed better performance in terms of strength and durability compared with the OPC.

  11. Hybrid texture generator

    NASA Astrophysics Data System (ADS)

    Miyata, Kazunori; Nakajima, Masayuki

    1995-04-01

    A method is given for synthesizing a texture by using the interface of a conventional drawing tool. The majority of conventional texture generation methods are based on the procedural approach, and can generate a variety of textures that are adequate for generating a realistic image. But it is hard for a user to imagine what kind of texture will be generated simply by looking at its parameters. Furthermore, it is difficult to design a new texture freely without a knowledge of all the procedures for texture generation. Our method offers a solution to these problems, and has the following four merits: First, a variety of textures can be obtained by combining a set of feature lines and attribute functions. Second, data definitions are flexible. Third, the user can preview a texture together with its feature lines. Fourth, people can design their own textures interactively and freely by using the interface of a conventional drawing tool. For users who want to build this texture generation method into their own programs, we also give the language specifications for generating a texture. This method can interactively provide a variety of textures, and can also be used for typographic design.

  12. Comparative evaluation of surface porosities in conventional heat polymerized acrylic resin cured by water bath and microwave energy with microwavable acrylic resin cured by microwave energy

    PubMed Central

    Singh, Sunint; Palaskar, Jayant N.; Mittal, Sanjeev

    2013-01-01

    Background: Conventional heat-cure polymethyl methacrylate (PMMA) is the most commonly used denture base resin despite some shortcomings, lengthy polymerization time being one of them; the microwave curing method was recommended to overcome it, but the unavailability of specially designed microwavable acrylic resin made it unpopular. Therefore, in this study, conventional heat-cure PMMA was polymerized by microwave energy. Aim and Objectives: This study was designed to evaluate the surface porosities in PMMA cured by a conventional water bath and by microwave energy and to compare them with those of microwavable acrylic resin cured by microwave energy. Materials and Methods: Wax samples were obtained by pouring molten wax into a metal mold of 25 mm × 12 mm × 3 mm. These samples were divided into three groups: C, conventional heat-cure PMMA cured by the water bath method; CM, conventional heat-cure PMMA cured by microwave energy; and M, specially designed microwavable acrylic denture base resin cured by microwave energy. After polymerization, each sample was scanned for surface porosities in three pre-marked areas using an optical microscope. According to the available literature, this instrument is used here for the first time to measure porosity in acrylic resin; it is a reliable method of measuring the area of surface pores. The scanned portion of the sample is displayed on a computer, and the area of each pore was measured with software and the data analyzed. Results: Conventional heat-cure PMMA samples cured by microwave energy showed more porosities than the samples cured by the conventional water bath method and the microwavable acrylic resin cured by microwave energy. The higher percentage of porosity was statistically significant, but well within the clinically acceptable range. Conclusion: Within the limitations of this in-vitro study, conventional heat-cure PMMA can be cured by microwave energy without compromising properties such as surface porosity. PMID:24015000

  13. The Importance of Considering Differences in Study Design in Network Meta-analysis: An Application Using Anti-Tumor Necrosis Factor Drugs for Ulcerative Colitis.

    PubMed

    Cameron, Chris; Ewara, Emmanuel; Wilson, Florence R; Varu, Abhishek; Dyrda, Peter; Hutton, Brian; Ingham, Michael

    2017-11-01

    Adaptive trial designs present a methodological challenge when performing network meta-analysis (NMA), as data from such adaptive trial designs differ from conventional parallel design randomized controlled trials (RCTs). We aim to illustrate the importance of considering study design when conducting an NMA. Three NMAs comparing anti-tumor necrosis factor drugs for ulcerative colitis were compared and the analyses replicated using Bayesian NMA. The NMA comprised 3 RCTs comparing 4 treatments (adalimumab 40 mg, golimumab 50 mg, golimumab 100 mg, infliximab 5 mg/kg) and placebo. We investigated the impact of incorporating differences in the study design among the 3 RCTs and presented 3 alternative methods on how to convert outcome data derived from one form of adaptive design to more conventional parallel RCTs. Combining RCT results without considering variations in study design resulted in effect estimates that were biased against golimumab. In contrast, using the 3 alternative methods to convert outcome data from one form of adaptive design to a format more consistent with conventional parallel RCTs facilitated more transparent consideration of differences in study design. This approach is more likely to yield appropriate estimates of comparative efficacy when conducting an NMA, which includes treatments that use an alternative study design. RCTs based on adaptive study designs should not be combined with traditional parallel RCT designs in NMA. We have presented potential approaches to convert data from one form of adaptive design to more conventional parallel RCTs to facilitate transparent and less-biased comparisons.

  14. Machining and characterization of self-reinforced polymers

    NASA Astrophysics Data System (ADS)

    Deepa, A.; Padmanabhan, K.; Kuppan, P.

    2017-11-01

    This paper focuses on obtaining the mechanical properties of self-reinforced composite samples, on the effect of different machining techniques on them, and on deriving the machining method that best preserves those properties. The samples were fabricated by hot compaction, subjected to tensile and flexural tests, and the corresponding loads were calculated. These composites are usually machined using conventional methods because of the lack of advanced machinery in most industries; here, advanced non-conventional methods such as abrasive water jet machining were used. These machining techniques are used to obtain better output for composite materials, with good mechanical properties compared to conventional methods. The use of non-conventional methods, however, causes changes in the workpiece and tool properties, and is more economical than the conventional methods. The process concludes by finding the method best suited to designing these self-reinforced composites, with and without defects, and by using scanning electron microscope (SEM) analysis to compare the microstructures of the PP and PE samples.

  15. Energy-efficient ovens for unpolluted balady bread

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gadalla, M.A.; Mansour, M.S.; Mahdy, E.

    A new bread oven for local balady bread has been developed, tested, and presented in this work. The design has the advantage of being efficient and producing unpolluted bread. An extensive study of the conventional and available designs was carried out to help develop the new design. Evaluation of the conventional design is based on numerous tests and measurements. A computer code utilizing the indirect method was developed to evaluate the thermal performance of the tested ovens. The present design achieves a higher thermal efficiency, of about 50%, than the conventional ones. In addition, its capital cost is much lower than that of other imported designs. Thus, the present design achieves higher efficiency, pollutant-free products, and lower cost. Moreover, it may be modified for different types of bread baking systems.

  16. Radar, target and ranging

    NASA Astrophysics Data System (ADS)

    1984-09-01

    This Test Operations Procedure (TOP) provides conventional test methods employing conventional test instrumentation for testing conventional radars. Single tests and subtests designed to test radar components, transmitters, receivers, antennas, etc., and system performance are conducted with single item instruments such as meters, generators, attenuators, counters, oscillators, plotters, etc., and with adequate land areas for conducting field tests.

  17. Investigation of AWG demultiplexer based SOI for CWDM application

    NASA Astrophysics Data System (ADS)

    Juhari, Nurjuliana; Susthitha Menon, P.; Shaari, Sahbudin; Annuar Ehsan, Abang

    2017-11-01

    A 9-channel Arrayed Waveguide Grating (AWG) demultiplexer with conventional and tapered structures was simulated using the beam propagation method (BPM) with a channel spacing of 20 nm. The AWG demultiplexer was designed using a high-refractive-index (n = 3.47) material, namely silicon-on-insulator (SOI), with a rib waveguide structure. The insertion loss, adjacent crosstalk, and output spectrum response at a central wavelength of 1.55 μm for both designs were compared and analyzed. The conventional AWG produced a minimum insertion loss of 6.64 dB, whereas the tapered AWG design reduced the insertion loss by 2.66 dB. The conventional AWG design gave a lowest adjacent crosstalk of -16.96 dB, against -17.23 dB for the tapered AWG design. Hence, the tapered AWG design significantly reduces the insertion loss but has a slightly higher adjacent crosstalk compared to the conventional AWG design. The output spectrum responses obtained from both designs were close to the Coarse Wavelength Division Multiplexing (CWDM) wavelength grid.

  18. Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases

    PubMed Central

    Dinu, Valentin; Nadkarni, Prakash

    2007-01-01

    Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
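
    To make the schema trade-off concrete, the sketch below places a conventional table beside an EAV attribute/value pair of tables, using Python's sqlite3; all table and attribute names are illustrative, not from the article:

    ```python
    # Sketch: conventional vs. EAV schema. Names are illustrative only.
    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Conventional design: one column per attribute; sparse data leaves
    # most columns NULL when few attributes apply to a given entity.
    cur.execute("""CREATE TABLE patient_conventional (
        patient_id INTEGER PRIMARY KEY, systolic_bp REAL, glucose REAL)""")

    # EAV design: one row per (entity, attribute, value); the attributes
    # live in a metadata sub-schema that must be designed carefully.
    cur.execute("""CREATE TABLE attribute_meta (
        attr_id INTEGER PRIMARY KEY, name TEXT, datatype TEXT)""")
    cur.execute("""CREATE TABLE patient_eav (
        patient_id INTEGER, attr_id INTEGER, value TEXT)""")

    cur.executemany("INSERT INTO attribute_meta VALUES (?, ?, ?)",
                    [(1, "systolic_bp", "REAL"), (2, "glucose", "REAL")])
    # Only attributes that actually apply to this patient are stored.
    cur.executemany("INSERT INTO patient_eav VALUES (?, ?, ?)",
                    [(42, 1, "118"), (42, 2, "5.4")])

    # Reassembling a conventional-looking row requires a join -- the
    # price paid for the flexible, sparse-friendly schema.
    for name, value in cur.execute(
            """SELECT m.name, e.value FROM patient_eav e
               JOIN attribute_meta m ON m.attr_id = e.attr_id
               WHERE e.patient_id = 42"""):
        print(name, value)
    ```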

  19. Supercritical Fluid Technologies to Fabricate Proliposomes.

    PubMed

    Falconer, James R; Svirskis, Darren; Adil, Ali A; Wu, Zimei

    2015-01-01

    Proliposomes are stable drug carrier systems designed to form liposomes upon addition of an aqueous phase. In this review, current trends in the use of supercritical fluid (SCF) technologies to prepare proliposomes are discussed. SCF methods are used in pharmaceutical research and industry to address limitations associated with conventional methods of pro/liposome fabrication. The SCF solvent methods of proliposome preparation are eco-friendly (known as green technology) and, along with the SCF anti-solvent methods, could be advantageous over conventional methods; enabling better design of particle morphology (size and shape). The major hurdles of SCF methods include poor scalability to industrial manufacturing which may result in variable particle characteristics. In the case of SCF anti-solvent methods, another hurdle is the reliance on organic solvents. However, the amount of solvent required is typically less than that used by the conventional methods. Another hurdle is that most of the SCF methods used have complicated manufacturing processes, although once the setup has been completed, SCF technologies offer a single-step process in the preparation of proliposomes compared to the multiple steps required by many other methods. Furthermore, there is limited research into how proliposomes will be converted into liposomes for the end-user, and how such a product can be prepared reproducibly in terms of vesicle size and drug loading. These hurdles must be overcome and with more research, SCF methods, especially where the SCF acts as a solvent, have the potential to offer a strong alternative to the conventional methods to prepare proliposomes.

  20. The Implementation of PAIKEM (Active, Innovative, Creative, Effective, and Exciting Learning) and Conventional Learning Method to Improve Student Learning Results

    ERIC Educational Resources Information Center

    Priyono

    2018-01-01

    The research aims to find the differences in students' learning results by implementing both PAIKEM (Active, Innovative, Creative, Effective, and Exciting Learning) and conventional learning methods for students with high and low motivation. This research used experimental design on two groups, a group of high motivation students and a group of…

  1. Development of quadruped walking locomotion gait generator using a hybrid method

    NASA Astrophysics Data System (ADS)

    Jasni, F.; Shafie, A. A.

    2013-12-01

    Many areas of the earth are hardly reachable by wheeled or tracked locomotion systems, so walking locomotion is becoming a favoured option for mobile robots: it can move over rugged and uneven terrain. However, developing a walking gait for a robot is not a simple task. The Central Pattern Generator (CPG) method is a biologically inspired method recently introduced to develop gaits for walking robots, tackling the issues faced by the conventional pre-designed trajectory method. However, research shows that even the CPG method has some limitations. Thus, in this paper, a hybrid method that combines the CPG and pre-designed trajectory methods is introduced to develop a walking gait for a quadruped walking robot. The 3-D foot trajectories and the joint angle trajectories developed using the proposed method are compared with the data obtained via the conventional pre-designed trajectory method to confirm the performance.
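
    The authors' generator is not reproduced here, but the flavor of a hybrid CPG/pre-designed-trajectory gait can be sketched generically: a set of coupled phase oscillators keeps the legs locked to a walk sequence, and each phase indexes a pre-designed foot trajectory. The oscillator form, gains, and step geometry below are illustrative assumptions, not the authors' model:

    ```python
    # Generic sketch: phase-oscillator CPG driving a pre-designed foot
    # trajectory for a quadruped walk. Illustrative only.
    import numpy as np

    def cpg_phases(n_steps, dt=0.01, omega=2 * np.pi, k=4.0):
        """Four coupled phase oscillators with walk-gait offsets
        (0, 0.5, 0.25, 0.75 of a cycle); k pulls them back into lock."""
        offsets = np.array([0.0, 0.5, 0.25, 0.75]) * 2 * np.pi
        phases = offsets.copy()
        out = np.zeros((n_steps, 4))
        for t in range(n_steps):
            rel = phases - offsets
            phases = phases + dt * (omega + k * (rel.mean() - rel))
            out[t] = phases
        return out

    def foot_trajectory(phase, step_len=0.06, step_h=0.02):
        """Pre-designed swing/stance profile indexed by a CPG phase."""
        s = np.mod(phase, 2 * np.pi) / (2 * np.pi)
        if s < 0.5:  # swing: foot moves forward and lifts
            return step_len * (2 * s - 0.5), step_h * np.sin(2 * np.pi * s)
        return step_len * (1.5 - 2 * s), 0.0  # stance: foot on the ground

    phases = cpg_phases(1000)
    print([foot_trajectory(p) for p in phases[-1]])  # (x, z) per leg
    ```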

  2. ATLAS, an integrated structural analysis and design system. Volume 6: Design module theory

    NASA Technical Reports Server (NTRS)

    Backman, B. F.

    1979-01-01

    The automated design theory underlying the operation of the ATLAS Design Module is described. The methods, applications, and limitations associated with the fully stressed design, the thermal fully stressed design, and a regional optimization algorithm are presented. A discussion of the convergence characteristics of the fully stressed design is also included. Derivations and concepts specific to the ATLAS design theory are shown, while conventional terminology and established methods are identified by references.

  3. Radiation shielding design of a new tomotherapy facility.

    PubMed

    Zacarias, Albert; Balog, John; Mills, Michael

    2006-10-01

    It is expected that intensity modulated radiation therapy (IMRT) and image guided radiation therapy (IGRT) will replace a large portion of radiation therapy treatments currently performed with conventional MLC-based 3D conformal techniques. IGRT may become the standard of treatment in the future for prostate and head and neck cancer. Many established facilities may convert existing vaults to perform this treatment method using new or upgraded equipment. In the future, more facilities undoubtedly will be considering de novo designs for their treatment vaults. A reevaluation of the design principles used in conventional vault design is of benefit to those considering this approach with a new tomotherapy facility. This is made more imperative as the design of the TomoTherapy system is unique in several aspects and does not fit well into the formalism of NCRP 49 for a conventional linear accelerator.

  4. "Glitch Logic" and Applications to Computing and Information Security

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Katkoori, Srinivas

    2009-01-01

    This paper introduces a new method of information processing in digital systems and discusses its potential benefits to computing and information security. The new method exploits glitches caused by delays in logic circuits for carrying and processing information. Glitch processing is hidden from conventional logic analyses and undetectable by traditional reverse engineering techniques. It enables the creation of new logic design methods that allow an additional controllable "glitch logic" processing layer to be embedded into conventional synchronous digital circuits as a hidden/covert information flow channel. The combination of synchronous logic with a specific glitch logic design acting as an additional computing channel reduces the number of equivalent logic designs resulting from synthesis, thus implicitly reducing the possibility of modification and/or tampering with the design. The hidden information channel produced by the glitch logic can be used: 1) for covert computing/communication, 2) to prevent reverse engineering, tampering, and alteration of design, and 3) to act as a channel for information infiltration/exfiltration and propagation of viruses/spyware/Trojan horses.

  5. Voltage Drop Compensation Method for Active Matrix Organic Light Emitting Diode Displays

    NASA Astrophysics Data System (ADS)

    Choi, Sang-moo; Ryu, Do-hyung; Kim, Keum-nam; Choi, Jae-beom; Kim, Byung-hee; Berkeley, Brian

    2011-03-01

    In this paper, conventional voltage drop compensation methods are reviewed and a novel design and driving scheme, the advanced power de-coupled (aPDC) driving method, is proposed to effectively compensate the IR voltage drop of active matrix organic light-emitting diode (AMOLED) displays. The advanced PDC driving scheme can be applied to general AMOLED pixel circuits with only minor modification, or without requiring any modification of the pixel circuit. A 14-in. AMOLED panel with the aPDC driving scheme was fabricated. Long-range uniformity (LRU) of the 14-in. AMOLED panel was improved from 43% without the aPDC driving scheme to over 87% at the same brightness by using the scheme, and the layout complexity of the panel with the new design scheme is less than that of a panel with the conventional design scheme.

  6. Triangular model integrating clinical teaching and assessment

    PubMed Central

    Abdelaziz, Adel; Koshak, Emad

    2014-01-01

    Structuring clinical teaching is a challenge facing medical education curriculum designers. A variety of instructional methods on different domains of learning are indicated to accommodate different learning styles. Conventional methods of clinical teaching, like training in ambulatory care settings, are prone to the factor of coincidence in having varieties of patient presentations. Accordingly, alternative methods of instruction are indicated to compensate for the deficiencies of these conventional methods. This paper presents an initiative that can be used to design a checklist as a blueprint to guide appropriate selection and implementation of teaching/learning and assessment methods in each of the educational courses and modules based on educational objectives. Three categories of instructional methods were identified, and within each a variety of methods were included. These categories are classroom-type settings, health services-based settings, and community service-based settings. Such categories have framed our triangular model of clinical teaching and assessment. PMID:24624002

  7. Triangular model integrating clinical teaching and assessment.

    PubMed

    Abdelaziz, Adel; Koshak, Emad

    2014-01-01

    Structuring clinical teaching is a challenge facing medical education curriculum designers. A variety of instructional methods on different domains of learning are indicated to accommodate different learning styles. Conventional methods of clinical teaching, like training in ambulatory care settings, are prone to the factor of coincidence in having varieties of patient presentations. Accordingly, alternative methods of instruction are indicated to compensate for the deficiencies of these conventional methods. This paper presents an initiative that can be used to design a checklist as a blueprint to guide appropriate selection and implementation of teaching/learning and assessment methods in each of the educational courses and modules based on educational objectives. Three categories of instructional methods were identified, and within each a variety of methods were included. These categories are classroom-type settings, health services-based settings, and community service-based settings. Such categories have framed our triangular model of clinical teaching and assessment.

  8. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  9. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm.

  10. Data on processing of Ti-25Nb-25Zr β-titanium alloys via powder metallurgy route: Methodology, microstructure and mechanical properties.

    PubMed

    Ueda, D; Dirras, G; Hocini, A; Tingaud, D; Ameyama, K; Langlois, P; Vrel, D; Trzaska, Z

    2018-04-01

    The data presented in this article are related to the research article entitled "Cyclic Shear behavior of conventional and harmonic structure-designed Ti-25Nb-25Zr β-titanium alloy: Back-stress hardening and twinning inhibition" (Dirras et al., 2017) [1]. The datasheet describes the methods used to fabricate two β-titanium alloys having a conventional microstructure and a so-called harmonic structure (HS) design via a powder metallurgy route, namely the spark plasma sintering (SPS) route. The data show the as-processed unconsolidated powder microstructures as well as the post-SPS ones. The data illustrate the mechanical response under cyclic shear loading of consolidated alloy specimens. The data show how the electron backscatter diffraction (EBSD) method is used to clearly identify induced deformation features in the case of the conventional alloy.

  11. Modified Involute Helical Gears: Computerized Design, Simulation of Meshing, and Stress Analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert (Technical Monitor); Litvin, Faydor L.; Gonzalez-Perez, Ignacio; Carnevali, Luca; Kawasaki, Kazumasa; Fuentes-Aznar, Alfonso

    2003-01-01

    The computerized design, methods for generation, simulation of meshing, and enhanced stress analysis of modified involute helical gears is presented. The approaches proposed for modification of conventional involute helical gears are based on conjugation of double-crowned pinion with a conventional helical involute gear. Double-crowning of the pinion means deviation of cross-profile from an involute one and deviation in longitudinal direction from a helicoid surface. Using the method developed, the pinion-gear tooth surfaces are in point-contact, the bearing contact is localized and oriented longitudinally, and edge contact is avoided. Also, the influence of errors of alignment on the shift of bearing contact, vibration, and noise are reduced substantially. The theory developed is illustrated with numerical examples that confirm the advantages of the gear drives of the modified geometry in comparison with conventional helical involute gears.

  12. Modified Involute Helical Gears: Computerized Design, Simulation of Meshing and Stress Analysis

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The computerized design, methods for generation, simulation of meshing, and enhanced stress analysis of modified involute helical gears is presented. The approaches proposed for modification of conventional involute helical gears are based on conjugation of double-crowned pinion with a conventional helical involute gear. Double-crowning of the pinion means deviation of cross-profile from an involute one and deviation in longitudinal direction from a helicoid surface. Using the method developed, the pinion-gear tooth surfaces are in point-contact, the bearing contact is localized and oriented longitudinally, and edge contact is avoided. Also, the influence of errors of alignment on the shift of bearing contact, vibration, and noise are reduced substantially. The theory developed is illustrated with numerical examples that confirm the advantages of the gear drives of the modified geometry in comparison with conventional helical involute gears.

  13. Utility of a Newly Designed Film Holder for Premolar Bitewing Radiography.

    PubMed

    Safi, Yaser; Esmaeelinejad, Mohammad; Vasegh, Zahra; Valizadeh, Solmaz; Aghdasi, Mohammad Mehdi; Sarani, Omid; Afsahi, Mahmoud

    2015-11-01

    Bitewing radiography is a valuable technique for assessment of proximal caries, the alveolar crest, and periodontal status. Technical errors during radiography result in erroneous radiographic interpretation, misdiagnosis, possible mistreatment, or unnecessary exposure of the patient to a repeat radiograph. In this study, we aimed to evaluate the efficacy of a film holder modified from the conventional one and compare it with that of the conventional film holder. Our study population comprised 70 patients who were referred to the Radiology Department for bilateral premolar bitewing radiographs as requested by their attending clinician. Bitewing radiographs in each patient were taken using the newly designed holder on one side and the conventional holder on the other side. The acceptability of the two holders from the perspectives of the technician and patients was determined using a 0-20 point scale. The frequency of overlap and film positioning errors was calculated for each method. The conventional holder had greater acceptability among patients compared to the newly designed holder (mean score of 16.59 versus 13.37). From the technicians' point of view, the newly designed holder was superior to the conventional holder (mean score of 17.33 versus 16.44). The frequency of overlap was lower using the newly designed holder (p<0.001) and it allowed more accurate film positioning (p=0.005). The newly designed holder may facilitate the process of radiography for technicians and may be associated with a lower frequency of radiographic errors compared to the conventional holder.

  14. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  15. Computer-assisted versus conventional free fibula flap technique for craniofacial reconstruction: an outcomes comparison.

    PubMed

    Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D

    2013-11-01

    There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.

  16. Distributive Distillation Enabled by Microchannel Process Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arora, Ravi

    The application of microchannel technology for distributive distillation was studied to achieve the Grand Challenge goals of 25% energy savings and 10% return on investment. In Task 1, a detailed study was conducted and two distillation systems were identified that would meet the Grand Challenge goals if microchannel distillation technology were used. Material and heat balance calculations were performed to develop process flow sheet designs for the two distillation systems in Task 2. The process designs focused on two methods of integrating the microchannel technology: 1) integrating microchannel distillation into an existing conventional column, and 2) microchannel distillation for new plants. A design concept for a modular microchannel distillation unit was developed in Task 3. In Task 4, Ultrasonic Additive Machining (UAM) was evaluated as a manufacturing method for microchannel distillation units. However, it was found that significant development work would be required to establish process parameters for using UAM in commercial distillation manufacturing. Two alternate manufacturing methods were explored, and both were experimentally tested to confirm their validity. The conceptual design of the microchannel distillation unit (Task 3) was combined with the manufacturing methods developed in Task 4 and the flowsheet designs in Task 2 to estimate the cost of the microchannel distillation unit, which was compared to a conventional distillation column. The best results were for a methanol-water separation unit for use in a biodiesel facility. For this application, microchannel distillation was found to be more cost effective than the conventional system and capable of meeting the DOE Grand Challenge performance requirements.

  17. Reverse design of a bull's eye structure for oblique incidence and wider angular transmission efficiency.

    PubMed

    Yamada, Akira; Terakawa, Mitsuhiro

    2015-04-10

    We present a design method of a bull's eye structure with asymmetric grooves for focusing oblique incident light. The design method is capable of designing transmission peaks to a desired oblique angle with capability of collecting light from a wider range of angles. The bull's eye groove geometry for oblique incidence is designed based on the electric field intensity pattern around an isolated subwavelength aperture on a thin gold film at oblique incidence, calculated by the finite difference time domain method. Wide angular transmission efficiency is successfully achieved by overlapping two different bull's eye groove patterns designed with different peak angles. Our novel design method would overcome the angular limitations of the conventional methods.

  18. Control-structure interaction study for the Space Station solar dynamic power module

    NASA Technical Reports Server (NTRS)

    Cheng, J.; Ianculescu, G.; Ly, J.; Kim, M.

    1991-01-01

    The authors investigate the feasibility of using a conventional PID (proportional plus integral plus derivative) controller design to perform the pointing and tracking functions for the Space Station Freedom solar dynamic power module. Using this simple controller design, the control/structure interaction effects were also studied without assuming frequency bandwidth separation. From the results, the feasibility of a simple solar dynamic control solution with a reduced-order model, which satisfies the basic system pointing and stability requirements, is suggested. However, the conventional control design approach is shown to be very much influenced by the order of reduction of the plant model, i.e., the number of the retained elastic modes from the full-order model. This suggests that, for complex large space structures, such as the Space Station Freedom solar dynamic, the conventional control system design methods may not be adequate.

  19. Methods for Combining Payload Parameter Variations with Input Environment

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.; Straayer, J. W.

    1975-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
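
    A minimal sketch of the extreme-value step: collect per-mission maximum loads from repeated dynamic-load simulations, fit a Gumbel (Type I extreme-value) distribution, and read the design limit load off at a chosen non-exceedance probability. The load statistics and the 99% level below are placeholders, not values from the report:

    ```python
    # Sketch: design limit load from extreme-value theory. The load
    # model, sample sizes, and 99% level are illustrative assumptions.
    import numpy as np
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(0)

    # Pretend each of 500 simulated missions yields 10,000 load samples
    # (e.g., from a time-domain simulation); keep each mission's maximum.
    mission_maxima = np.array(
        [rng.normal(100.0, 10.0, size=10_000).max() for _ in range(500)])

    # Maxima of many roughly independent loads tend toward a Gumbel law.
    loc, scale = gumbel_r.fit(mission_maxima)

    # Design limit load: the value the mission maximum stays below with
    # the specified probability (here 99%).
    print(f"design limit load ~ {gumbel_r.ppf(0.99, loc, scale):.1f}")
    ```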

  20. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
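
    The article supplies code in R and Stata; the same idea for the simple randomized case can be sketched in Python, where power is estimated as the fraction of simulated trials that reject the null. The effect size, standard deviation, and sample size below are placeholders:

    ```python
    # Sketch: simulation-based power for a two-arm randomized design.
    # Effect size, SD, and n are placeholders, not the article's values.
    import numpy as np
    from scipy.stats import ttest_ind

    def simulated_power(n_per_arm, effect, sd, alpha=0.05, reps=2000, seed=1):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(reps):
            control = rng.normal(0.0, sd, n_per_arm)
            treated = rng.normal(effect, sd, n_per_arm)
            if ttest_ind(treated, control).pvalue < alpha:
                rejections += 1
        return rejections / reps  # power = rejection rate under H1

    print(simulated_power(n_per_arm=50, effect=0.5, sd=1.0))  # ~0.70
    ```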

  1. Design of a novel freeform lens for LED uniform illumination and conformal phosphor coating.

    PubMed

    Hu, Run; Luo, Xiaobing; Zheng, Huai; Qin, Zong; Gan, Zhiqiang; Wu, Bulong; Liu, Sheng

    2012-06-18

    A conformal phosphor coating can realize a phosphor layer with uniform thickness, which could enhance the angular color uniformity (ACU) of light-emitting diode (LED) packaging. In this study, a novel freeform lens was designed for simultaneous realization of LED uniform illumination and conformal phosphor coating. The detailed algorithm of the design method, which involves an extended light source and double refractions, was presented. The packaging configuration of the LED modules and the modeling of the light-conversion process were also presented. Monte Carlo ray-tracing simulations were conducted to validate the design method by comparisons with a conventional freeform lens. It is demonstrated that for the LED module with the present freeform lens, the illumination uniformity and ACU were 0.89 and 0.9283, respectively. The present freeform lens can realize equivalent illumination uniformity, but the angular color uniformity can be enhanced by 282.3% when compared with the conventional freeform lens.

  2. Design and Drawing for Production. Syllabus. Field Test Edition II.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany.

    This syllabus, which replaces the New York State Education Department publication "Mechanical Drawing and Design," is intended for use in teaching a high school course in design and drawing for production. The materials included in the guide reflect a shift away from the conventional methods of teaching design and drawing to a greater…

  3. Autonomous control systems - Architecture and fundamental issues

    NASA Technical Reports Server (NTRS)

    Antsaklis, P. J.; Passino, K. M.; Wang, S. J.

    1988-01-01

    A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure the autonomous operation of the control system, and it allows interaction with the pilot and crew/ground station and with the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. It is proposed to utilize a hybrid approach to modeling and analysis of autonomous systems, incorporating conventional control methods based on differential equations and techniques for the analysis of systems described with a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement, and intelligent control methods appear, at present, to offer some of the necessary tools to achieve autonomy. A conventional approach may evolve and replace some or all of the 'intelligent' functions. It is shown that in addition to conventional controllers, the autonomous control system incorporates planning, learning, and FDI (fault detection and identification).

  4. Towards non-conventional methods of designing register-based epidemiological studies: An application to pediatric research.

    PubMed

    Gong, Tong; Brew, Bronwyn; Sjölander, Arvid; Almqvist, Catarina

    2017-07-01

    Various epidemiological designs have been applied to investigate the causes and consequences of fetal growth restriction in register-based observational studies. This review seeks to provide an overview of several conventional designs, including cohort and case-control designs, and of more recently applied non-conventional designs such as family-based designs. We also discuss some practical points regarding the application and interpretation of family-based designs. Definitions of each design, the study population, the exposure and the outcome measures are briefly summarised. Examples of study designs are taken from the field of low birth-weight research for illustrative purposes. Also examined are the relative advantages and disadvantages of each design in terms of assumptions, potential selection and information bias, confounding and generalisability. Kinship data linkage, statistical models and result interpretation are discussed specific to family-based designs. When all information is retrieved from registers, there is no evident preference for the case-control design over the cohort design to estimate odds ratios. All conventional designs included in the review are prone to bias, particularly residual confounding. Family-based designs are able to reduce such bias and strengthen causal inference. In the field of low birth-weight research, family-based designs have been able to confirm a negative association between low birth weight and the risk of asthma that is not confounded by genetic or shared environmental factors. We conclude that there is a broader need for family-based designs in observational research, as evidenced by their meaningful contributions to the understanding of the potential causal association between low birth weight and subsequent outcomes.

  5. Systematic methods for the design of a class of fuzzy logic controllers

    NASA Astrophysics Data System (ADS)

    Yasin, Saad Yaser

    2002-09-01

    Fuzzy logic control, a relatively new branch of control, can be used effectively whenever conventional control techniques become inapplicable or impractical. Various attempts have been made to create a generalized fuzzy control system and to formulate an analytically based fuzzy control law. In this study, two methods, the left and right parameterization method and the normalized spline-base membership function method, were utilized for formulating analytical fuzzy control laws in important practical control applications. The first model was used to design an idle speed controller, while the second was used to control an inverted pendulum problem. The results of both showed that a fuzzy logic control system based on the developed models could be used effectively to control highly nonlinear and complex systems. This study also investigated the application of fuzzy control in areas not fully utilizing fuzzy logic control. Three important practical applications pertaining to the automotive industry were studied. The first automotive-related application was the idle speed of spark ignition engines, using two fuzzy control methods: (1) left and right parameterization, and (2) fuzzy clustering techniques and experimental data. The simulation and experimental results showed that a fuzzy controller with performance comparable to a conventional controller could be designed based only on experimental data and intuitive knowledge of the system. In the second application, the automotive cruise control problem, a fuzzy control model was developed using a parameter-adaptive Proportional plus Integral plus Derivative (PID)-type fuzzy logic controller. Results were comparable to those using linearized conventional PID and linear quadratic regulator (LQR) controllers, and in certain cases and conditions the developed controller outperformed the conventional PID and LQR controllers. The third application involved the air/fuel ratio control problem, using fuzzy clustering techniques, experimental data, and a conversion algorithm to develop a fuzzy-based control algorithm. Results were similar to those obtained by recently published conventional-control-based studies. The influence of the fuzzy inference operators and parameters on the performance and stability of the fuzzy logic controller was studied. Results indicated that the selection of certain parameters, or combinations of parameters, greatly affects the performance and stability of the fuzzy controller. Diagnostic guidelines for tuning or changing certain factors or parameters to improve controller performance were developed based on knowledge gained from conventional control methods and from the experimental and simulation results of this study.
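
    For flavor, a minimal two-input fuzzy controller with triangular memberships and center-of-singletons defuzzification is sketched below; the membership definitions, rule table, and test values are generic illustrations, not the parameterizations developed in the study:

    ```python
    # Generic sketch: two-input (error, error-rate) fuzzy PD-type
    # controller. Memberships, rules, and values are illustrative only.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

    # Linguistic terms on normalized [-1, 1]: Negative, Zero, Positive.
    TERMS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
    # Rule table: (error term, error-rate term) -> output singleton.
    RULES = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
             ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"): 0.5,
             ("P", "N"):  0.0, ("P", "Z"):  0.5, ("P", "P"): 1.0}

    def fuzzy_pd(error, derror):
        """Weighted average (center of singletons) over all fired rules."""
        num = den = 0.0
        for (e_term, de_term), out in RULES.items():
            w = tri(error, *TERMS[e_term]) * tri(derror, *TERMS[de_term])
            num += w * out
            den += w
        return num / den if den > 0 else 0.0

    print(fuzzy_pd(0.3, -0.1))  # small positive corrective action (~0.1)
    ```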

  6. Methods for combining payload parameter variations with input environment. [calculating design limit loads compatible with probabilistic structural design criteria

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1976-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
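
    The core computation the abstract describes - treating the limit load as an extreme-value random variable and taking a particular percentile as the design limit load - can be sketched as follows. The Gumbel model, the sample data, and the 99th-percentile choice are illustrative assumptions, not values from the paper.

```python
# Sketch: choose a design limit load as a percentile of a fitted
# extreme-value (Gumbel) distribution of per-mission maximum loads.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-mission maximum loads (e.g., from time-domain simulations).
mission_max_loads = rng.gumbel(loc=100.0, scale=8.0, size=500)

loc, scale = stats.gumbel_r.fit(mission_max_loads)        # fit the limit-load distribution
design_limit_load = stats.gumbel_r.ppf(0.99, loc, scale)  # 99th-percentile design value
print(f"design limit load (99th percentile): {design_limit_load:.1f}")
```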

  7. Application of Grey Relational Analysis to Decision-Making during Product Development

    ERIC Educational Resources Information Center

    Hsiao, Shih-Wen; Lin, Hsin-Hung; Ko, Ya-Chuan

    2017-01-01

    A multi-attribute decision-making (MADM) approach was proposed in this study as a prediction method that differs from the conventional production and design methods for a product. When a client has different dimensional requirements, this approach can quickly provide a company with design decisions for each product. The production factors of a…
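
    A minimal sketch of the grey relational analysis computation that underlies such MADM approaches is shown below; the alternatives, criteria, and the distinguishing coefficient of 0.5 are illustrative assumptions.

```python
# Minimal grey relational analysis (GRA) sketch for ranking design
# alternatives against an ideal reference sequence.
import numpy as np

def grey_relational_grades(X, zeta=0.5):
    """X: rows = alternatives, cols = benefit criteria (larger is better)."""
    # Normalize each criterion to [0, 1] (larger-is-better form).
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    ref = Xn.max(axis=0)                      # ideal reference sequence
    delta = np.abs(ref - Xn)                  # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)  # relational coefficients
    return coeff.mean(axis=1)                 # grade = mean coefficient per alternative

X = np.array([[7.0, 0.8, 120.0],
              [9.0, 0.6, 150.0],
              [8.0, 0.9, 100.0]])
print(grey_relational_grades(X))              # higher grade = closer to ideal
```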

  8. METHOD FOR THE SUPERCRITICAL FLUID EXTRACTION OF SOILS/SEDIMENTS

    EPA Science Inventory

    Supercritical fluid extraction has been publicized as an extraction method which has several advantages over conventional methods, and it is expected to result in substantial cost and labor savings. This study was designed to evaluate the feasibility of using supercritical fluid ...

  9. Evaluation of bearing capacity of piles from cone penetration test data.

    DOT National Transportation Integrated Search

    2007-12-01

    A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carryi...

  10. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient

    PubMed Central

    Feng, Shuo

    2014-01-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desired excitation patterns. PMID:24834420

  11. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient.

    PubMed

    Feng, Shuo; Ji, Jim

    2014-04-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desired excitation patterns.
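
    The conjugate-gradient ingredient of such pulse designs can be sketched as conjugate gradient applied to the normal equations (CGNR) of a least-squares excitation problem. The random system matrix and target pattern below are stand-ins for the gridded pTx system matrix and the desired excitation pattern.

```python
# Conjugate gradient on the normal equations: minimize ||A x - b||^2 for
# a complex system matrix A (stand-in for the gridded pTx operator) and
# target pattern b.
import numpy as np

def cgnr(A, b, iters=50):
    """Conjugate gradient on A^H A x = A^H b (complex-valued OK)."""
    x = np.zeros(A.shape[1], dtype=complex)
    r = A.conj().T @ (b - A @ x)   # residual of the normal equations
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = A.conj().T @ (A @ p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 64)) + 1j * rng.standard_normal((200, 64))
b = rng.standard_normal(200) + 1j * rng.standard_normal(200)
x = cgnr(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm of the least-squares fit
```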

  12. Evaluation of the marginal fit of metal copings fabricated on three different marginal designs using conventional and accelerated casting techniques: an in vitro study.

    PubMed

    Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad

    2014-01-01

    Abundant resources and techniques have been used for complete coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. This study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment using conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder, and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites, and marginal gap measurements were documented for each. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences. Marginal gaps were evaluated by t-test; analysis of variance and post-hoc analysis were used to compare the two groups as well as the three subgroups. The measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with both conventional and accelerated casting techniques. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique. The marginal fit between the two casting techniques showed no statistical difference.

  13. Tradeoff studies in multiobjective insensitive design of airplane control systems

    NASA Technical Reports Server (NTRS)

    Schy, A. A.; Giesy, D. P.

    1983-01-01

    A computer-aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving a significant decrease in sensitivity is decreased speed of response for the nominal system.

  14. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
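
    The ratio-maximization idea can be illustrated with a generic sketch: find a spatial filter w maximizing a ratio of class covariances via a generalized eigenvalue problem. This shows the principle only and is not the paper's unified Fisher's-ratio framework; the simulated covariances are stand-ins for EEG class covariance matrices.

```python
# Generic ratio-based spatial filter: find w maximizing
# (w^T S1 w) / (w^T S2 w) for two class covariance matrices via a
# generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def class_cov(n_samples=5000, n_ch=8, scale=1.0):
    """Stand-in covariance matrix for one class of simulated EEG."""
    X = scale * rng.standard_normal((n_samples, n_ch))
    return np.cov(X, rowvar=False)

S1 = class_cov(scale=1.5)   # covariance of class 1 (e.g., one imagery task)
S2 = class_cov(scale=1.0)   # covariance of class 2

# Generalized eigendecomposition S1 w = lambda S2 w; the eigenvector with
# the largest eigenvalue maximizes the variance ratio between classes.
vals, vecs = eigh(S1, S2)
w = vecs[:, -1]
print("ratio along w:", (w @ S1 @ w) / (w @ S2 @ w), "=", vals[-1])
```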

  15. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
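
    A minimal version of the simulation approach for a simple two-arm individually randomized design might look like the following; the effect size, sample size, and alpha level are illustrative choices, not values from the article.

```python
# Monte Carlo power estimate for a two-arm randomized design: simulate
# many trials, test each, and report the fraction of rejections.
import numpy as np
from scipy import stats

def simulated_power(n_per_arm=50, effect=0.5, sd=1.0, alpha=0.05,
                    n_sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        p = stats.ttest_ind(treated, control).pvalue
        rejections += p < alpha
    return rejections / n_sims

print(simulated_power())  # roughly 0.70 for these settings
```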

  16. Quantifying electrical impacts on redundant wire insertion in 7nm unidirectional designs

    NASA Astrophysics Data System (ADS)

    Mohyeldin, Ahmed; Schroeder, Uwe Paul; Srinivasan, Ramya; Narisetty, Haritez; Malik, Shobhit; Madhavan, Sriram

    2017-04-01

    In nanometer-scale integrated circuits, via failures due to random defects are a well-known yield detractor, and via redundancy insertion is a common method to help enhance semiconductor yield. For Self-Aligned Double Patterning (SADP), which can require unidirectional design layers as in some advanced technology nodes, the conventional methods of inserting redundant vias no longer work. This is because adding redundant vias conventionally requires adding metal shapes in the non-preferred direction, which would violate the SADP design constraints. Therefore, metal layers fabricated using unidirectional SADP require an alternative method for providing the needed redundancy. This paper proposes a post-layout Design for Manufacturability (DFM) redundancy insertion method tailored to the design requirements introduced by unidirectional metal layers. The proposed method adds redundant wires in the preferred direction - after searching for nearby vacant routing tracks - in order to provide redundant paths for electrical signals. This method opportunistically adds robustness against failures due to silicon defects without impacting area or incurring new design rule violations. Implementation details of this redundancy insertion method are explained in this paper. One known challenge with similar DFM layout fixing methods is the possible introduction of undesired electrical impact, causing unintentional failures in design functionality. In this paper, a study is presented to quantify the electrical impact of such a redundancy insertion scheme and to examine whether that impact can be tolerated. The paper shows results evaluating DFM insertion rates and the corresponding electrical impact for a given design utilization and maximum inserted wire length. Parasitic extraction and static timing analysis results are presented. A typical digital design implemented using GLOBALFOUNDRIES 7nm technology is used for demonstration. The provided results can help evaluate such an extensive DFM insertion method from an electrical standpoint. Furthermore, the results provide guidance on how to implement the proposed method of adding electrical redundancy such that intolerable electrical impacts can be avoided.

  17. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing experience to other civil jet product designs.

  18. Optimization of rotor shaft shrink fit method for motor using "Robust design"

    NASA Astrophysics Data System (ADS)

    Toma, Eiji

    2018-01-01

    This research is a collaborative investigation with a general-purpose motor manufacturer. To review the construction method used in the production process, we applied the parameter design method of quality engineering to approach the optimization of the construction method. Conventionally, a press-fitting method has been adopted in the process of fitting the rotor core and shaft, which are the main components of the motor, but quality defects such as core shaft deflection occurred at the time of press fitting. In this research, as a result of the design optimization of a "shrink fitting method by high-frequency induction heating", devised as a new construction method, the method was shown to be feasible and the optimum processing conditions could be extracted.

  19. "Drug" Discovery with the Help of Organic Chemistry.

    PubMed

    Itoh, Yukihiro; Suzuki, Takayoshi

    2017-01-01

    The first step in "drug" discovery is to find compounds that bind to a potential drug target. In modern medicinal chemistry, the screening of a chemical library, structure-based drug design, and ligand-based drug design, or a combination of these methods, are generally used to identify the desired compounds. However, they do not necessarily lead to success, and there is no infallible method for drug discovery. Therefore, it is important to explore medicinal chemistry based not only on the conventional methods but also on new ideas. So far, we have found various compounds as drug candidates. In these studies, strategies based on organic chemistry have allowed us to find drug candidates through 1) construction of a focused library using organic reactions and 2) rational design of enzyme inhibitors based on chemical reactions catalyzed by the target enzyme. Medicinal chemistry based on organic chemical reactions can be expected to supplement the conventional methods. In this review, we present drug discovery with the help of organic chemistry, showing examples of our explorative studies on histone deacetylase inhibitors and lysine-specific demethylase 1 inhibitors.

  20. Passive Samplers for Investigations of Air Quality: Method Description, Implementation, and Comparison to Alternative Sampling Methods

    EPA Science Inventory

    This Paper covers the basics of passive sampler design, compares passive samplers to conventional methods of air sampling, and discusses considerations when implementing a passive sampling program. The Paper also discusses field sampling and sample analysis considerations to ensu...

  1. A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods

    PubMed Central

    Stürmer, Til; Joshi, Manisha; Glynn, Robert J.; Avorn, Jerry; Rothman, Kenneth J.; Schneeweiss, Sebastian

    2006-01-01

    Objective Propensity score analyses attempt to control for confounding in non-experimental studies by adjusting for the likelihood that a given patient is exposed. Such analyses have been proposed to address confounding by indication, but there is little empirical evidence that they achieve better control than conventional multivariate outcome modeling. Study design and methods Using PubMed and Science Citation Index, we assessed the use of propensity scores over time and critically evaluated studies published through 2003. Results Use of propensity scores increased from a total of 8 papers before 1998 to 71 in 2003. Most of the 177 published studies abstracted assessed medications (N=60) or surgical interventions (N=51), mainly in cardiology and cardiac surgery (N=90). Whether PS methods or conventional outcome models were used to control for confounding had little effect on results in those studies in which such comparison was possible. Only 9 out of 69 studies (13%) had an effect estimate that differed by more than 20% from that obtained with a conventional outcome model in all PS analyses presented. Conclusions Publication of results based on propensity score methods has increased dramatically, but there is little evidence that these methods yield substantially different estimates compared with conventional multivariable methods. PMID:16632131
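
    The basic workflow the review evaluates - model the probability of exposure from covariates, then use the estimated score to control confounding - can be sketched as follows. The simulated data, variable names, and quintile stratification are illustrative assumptions.

```python
# Propensity score sketch: fit a logistic model for exposure given
# covariates, then stratify on the estimated score. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)
X = np.column_stack([age, severity])

# Confounded exposure: older/sicker patients are more likely treated.
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 60) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)

# Propensity score: estimated P(treated | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Simplest use: stratify on score quintiles before outcome comparison.
quintile = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
print("treated fraction by PS quintile:",
      [round(treated[quintile == q].mean(), 2) for q in range(5)])
```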

  2. Design of plywood and paper flywheel rotors

    NASA Astrophysics Data System (ADS)

    Hagen, D. L.

    Technical and economic design factors of cellulosic rotors are compared with conventional materials for stationary flywheel energy storage systems. Wood species, operation in a vacuum, assembly, and costs of plywood rotors are evaluated. Wound kraft paper, twine, and veneer rotors are examined. Two bulb attachments are designed. Support stiffness is shown to be constrained by the material strength, rotor configuration, and speed ratio. Plywood moisture equilibrium during manufacture and assembly is critical. Disk shaping and rotor assembly are described. Potential self-centering dynamic balancing methods and equipment are described. Measurements of the distribution of strengths, densities, and specific energy of conventional Finnish birch plywood and of custom-made hexagonal birch plywood are detailed. High-resolution tensile tests were performed while monitoring the acoustic emissions with microprocessor-controlled data acquisition. Preliminary duration-of-load tests were performed on vacuum-dried hexagonal birch plywood. The economics of cellulosic and conventional rotors were examined.

  3. Comparison of variational real-space representations of the kinetic energy operator

    NASA Astrophysics Data System (ADS)

    Skylaris, Chris-Kriton; Diéguez, Oswaldo; Haynes, Peter D.; Payne, Mike C.

    2002-08-01

    We present a comparison of real-space methods based on regular grids for electronic structure calculations that are designed to have basis set variational properties, using as a reference the conventional method of finite differences (a real-space method that is not variational) and the reciprocal-space plane-wave method which is fully variational. We find that a definition of the finite-difference method [P. Maragakis, J. Soler, and E. Kaxiras, Phys. Rev. B 64, 193101 (2001)] satisfies one of the two properties of variational behavior at the cost of larger errors than the conventional finite-difference method. On the other hand, a technique which represents functions in a number of plane waves which is independent of system size closely follows the plane-wave method and therefore also the criteria for variational behavior. Its application is only limited by the requirement of having functions strictly localized in regions of real space, but this is a characteristic of an increasing number of modern real-space methods, as they are designed to have a computational cost that scales linearly with system size.

  4. A new design of groundwater sampling device and its application.

    PubMed

    Tsai, Yih-jin; Kuo, Ming-ching T

    2005-01-01

    Compounds in the atmosphere can contaminate samples of groundwater. An inexpensive and simple method for collecting groundwater samples was developed to prevent contamination when the background concentration of contaminants is high. This new design of groundwater sampling device involves a glass sampling bottle with a Teflon-lined valve at each end. A cleaned and dried sampling bottle was connected to a low-flow-rate peristaltic pump with Teflon tubing and was filled with water. No headspace volume remained in the sampling bottle. The sample bottle was then packed in a PVC bag to prevent the target component from infiltrating into the water sample through the valves. In this study, groundwater was sampled at six wells using both the conventional method and the improved method. The analysis of trichlorofluoromethane (CFC-11) concentrations at these six wells indicates that all the groundwater samples obtained by the conventional sampling method were contaminated by CFC-11 from the atmosphere. The improved sampling method largely eliminated the problems of contamination, preservation, and quantitative analysis of natural water.

  5. A Comparison of Video versus Conventional Visual Reinforcement in 7- to 16-Month-Old Infants

    ERIC Educational Resources Information Center

    Lowery, Kristy J.; von Hapsburg, Deborah; Plyler, Erin L.; Johnstone, Patti

    2009-01-01

    Purpose: To compare response patterns to video visual reinforcement audiometry (VVRA) and conventional visual reinforcement audiometry (CVRA) in infants 7-16 months of age. Method: Fourteen normal-hearing infants aged 7-16 months (8 male, 6 female) participated. A repeated measures design was used. Each infant was tested with VVRA and CVRA over 2…

  6. Economic aspects of interlocking hollow brick system designed for industrialized building system

    NASA Astrophysics Data System (ADS)

    Tahir, Mahmood Md.; Saggaff, Anis; Ngian, Shek Poi; Sulaiman, Arizu

    2017-11-01

    The construction industry has moved into a technology-driven era in which a transition is in progress from conventional methods to a more advanced and mechanised system known as the Industrialised Building System (IBS). However, the need to implement the IBS should be well understood by all construction players such as designers, architects, contractors, erectors, and construction workers. Therefore, there is a need to educate all these construction players, an effort that should be spearheaded by authorities such as the Construction Industry Development Board through enforcement of building by-laws as well as incentives for those who adopt the IBS in their construction. This paper reports on the economic aspects of using an interlocking hollow brick system in construction as an alternative method for the Industrialised Building System. The main objective is to address the economic aspects of using the interlocking block system in terms of time, cost, and utilization of manpower, and to present some experimental test results related to the Interlocking Hollow Brick System (IHBS). An example of the savings from the use of IHBS is presented by comparing the construction of a two-storey terrace house with a built-up area of about 200 square metres using the conventional construction method of typical reinforced concrete construction (RCC) against IHBS. The comparison shows that the implementation of IHBS can reduce construction time, cost, and utilization of manpower by up to 26.6% compared to the conventional method. Moreover, the construction time using IHBS can be reduced by up to 50% compared to conventional construction.

  7. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic data and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances the generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.

  8. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Walberg, Gerald D.

    1993-01-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was thereby identified which yields very attractive vehicle dry and gross weights.
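
    The main-effects logic behind such Taguchi studies can be sketched on a toy problem: vary several two-level factors simultaneously and rank them by their main effect on the response. The tiny full-factorial design, factor names, and response function below are hypothetical; a real eight-variable study would use an orthogonal array.

```python
# Main-effects analysis over a small two-level design: each factor's
# effect is the mean response at its high level minus the mean at its
# low level. Factor names and the response function are stand-ins.
import itertools
import numpy as np

factors = ["wing_area", "engine_scale", "fuel_fraction"]   # hypothetical
runs = np.array(list(itertools.product([-1, +1], repeat=len(factors))))

def response(run):
    """Stand-in for a vehicle-sizing evaluation (e.g., gross weight)."""
    a, e, f = run
    return 100 + 5 * a - 8 * e + 2 * f + 1.5 * a * e

y = np.array([response(r) for r in runs])

for j, name in enumerate(factors):
    effect = y[runs[:, j] == 1].mean() - y[runs[:, j] == -1].mean()
    print(f"{name:13s} main effect: {effect:+.1f}")
```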

  9. Multi-loop control of UPS inverter with a plug-in odd-harmonic repetitive controller.

    PubMed

    Razi, Reza; Karbasforooshan, Mohammad-Sadegh; Monfared, Mohammad

    2017-03-01

    This paper proposes an improved multi-loop control scheme for the single-phase uninterruptible power supply (UPS) inverter, using a plug-in odd-harmonic repetitive controller to regulate the output voltage. In the suggested control method, the output voltage and the filter capacitor current are used as the outer and inner loop feedback signals, respectively, and the instantaneous value of the reference voltage is fed forward to the output of the controller. Instead of conventional linear (proportional-integral/-resonant) and conventional repetitive controllers, a plug-in odd-harmonic repetitive controller is employed in the outer loop to regulate the output voltage; it occupies less memory space and offers faster tracking performance than the conventional one. Also, a simple proportional controller is used in the inner loop for active damping of possible resonances and for improving the transient performance. The feedforward of the converter reference voltage enhances the robust performance of the system and simplifies the system modelling and the controller design. A step-by-step design procedure is presented for the proposed controller, which guarantees stability of the system under worst-case scenarios. Simulation and experimental results validate the excellent steady-state and transient performance of the proposed control scheme and provide an exact comparison of the proposed method with the conventional multi-loop control method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Development and evaluation of a simplified superpave IDT testing system for implementation in mix design and control : final report, March 2008.

    DOT National Transportation Integrated Search

    2009-08-01

    Asphalt mixtures designed using modern conventional methods, whether Marshall or Superpave methodologies, fail to address the cracking performance of these mixtures. Research previously conducted at the University of Florida for the Florida Departmen...

  11. Intelligent control for PMSM based on online PSO considering parameters change

    NASA Astrophysics Data System (ADS)

    Song, Zhengqiang; Yang, Huiling

    2018-03-01

    A novel online particle swarm optimization (PSO) method is proposed to design the speed and current controllers of vector-controlled interior permanent magnet synchronous motor drives, considering stator resistance variation. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter is also taken into account, due to the dead-time, threshold, and voltage drop of the switching devices, in order to simulate the system under practical conditions. Speed and PI current controller gains are optimized online with PSO, and the fitness function is changed according to the system's dynamic and steady states. The proposed optimization algorithm is compared with the conventional PI control method under step speed changes and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics than the conventional PI controller design.
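
    For reference, the generic particle swarm optimization loop that such online schemes build on can be sketched as follows; the sphere objective and the PSO constants are illustrative, and this is not the paper's drive-specific online variant.

```python
# Minimal particle swarm optimization: each particle tracks its personal
# best, and the swarm tracks a global best that guides the velocities.
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # positions
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda z: float(np.sum(z**2)))     # sphere test function
print(best, val)
```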

  12. Design and Implementation of Viterbi Decoder Using VHDL

    NASA Astrophysics Data System (ADS)

    Thakur, Akash; Chattopadhyay, Manju K.

    2018-03-01

    A digital design of a Viterbi decoder for a rate-1/2 convolutional encoder with constraint length k = 3 is presented in this paper. The design is coded in VHDL, simulated, and synthesized using XILINX ISE 14.7. Synthesis results show a maximum frequency of operation for the design of 100.725 MHz. The memory requirement is lower than that of the conventional method.
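
    The decoding algorithm itself can be sketched in a few lines of Python. The generator polynomials 7 and 5 (octal) are a common textbook choice for a rate-1/2, k = 3 code and are an assumption here; the paper does not state its polynomials.

```python
# Hard-decision Viterbi decoding for a rate-1/2, K=3 convolutional code
# with (assumed) generators G1 = 111 and G2 = 101 (octal 7, 5).

def encode(bits, state=0):
    """state packs the two previous input bits: (m1 << 1) | m2."""
    out = []
    for b in bits:
        m1, m2 = (state >> 1) & 1, state & 1
        out += [b ^ m1 ^ m2, b ^ m2]      # G1 and G2 outputs
        state = ((b << 1) | m1) & 0b11
    return out

def viterbi(received):
    """Survivor-path search over the 4-state trellis (Hamming metric)."""
    n = len(received) // 2
    INF = 10**9
    metric = [0, INF, INF, INF]           # encoder starts in state 0
    paths = [[], [], [], []]
    for t in range(n):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] >= INF:
                continue
            m1, m2 = (s >> 1) & 1, s & 1
            for b in (0, 1):
                e0, e1 = b ^ m1 ^ m2, b ^ m2      # expected channel bits
                ns = ((b << 1) | m1) & 0b11       # next state
                cost = metric[s] + (e0 != r0) + (e1 != r1)
                if cost < new_metric[ns]:
                    new_metric[ns] = cost
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                  # inject a single channel bit error
print(viterbi(coded) == msg)   # True: the error is corrected
```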

  13. Spacecraft mass estimation, relationships and engine data: Task 1.1 of the lunar base systems study

    NASA Technical Reports Server (NTRS)

    1988-01-01

    A collection of scaling equations, weight statements, scaling factors, etc., useful for doing conceptual designs of spacecraft are given. Rules of thumb and methods of calculating quantities of interest are provided. Basic relationships for conventional, and several non-conventional, propulsion systems (nuclear, solar electric and solar thermal) are included. The equations and other data were taken from a number of sources and are not at all consistent with each other in level of detail or method, but provide useful references for early estimation purposes.

  14. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis, and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.

  15. Design and implement of mobile equipment management system based on QRcode

    NASA Astrophysics Data System (ADS)

    Yu, Runze; Duan, Xiaohui; Jiao, Bingli

    2017-08-01

    A mobile equipment management system based on QRcode is proposed for remote and convenient device management. Unlike conventional systems, the system here gives managers access to real-time information with smart phones. Compared with the conventional method, which can only be operated with specific devices, this lightweight and efficient telemanagement mode is conducive to asset management in multiple scenarios.

  16. Adaptive sampling in research on risk-related behaviors.

    PubMed

    Thompson, Steven K; Collins, Linda M

    2002-11-01

    This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.

  17. Fourier transform and particle swarm optimization based modified LQR algorithm for mitigation of vibrations using magnetorheological dampers

    NASA Astrophysics Data System (ADS)

    Kumar, Gaurav; Kumar, Ashok

    2017-11-01

    Structural control has gained significant attention in recent times. The standalone issue of power requirements during an earthquake has already been solved to a large extent by designing semi-active control systems using conventional linear quadratic control theory and many other intelligent control algorithms such as fuzzy controllers and artificial neural networks. In conventional linear-quadratic regulator (LQR) theory, the values of the design parameters are fixed when the controller is designed and cannot be subsequently altered. During an earthquake event, the response of the structure may increase or decrease, depending on the quasi-resonance occurring between the structure and the earthquake. In this case, it is essential to modify the design parameters of the conventional LQR controller to obtain the optimum control force to mitigate the earthquake-induced vibrations. A few studies have addressed this issue, but all of them required maintaining a database of earthquakes. To solve this problem and to find the optimized design parameters of the LQR controller in real time, a fast Fourier transform and particle swarm optimization based modified linear quadratic regulator method is presented here. This method comprises four different algorithms: particle swarm optimization (PSO), the fast Fourier transform (FFT), a clipped control algorithm, and the LQR. The FFT helps to obtain the dominant frequency for every time window. PSO finds the optimum gain matrix through the real-time update of the weighting matrix R, thereby dispensing with prior experimentation. The clipped control law is employed to match the magnetorheological (MR) damper force with the desired force given by the controller. The modified Bouc-Wen phenomenological model is used to capture the nonlinearities of the MR damper. The proposed method is assessed by simulation of a three-story structure with an MR damper at the ground floor level, subjected to three different near-fault historical earthquake time histories, and the outcomes are compared with those of the simple conventional LQR. The results establish that the proposed methodology is more effective than conventional LQR controllers in reducing inter-storey drift, relative displacement, and acceleration response.
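
    For background, the conventional LQR gain that the proposed FFT/PSO scheme re-tunes online is obtained from the algebraic Riccati equation, as in the sketch below. The single-degree-of-freedom structure and weighting matrices are arbitrary examples, not the three-story model from the study.

```python
# Conventional continuous-time LQR gain from the algebraic Riccati
# equation; the weighting matrix R is the quantity PSO would update.
import numpy as np
from scipy.linalg import solve_continuous_are

# Single-DOF structure: states [displacement, velocity], one actuator.
m, c, k = 1.0, 0.2, 10.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
Q = np.diag([100.0, 1.0])   # state weighting
R = np.array([[0.1]])       # control-effort weighting

P = solve_continuous_are(A, B, Q, R)   # Riccati equation solution
K = np.linalg.solve(R, B.T @ P)        # optimal gain: u = -K x
print("LQR gain:", K)
```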

  18. Composite turbine blade design options for Claude (open) cycle OTEC power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penney, T R

    1985-11-01

    Small-scale turbine rotors made from composites offer several technical advantages for a Claude (open) cycle ocean thermal energy conversion (OTEC) power system. Westinghouse Electric Corporation has designed a composite turbine rotor/disk using state-of-the-art analysis methods for large-scale (100-MWe) open-cycle OTEC applications. Near-term demonstrations using conventional low-pressure turbine blade shapes with composite material would establish the feasibility and credibility of the open-cycle OTEC power system. The application of composite blades to low-pressure turbomachinery could also improve reliability over conventional metal blades, which are affected by stress corrosion.

  19. Formal Methods, Design, and Collaborative Learning in the First Computer Science Course.

    ERIC Educational Resources Information Center

    Troeger, Douglas R.

    1995-01-01

    A new introductory computer science course at City College of New York builds on a foundation of logic to teach programming based on a "design idea," a strong departure from conventional programming courses. Reduced attrition and increased student and teacher enthusiasm have resulted. (MSE)

  20. A New Approach to Flood Protection Design and Riparian Management

    Treesearch

    Philip B. Williams; Mitchell L. Swanson

    1989-01-01

    Conventional engineering methods of flood control design focus narrowly on the efficient conveyance of water, with little regard for environmental resource planning and natural geomorphic processes. Consequently, flood control projects are often environmentally disastrous, expensive to maintain, and even inadequate to control floods. In addition, maintenance programs...

  1. High-speed engine/component performance assessment using exergy and thrust-based methods

    NASA Technical Reports Server (NTRS)

    Riggins, D. W.

    1996-01-01

    This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on energy (available work) and thrust potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional energy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all of these categories. Energy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of energy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based energy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.

  2. Early detection of materials degradation

    NASA Astrophysics Data System (ADS)

    Meyendorf, Norbert

    2017-02-01

    Lightweight components for transportation and aerospace applications are designed for an estimated lifecycle, taking expected mechanical and environmental loads into account. The main causes of catastrophic failure of components within the expected lifecycle are material inhomogeneities, such as pores and inclusions that act as origins for fatigue cracks, that have not been detected by NDE. However, material degradation under design or unexpected loading conditions or environmental impacts can accelerate crack initiation or growth. Conventional NDE methods are usually able to detect cracks that are formed at the end of the degradation process, but methods for early detection of fatigue, creep, and corrosion are still a matter of research. For conventional materials, ultrasonic, electromagnetic, and thermographic methods have been demonstrated as promising. Other approaches focus on surface damage, using optical methods, or on characterization of the residual surface stresses that can significantly affect the initiation of fatigue cracks. For conventional metallic materials, material models for the nucleation and propagation of damage have been successfully applied for several years. Material microstructure/property relations are well established, and the effect of loading conditions on component life can be simulated. For advanced materials, for example carbon matrix composites or ceramic matrix composites, the processes of nucleation and propagation of damage are still not fully understood. For these materials, NDE methods can not only be used for periodic inspections but can also contribute significantly to the material-science knowledge needed to understand and model the behavior of composite materials.

  3. Improved design of constrained model predictive tracking control for batch processes against unknown uncertainties.

    PubMed

    Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong

    2017-07-01

    In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with further additional tuning is first proposed. The subsequent controller design is then formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of freedom for tuning, so that improved tracking control can be acquired; this is important because uncertainties inevitably exist in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Food powders flowability characterization: theory, methods, and applications.

    PubMed

    Juliano, Pablo; Barbosa-Cánovas, Gustavo V

    2010-01-01

    Characterization of food powders flowability is required for predicting powder flow from hoppers in small-scale systems such as vending machines or at the industrial scale from storage silos or bins dispensing into powder mixing systems or packaging machines. This review covers conventional and new methods used to measure flowability in food powders. The method developed by Jenike (1964) for determining hopper outlet diameter and hopper angle has become a standard for the design of bins and is regarded as a standard method to characterize flowability. Moreover, there are a number of shear cells that can be used to determine failure properties defined by Jenike's theory. Other classic methods (compression, angle of repose) and nonconventional methods (Hall flowmeter, Johanson Indicizer, Hosokawa powder tester, tensile strength tester, powder rheometer), used mainly for the characterization of food powder cohesiveness, are described. The effect of some factors preventing flow, such as water content, temperature, time consolidation, particle composition and size distribution, is summarized for the characterization of specific food powders with conventional and other methods. Whereas time-consuming standard methods established for hopper design provide flow properties, there is yet little comparative evidence demonstrating that other rapid methods may provide similar flow prediction.
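
    One widely used shear-cell summary in this literature is Jenike's flow index, the ratio of major consolidation stress to unconfined yield strength; a sketch of the usual classification follows, with illustrative stress values.

```python
# Flowability classification via Jenike's flow index ffc = sigma_1 /
# sigma_c. The category thresholds follow the commonly cited Jenike
# classification; stress values below are illustrative only.
def flow_category(sigma_1, sigma_c):
    ffc = sigma_1 / sigma_c
    if ffc < 1:
        return ffc, "hardened"
    if ffc < 2:
        return ffc, "very cohesive"
    if ffc < 4:
        return ffc, "cohesive"
    if ffc < 10:
        return ffc, "easy-flowing"
    return ffc, "free-flowing"

print(flow_category(sigma_1=12.0, sigma_c=2.5))   # (4.8, 'easy-flowing')
```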

  5. Validated green high-performance liquid chromatographic methods for the determination of coformulated pharmaceuticals: a comparison with reported conventional methods.

    PubMed

    Elzanfaly, Eman S; Hegazy, Maha A; Saad, Samah S; Salem, Maissa Y; Abd El Fattah, Laila E

    2015-03-01

    The introduction of sustainable development concepts to analytical laboratories has recently gained interest, however, most conventional high-performance liquid chromatography methods do not consider either the effect of the used chemicals or the amount of produced waste on the environment. The aim of this work was to prove that conventional methods can be replaced by greener ones with the same analytical parameters. The suggested methods were designed so that they neither use nor produce harmful chemicals and produce minimum waste to be used in routine analysis without harming the environment. This was achieved by using green mobile phases and short run times. Four mixtures were chosen as models for this study; clidinium bromide/chlordiazepoxide hydrochloride, phenobarbitone/pipenzolate bromide, mebeverine hydrochloride/sulpiride, and chlorphenoxamine hydrochloride/caffeine/8-chlorotheophylline either in their bulk powder or in their dosage forms. The methods were validated with respect to linearity, precision, accuracy, system suitability, and robustness. The developed methods were compared to the reported conventional high-performance liquid chromatography methods regarding their greenness profile. The suggested methods were found to be greener and more time- and solvent-saving than the reported ones; hence they can be used for routine analysis of the studied mixtures without harming the environment. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. The design of wavefront coded imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Shun; Cen, Zhaofeng; Li, Xiaotong

    2016-10-01

    Wavefront coding is a new method to extend the depth of field that combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet system. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of similarly blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is independent of focus: it is nearly constant with misfocus and has no regions of zeros. All object information can therefore be recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to a conventional optical system, the wavefront coded imaging system obtains better quality images over a range of object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm, which are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while keeping higher optical power and resolution at the image plane.
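
    The defocus insensitivity can be demonstrated numerically with a standard Fourier-optics sketch: add a cubic phase term to the pupil and compare PSFs across defocus. The mask strength and defocus coefficients below are arbitrary, and this is not the exact mask optimized in the paper.

```python
# Fourier-optics sketch: PSF = |FFT(pupil * exp(i*phase))|^2, where the
# phase holds a cubic mask term alpha*(x^3 + y^3) plus a defocus term.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0                      # circular aperture

def psf(alpha, w20):
    """PSF with cubic mask strength alpha and defocus coefficient w20 (waves)."""
    phase = 2 * np.pi * (alpha * (X**3 + Y**3) + w20 * (X**2 + Y**2))
    field = pupil * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return p / p.sum()

for w20 in (0.0, 1.0, 2.0):
    plain = psf(alpha=0.0, w20=w20)
    coded = psf(alpha=15.0, w20=w20)
    # The plain peak collapses with defocus; the coded peak stays similar.
    print(f"defocus {w20}: peak plain={plain.max():.2e}, coded={coded.max():.2e}")
```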

  7. Unstructured Finite Volume Computational Thermo-Fluid Dynamic Method for Multi-Disciplinary Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    1998-01-01

    This paper describes a finite volume computational thermo-fluid dynamics method to solve for Navier-Stokes equations in conjunction with energy equation and thermodynamic equation of state in an unstructured coordinate system. The system of equations have been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreements have been obtained in each case and the method has been found to be significantly faster than conventional Computational Fluid Dynamic(CFD) methods and therefore has the potential for implementation in Multi-Disciplinary analysis and design optimization in fluid and thermal systems. The paper also describes an algorithm of design optimization based on Newton-Raphson method which has been recently tested in a turbomachinery application.
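
    A minimal multivariate Newton-Raphson sketch of the kind of simultaneous solve described above is given below, applied to a small algebraic system rather than the Navier-Stokes/energy/state-equation set.

```python
# Multivariate Newton-Raphson: at each step solve J(x) dx = -F(x) and
# update x until the correction is small.
import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton_raphson(F, J, [2.0, 0.0]))
```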

  8. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods suffer from initial-condition dependence and the risk of falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning techniques. We applied the new optimization method to a hang glider design, in which both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.

  9. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (downscaled with the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios, we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show considerable differences in the extreme values for a given percentile between the conventional MC and the non-stationary MC approach. Design standards based on our non-stationary approach also depend directly on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
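
    The non-stationary Monte Carlo step can be sketched directly: draw each year of a design lifespan from that year's own GEV distribution rather than from a single stationary fit. The drifting location parameter and the other GEV parameter values below are illustrative assumptions.

```python
# Non-stationary Monte Carlo: each realization of a design lifespan draws
# one annual maximum per year from that year's own GEV distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lifespan, n_realizations = 50, 10000

# Hypothetical GEV parameters per year (location drifts upward with climate).
shape, scale = -0.1, 10.0
locs = 100.0 + 0.3 * np.arange(lifespan)

annual_max = stats.genextreme.rvs(shape, loc=locs, scale=scale,
                                  size=(n_realizations, lifespan),
                                  random_state=rng)
lifetime_max = annual_max.max(axis=1)
print("non-stationary 99th-percentile lifetime maximum:",
      round(np.percentile(lifetime_max, 99), 1))
```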

  10. Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives.

    PubMed

    Byrne, Lydia; Angus, Daniel; Wiles, Janet

    2016-01-01

    While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.

  11. Design and fabrication of a reflection far ultraviolet polarizer and retarder

    NASA Technical Reports Server (NTRS)

    Kim, Jongmin; Zukic, Muamer; Wilson, Michele M.; Torr, Douglas G.

    1993-01-01

    New methods have been developed for the design of a far ultraviolet multilayer reflection polarizer and retarder. A MgF2/Al/MgF2 three-layer structure deposited on a thick opaque Al film (substrate) is used for the design of polarizers and retarders. The induced transmission and absorption method is used for the design of the polarizer, and a layer-by-layer electric field calculation method is used for the design of a quarter-wave retarder. To fabricate these designs in a conventional high vacuum chamber, the oxidation of the Al layers must be minimized and the oxidized layer characterized. X-ray photoelectron spectroscopy is used to investigate the amount and profile of oxidation. Depth profiling results and a seven-layer oxidation model are presented.

  12. Modified Brown-Forsythe Procedure for Testing Interaction Effects in Split-Plot Designs

    ERIC Educational Resources Information Center

    Vallejo, Guillermo; Ato, Manuel

    2006-01-01

    The standard univariate and multivariate methods are conventionally used to analyze continuous data from groups-by-trials repeated measures designs, in spite of being extremely sensitive to departures from the multisample sphericity assumption when group sizes are unequal. However, in the last 10 years several authors have offered alternative…

  13. Infrared beak treatment method compared with conventional hot blade amputation in laying hens

    USDA-ARS?s Scientific Manuscript database

    Infrared lasers have been widely used for noninvasive surgical applications in human medicine and their results are reliable, predictable and reproducible. Infrared lasers have recently been designed with the expressed purpose of providing a less painful, more precise beak trimming method compared w...

  14. Extending the Capture Volume of an Iris Recognition System Using Wavefront Coding and Super-Resolution.

    PubMed

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen

    2016-12-01

    Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by a hardware-only or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.

  15. APPLICATION OF THE ELECTROMAGNETIC BOREHOLE FLOWMETER

    EPA Science Inventory

    Spatial variability of saturated zone hydraulic properties has important implications with regard to sampling wells for water quality parameters, use of conventional methods to estimate transmissivity, and remedial system design. Characterization of subsurface heterogeneity requ...

  16. Spiral Gradient Coil Design for Use in Cylindrical MRI Systems.

    PubMed

    Wang, Yaohui; Xin, Xuegang; Liu, Feng; Crozier, Stuart

    2018-04-01

    In magnetic resonance imaging, the stream function based method is commonly used in the design of gradient coils. However, this method can be prone to errors associated with the discretization of continuous current density and wire connections. In this paper, we propose a novel gradient coil design scheme that works directly in the wire space, avoiding the systematic errors that may appear in stream function approaches. Specifically, the gradient coil pattern is described with dedicated spiral functions adjusted to allow the coil to produce the required field gradients in the imaging area, minimal stray field, and other engineering terms. The performance of a designed spiral gradient coil was compared with its stream-function counterpart. The numerical evaluation shows that, when compared with the conventional solution, the inductance and resistance were reduced by 20.9% and 10.5%, respectively, and the overall coil performance (evaluated by the figure of merit, FoM) was improved by up to 26.5% for the x-gradient coil design; for the z-gradient coil design, the inductance and resistance were reduced by 15.1% and 6.7%, respectively, and the FoM was increased by 17.7%. In addition, by directly controlling the wire distributions, the spiral gradient coil design was much sparser than conventional coils.
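
    As a flavor of working directly in wire space, the sketch below generates a coil contour from a simple Archimedean spiral with a handful of control parameters; the paper's dedicated spiral functions, and the engineering terms they are optimized against, are of course richer than this.

      import numpy as np

      def spiral_contour(r0, pitch, turns, n_pts=2000):
          # Archimedean spiral: radius grows linearly with angle. r0, pitch
          # and turns play the role of the adjustable control parameters.
          theta = np.linspace(0.0, 2 * np.pi * turns, n_pts)
          r = r0 + pitch * theta / (2 * np.pi)
          return r * np.cos(theta), r * np.sin(theta)

      # One continuous wire path; an optimizer would tune (r0, pitch, turns)
      # against field-gradient, stray-field and inductance targets.
      x, y = spiral_contour(r0=0.02, pitch=0.01, turns=8)
      print(f"{len(x)} wire points, outer radius {np.hypot(x, y).max():.3f} m")

    Because the wire path is explicit from the start, no stream-function discretization or loop-connection step is needed, which is the source of the error reduction claimed above.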

  17. MRAC Revisited: Guaranteed Performance with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmaje

    2010-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that this approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known), not only asymptotically but also in the transient, by a proper selection of the error feedback gain. The method prevents the generation of high frequency oscillations that are unavoidable in conventional MRAC systems at large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
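
    A minimal scalar simulation of the idea, assuming a first-order uncertain plant and illustrative gains: the tracking error is fed back into the reference model through a gain L, and setting L = 0 recovers the conventional open-loop reference model.

      import numpy as np

      a_true, b = 2.0, 1.0    # uncertain plant: xdot = a*x + b*u (a unknown)
      am, bm = 4.0, 4.0       # reference model: xm_dot = -am*xm + bm*r
      L = 10.0                # error feedback gain into the reference model
      gamma = 50.0            # adaptation rate
      dt, T = 1e-3, 5.0
      x = xm = kx = kr = 0.0  # state and adaptive gains
      for k in range(int(T / dt)):
          r = 1.0 if k * dt < 2.5 else -1.0    # step reference command
          e = x - xm
          u = kx * x + kr * r                  # adaptive control law
          # Modified reference model: tracking error fed back via L*e
          xm += dt * (-am * xm + bm * r + L * e)
          x += dt * (a_true * x + b * u)
          kx += dt * (-gamma * e * x)          # gradient-type adaptive laws
          kr += dt * (-gamma * e * r)          # (sign(b) = +1 assumed)
      print(f"final |x - xm| = {abs(x - xm):.2e}, kx = {kx:.2f}, kr = {kr:.2f}")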

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkler, Jon; Booten, Chuck

    Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads, leading to smaller air conditioners and shorter cooling seasons. However, due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it is becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air Conditioning Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% dew-point (DP) design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.

  19. Robust recognition of degraded machine-printed characters using complementary similarity measure and error-correction learning

    NASA Astrophysics Data System (ADS)

    Hagita, Norihiro; Sawaki, Minako

    1995-03-01

    Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains, and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. The method is based on the binary image feature method, using binary images themselves as features. A new similarity measure, called the complementary similarity measure, is used as the discriminant function; it compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates: over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.
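
    The measure itself is a simple function of the 2x2 match/mismatch counts between two binary patterns. The sketch below assumes the commonly cited form of the complementary similarity measure; consult the paper for the authors' exact definition.

      import numpy as np

      def complementary_similarity(f, t):
          # f: binary feature vector of the input image; t: dictionary template
          f, t = np.asarray(f, bool), np.asarray(t, bool)
          a = np.sum(f & t)        # both pixels black
          b = np.sum(f & ~t)       # input black, template white
          c = np.sum(~f & t)       # input white, template black
          d = np.sum(~f & ~t)      # both pixels white
          # Assumed form: similarity (a*d) traded against dissimilarity (b*c)
          return (a * d - b * c) / np.sqrt((a + c) * (b + d))

      # Classification picks the dictionary template with the largest score
      template = np.array([1, 1, 0, 0, 1, 0, 1, 0])
      noisy = np.array([1, 1, 0, 1, 1, 0, 1, 0])
      print(f"score = {complementary_similarity(noisy, template):.3f}")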

  20. A spiral, bi-planar gradient coil design for open magnetic resonance imaging.

    PubMed

    Zhang, Peng; Shi, Yikai; Wang, Wendong; Wang, Yaohui

    2018-01-01

    To design planar gradient coils for MRI applications without the discretization of continuous current density and without loop-to-loop connection errors. In the new design method, the coil current is represented using a spiral curve function described by just a few control parameters. Using a proper parametric equation set, an ensemble of spiral contours is reshaped to satisfy the coil design requirements, such as gradient linearity, inductance and shielding. In the given case study, by using the spiral coil design, the magnetic field errors in the imaging area were reduced from 5.19% (non-spiral design) to 4.47% (spiral design) for the transverse gradient coils; for the longitudinal gradient coil design, the magnetic field errors were reduced to 5.02% (spiral design). The numerical evaluation shows that, compared with a conventional wire loop design, the inductance and resistance of the spiral x-gradient coil were reduced by 11.55% and 8.12%, respectively. In summary, this novel spiral gradient coil design for biplanar MRI systems offers better magnetic field gradients and smoother contours than its conventional connected counterpart, which improves manufacturability.

  1. The Ultimate Pile Bearing Capacity from Conventional and Spectral Analysis of Surface Wave (SASW) Measurements

    NASA Astrophysics Data System (ADS)

    Faizah Bawadi, Nor; Anuar, Shamilah; Rahim, Mustaqqim A.; Mansor, A. Faizal

    2018-03-01

    Conventional and seismic methods for determining the ultimate pile bearing capacity were proposed and compared. The Spectral Analysis of Surface Waves (SASW) method, a non-destructive seismic technique that requires no drilling or sampling of soils, was used to determine the shear wave velocity (Vs) and damping (D) profiles of the soil. Soil strength was found to be directly proportional to Vs, and its value has been successfully applied to obtain shallow bearing capacity empirically. A method is proposed in this study to determine the pile bearing capacity using Vs and D measurements for the design of piles, and also as an alternative method to verify the bearing capacity obtained from other conventional methods of evaluation. The objectives of this study are to determine the Vs and D profiles from the frequency response data of SASW measurements and to compare the pile bearing capacities obtained from this method with those from conventional methods. All SASW test arrays were conducted near the borehole and the location of the conventional pile load tests. In obtaining skin and end bearing pile resistance, the Hardin and Drnevich equation was used, with reference strains obtained from the method proposed by Abbiss. Back analysis results of pile bearing capacities from SASW were found to be 18981 kN and 4947 kN, compared to 18014 kN and 4633 kN from IPLT, differences of 5% and 6% for the Damansara and Kuala Lumpur test sites, respectively. The results of this study indicate that the proposed seismic method has the potential to be used in estimating pile bearing capacity.

  2. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on the initial conditions and risk falling into local optima. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) over a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning techniques. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical results showed that the method has sufficient performance.
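
    A toy version of "solution as a stochastic average": sample candidates around the current estimate, weight them by a Boltzmann factor, and take the weighted mean as the new estimate. This captures the flavor of the approach under invented parameters; it is not the authors' exact algorithm.

      import numpy as np

      def stochastic_average_minimize(f, x0, sigma=1.0, temp=1.0,
                                      iters=50, n_samples=500, seed=0):
          """Estimate the minimizer as a Boltzmann-weighted sample average."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, float)
          for _ in range(iters):
              cand = x + sigma * rng.standard_normal((n_samples, x.size))
              w = np.exp(-np.apply_along_axis(f, 1, cand) / temp)
              x = (w[:, None] * cand).sum(axis=0) / w.sum()  # expected value
              sigma *= 0.95                                  # sharpen search
              temp *= 0.95
          return x

      # Multimodal test function with its global minimum near (1, -2)
      f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2 + np.sin(5 * v[0]) ** 2
      print(stochastic_average_minimize(f, x0=[5.0, 5.0]))

    Because the update is an average over a population of random samples rather than a single downhill step, the estimate is far less sensitive to the starting point than a deterministic gradient search.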

  3. Hydrocarbonaceous material processing methods and apparatus

    DOEpatents

    Brecher, Lee E [Laramie, WY

    2011-07-12

    Methods and apparatus are disclosed for possibly producing pipeline-ready heavy oil from substantially non-pumpable oil feeds. The methods and apparatus may be designed to produce such pipeline-ready heavy oils in the production field. Such methods and apparatus may involve thermal soaking of liquid hydrocarbonaceous inputs in thermal environments (2) to generate, through chemical reaction, an increased distillate amount as compared with conventional boiling technologies.

  4. Air data system optimization using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of the air data parameters, namely, angle of attack, sideslip angle, and freestream dynamic pressure. The optimization method is applied to the design of the Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to those obtained by a conventional gradient search method.
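
    A minimal sketch of the idea under invented geometry: candidate port subsets are scored by the noise amplification of the least-squares parameter estimate, and a small genetic algorithm searches the subset space. The sensitivity matrix and GA settings here are illustrative assumptions, not the PD/ADS model.

      import numpy as np

      rng = np.random.default_rng(1)
      N, k = 16, 6                        # candidate ports, ports to select
      angles = np.linspace(0.0, np.pi, N)
      # Hypothetical linear sensitivities of port pressures to the three air
      # data parameters (angle of attack, sideslip, dynamic pressure).
      H_full = np.column_stack([np.sin(angles), np.cos(2 * angles), np.ones(N)])

      def fitness(mask):
          H = H_full[mask]
          if np.linalg.matrix_rank(H) < 3:
              return -np.inf
          # Noise amplification of the least-squares estimate (smaller=better)
          return -np.trace(np.linalg.inv(H.T @ H))

      def random_mask():
          m = np.zeros(N, bool)
          m[rng.choice(N, k, replace=False)] = True
          return m

      pop = [random_mask() for _ in range(40)]
      for gen in range(60):
          pop.sort(key=fitness, reverse=True)
          parents, children = pop[:20], []
          for _ in range(20):
              i, j = rng.choice(20, 2, replace=False)
              union = np.flatnonzero(parents[i] | parents[j])   # crossover
              child = np.zeros(N, bool)
              child[rng.choice(union, k, replace=False)] = True
              if rng.random() < 0.2:                            # mutation
                  child[rng.choice(np.flatnonzero(child))] = False
                  child[rng.choice(np.flatnonzero(~child))] = True
              children.append(child)
          pop = parents + children
      print("selected ports:", np.flatnonzero(max(pop, key=fitness)))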

  5. Broadband metamaterial lens antennas with special properties by controlling both refractive-index distribution and feed directivity

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Shi, Chuan Bo; Chen, Tian Yi; Qing Qi, Mei; Li, Yun Bo; Cui, Tie Jun

    2018-04-01

    A new method is proposed to design gradient refractive-index metamaterial lens antennas by optimizing both the refractive-index distribution of the lens and the feed directivity. Compared with conventional design methods, source optimization provides a new degree of freedom to control the aperture fields effectively. To demonstrate this method, two lenses with special properties are designed, to emit high-efficiency plane waves and fan-shaped beams, respectively. Both lenses show good performance over a wide frequency band from 12 to 18 GHz, verifying the validity of the proposed method. The plane-wave emitting lens realized a high aperture efficiency of 75%, and the fan-beam lens achieved a high gain of 15 dB over the broad bandwidth. The experimental results are in good agreement with the design targets and full-wave simulations.

  6. Preparation and characterization of progesterone dispersions using supercritical carbon dioxide.

    PubMed

    Falconer, James R; Wen, Jingyuan; Zargar-Shoshtari, Sara; Chen, John J; Farid, Mohammed; Tallon, Stephen J; Alany, Raid G

    2014-04-01

    Supercritical fluid methods offer an alternative to conventional mixing methods, particularly for heat sensitive drugs and where an organic solvent is undesirable. The aim was to design, develop and construct a unit for the particles from gas-saturated suspension/solution (PGSS) method and to form endogenous progesterone (PGN) dispersion systems using SC-CO2. The PGN dispersions were manufactured using three selected excipients: polyethylene glycol (PEG) 400/4000 (50:50), Gelucire 44/14 and D-α-tocopheryl PEG 1000 succinate (TPGS). Semisolid dispersions of PGN prepared by the PGSS method were compared with those prepared by the conventional methods: comelting (CM), cosolvent (CS) and physical mixing (PM). The dispersion systems were characterized by Raman and Fourier transform infrared (FTIR) spectroscopies, X-ray powder diffraction (XRPD), scanning electron microscopy (SEM), PGN recovery, uniformity and in vitro dissolution, analyzed by high-performance liquid chromatography (HPLC). Raman spectra revealed no changes in the crystalline structure of PGN treated with SC-CO2 compared to that of untreated PGN. XRPD and FTIR showed the presence of peaks and bands for PGN, confirming that PGN was incorporated well with each individual excipient. All PGN dispersions prepared by the PGSS method showed improved PGN dissolution rates compared to those prepared by the conventional methods and to untreated PGN after 60 min (p < 0.05). The novel PGN dispersions prepared by the PGSS method offer great potential to enhance the PGN dissolution rate, reduce preparation time and form stable crystalline dispersion systems relative to those prepared by conventional methods.

  7. 3D Displays And User Interface Design For A Radiation Therapy Treatment Planning CAD Tool

    NASA Astrophysics Data System (ADS)

    Mosher, Charles E.; Sherouse, George W.; Chaney, Edward L.; Rosenman, Julian G.

    1988-06-01

    The long term goal of the project described in this paper is to improve local tumor control through the use of computer-aided treatment design methods that can result in the selection of better treatment plans than conventional planning methods. To this end, a CAD tool for the design of radiation treatment beams is described. Crucial to the effectiveness of this tool are high quality 3D display techniques. We have found that 2D and 3D display methods dramatically improve the comprehension of the complex spatial relationships between patient anatomy, radiation beams, and dose distributions. In order to take full advantage of these displays, an intuitive and highly interactive user interface was created. If the system is to be used by physicians unfamiliar with computer systems, it is essential to incorporate a user interface that allows the user to navigate through each step of the design process in a manner similar to what they are used to. Compared with conventional systems, we believe our display and CAD tools will allow the radiotherapist to achieve more accurate beam targeting, leading to a better radiation dose configuration in the tumor volume. This would result in a reduction of the dose to normal tissue.

  8. Design of high-linear CMOS circuit using a constant transconductance method for gamma-ray spectroscopy system

    NASA Astrophysics Data System (ADS)

    Jung, I. I.; Lee, J. H.; Lee, C. S.; Choi, Y.-W.

    2011-02-01

    We propose a novel circuit to be applied to the front-end integrated circuits of gamma-ray spectroscopy systems. Our circuit is designed as a type of current conveyor (ICON) employing a constant-gm (transconductance) method, which can significantly improve the linearity of the amplified signals by using a large time constant and the time-invariant characteristics of the amplifier. The constant-gm behavior is obtained by a feedback control that keeps the transconductance of the input transistor constant. To verify the performance of the proposed circuit, the time constant variations with channel resistance are simulated with the TSMC 0.18 μm transistor parameters using HSPICE and then compared with those of a conventional ICON. As a result, the proposed ICON shows only 0.02% output linearity variation and 0.19% time constant variation for input amplitudes up to 100 mV. These values are significantly smaller than a conventional ICON's 1.39% and 19.43%, respectively, under the same conditions.

  9. Launch vehicle systems design analysis

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Verderaime, V.

    1993-01-01

    Current launch vehicle design emphasis is on low life-cycle cost. This paper applies total quality management (TQM) principles to a conventional systems design analysis process to provide low-cost, high-reliability designs. Suggested TQM techniques include Steward's systems information flow matrix method, the quality leverage principle, quality through robustness and function deployment, Pareto's principle, Pugh's selection and enhancement criteria, and other design process procedures. TQM quality performance at least cost can be realized through competent concurrent engineering teams and the brilliance of their technical leadership.

  10. A method for the design of transonic flexible wings

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1990-01-01

    Methodology was developed for designing airfoils and wings at transonic speeds which includes a technique that can account for static aeroelastic deflections. This procedure is capable of designing either supercritical or more conventional airfoil sections. Methods for including viscous effects are also illustrated and are shown to give accurate results. The methodology developed is an interactive system containing three major parts. A design module was developed which modifies airfoil sections to achieve a desired pressure distribution. This design module works in conjunction with an aerodynamic analysis module, which for this study is a small perturbation transonic flow code. Additionally, an aeroelastic module is included which determines the wing deformation due to the calculated aerodynamic loads. Because of the modular nature of the method, it can be easily coupled with any aerodynamic analysis code.

  11. Preliminary design optimization of joined-wing aircraft

    NASA Technical Reports Server (NTRS)

    Gallman, John W.; Kroo, Ilan M.; Smith, Stephen C.

    1990-01-01

    The joined wing is an innovative aircraft configuration that has its tail connected to the wing, forming a diamond shape in both the plan and front views. This geometric arrangement utilizes the tail both for pitch control and as a structural support for the wing. Several researchers have studied this configuration and predicted significant reductions in trimmed drag or structural weight when compared with a conventional T-tail configuration. Kroo et al. compared the cruise drag of joined wings with conventional designs of the same lifting-surface area and structural weight. This study showed an 11 percent reduction in cruise drag for the lifting system of a joined wing. Although this reduction in cruise drag is significant, a complete design study is needed before any economic savings can be claimed for a joined-wing transport. Mission constraints, such as runway length, could increase the wing area and eliminate potential drag savings. Since other design codes do not accurately represent the interaction between structures and aerodynamics for joined wings, we developed a new design code for this study. The aerodynamic and structural analyses in this study are significantly more sophisticated than those used in most conventional design codes. This sophistication was needed to predict the aerodynamic interference between the wing and tail and the stresses in the truss-like structure. This paper describes these analysis methods, discusses some problems encountered when applying the numerical optimizer NPSOL, and compares optimum joined wings with conventional aircraft on the basis of cruise drag, lifting surface weight, and direct operating cost (DOC).

  12. Comparison of Conventional and Computer-Aided Drafting Methods from the View of Time and Drafting Quality

    ERIC Educational Resources Information Center

    Ozkan, Aysen; Yildirim, Kemal

    2016-01-01

    Problem Statement: Drafting course is essential for students in the design disciplines for becoming more organized and for complying with standards in the educational system. Drafting knowledge is crucial, both for comprehension of the issues and for the implementation phase. In any design project, drafting performance and success are as important…

  13. New layer-based imaging and rapid prototyping techniques for computer-aided design and manufacture of custom dental restoration.

    PubMed

    Lee, M-Y; Chang, C-C; Ku, Y C

    2008-01-01

    Fixed dental restoration by conventional methods relies greatly on the skill and experience of the dental technician. The quality and accuracy of the final product depend mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT), was designed in-house and proposed for the design of custom dental restorations. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT-scanned digital information. A force feedback sculptor (FreeForm system, SensAble Technologies, Inc., Cambridge, MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating the proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.

  14. Impact of virtual microscopy with conventional microscopy on student learning in dental histology.

    PubMed

    Hande, Alka Harish; Lohe, Vidya K; Chaudhary, Minal S; Gawande, Madhuri N; Patil, Swati K; Zade, Prajakta R

    2017-01-01

    In dental histology, the assimilation of histological features of different dental hard and soft tissues is done by conventional microscopy. This traditional method of learning prevents the students from screening the entire slide and change of magnification. To address these drawbacks, modification in conventional microscopy has evolved and become motivation for changing the learning tool. Virtual microscopy is the technique in which there is complete digitization of the microscopic glass slide, which can be analyzed on a computer. This research is designed to evaluate the effectiveness of virtual microscopy with conventional microscopy on student learning in dental histology. A cohort of 105 students were included and randomized into three groups: A, B, and C. Group A students studied the microscopic features of oral histologic lesions by conventional microscopy, Group B by virtual microscopy, and Group C by both conventional and virtual microscopy. The students' understanding of the subject was evaluated by a prepared questionnaire. The effectiveness of the study designs on knowledge gains and satisfaction levels was assessed by statistical assessment of differences in mean test scores. The difference in score between Groups A, B, and C at pre- and post-test was highly significant. This enhanced understanding of the subject may be due to benefits of using virtual microscopy in teaching histology. The augmentation of conventional microscopy with virtual microscopy shows enhancement of the understanding of the subject as compared to the use of conventional microscopy and virtual microscopy alone.

  15. Development of 3D pseudo pin-by-pin calculation methodology in ANC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Mayhue, L.; Huria, H.

    2012-07-01

    Advanced core and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strongly heterogeneous fuel loading, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature of the AP1000® plant, Westinghouse's next-generation Pressurized Water Reactor (PWR) design. This design improves operational flexibility and efficiency but significantly challenges conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assembly types with significantly different neutron spectra causes a strong interaction between assemblies that is not fully captured by the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, is discussed in the paper.

  16. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program together with the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that the calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented here is IFAD. Feasibility of the method is shown through the analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the numerical-accuracy uncertainty associated with the selection of a finite difference perturbation.
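
    For a self-adjoint functional like compliance, the adjoint vector coincides with the displacement, so the sensitivities really are pure postprocessing of a single solved system. A minimal sketch on a two-spring chain with invented dimensions, checked against the finite difference whose perturbation-size uncertainty the abstract alludes to:

      import numpy as np

      # Two-spring chain, fixed at the left end; design variables are areas A.
      E, L = 70e9, 1.0
      f = np.array([0.0, 1000.0])             # load at the free end

      def stiffness(A):
          k = E * A / L                       # element stiffnesses
          return np.array([[k[0] + k[1], -k[1]],
                           [-k[1],        k[1]]])

      A = np.array([1e-4, 2e-4])
      u = np.linalg.solve(stiffness(A), f)    # one structural analysis
      C = f @ u                               # compliance functional

      # Adjoint method: for compliance the adjoint vector equals u, so each
      # sensitivity is postprocessing only: dC/dA_i = -u^T (dK/dA_i) u
      dK = [np.array([[E / L, 0.0], [0.0, 0.0]]),
            np.array([[E / L, -E / L], [-E / L, E / L]])]
      grad_adj = np.array([-u @ dKi @ u for dKi in dK])

      # Finite-difference check
      eps = 1e-9
      grad_fd = np.array([
          (f @ np.linalg.solve(stiffness(A + eps * np.eye(2)[i]), f) - C) / eps
          for i in range(2)])
      print("adjoint:", grad_adj, "finite difference:", grad_fd)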

  17. An In-Depth Review on Direct Additive Manufacturing of Metals

    NASA Astrophysics Data System (ADS)

    Azam, Farooq I.; Rani, Ahmad Majdi Abdul; Altaf, Khurram; Rao, T. V. V. L. N.; Aimi Zaharin, Haizum

    2018-03-01

    Additive manufacturing (AM), also known as 3D printing, is a revolutionary manufacturing technique which has been developing rapidly over the last 30 years. The evolution of this precision manufacturing process from rapid prototyping to ready-to-use parts has significantly alleviated manufacturing constraints and dramatically widened design freedom. AM is a non-conventional manufacturing technique which uses 3D CAD model data to build parts by adding one material layer at a time, rather than removing material, and it fulfills the demand for parts with complex geometric shapes, good dimensional accuracy, and easy assembly. Additive manufacturing of metals has become an area of extensive research, progressing towards the production of final products and replacing conventional manufacturing methods. This paper provides an insight into the available metal additive manufacturing technologies that can be used to produce end-user products without conventional manufacturing methods. The paper also includes a comparison of the mechanical and physical properties of parts produced by AM with those of parts manufactured using conventional processes.

  18. Clinical effect of individualized parenteral nutrition vs conventional method in patients undergoing autologous hematopoietic SCT.

    PubMed

    Tavakoli-Ardakani, M; Neman, B; Mehdizadeh, M; Hajifathali, A; Salamzadeh, J; Tabarraee, M

    2013-07-01

    Malnutrition in patients undergoing hematopoietic SCT is a known risk factor for adverse effects and is directly or indirectly responsible for excess mortality and morbidity. We designed the present study to evaluate the effects of individualized parenteral nutrition (PN) and to compare this method with conventional PN. Individualized PN based on the Harris-Benedict equation was administered to 30 patients after hematopoietic SCT and was compared with an age-, gender- and disease-matched group of patients who underwent hematopoietic SCT with conventional PN. The two groups were compared on clinical, hematological, and nutritional outcomes. Duration of hospital stay (P<0.0001), infection (P=0.01), time to platelet engraftment (P=0.02), units of packed cell transfusion (P=0.006) and decrease in body weight (P=0.004) all showed significant differences between the two groups. In conclusion, the use of individualized PN appears more beneficial than conventional PN.
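
    For reference, a sketch of the basal energy expenditure from the Harris-Benedict equation using the classic 1919 coefficients; the study may have used a revised form, and stress/activity factors would then scale this basal value when individualizing the PN prescription.

      def harris_benedict_bee(sex, weight_kg, height_cm, age_yr):
          """Basal energy expenditure (kcal/day), original coefficients."""
          if sex == "male":
              return (66.47 + 13.75 * weight_kg + 5.003 * height_cm
                      - 6.755 * age_yr)
          return (655.1 + 9.563 * weight_kg + 1.850 * height_cm
                  - 4.676 * age_yr)

      # Example: a 70 kg, 175 cm, 40-year-old male patient
      print(f"{harris_benedict_bee('male', 70, 175, 40):.0f} kcal/day")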

  19. The Effects of Cognitive Conflict Management on Cognitive Development and Science Achievement

    ERIC Educational Resources Information Center

    Budiman, Zainol Badli; Halim, Lilia; Mohd Meerah, Subahan; Osman, Kamisah

    2014-01-01

    Three teaching methods were compared in this study, namely a Cognitive Conflict Management Module (CCM) that is infused into Cognitive Acceleration through Science Education (CASE), (Module A) CASE without CCM (Module B) and a conventional teaching method. This study employed a pre- and post-test quasi-experimental design using non-equivalent…

  20. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and easy implementation without equipment updates. Many MPPT methods with a fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional Incremental Conductance method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are presented in several aspects. A comparative study between the proposed variable step size and the fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy.
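
    The sketch below gives one plausible form of the variable-step logic: the duty cycle update is scaled by |dP/dV|, large far from the maximum power point (MPP) and small near it. The scaling constant and the direction conventions (a converter where raising the duty cycle lowers the PV voltage) are illustrative assumptions, not the paper's exact implementation.

      def inc_cond_step(v, i, v_prev, i_prev, d_prev,
                        n=0.05, d_min=0.1, d_max=0.9):
          """One variable-step Incremental Conductance MPPT update."""
          dv, di = v - v_prev, i - i_prev
          dp = v * i - v_prev * i_prev
          # Variable step: proportional to |dP/dV| (fixed fallback if dv == 0)
          step = n * abs(dp / dv) if dv != 0 else n
          if dv == 0:
              if di == 0:
                  d = d_prev                    # no change: operating at MPP
              else:
                  d = d_prev - step if di > 0 else d_prev + step
          elif di / dv == -i / v:
              d = d_prev                        # dP/dV = 0: exactly at MPP
          elif di / dv > -i / v:
              d = d_prev - step                 # left of MPP: raise voltage
          else:
              d = d_prev + step                 # right of MPP: lower voltage
          return min(max(d, d_min), d_max)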

  1. An adaptive two-stage dose-response design method for establishing proof of concept.

    PubMed

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design in which a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach to a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship, or proof of concept (PoC), via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power for our design method compared to conventional and fixed designs.
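
    For intuition, the sketch below pools two stage-wise one-sided p-values with the standard inverse-normal combination rule; this is a stand-in for illustration, since the paper itself combines stages through a conditional error function.

      from scipy.stats import norm

      def combine_stages(p1, p2, w1=0.5):
          """Inverse-normal combination of stage-wise one-sided p-values."""
          z = (w1 ** 0.5) * norm.isf(p1) + ((1 - w1) ** 0.5) * norm.isf(p2)
          return norm.sf(z)       # combined p-value for the global PoC test

      # Two moderately significant stages pool into stronger evidence
      print(f"combined p = {combine_stages(0.04, 0.08):.4f}")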

  2. User-composable Electronic Health Record Improves Efficiency of Clinician Data Viewing for Patient Case Appraisal: A Mixed-Methods Study

    PubMed Central

    Senathirajah, Yalini; Kaufman, David; Bakken, Suzanne

    2016-01-01

    Background: Challenges in the design of electronic health records (EHRs) include designing usable systems that must meet the complex, rapidly changing, and high-stakes information needs of clinicians. The ability to move and assemble elements together on the same page has significant human-computer interaction (HCI) and efficiency advantages, and can mitigate the problems of negotiating multiple fixed screens and the associated cognitive burdens. Objective: We compare MedWISE, a novel EHR that supports user-composable displays, with a conventional EHR in terms of the number of repeat views of data elements during patient case appraisal. Design and Methods: The study used mixed methods to examine clinical data viewing in four patient cases, comparing an experimental user-composable EHR with a conventional EHR for case appraisal. Eleven clinicians used the user-composable EHR in a case appraisal task in a laboratory setting; this was compared with log file analysis of the same patient cases in the conventional EHR. We investigated the number of repeat views of the same clinical information within a session across these two contexts and compared them using Fisher's exact test. Results: There was a significant difference (p<.0001) in the proportion of cases with repeat data element viewing between the user-composable EHR (14.6 percent) and the conventional EHR (72.6 percent). Discussion and Conclusion: Users of conventional EHRs repeatedly viewed the same information elements in the same session, as revealed by log files. Our findings are consistent with the hypothesis that conventional systems require the user to view many screens and remember information between screens, causing the user to forget information and to have to access it a second time. Other mechanisms (such as reduction in navigation over a population of users due to interface sharing, and information selection) may also contribute to the increased efficiency of the experimental system. Systems that take a composable approach, enabling the user to gather any desired information elements together on the same screen, may confer cognitive support benefits that increase productive use by reducing information fragmentation. By reducing cognitive overload, such systems can also enhance the user experience. PMID:27195306

  3. Development of a quantum mechanics-based free-energy perturbation method: use in the calculation of relative solvation free energies.

    PubMed

    Reddy, M Rami; Singh, U C; Erion, Mark D

    2004-05-26

    Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
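
    The workhorse identity behind all FEP variants is Zwanzig's exponential average, dA = -kT ln <exp(-dU/kT)> sampled in the reference state. A toy numeric check on two harmonic potentials, where the answer is analytic:

      import numpy as np

      # Zwanzig's FEP identity checked on two harmonic potentials
      rng = np.random.default_rng(7)
      kT = 1.0
      k0, k1 = 1.0, 2.0                   # spring constants, states 0 and 1
      # Exact Boltzmann samples of state 0 (Gaussian with variance kT/k0)
      x = rng.normal(0.0, np.sqrt(kT / k0), 1_000_000)
      dU = 0.5 * (k1 - k0) * x ** 2       # perturbation energy U1 - U0
      dA_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))
      dA_exact = 0.5 * kT * np.log(k1 / k0)
      print(f"FEP estimate {dA_fep:.4f} vs exact {dA_exact:.4f}")

    In the QM/MM-based method the same estimator is used, but the solute energies entering dU come from quantum mechanics rather than from molecule-specific MM parameters.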

  4. Production of hydrogen by electron transfer catalysis using conventional and photochemical means

    NASA Technical Reports Server (NTRS)

    Rillema, D. P.

    1981-01-01

    Alternate methods of generating hydrogen from the sulfuric acid thermal or electrochemical cycles are presented. A number of processes requiring chemical, electrochemical or photochemical methods are also presented. These include the design of potential photoelectrodes and photocatalytic membranes using Ru-impregnated Nafion tubing, and the design of experiments to study the catalyzed electrolytic formation of hydrogen and sulfuric acid from sulfur dioxide and water using quinones as catalysts. Experiments were carried out to determine the value of these approaches to energy conversion.

  5. Topology-optimized metasurfaces: impact of initial geometric layout.

    PubMed

    Yang, Jianji; Fan, Jonathan A

    2017-08-15

    Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.

  6. Concepts for the development of nanoscale stable precipitation-strengthened steels manufactured by conventional methods

    DOE PAGES

    Yablinsky, C. A.; Tippey, K. E.; Vaynman, S.; ...

    2014-11-11

    In this study, the development of oxide dispersion strengthened ferrous alloys has shown that microstructures designed for excellent irradiation resistance and thermal stability ideally contain stable nanoscale precipitates and dislocation sinks. Based upon this understanding, the microstructures of conventionally manufactured ferritic and ferritic-martensitic steels can be designed to include controlled volume fractions of fine, stable precipitates and dislocation sinks via specific alloying and processing paths. The concepts proposed here are categorized as advanced high-Cr ferritic-martensitic (AHCr-FM) and novel tailored precipitate ferritic (TPF) steels, which have the potential to improve the in-reactor performance of conventionally manufactured alloys. AHCr-FM steels have modified alloy content relative to current reactor materials (such as alloy NF616/P92) to maximize desirable precipitates and control phase stability. TPF steels are designed to incorporate nickel aluminides, in addition to microalloy carbides, in a ferritic matrix to produce fine precipitate arrays with good thermal stability. Both alloying concepts may also benefit from thermomechanical processing to establish dislocation sinks and modify phase transformation behaviors. Alloying and processing paths toward the designed microstructures are discussed for both the AHCr-FM and TPF material classes.

  7. A fast pulse design for parallel excitation with gridding conjugate gradient.

    PubMed

    Feng, Shuo; Ji, Jim

    2013-01-01

    Parallel excitation (pTx) is recognized as a crucial technique in high field MRI for addressing the transmit field inhomogeneity problem. However, designing pTx pulses can be time consuming, which is undesirable. In this work, we propose a pulse design with gridding conjugate gradient (CG) based on the small-tip-angle approximation. The two major time-consuming matrix-vector multiplications are replaced by two operators that involve only FFT and gridding. Simulation results show that the proposed method is three times faster than the conventional method while reducing the memory cost by a factor of 1000.
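
    The matrix-free idea can be shown with SciPy's CG: wrap an FFT-based matvec in a LinearOperator so the solver never forms the dense system matrix. In the actual pulse design the two dense multiplications are replaced by gridding plus FFT; here a simple circulant system stands in.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      n = 256
      d = 1.0 + np.abs(np.fft.fftfreq(n))   # positive symmetric spectrum: SPD

      def matvec(x):
          # FFT-based product A @ x without ever forming the n-by-n matrix A
          return np.real(np.fft.ifft(d * np.fft.fft(x)))

      A = LinearOperator((n, n), matvec=matvec)
      b = np.random.default_rng(0).standard_normal(n)
      x, info = cg(A, b)
      print("converged" if info == 0 else "not converged",
            "| residual:", np.linalg.norm(matvec(x) - b))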

  8. A Way to Select Electrical Sheets of the Segment Stator Core Motors.

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Kitamura, Masashi; Sakai, Toshihiko; Ohara, Kouichiro

    The segment stator core, high-density winding coils, and high-energy-product permanent magnets are indispensable technologies in the development of compact, high-efficiency motors. The conventional design method for the segment stator core depended mostly on experience in selecting a suitable electromagnetic material, and was far from an optimized design. We have therefore developed a novel design method for selecting a suitable electromagnetic material based on evaluating the correlation between material characteristics and motor performance. It enables the selection of a suitable electromagnetic material that will meet the motor specification.

  9. Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods.

    PubMed

    Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel

    2017-01-01

    Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is whether, when domain experts analyze their data, they can completely trust the outputs and operations on the machine side. Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative visual analytics system for biologists, focusing on the relationship between familiarity with an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists in which we compare the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists show an average level of trust comparable with that in a conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on the trustworthiness of visual analytic systems.

  10. Using methods from the data mining and machine learning literature for disease classification and prediction: A case study examining classification of heart failure sub-types

    PubMed Central

    Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.

    2014-01-01

    Objective Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease. However, classification trees can suffer from limited accuracy. In the data-mining and machine learning literature, alternate classification schemes have been developed. These include bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study design and Setting We compared the performance of these classification methods with those of conventional classification trees to classify patients with heart failure according to the following sub-types: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results We found that modern, flexible tree-based methods from the data mining literature offer substantial improvement in prediction and classification of heart failure sub-type compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data mining literature. Conclusion The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
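
    A sketch of the comparison on synthetic data (a stand-in for the heart failure cohort, which is not public), using scikit-learn implementations of the classifier families discussed:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import (GradientBoostingClassifier,
                                    RandomForestClassifier)
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic stand-in for the HFPEF-vs-HFREF classification task
      X, y = make_classification(n_samples=4000, n_features=20,
                                 n_informative=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      models = {
          "classification tree": DecisionTreeClassifier(max_depth=5,
                                                        random_state=0),
          "random forest": RandomForestClassifier(n_estimators=300,
                                                  random_state=0),
          "boosting": GradientBoostingClassifier(random_state=0),
          "logistic regression": LogisticRegression(max_iter=1000),
      }
      for name, model in models.items():
          proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
          print(f"{name:20s} AUC = {roc_auc_score(y_te, proba):.3f}")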

  11. Factorial design studies of antiretroviral drug-loaded stealth liposomal injectable: PEGylation, lyophilization and pharmacokinetic studies

    NASA Astrophysics Data System (ADS)

    Sudhakar, Beeravelli; Krishna, Mylangam Chaitanya; Murthy, Kolapalli Venkata Ramana

    2016-01-01

    The aim of the present study was to formulate and evaluate ritonavir-loaded stealth liposomes using a 3² factorial design, intended for parenteral delivery. Liposomes were prepared by the ethanol injection method using the 3² factorial design and characterized for various physicochemical parameters such as drug content, size, zeta potential, entrapment efficiency and in vitro drug release. The optimization process was carried out using desirability and overlay plots. The selected formulation was subjected to PEGylation using 10% PEG-10000 solution. Stealth liposomes were characterized for the above-mentioned parameters along with surface morphology, Fourier transform infrared spectroscopy, differential scanning calorimetry, stability and in vivo pharmacokinetic studies in rats. Stealth liposomes showed better results compared to conventional liposomes due to the effect of PEG-10000. The in vivo studies revealed that stealth liposomes showed a better residence time compared to conventional liposomes and pure drug solution. The conventional liposomes and pure drug showed dose-dependent pharmacokinetics, whereas stealth liposomes showed a long circulation half-life compared to conventional liposomes and pure ritonavir solution. Statistical analysis showed a significant difference (p < 0.05) by one-way ANOVA. The results of the present study reveal that stealth liposomes are a promising tool in antiretroviral therapy.

  12. Numerical investigation & comparison of a tandem-bladed turbocharger centrifugal compressor stage with conventional design

    NASA Astrophysics Data System (ADS)

    Danish, Syed Noman; Qureshi, Shafiq Rehman; EL-Leathy, Abdelrahman; Khan, Salah Ud-Din; Umer, Usama; Ma, Chaochen

    2014-12-01

    Extensive numerical investigations of the performance and flow structure in an unshrouded tandem-bladed centrifugal compressor are presented in comparison with a conventional compressor. Stage characteristics are explored for various tip clearance levels, axial spacings and circumferential clockings. The conventional impeller was modified to a tandem-bladed design with no changes to the backsweep angle, meridional gas passage or camber distributions, in order to allow a true comparison with the conventional design. Performance degradation with increasing tip clearance is observed for both the conventional and tandem designs. Linear-equation models correlating stage characteristics with tip clearance are proposed. Comparing the two designs, it is clearly evident that the conventional design performs better at moderate flow rates. However, near choke flow, the tandem design gives better results, primarily because of the increase in throat area. The surge point flow rate also drops for the tandem compressor, resulting in an increased range of operation.

  13. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  14. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  15. Optical Design And Analysis Of Carbon Dioxide Laser Fusion Systems Using Interferometry And Fast Fourier Transform Techniques

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. K.

    1980-11-01

    The optical design and analysis of the LASL carbon dioxide laser fusion systems required techniques that are quite different from the methods currently used in conventional optical design problems. The necessity for this is explored, and the method that has been successfully used at Los Alamos to understand these systems is discussed with examples. The method involves characterizing the various optical components in their mounts by a Zernike polynomial set and using fast Fourier transform techniques to propagate the beam, taking into account diffraction and the other nonlinear effects that occur in these types of systems. The various programs used for the analysis are briefly discussed.
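
    In miniature, the analysis chain is: build a pupil field whose phase is a Zernike expansion of the component errors, then FFT to the far field (the production codes add the nonlinear propagation effects). A sketch with two low-order Zernike terms and invented coefficients:

      import numpy as np

      n = 512
      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      rho, phi = np.hypot(x, y), np.arctan2(y, x)
      pupil = (rho <= 1.0).astype(float)

      # Zernike terms: defocus Z(2,0) and astigmatism Z(2,2), coeffs in waves
      defocus = np.sqrt(3) * (2 * rho ** 2 - 1)
      astig = np.sqrt(6) * rho ** 2 * np.cos(2 * phi)
      phase = 2 * np.pi * (0.25 * defocus + 0.10 * astig)

      # Fraunhofer propagation: far field is the FFT of the pupil field
      field = pupil * np.exp(1j * phase)
      far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
      psf = np.abs(far) ** 2
      print("Strehl-like peak ratio:",
            psf.max() / (np.abs(np.fft.fft2(pupil)) ** 2).max())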

  16. The transfer function method for gear system dynamics applied to conventional and minimum excitation gearing designs

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1982-01-01

    A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.

  17. Evaluation of direct and indirect additive manufacture of maxillofacial prostheses.

    PubMed

    Eggbeer, Dominic; Bibb, Richard; Evans, Peter; Ji, Lu

    2012-09-01

    The efficacy of computer-aided technologies in the design and manufacture of maxillofacial prostheses has not been fully proven. This paper presents research into the evaluation of direct and indirect additive manufacture of a maxillofacial prosthesis against conventional laboratory-based techniques. An implant/magnet-retained nasal prosthesis case from a UK maxillofacial unit was selected as a case study. A benchmark prosthesis was fabricated using conventional laboratory-based techniques for comparison against additively manufactured prostheses. For the computer-aided workflow, photogrammetry, computer-aided design, and additive manufacture (AM) methods were evaluated in direct prosthesis body fabrication and in indirect production using an additively manufactured mould. Qualitative analysis of position, shape, colour, and edge quality was undertaken. Mechanical testing to ISO standards was also used to compare the silicone rubber used in the conventional prosthesis with the AM material. Critical evaluation has shown that utilising a computer-aided workflow can produce a prosthesis body comparable to that produced using existing best practice. Technical limitations currently prevent the direct fabrication method demonstrated in this paper from being clinically viable. This research helps prosthesis providers understand the application of a computer-aided approach and guides technology developers and researchers to address the limitations identified.

  18. A compressor designed for the Energy Research and Development Administration automotive gas turbine program

    NASA Technical Reports Server (NTRS)

    Galvas, M. R.

    1975-01-01

    A centrifugal compressor was designed for a gas turbine powered automobile as part of the Energy Research and Development Administration program to demonstrate emissions characteristics that meet 1978 standards with fuel economy and acceleration competitive with conventionally powered vehicles. A backswept impeller was designed for the compressor in order to attain the efficiency goal range required for the objectives of this program. Details of the design and the method of flow analysis of the compressor are presented.

  19. APPLICATION OF THE ELECTROMAGNETIC BOREHOLE FLOWMETER (EPA/600/R-98/058)

    EPA Science Inventory

    Spatial variability of saturated zone hydraulic properties has important implications with regard to sampling wells for water quality parameters, use of conventional methods to estimate transmissivity, and remedial system design. Characterization of subsurface heterogeneity requi...

  20. APPLICATION OF THE ELECTROMAGNETIC BOREHOLE FLOWMETER (EPA/600/SR-98/058)

    EPA Science Inventory

    Spatial variability of saturated zone hydraulic properties has important implications with regard to sampling wells for water quality parameters, use of conventional methods to estimate transmissivity, and remedial system design. Characterization of subsurface heterogeneity requi...

  1. Use of a modified GreenScreen tool to conduct a screening-level comparative hazard assessment of conventional silver and two forms of nanosilver.

    PubMed

    Sass, Jennifer; Heine, Lauren; Hwang, Nina

    2016-11-08

    Increased concern about the potential health and environmental impacts of chemicals, including nanomaterials, in consumer products is driving demand for greater transparency regarding potential risks. Chemical hazard assessment is a powerful tool to inform product design, development, and procurement, and has been integrated into alternative assessment frameworks. The extent to which assessment methods originally designed for conventionally sized materials can be used for nanomaterials, which have size-dependent physical and chemical properties, has not been well established. We contracted with a certified GreenScreen profiler to conduct three GreenScreen hazard assessments, for conventional silver and two forms of nanosilver. The contractor summarized publicly available literature and used defined GreenScreen hazard criteria and expert judgment to assign and report hazard classification levels, along with indications of confidence in those assignments. Where data were not available, a data gap (DG) was assigned. From the individual endpoint scores, an aggregated benchmark score (BM) was assigned. Conventional silver and low-solubility nanosilver were assigned the highest possible hazard score, while a silica-silver nanocomposite called AGS-20 could not be scored due to data gaps. AGS-20 is approved for use as an antimicrobial by the US Environmental Protection Agency. An existing method for chemical hazard assessment and communication can thus be used, with minor adaptations, to compare hazards across conventional and nano forms of a substance. The differences in data gaps and in hazard profiles support the argument that each silver form should be considered unique and subjected to hazard assessment to inform regulatory decisions and decisions about product design and development. A critical limitation of hazard assessments for nanomaterials is the lack of nano-specific hazard data; where data are available, we demonstrate that existing hazard assessment systems can work. The work is relevant for risk assessors and regulators. We recommend that regulatory agencies and others require more robust data sets on each novel nanomaterial before granting market approval.

  2. High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models

    NASA Technical Reports Server (NTRS)

    Manning, Valerie Michelle

    1999-01-01

    The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.

  3. Effects of an E-Learning Module on Students' Attitudes in an Electronics Class

    ERIC Educational Resources Information Center

    Getuno, Daniel M.; Kiboss, Joel K.; Changeiywo, Johnson M.; Ogola, Leo B.

    2015-01-01

    Research has shown that students exhibit negative attitudes towards Electronics especially when they are taught using the conventional method. This is in addition to poor instructional methods that do not promote individualization of instruction or make learning interesting. The purpose of this study was to design an e-learning module in…

  4. Inverse-designed stretchable metalens with tunable focal distance

    NASA Astrophysics Data System (ADS)

    Callewaert, Francois; Velev, Vesselin; Jiang, Shizhou; Sahakian, Alan Varteres; Kumar, Prem; Aydin, Koray

    2018-02-01

    In this paper, we present an inverse-designed, 3D-printed, all-dielectric, stretchable millimeter-wave metalens with a tunable focal distance. A computational inverse-design method is used to design a flat metalens made of disconnected polymer building blocks with complex shapes, as opposed to conventional monolithic lenses. The proposed metalens provides better performance than a conventional Fresnel lens, using a smaller amount of material and enabling larger focal distance tunability. The metalens is fabricated using a commercial 3D printer and attached to a stretchable platform. Measurements and simulations show that the focal distance can be tuned by a factor of 4 with a stretching factor of only 75%, while maintaining a nearly diffraction-limited focal spot and a 70% relative focusing efficiency, defined as the ratio between the power focused in the focal spot and the power passing through the focal plane. The proposed platform can be extended to the design and fabrication of multiple electromagnetic devices working from visible to microwave radiation, depending on the scaling of the devices.

  5. Finite element analysis of container ship's cargo hold using ANSYS and POSEIDON software

    NASA Astrophysics Data System (ADS)

    Tanny, Tania Tamiz; Akter, Naznin; Amin, Osman Md.

    2017-12-01

    Nowadays, ship structural analysis has become an integral part of preliminary ship design, providing further support for the development and detailed design of ship structures. Structural analyses of a container ship's cargo holds are carried out to balance safety and capacity, as such ships are exposed to a high risk of structural damage during a voyage. Two different design methodologies have been considered for the structural analysis of a container ship's cargo hold. One is a rule-based methodology and the other is a more conventional software-based analysis. The rule-based analysis is done with DNV GL's software POSEIDON, and the conventional package-based analysis is done with the ANSYS structural module. Both methods have been applied to analyze mechanical properties of the model, such as total deformation, stress-strain distribution, von Mises stress, and fatigue, following different design bases and approaches, to provide guidance for further improvements in ship structural design.

  6. Stereo Sound Field Controller Design Using Partial Model Matching on the Frequency Domain

    NASA Astrophysics Data System (ADS)

    Kumon, Makoto; Miike, Katsuhiro; Eguchi, Kazuki; Mizumoto, Ikuro; Iwai, Zenta

    The objective of sound field control is to make the acoustic characteristics of a listening room close to those of a desired system. Conventional methods apply feedforward controllers, such as digital filters, to achieve this objective. However, feedback controllers are also necessary in order to attenuate noise or to compensate for uncertainty in the acoustic characteristics of the listening room. Since acoustic characteristics are well modeled in the frequency domain, it is efficient to design controllers with respect to frequency responses, but it is difficult to design a multi-input multi-output (MIMO) control system over a wide frequency range. In the present study, a partial model matching method in the frequency domain was adopted because this method requires only sampled data, rather than complex mathematical models of the plant, in order to design controllers for MIMO systems. The partial model matching method was applied to design two-degree-of-freedom controllers for acoustic equalization and noise reduction. Experiments demonstrated the effectiveness of the proposed method.

  7. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety while requiring about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.
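
    A minimal sketch of the inverse reliability (performance measure approach) step, using the advanced-mean-value iteration on a hypothetical performance function; the target reliability index and the function itself are illustrative, not from the paper:

        import numpy as np

        beta_t = 3.0   # target reliability index for the probabilistic constraint

        def g(u):
            # Hypothetical performance function in standard normal space:
            # capacity minus demand for two random variables.
            return 5.0 - u[0]**2 - 1.5 * u[1]

        def grad(f, u, h=1e-6):
            e = np.eye(len(u))
            return np.array([(f(u + h * ei) - f(u - h * ei)) / (2 * h) for ei in e])

        # Advanced mean value (AMV) iteration: search the beta-sphere for the
        # minimum-performance point; g at that point is the percentile performance.
        u = np.zeros(2)
        for _ in range(50):
            d = grad(g, u)
            u_new = -beta_t * d / np.linalg.norm(d)
            if np.linalg.norm(u_new - u) < 1e-8:
                break
            u = u_new

        print("MPP:", u, "percentile performance g* =", g(u))
        # The probabilistic constraint is feasible if g* >= 0.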

  8. Empirical evaluation of a virtual laboratory approach to teach lactate dehydrogenase enzyme kinetics.

    PubMed

    Booth, Christine; Cheluvappa, Rajkumar; Bellinson, Zack; Maguire, Danni; Zimitat, Craig; Abraham, Joyce; Eri, Rajaraman

    2016-06-01

    Personalised instruction is increasingly recognised as crucial for efficacious learning today. Our seminal work delineates and elaborates on the principles, development, and implementation of a specially designed adaptive, virtual laboratory. We strove to teach laboratory skills associated with lactate dehydrogenase (LDH) enzyme kinetics to 2nd-year biochemistry students using our adaptive learning platform. The pertinent specific aims were to: (1) design/implement a web-based lesson to teach LDH enzyme kinetics to 2nd-year biochemistry students; (2) determine its efficacy in improving students' comprehension of enzyme kinetics; and (3) assess their perception of its usefulness/manageability (vLab versus conventional tutorial). Our tools were designed using HTML5 technology. We hosted the program on an adaptive e-learning platform (AeLP). Provisions were made to interactively impart laboratory skills associated with measuring LDH enzyme kinetics. A series of e-learning methods were created. Tutorials were generated for interactive teaching and assessment. The learning outcomes were on par with those from a conventional classroom tutorial. Student feedback showed that the majority of students found the vLab learning experience "valuable" and the vLab format/interface "well-designed". However, there were a few technical issues with the first roll-out of the platform. Our pioneering effort resulted in productive learning with the vLab, on par with that from a conventional tutorial. Our discussion emphasises not only the cornerstone advantages but also the shortcomings of the AeLP method utilised. We conclude with an analysis of possible extensions and applications of our methodology.

  9. Remote sensing techniques for prediction of watershed runoff

    NASA Technical Reports Server (NTRS)

    Blanchard, B. J.

    1975-01-01

    Hydrologic parameters of watersheds for use in mathematical models and as design criteria for flood detention structures are sometimes difficult to quantify using conventional measuring systems. The advent of remote sensing devices developed in the past decade offers the possibility that watershed characteristics such as vegetative cover, soils, and soil moisture may be quantified rapidly and economically. Experiments with visible and near-infrared data from the LANDSAT-1 multispectral scanner indicate that a simple technique for calibration of runoff equation coefficients is feasible. The technique was tested on 10 watersheds in the Chickasha area, and test results show that more accurate runoff coefficients were obtained than with conventional methods. The technique worked equally well using a dry fall scene. The runoff equation coefficients were then predicted for 22 subwatersheds with flood detention structures. Predicted values were again more accurate than coefficients produced by conventional methods.

  10. Distribution and uptake pathways of organochlorine pesticides in greenhouse and conventional vegetables.

    PubMed

    Zhang, Anping; Luo, Wenxiu; Sun, Jianqiang; Xiao, Hang; Liu, Weiping

    2015-02-01

    The application of greenhouse vegetable cultivation has dramatically expanded worldwide during the last several decades. However, little information is available on the distribution and uptake of pesticides in greenhouse vegetables. To bridge this knowledge gap, the present study investigated the distribution and uptake of organochlorine pesticides (OCPs) in vegetables from plastic greenhouse and conventional cultivation methods. The uptake pathways of OCPs were not significantly different between the two cultivation methods. The arithmetic means of OCP concentrations in greenhouse vegetables were higher than those in conventional vegetables, although the difference was not significant. This small difference raises the concern of whether it could be magnified to a significant difference by bioaccumulation in the food chain, an issue that should be addressed by a well-designed scheme in future studies. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Nondestructive Evaluation of the Friction Weld Process on 2195/2219 Grade Aluminum

    NASA Technical Reports Server (NTRS)

    Suits, Michael W.; Clark, Linda S.; Cox, Dwight E.

    1999-01-01

    In 1996, NASA's Marshall Space Flight Center began an ambitious program designed to find alternative methods of repairing conventional TIG (Tungsten Inert Gas) welds and VPPA (Variable Polarity Plasma Arc) welds on the Space Shuttle External Tank without producing additional heat-related anomalies or conditions. Therefore, a relatively new method invented by The Welding Institute (TWI) in Cambridge, England, called Friction Stir Welding (FSW), was investigated for use in this application, as well as potentially as an initial weld process. As with the conventional repair welding processes, nondestructive evaluation (NDE) plays a crucial role in the verification of these repairs. Since it was feared that conventional NDE might have trouble with this type of weld structure (due to the shape of the nugget, grain structure, etc.), it was imperative that a complete study be performed to address the adequacy of the NDE process. This paper summarizes that process.

  12. Development and In Vitro Bioactivity Profiling of Alternative Sustainable Nanomaterials

    EPA Science Inventory

    Sustainable, environmentally benign nanomaterials (NMs) are being designed as functionality-based alternatives to conventional metal-based NMs in order to minimize potential risk to human health and the environment. Development of rapid methods to evaluate the ...

  13. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting sequences of horizontal and vertical patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H)  mm×250(W)  mm×500(D)  mm.

  14. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples, including population-based Markov chain Monte Carlo methods and sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
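
    A rough illustration of the data-parallel pattern the paper exploits; numpy vectorization stands in for GPU threads here, so this sketch shows the structure (many independent samples and chains advanced in lockstep) rather than actual CUDA code:

        import numpy as np

        rng = np.random.default_rng(1)

        def mc_pi_vectorized(n_samples):
            """Estimate pi with all samples drawn in one data-parallel batch,
            the same SIMD pattern a GPU kernel would apply per thread."""
            xy = rng.random((n_samples, 2))
            return 4.0 * np.mean((xy**2).sum(axis=1) <= 1.0)

        def parallel_metropolis(n_chains=10_000, n_steps=500, scale=1.0):
            """Many independent Metropolis chains advanced in lockstep; population-
            based MCMC is parallel across chains, the structure exploited on GPUs.
            Target here is a standard normal."""
            x = np.zeros(n_chains)
            for _ in range(n_steps):
                prop = x + scale * rng.standard_normal(n_chains)
                accept = np.log(rng.random(n_chains)) < 0.5 * (x**2 - prop**2)
                x = np.where(accept, prop, x)
            return x

        print("pi ~", mc_pi_vectorized(1_000_000))
        chains = parallel_metropolis()
        print("chain mean/var:", chains.mean(), chains.var())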

  15. Subsurface and Surface Characterization using an Information Framework Model

    NASA Astrophysics Data System (ADS)

    Samuel-Ojo, Olusola

    Groundwater plays a critical dual role as a reservoir of fresh water for human consumption and as a cause of severe problems in construction works below the water table. This is why it is critical to monitor groundwater recharge, distribution, and discharge on a continuous basis. The conventional method of monitoring groundwater employs a network of sparsely distributed monitoring wells and is laborious, expensive, and intrusive. The problem of sparse data and undersampling reduces the accuracy of sampled survey data, giving rise to poor interpretation. This dissertation addresses this problem by investigating the groundwater-deformation response in order to augment the conventional method. A blend of three research methods was employed, namely design science research, geological methods, and geophysical methods, to examine whether persistent scatterer interferometry, a remote sensing technique, might augment conventional groundwater monitoring. Observation data (including displacement phase information from permanent scatterer interferometric synthetic aperture radar and depth-to-groundwater data) were obtained from the Water District, Santa Clara Valley, California. An information framework model was built, applied, and then evaluated. Data were preprocessed and decomposed into five components: trend, seasonality, low frequency, high frequency, and octave bandwidth. Digital elevation models of observed and predicted hydraulic head were produced, illustrating the piezometric or potentiometric surface. The potentiometric surface characterizes the regional aquifer of the valley, showing the areal variation of percolation rate, velocity, and permeability, and completely defines the flow direction, informing characterization and design levels. The findings show a geologic forcing phenomenon which explains in part the long-term deformation behavior of the valley, characterized by poroelastic, viscoelastic, elastoplastic, and inelastic deformations under the influence of an underlying southward geologic plate motion within the theory of plate tectonics. It also explains the impact of a history of heavy pumping of groundwater during the agricultural and urbanization eras. Thus the persistent scatterer interferometry method offers an attractive, non-intrusive, cost-effective augmentation of the conventional method of monitoring groundwater for water resource development and the stability of soil masses.

  16. Near-Optimal Tracking Control of Mobile Robots Via Receding-Horizon Dual Heuristic Programming.

    PubMed

    Lian, Chuanqiang; Xu, Xin; Chen, Hong; He, Haibo

    2016-11-01

    Trajectory tracking control of wheeled mobile robots (WMRs) has been an important research topic in control theory and robotics. Although various tracking control methods with guaranteed stability have been developed for WMRs, it is still difficult to design optimal or near-optimal tracking controllers under uncertainties and disturbances. In this paper, a near-optimal tracking control method is presented for WMRs based on receding-horizon dual heuristic programming (RHDHP). In the proposed method, a backstepping kinematic controller is designed to generate desired velocity profiles, and the receding-horizon strategy is used to decompose the infinite-horizon optimal control problem into a series of finite-horizon optimal control problems. In each horizon, a closed-loop tracking control policy is successively updated using a class of approximate dynamic programming algorithms called finite-horizon dual heuristic programming (DHP). The convergence property of the proposed method is analyzed, and it is shown, using the Lyapunov approach, that the tracking control system based on RHDHP is asymptotically stable. Simulation results on three tracking control problems demonstrate that the proposed method has improved control performance when compared with conventional model predictive control (MPC) and DHP. It is also shown that the proposed method has a lower computational burden than conventional MPC, which is very beneficial for real-time tracking control.
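
    A minimal sketch of a backstepping-style kinematic tracking controller for a unicycle-model WMR; a common Kanayama-type form is assumed here, since the abstract does not specify the exact control law, and the gains are illustrative:

        import numpy as np

        def kinematic_tracking_control(pose, ref_pose, v_r, w_r,
                                       k_x=1.0, k_y=4.0, k_th=2.0):
            """Backstepping-style kinematic controller for a unicycle WMR.
            pose, ref_pose: (x, y, theta); v_r, w_r: reference velocities.
            Returns commanded (v, w). Gains are illustrative."""
            x, y, th = pose
            xr, yr, thr = ref_pose
            # Tracking error expressed in the robot frame
            e_x = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
            e_y = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
            e_th = thr - th
            v = v_r * np.cos(e_th) + k_x * e_x
            w = w_r + v_r * (k_y * e_y + k_th * np.sin(e_th))
            return v, w

        v, w = kinematic_tracking_control((0.0, 0.1, 0.0), (0.2, 0.0, 0.1), 0.5, 0.0)
        print(f"v = {v:.3f} m/s, w = {w:.3f} rad/s")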

  17. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been investigated for acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, current methods are derived and operated on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy, and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
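
    A minimal sketch of the Kalman-filter stage, assuming a two-state (phase, Doppler) model driven by per-integration phase measurements from an MLE discriminator; all noise covariances and measurement values are illustrative, not the paper's adaptive tuning:

        import numpy as np

        # Two-state carrier model: x = [phase (rad), Doppler (rad/s)], measured
        # once per coherent integration of length T by an MLE phase discriminator.
        T = 0.02                                   # 20 ms coherent integration
        F = np.array([[1.0, T], [0.0, 1.0]])       # constant-frequency dynamics
        H = np.array([[1.0, 0.0]])                 # discriminator observes phase
        Q = np.diag([1e-6, 1e-3])                  # process noise (illustrative)
        R = np.array([[0.05]])                     # discriminator noise variance

        def kf_step(x, P, z):
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the MLE phase estimate z
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ (np.atleast_1d(z) - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.zeros(2), np.eye(2)
        for z in [0.10, 0.12, 0.15, 0.17]:         # simulated discriminator outputs
            x, P = kf_step(x, P, z)
        print("phase, Doppler estimate:", x)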

  18. Nonimaging optics for nonuniform brightness distributions

    NASA Astrophysics Data System (ADS)

    Jenkins, David G.; Winston, Roland

    1995-08-01

    We present a general design method of nonimaging optics that obtains the highest possible concentration for a given absorber shape. This technique, which uses a complementary edge ray to simplify the geometrical formalism, recovers familiar designs for flat phase-space distributions, such as trumpets and θ1-θ2 concentrators. The method is easy to use and handles diverse boundary conditions, such as reflection, satisfying total internal reflection, or design within a graded-index material. Presented is a novel two-stage 2D solar collector with a fixed circular primary mirror and a nonimaging secondary. This newly developed secondary gives a 25% improvement over conventional nonimaging concentrators.

  19. Design and multi-physics optimization of rotary MRF brakes

    NASA Astrophysics Data System (ADS)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase, and the execution speed then becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor. The distribution of tasks and the transfer of information between parties are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than conventional PSO and has provided small, lightweight, high-impedance rotary MRF brake designs.
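
    A minimal sketch of a subpopulation-based PSO of the kind described, with periodic information transfer between subpopulations; the objective, coefficients, and migration rule are illustrative stand-ins for the paper's multi-physics models:

        import numpy as np

        rng = np.random.default_rng(2)

        def sphere(x):                    # stand-in objective; the paper couples
            return np.sum(x**2, axis=-1)  # electromagnetic/thermal models here

        def sub_pso(n_subpops=4, n_per=10, dim=3, iters=200, w=0.7, c1=1.5, c2=1.5):
            pos = rng.uniform(-5, 5, (n_subpops, n_per, dim))
            vel = np.zeros_like(pos)
            pbest, pbest_f = pos.copy(), sphere(pos)
            for t in range(iters):
                # each subpopulation tracks its own leader ("division of labor")
                leader = pbest[np.arange(n_subpops), pbest_f.argmin(axis=1)]
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = (w * vel + c1 * r1 * (pbest - pos)
                       + c2 * r2 * (leader[:, None, :] - pos))
                pos = pos + vel
                f = sphere(pos)
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = pos[improved], f[improved]
                if t % 50 == 49:          # periodically share the global best
                    g = pbest.reshape(-1, dim)[pbest_f.ravel().argmin()]
                    worst = pbest_f.argmax(axis=1)
                    pbest[np.arange(n_subpops), worst] = g
                    pbest_f[np.arange(n_subpops), worst] = sphere(g)
            return pbest.reshape(-1, dim)[pbest_f.ravel().argmin()]

        print("best point:", sub_pso())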

  20. Analogue based design of MMP-13 (Collagenase-3) inhibitors.

    PubMed

    Sarma, J A R P; Rambabu, G; Srikanth, K; Raveendra, D; Vithal, M

    2002-10-07

    3D-QSAR studies using MFA and RSA methods were performed on a series of 39 MMP-13 inhibitors. The model developed by the MFA method has a cross-validated r² (r²cv) of 0.616, while its conventional r² value is 0.822. For the RSA model, r²cv and r² are 0.681 and 0.847, respectively. Both models indicate good internal as well as external predictive abilities. These models provide crucial information about the field descriptors for the design of potential MMP-13 inhibitors.

  1. What can formal methods offer to digital flight control systems design

    NASA Technical Reports Server (NTRS)

    Good, Donald I.

    1990-01-01

    Formal methods research is beginning to produce methods which enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of the extensive mathematical modeling which is common in other parts of flight system engineering. Formal methods research shows that, by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.

  2. Custom-designed orthopedic implants evaluated using finite element analysis of patient-specific computed tomography data: femoral-component case study

    PubMed Central

    Harrysson, Ola LA; Hosni, Yasser A; Nayfeh, Jamal F

    2007-01-01

    Background Conventional knee and hip implant systems have been in use for many years with good success. However, the custom design of implant components based on patient-specific anatomy has been attempted to overcome existing shortcomings of current designs. The longevity of cementless implant components is highly dependent on the initial fit between the bone surface and the implant. The bone-implant interface design has historically been limited by the surgical tools and cutting guides available; and the cost of fabricating custom-designed implant components has been prohibitive. Methods This paper describes an approach where the custom design is based on a Computed Tomography scan of the patient's joint. The proposed design will customize both the articulating surface and the bone-implant interface to address the most common problems found with conventional knee-implant components. Finite Element Analysis is used to evaluate and compare the proposed design of a custom femoral component with a conventional design. Results The proposed design shows a more even stress distribution on the bone-implant interface surface, which will reduce the uneven bone remodeling that can lead to premature loosening. Conclusion The proposed custom femoral component design has the following advantages compared with a conventional femoral component. (i) Since the articulating surface closely mimics the shape of the distal femur, there is no need for resurfacing of the patella or gait change. (ii) Owing to the resulting stress distribution, bone remodeling is even and the risk of premature loosening might be reduced. (iii) Because the bone-implant interface can accommodate anatomical abnormalities at the distal femur, the need for surgical interventions and fitting of filler components is reduced. (iv) Given that the bone-implant interface is customized, about 40% less bone must be removed. The primary disadvantages are the time and cost required for the design and the possible need for a surgical robot to perform the bone resection. Some of these disadvantages may be eliminated by the use of rapid prototyping technologies, especially the use of Electron Beam Melting technology for quick and economical fabrication of custom implant components. PMID:17854508

  3. Gearing

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Townsend, D. P.; Zaretsky, E. V.

    1985-01-01

    Gearing technology in its modern form has a history of only 100 years. However, the earliest form of gearing can probably be traced back to fourth century B.C. Greece. Current gear practice and recent advances in the technology are drawn together. The history of gearing is reviewed briefly in the Introduction. Subsequent sections describe types of gearing and their geometry, processing, and manufacture. Both conventional and more recent methods of determining gear stress and deflections are considered. The subjects of life prediction and lubrication are additions to the literature. New and more complete methods of power loss predictions as well as an optimum design of spur gear meshes are described. Conventional and new types of power transmission systems are presented.

  4. Comparative phenotypic and genotypic analysis of Edwardsiella spp. isolates from different hosts and geographic origins, with an emphasis on isolates formerly classified as E. tarda and an evaluation of diagnostic methods

    USDA-ARS?s Scientific Manuscript database

    Aims: Conventional phenotypic and genotypic analyses for the differentiation of phenotypically ambiguous Edwardsiella congeners were evaluated, and historical E. tarda designations were linked to current taxonomic nomenclature. Methods and Results: Forty-seven Edwardsiella spp. isolates recovered over...

  5. A New Method for Extubation: Comparison between Conventional and New Methods.

    PubMed

    Yousefshahi, Fardin; Barkhordari, Khosro; Movafegh, Ali; Tavakoli, Vida; Paknejad, Omalbanin; Bina, Payvand; Yousefshahi, Hadi; Sheikh Fathollahi, Mahmood

    2012-08-01

    Extubation is associated with the risk of complications such as accumulated secretions above the endotracheal tube cuff, eventual atelectasis following a reduction in pulmonary volumes caused by the lack of physiological positive end-expiratory pressure, and intra-tracheal suction. In order to reduce these complications, and based on basic physiological principles, a new practical extubation method is presented in this article. The study was designed as a six-month prospective cross-sectional clinical trial. Two hundred fifty-seven patients undergoing coronary artery bypass grafting (CABG) were divided into two groups based on their scheduled surgery time. The first group underwent the conventional extubation method, while the other group was extubated according to the new method described here. Arterial blood gas (ABG) analysis results before and after extubation were compared between the two groups to determine the effect of the extubation method on the ABG parameters and the oxygenation profile. In all time intervals, the ratio of the partial pressure of oxygen in arterial blood to the fraction of inspired oxygen (PaO2/FiO2) in the new-method group was improved compared with that in the conventional-method group; some differences, such as PaO2/FiO2 four hours after extubation, were statistically significant (p value = 0.0063). The new extubation method improved some respiratory parameters, attenuated oxygenation complications, and amplified oxygenation after extubation.

  6. Controller design approach based on linear programming.

    PubMed

    Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa

    2013-11-01

    This study explains and demonstrates a design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined using the linear quadratic regulator. The simulation results confirmed that, compared with the conventional method, the output estimated by the proposed method converges to a reference input faster when a load disturbance is added to the control system. The effectiveness of the proposed method was also confirmed in an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.
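
    A minimal sketch of gain selection by linear programming under Routh-Hurwitz-style constraints, for a hypothetical second-order observer error polynomial s^2 + l1*s + l2; the margins and the static-gain bound are assumptions for illustration, not the authors' formulation:

        import numpy as np
        from scipy.optimize import linprog

        # For a 2nd-order polynomial s^2 + l1*s + l2, the Routh-Hurwitz criterion
        # reduces to l1 > 0 and l2 > 0. We also impose margins l1 >= 2*zeta*wn and
        # l2 >= wn^2, and (by the final-value theorem) a constant disturbance d
        # leaves a steady-state error d/l2, so we require l2 >= 20 to bound it.
        wn, zeta = 4.0, 0.7

        # Variables x = [l1, l2]; minimize l1 + l2 (the smallest gains meeting all
        # constraints, to limit noise amplification). linprog needs A_ub x <= b_ub.
        c = [1.0, 1.0]
        A_ub = [[-1.0, 0.0],            # -l1 <= -2*zeta*wn
                [0.0, -1.0],            # -l2 <= -wn^2
                [0.0, -1.0]]            # -l2 <= -20  (steady-state error bound)
        b_ub = [-2 * zeta * wn, -wn**2, -20.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        l1, l2 = res.x
        print(f"observer gains: l1 = {l1:.2f}, l2 = {l2:.2f}")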

  7. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    PubMed

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

    To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensionally printed custom trays and conventional custom trays, and to demonstrate the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this study, a prospective, single-blind, randomized, self-controlled clinical trial. Two custom trays were fabricated for each participant. One was fabricated using the functional suitable denture (FSD) system through a CAD/CAM process, and the other was manually fabricated using conventional methods. Final impressions were then taken using both custom trays, and the final impressions were used to fabricate complete dentures. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average times spent on fabricating the three-dimensionally printed custom trays using the FSD system and on fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent on making the final impression with the three-dimensionally printed custom trays and with the conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There were significant differences in the technician fabrication time and the clinical working time between the three-dimensionally printed custom trays and the conventional custom trays fabricated manually (P<0.05). The average times spent on fabricating the three-dimensionally printed custom trays using the FSD system and on making the final impression with them are less than those for the conventional custom trays fabricated manually, which shows that FSD three-dimensionally printed custom trays are less time-consuming in both the clinical and the laboratory process. In addition, when custom trays are manufactured by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression, which saves impression and model material. For complete denture restoration, manufacturing custom trays using the FSD system is worth popularizing.

  8. A practical concept for powered or tethered weight-lifting LTA vehicles

    NASA Technical Reports Server (NTRS)

    Balleyguier, M. A.

    1975-01-01

    A concept for a multi-hull weight-lifting airship is presented. The concept is based upon experience in the design and handling of gas-filled balloons for commercial purposes and was first tested in April 1972. In the flight test, two barrage balloons were joined side-by-side with an intermediate frame and launched in captive flight. The success of this flight test led to plans for a development program calling for a powered, piloted prototype, a follow-on 40-ton model, and a 400-ton transport model. All of these airships utilize a tetrahedral three-line tethering method for the loading and unloading phases of flight, which bypasses many of the difficulties inherent in handling a conventional airship near the ground. Both initial and operating costs per ton of lift capability are significantly less for the subject design than for either helicopters or airships of conventional mono-hull design.

  9. Robust backstepping control of an interlink converter in a hybrid AC/DC microgrid based on feedback linearisation method

    NASA Astrophysics Data System (ADS)

    Dehkordi, N. Mahdian; Sadati, N.; Hamzeh, M.

    2017-09-01

    This paper presents a robust dc-link voltage and current control strategy for a bidirectional interlink converter (BIC) in a hybrid ac/dc microgrid. To enhance dc-bus voltage control, conventional methods strive to measure and feedforward the load or source power in the dc-bus control scheme. However, conventional feedforward-based approaches require remote measurement with communications. Moreover, conventional methods suffer from stability and performance issues, mainly due to the use of small-signal-based control design methods. To overcome these issues, in this paper the power from DG units of the dc subgrid imposed on the BIC is treated as an unmeasurable disturbance signal. In the proposed method, in contrast to existing methods, a robust controller designed from the nonlinear model of the BIC, with no need for remote measurement with communications, effectively rejects the impact of the disturbance signal imposed on the BIC's dc-link voltage. To avoid communication links, the robust controller has a plug-and-play feature that makes it possible to add a DG/load to, or remove it from, the dc subgrid without disturbing the hybrid microgrid's stability. Finally, Monte Carlo simulations are conducted in the MATLAB/SimPowerSystems software environment to confirm the effectiveness of the proposed control strategy.

  10. Improving the sensitivity and accuracy of gamma activation analysis for the rapid determination of gold in mineral ores.

    PubMed

    Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel

    2017-04-01

    Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part per million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility intended to replace conventional chemical assay in industrial applications. Copyright © 2017. Published by Elsevier Ltd.

  11. Design method of LED rear fog lamp based on freeform micro-surface reflectors

    NASA Astrophysics Data System (ADS)

    Yu, Jindong; Wu, Heng

    2017-11-01

    We propose a practical method for the design of a light-emitting diode (LED) rear fog lamp based on freeform micro-surface reflectors. The lamp consists of nine LEDs, each with a corresponding freeform micro-surface reflector. The micro-surface reflector design includes three steps. An initial freeform reflector is first built based on the light energy maps. The micro-surface reflector is then constructed on the basis of the initial one. Finally, a two-step method is designed to optimize the micro-surface reflector. With the proposed method, a module is designed, and an LCW DURIS E5 LED source, whose emitting surface is 5.7 mm × 3.0 mm, is adopted for simulation. A prototype is also assembled and fabricated to verify the real performance. Both the simulation and experimental results demonstrate that the luminous intensity distribution fulfills the requirements of the ECE No. 38 regulation. Furthermore, more than 79% of the energy can be saved compared with rear fog lamps using conventional sources.

  12. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown to be capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and a more refined FEM model.
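
    A minimal sketch of the GLA idea, correcting a crude model with a linearly varying scaling factor; both model functions and the design variable are toy stand-ins for the FEM models in the report:

        import numpy as np

        # Responses of a crude and a refined model at the baseline design x0,
        # with a finite-difference derivative of the scaling factor.
        def crude(x):   return 1.0 + 0.50 * x                  # cheap model (illustrative)
        def refined(x): return 1.1 + 0.65 * x + 0.05 * x**2    # expensive model

        x0, h = 1.0, 1e-5
        s0 = refined(x0) / crude(x0)                           # scaling factor at x0
        ds = ((refined(x0 + h) / crude(x0 + h))
              - (refined(x0 - h) / crude(x0 - h))) / (2 * h)   # d(scale)/dx at x0

        def gla(x):
            """Global-local approximation: the crude model corrected by a
            linearly varying (rather than constant) scaling factor."""
            return (s0 + ds * (x - x0)) * crude(x)

        for x in (1.0, 1.5, 2.0):
            print(f"x={x}: GLA={gla(x):.4f}, refined={refined(x):.4f}")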

  13. Conventions and nomenclature for double diffusion encoding NMR and MRI.

    PubMed

    Shemesh, Noam; Jespersen, Sune N; Alexander, Daniel C; Cohen, Yoram; Drobnjak, Ivana; Dyrby, Tim B; Finsterbusch, Jurgen; Koch, Martin A; Kuder, Tristan; Laun, Fredrik; Lawrenz, Marco; Lundell, Henrik; Mitra, Partha P; Nilsson, Markus; Özarslan, Evren; Topgaard, Daniel; Westin, Carl-Fredrik

    2016-01-01

    Stejskal and Tanner's ingenious pulsed field gradient design from 1965 has made diffusion NMR and MRI the mainstay of most studies seeking to resolve microstructural information in porous systems in general and biological systems in particular. Methods extending beyond Stejskal and Tanner's design, such as double diffusion encoding (DDE) NMR and MRI, may provide novel quantifiable metrics that are less easily inferred from conventional diffusion acquisitions. Despite the growing interest in the topic, the terminology for the pulse sequences, their parameters, and the metrics that can be derived from them remains inconsistent and disparate among groups active in DDE. Here, we present a consensus of those groups on terminology for DDE sequences and associated concepts. Furthermore, the regimes in which DDE metrics appear to provide microstructural information that cannot be achieved using more conventional counterparts (in a model-free fashion) are elucidated. We highlight in particular DDE's potential for determining microscopic diffusion anisotropy and microscopic fractional anisotropy, which offer metrics of microscopic features independent of orientation dispersion and thus provide information complementary to the standard, macroscopic fractional anisotropy conventionally obtained by diffusion MR. Finally, we discuss future vistas and perspectives for DDE. © 2015 Wiley Periodicals, Inc.

  14. Aircraft conceptual design - an adaptable parametric sizing methodology

    NASA Astrophysics Data System (ADS)

    Coleman, Gary John, Jr.

    Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets, and technologies. However, when new markets (space tourism), new constraints (environmental), or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation, which begs the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual-design parametric sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration, and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations to meet combinations of mission and technology. This research undertaking contributes to the state of the art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic, and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composites, natural laminar flow, thrust-vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies, and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.

  15. Digital redesign of anti-wind-up controller for cascaded analog system.

    PubMed

    Chen, Y S; Tsai, J S H; Shieh, L S; Moussighi, M M

    2003-01-01

    The cascaded conventional anti-wind-up (CAW) design method for integral controllers is discussed. The prediction-based digital redesign methodology is then utilized to find a new pulse-amplitude-modulated (PAM) digital controller for effective digital control of an analog plant with an input saturation constraint. The desired digital controller is determined from an existing or pre-designed CAW analog controller. The proposed method provides a novel methodology for indirect digital design of a continuous-time unity output-feedback system with a cascaded analog controller, as in the case of PID controllers for industrial control processes in the presence of actuator saturation. It enables an existing or pre-designed cascaded CAW analog controller to be implemented effectively via a digital controller.
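
    For context, a minimal sketch of a conventional anti-wind-up (back-calculation) integral controller of the kind being redesigned; the gains, saturation limits, and plant are illustrative, and this is the analog-style CAW baseline rather than the paper's PAM digital redesign:

        import numpy as np

        def pi_antiwindup(setpoint, y, state, kp=2.0, ki=1.0, kb=1.0,
                          dt=0.01, u_min=-1.0, u_max=1.0):
            """One step of a PI controller with back-calculation anti-windup:
            the integrator is driven by the tracking error plus a correction
            proportional to the amount of actuator saturation."""
            e = setpoint - y
            u_unsat = kp * e + ki * state
            u = np.clip(u_unsat, u_min, u_max)
            # back-calculation: bleed the integrator while saturated
            state = state + dt * (e + kb * (u - u_unsat))
            return u, state

        # Tiny closed-loop demo on a first-order plant dy/dt = -y + u
        y, x_i, dt = 0.0, 0.0, 0.01
        for _ in range(1000):
            u, x_i = pi_antiwindup(1.0, y, x_i, dt=dt)
            y += dt * (-y + u)
        print("final output:", round(y, 3))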

  16. Supercontinuum Fourier transform spectrometry with balanced detection on a single photodiode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goncharov, Vasily V.; Hall, Gregory E., E-mail: gehall@bnl.gov

    We demonstrate a method of combining a supercontinuum light source with a commercial Fourier transform spectrometer, using a novel approach to dual-beam balanced detection, implemented with phase-sensitive detection on a single light detector. A 40 dB reduction in the relative intensity noise is achieved for broadband light, analogous to conventional balanced detection methods using two matched photodetectors. Unlike conventional balanced detection, however, this method exploits the time structure of the broadband source to interleave signal and reference pulse trains in the time domain, recording the broadband differential signal at the fundamental pulse repetition frequency of the supercontinuum. The method is capable of real-time correction for instability in the supercontinuum spectral structure over a broad range of wavelengths and is compatible with commercially designed spectrometers. A proof-of-principle experimental setup is demonstrated for weak absorption in the 1500-1600 nm region.

  17. A review of statistical issues with progression-free survival as an interval-censored time-to-event endpoint.

    PubMed

    Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang

    2013-01-01

    The frequent rise of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods that treat intervals as fixed points, as generally practiced by the pharmaceutical industry, sometimes yield inferior or, in extreme cases, even flawed analysis results for interval-censored data. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and the available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
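
    A small simulation of the assessment-frequency effect discussed above, assuming exponential event times and midpoint imputation (the "intervals as fixed points" practice); all parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(3)

        def interval_censor(event_times, visit_spacing):
            """Return (left, right) interval bounds from periodic assessments:
            an event is only known to lie between the last negative visit and
            the first positive one."""
            left = visit_spacing * np.floor(event_times / visit_spacing)
            right = left + visit_spacing
            return left, right

        true_t = rng.exponential(scale=12.0, size=100_000)   # months, illustrative
        for spacing in (1.0, 3.0, 6.0):
            left, right = interval_censor(true_t, spacing)
            midpoint = 0.5 * (left + right)
            bias = midpoint.mean() - true_t.mean()
            print(f"visits every {spacing} mo: midpoint-imputation bias "
                  f"= {bias:+.3f} mo, interval width = {spacing} mo")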

  18. Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis, in this case static aero-structural analysis, and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis, and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing with single-discipline analysis, the method, as implemented here, may not show a significant reduction in computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.

  19. Precision chemical heating for diagnostic devices.

    PubMed

    Buser, J R; Diesburg, S; Singleton, J; Guelig, D; Bishop, J D; Zentner, C; Burton, R; LaBarre, P; Yager, P; Weigl, B H

    2015-12-07

    Decoupling nucleic acid amplification assays from infrastructure requirements such as grid electricity is critical for providing effective diagnosis and treatment at the point of care in low-resource settings. Here, we outline a complete strategy for the design of electricity-free precision heaters compatible with medical diagnostic applications requiring isothermal conditions, including nucleic acid amplification and lysis. Low-cost, highly energy dense components with better end-of-life disposal options than conventional batteries are proposed as an alternative to conventional heating methods to satisfy the unique needs of point of care use.

  20. Advancing RF pulse design using an open-competition format: Report from the 2015 ISMRM challenge.

    PubMed

    Grissom, William A; Setsompop, Kawin; Hurley, Samuel A; Tsao, Jeffrey; Velikina, Julia V; Samsonov, Alexey A

    2017-10-01

    To advance the best solutions to two important RF pulse design problems with an open head-to-head competition. Two sub-challenges were formulated in which contestants competed to design the shortest simultaneous multislice (SMS) refocusing pulses and slice-selective parallel transmission (pTx) excitation pulses, subject to realistic hardware and safety constraints. Short refocusing pulses are needed for spin echo SMS imaging at high multiband factors, and short slice-selective pTx pulses are needed for multislice imaging in ultra-high field MRI. Each sub-challenge comprised two phases, in which the first phase posed problems with a low barrier of entry, and the second phase encouraged solutions that performed well in general. The Challenge ran from October 2015 to May 2016. The pTx Challenge winners developed a spokes pulse design method that combined variable-rate selective excitation with an efficient method to enforce SAR constraints, which achieved 10.6 times shorter pulse durations than conventional approaches. The SMS Challenge winners developed a time-optimal control multiband pulse design algorithm that achieved 5.1 times shorter pulse durations than conventional approaches. The Challenge led to rapid step improvements in solutions to significant problems in RF excitation for SMS imaging and ultra-high field MRI. Magn Reson Med 78:1352-1361, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  1. The Accuracy of Shock Capturing in Two Spatial Dimensions

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Casper, Jay H.

    1997-01-01

    An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.
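
    As a quick illustration of how such asymptotic orders are measured, the sketch below computes the observed convergence order from errors on successively refined grids; the error values are made up to mimic the first-order behavior reported here, not taken from the study.

        import numpy as np

        def observed_order(e_coarse, e_fine, r=2.0):
            """Observed convergence rate p from errors on two grids refined by factor r."""
            return np.log(e_coarse / e_fine) / np.log(r)

        # Illustrative (invented) L1 errors behind a captured shock on grids h, h/2, h/4:
        errors = [2.0e-2, 1.0e-2, 5.1e-3]
        for ec, ef in zip(errors, errors[1:]):
            # Prints roughly 1.0 even if the scheme's design order is 3 or 4.
            print(f"observed order: {observed_order(ec, ef):.2f}")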

  2. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
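
    The conventional three-point fit described above can be sketched as follows, assuming the common PERT-style choices of mean (a + 4m + b)/6 and standard deviation (b - a)/6; the in-house NASA variant is not reproduced here, and the temperature values are hypothetical.

        import numpy as np

        def beta_from_three_points(a, m, b):
            """Conventional (PERT-style) beta fit from minimum a, most likely m, maximum b.
            Assumes mean = (a + 4m + b)/6 and standard deviation = (b - a)/6."""
            mean = (a + 4 * m + b) / 6.0
            var = ((b - a) / 6.0) ** 2
            mu = (mean - a) / (b - a)      # mean rescaled to the unit interval
            s2 = var / (b - a) ** 2        # variance rescaled to the unit interval
            alpha = mu * (mu * (1 - mu) / s2 - 1)
            beta = alpha * (1 - mu) / mu
            return alpha, beta

        # Hypothetical turbine inlet temperature estimates (K):
        print(beta_from_three_points(1400.0, 1475.0, 1520.0))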

  3. Adaptive sensor-based ultra-high accuracy solar concentrator tracker

    NASA Astrophysics Data System (ADS)

    Brinkley, Jordyn; Hassanzadeh, Ali

    2017-09-01

    Conventional solar trackers use information about the sun's position, obtained either by direct sensing or by GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow-tracking method with a two-stage-concentration solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.

  4. Feedstock and Conversion Supply System Design and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobson, J.; Mohammad, R.; Cafferty, K.

    The success of the earlier logistic pathway designs (Biochemical and Thermochemical) from a feedstock perspective was that it demonstrated that through proper equipment selection and best management practices, conventional supply systems (referred to in this report as “conventional designs,” or specifically the 2012 Conventional Design) can be successfully implemented to address dry matter loss and quality issues and to enable feedstock cost reductions that help to reduce the feedstock risk of variable supply and quality and enable industry to commercialize biomass feedstock supply chains. The caveat of this success is that conventional designs depend on high-density, low-cost biomass with no disruption from inclement weather. In this respect, the success of conventional designs is tied to specific, highly productive regions such as the southeastern U.S., which has traditionally supported numerous pulp and paper industries, or the Midwest U.S. for corn stover.

  5. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    PubMed

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose for patients can be reduced with many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography. An experimental design with a quantitative approach was used. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.
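
    The dose comparison step can be sketched with SciPy's paired t-test; the dose-area products below are invented and merely illustrate the analysis, not the study's data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Invented dose-area products (Gy*cm^2) for the same 48 patients under two methods:
        conventional = rng.normal(10.0, 2.0, 48)
        patient_controlled = conventional + rng.normal(0.0, 0.8, 48)  # no systematic shift

        t, p = stats.ttest_rel(conventional, patient_controlled)
        print(f"paired t = {t:.2f}, p = {p:.3f}")  # large p: no significant dose difference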

  6. Bayesian adaptive phase II screening design for combination trials

    PubMed Central

    Cai, Chunyan; Yuan, Ying; Johnson, Valen E

    2013-01-01

    Background Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Methods Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Results Simulation studies show that the proposed design substantially outperforms the conventional multiarm balanced factorial trial design. The proposed design yields a significantly higher probability for selecting the best treatment while allocating substantially more patients to efficacious treatments. Limitations The proposed design is most appropriate for the trials combining multiple agents and screening out the efficacious combination to be further investigated. Conclusions The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design. Our design allocates more patients to better treatments while providing higher power to identify the best treatment at the end of the trial. PMID:23359875
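
    A minimal sketch of posterior-driven adaptive allocation in this spirit is given below, using independent Beta-Binomial models and Thompson-style sampling as a stand-in for the authors' hypothesis-testing formulation; the response rates and sample size are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        true_resp = np.array([0.20, 0.35, 0.50, 0.30])  # hypothetical response rates per combination
        succ = np.zeros(4); fail = np.zeros(4)          # Beta(1, 1) priors on each arm

        for patient in range(200):
            # Sample each arm's posterior and treat the next patient on the best draw,
            # so allocation tracks the current posterior probabilities.
            draws = rng.beta(1 + succ, 1 + fail)
            arm = int(np.argmax(draws))
            response = rng.random() < true_resp[arm]
            succ[arm] += response
            fail[arm] += 1 - response

        print("patients per arm:", (succ + fail).astype(int))  # skews toward efficacious arms
        print("posterior means: ", np.round((1 + succ) / (2 + succ + fail), 2))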

  7. Evaluation of marginal/internal fit of chrome-cobalt crowns: Direct laser metal sintering versus computer-aided design and computer-aided manufacturing.

    PubMed

    Gunsoy, S; Ulusoy, M

    2016-01-01

    The purpose of this study was to evaluate the internal and marginal fit of chrome-cobalt (Co-Cr) crowns fabricated with laser sintering, computer-aided design (CAD) and computer-aided manufacturing, and conventional methods. Polyamide master and working models were designed and fabricated. The models were initially designed with a software application for three-dimensional (3D) CAD (Maya, Autodesk Inc.), and all models were produced by a 3D printer (EOSINT P380 SLS, EOS). 128 single-unit Co-Cr fixed dental prostheses were fabricated with four different techniques: the conventional lost-wax method, milled wax with the lost-wax method (MWLW), direct laser metal sintering (DLMS), and milled Co-Cr (MCo-Cr). The cement film thickness of the marginal and internal gaps was measured by an observer using a stereomicroscope after taking digital photos at x24 magnification. The best fit rates, according to the means and standard deviations of all measurements (in μm), were obtained with DLMS in both the premolar (65.84) and molar (58.38) models. A significant difference was found between DLMS and the rest of the fabrication techniques (P < 0.05). No significant difference was found between MCo-Cr and MWLW in either the premolar or molar models (P > 0.05). Based on the results, DLMS was the best-fitting fabrication technique for single crowns. The best fit was found at the margin; the largest gap was found occlusally. All groups were within the clinically acceptable misfit range.

  8. Long-term behavior of integral abutment bridges : appendix E, INDOT design manual : selected recommendations for integral abutment bridges.

    DOT National Transportation Integrated Search

    2011-01-01

    Integral abutment (IA) construction has become the preferred method over conventional construction for use with typical highway bridges. However, the use of these structures is limited due to state mandated length and skew limitations. To expand thei...

  9. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    NASA Astrophysics Data System (ADS)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
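
    A minimal sketch of the contrast the authors draw is given below: a spatially correlated strength field generated from an exponential covariance model (a stand-in for a fitted variogram, realized via Cholesky factorization) versus the conventional spatially constant random variable. Full sequential Gaussian simulation is not implemented, and all parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        # 1-D profile of uniaxial compressive strength (MPa) along a slope section.
        x = np.linspace(0.0, 100.0, 200)
        mean_ucs, sd_ucs, corr_len = 80.0, 15.0, 20.0

        # Exponential covariance model (stand-in for a fitted variogram):
        C = sd_ucs**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        L = np.linalg.cholesky(C + 1e-8 * np.eye(len(x)))

        field = mean_ucs + L @ rng.standard_normal(len(x))  # spatially correlated realization
        homog = mean_ucs + sd_ucs * rng.standard_normal()   # conventional: one value everywhere

        print("correlated field: min %.1f, max %.1f" % (field.min(), field.max()))
        print("homogeneous draw: %.1f everywhere" % homog)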

  10. A Simulation Study on Optimal Design Parameters of 200V Class Induction Range using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Ohchi, Masashi; Furukawa, Tatsuya

    Induction heating has found new feasibility in domestic appliances; its application is known as an “induction range” or an “induction heating oven”. Conventional design schemes for these have depended on the experience and insight of designers. In the paper, the authors treat the induction range as an electromagnetic device to investigate the mechanism of power dissipation using the Finite Element Method, where an impressed voltage supply is taken into account and the constant V/f condition is imposed for a constant impressed magnetic flux. Furthermore, the authors examine how to heat an aluminum pan and discuss the optimal frequency of the power supply.

  11. Marginal and internal fit of metal copings fabricated with rapid prototyping and conventional waxing.

    PubMed

    Farjood, Ehsan; Vojdani, Mahroo; Torabi, Kiyanoosh; Khaledi, Amir Ali Reza

    2017-01-01

    Given the limitations of conventional waxing, computer-aided design and computer-aided manufacturing (CAD-CAM) technologies have been developed as alternative methods of making patterns. The purpose of this in vitro study was to compare the marginal and internal fit of metal copings derived from wax patterns fabricated by rapid prototyping (RP) to those created by the conventional handmade technique. Twenty-four standardized brass dies were milled and divided into 2 groups (n=12) according to the wax pattern fabrication method. The CAD-RP group was assigned to the experimental group, and the conventional group to the control group. The cross-sectional technique was used to assess the marginal and internal discrepancies at 15 points on the master die by using a digital microscope. An independent t test was used for statistical analysis (α=.01). The CAD-RP group had a total mean (±SD) for absolute marginal discrepancy of 117.1 (±11.5) μm and a mean marginal discrepancy of 89.8 (±8.3) μm. The conventional group had an absolute marginal discrepancy 88.1 (±10.7) μm and a mean marginal discrepancy of 69.5 (±15.6) μm. The overall mean (±SD) of the total internal discrepancy, separately calculated as the axial internal discrepancy and occlusal internal discrepancy, was 95.9 (±8.0) μm for the CAD-RP group and 76.9 (±10.2) μm for the conventional group. The independent t test results showed significant differences between the 2 groups. The CAD-RP group had larger discrepancies at all measured areas than the conventional group, which was statistically significant (P<.01). Within the limitations of this in vitro study, the conventional method of wax pattern fabrication produced copings with better marginal and internal fit than the CAD-RP method. However, the marginal and internal fit for both groups were within clinically acceptable ranges. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  12. Dissipative Prototyping Methods: A Manifesto

    NASA Astrophysics Data System (ADS)

    Beesley, P.

    Taking a designer's unique perspective, using examples of practice in experimental installation and digital prototyping, this manifesto acts as a provocation for change, unlocking new potential by encouraging changes of perspective about the material realm. Diffusive form-language is proposed as a paradigm for architectural design. This method of design is applied through 3D printing and related digital fabrication methods, offering new qualities that can be implemented in the design of realms including the present earth and future interplanetary environments. A paradigm shift is encouraged by questioning conventional notions of geometry that minimize interfaces and by proposing the alternatives of maximized interfaces formed by effusive kinds of formal composition. A series of projects from the Canadian research studio of the Hylozoic Architecture group are described, providing examples of component design methods employing diffusive forms within combinations of tension-integrity structural systems integrated with hybrid metabolisms employing synthetic biology. Cultural implications are also discussed, drawing from architectural theory and natural philosophy. The conclusion of this paper suggests that the practice of diffusive prototyping can offer formative strategies contributing to the design of future living systems.

  13. A comparison study on microwave-assisted extraction of Artemisia sphaerocephala polysaccharides with conventional method: Molecule structure and antioxidant activities evaluation.

    PubMed

    Wang, Junlong; Zhang, Ji; Wang, Xiaofang; Zhao, Baotang; Wu, Yiqian; Yao, Jian

    2009-12-01

    Conventional extraction methods for polysaccharides are time-consuming, laborious, and energy-intensive. A microwave-assisted extraction (MAE) technique was employed for the extraction of Artemisia sphaerocephala polysaccharides (ASP), a traditional Chinese food. The extraction parameters were optimized by Box-Behnken design. In the microwave heating process, a decrease in molecular weight (M(w)) was detected by SEC-LLS measurement. A d(f) value of 2.85 indicated that ASP obtained by MAE exhibited a spherical conformation of branched clusters in aqueous solution. Furthermore, it showed stronger antioxidant activities compared with hot-water extraction. The data obtained showed that molecular weight played the more important role in the antioxidant activities.

  14. Development of an Ointment Formulation Using Hot-Melt Extrusion Technology.

    PubMed

    Bhagurkar, Ajinkya M; Angamuthu, Muralikrishnan; Patil, Hemlata; Tiwari, Roshan V; Maurya, Abhijeet; Hashemnejad, Seyed Meysam; Kundu, Santanu; Murthy, S Narasimha; Repka, Michael A

    2016-02-01

    Ointments are generally prepared either by fusion or by levigation methods. The current study proposes the use of hot-melt extrusion (HME) processing for the preparation of a polyethylene glycol base ointment. Lidocaine was used as a model drug. A modified screw design was used in this process, and parameters such as feeding rate, barrel temperature, and screw speed were optimized to obtain a uniform product. The product characteristics were compared with an ointment of similar composition prepared by conventional fusion method. The rheological properties, drug release profile, and texture characteristics of the hot-melt extruded product were similar to the conventionally prepared product. This study demonstrates a novel application of the hot-melt extrusion process in the manufacturing of topical semi-solids.

  15. Adaptive Control with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to improve the transient performance of the input and output signals of uncertain systems. A simple modification of the reference model is proposed by feeding back the tracking error signal. It is shown that the proposed approach guarantees tracking of the given reference command and the reference control signal (the one that would be designed if the system were known) not only asymptotically but also in transient. Moreover, it prevents the generation of high-frequency oscillations, which are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated with a simulation example.
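
    A scalar sketch of the core idea, feeding the tracking error back into the reference model, is given below; the plant, gains, and adaptation law are invented for illustration and are not the paper's design.

        import numpy as np

        # Scalar plant dx = a*x + b*u with unknown a; reference model
        # dxm = am*xm + bm*r + k*e, where the k*e term is the reference-model
        # modification (e = x - xm is the tracking error).
        a_true, b, am, bm, k = 1.0, 1.0, -2.0, 2.0, 5.0
        gamma, dt, T = 10.0, 1e-3, 10.0

        x = xm = 0.0
        a_hat = 0.0                      # adaptive estimate of the unknown plant parameter
        for _ in range(int(T / dt)):
            r = 1.0                      # step reference command
            e = x - xm
            u = (-a_hat * x + am * x + bm * r) / b   # certainty-equivalence control law
            x += dt * (a_true * x + b * u)
            xm += dt * (am * xm + bm * r + k * e)    # modified reference model
            a_hat += dt * (gamma * e * x)            # gradient adaptation law

        print(f"final tracking error: {abs(x - xm):.2e}, a_hat = {a_hat:.2f}")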

  16. New heterogeneous test statistics for the unbalanced fixed-effect nested design.

    PubMed

    Guo, Jiin-Huarng; Billard, L; Luh, Wei-Ming

    2011-05-01

    When the underlying variances are unknown or/and unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by the approximate test statistics (Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than those obtained by the conventional F test in various conditions. Therefore, the proposed test statistics are recommended in terms of robustness and easy implementation. ©2010 The British Psychological Society.
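
    SciPy ships one of the approximate tests that motivated this work; the sketch below contrasts the conventional F test with the Alexander-Govern test on made-up heteroscedastic groups (one-way rather than nested, for brevity).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Three levels of a factor with unequal variances and unequal group sizes:
        g1 = rng.normal(0.0, 1.0, 15)
        g2 = rng.normal(0.0, 4.0, 25)
        g3 = rng.normal(0.5, 9.0, 10)

        print(stats.f_oneway(g1, g2, g3))          # conventional F test (assumes equal variances)
        print(stats.alexandergovern(g1, g2, g3))   # heteroscedastic alternative (SciPy >= 1.7)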

  17. Nonlinear Dot Plots.

    PubMed

    Rodrigues, Nils; Weiskopf, Daniel

    2018-01-01

    Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.
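
    A heavily simplified sketch of the idea is given below: each column's dot diameter shrinks so that tall columns fit under a height cap. This crude per-column shrink stands in for the paper's logarithmic scaling, two-way sweep layout, and low-pass filtering, none of which are reproduced.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(5)
        data = np.round(rng.lognormal(1.0, 0.6, 2000), 1)  # skewed data: wide frequency range
        values, counts = np.unique(data, return_counts=True)

        h_max = 20.0                       # height cap, in units of full-size dots
        fig, ax = plt.subplots()
        for v, n in zip(values, counts):
            d = min(1.0, h_max / n)        # per-column dot diameter (nonlinear shrink)
            y = (np.arange(n) + 0.5) * d   # stack n dots of diameter d
            ax.scatter(np.full(n, v), y, s=(20 * d) ** 2, edgecolors="none")
        ax.set_xlabel("value"); ax.set_ylabel("stacked dots")
        plt.savefig("nonlinear_dot_plot.png")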

  18. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  19. From user needs to system specifications: multi-disciplinary thematic seminars as a collaborative design method for development of health information systems.

    PubMed

    Scandurra, I; Hägglund, M; Koch, S

    2008-08-01

    This paper presents a new multi-disciplinary method for user needs analysis and requirements specification in the context of health information systems based on established theories from the fields of participatory design and computer supported cooperative work (CSCW). Whereas conventional methods imply a separate, sequential needs analysis for each profession, the "multi-disciplinary thematic seminar" (MdTS) method uses a collaborative design process. Application of the method in elderly homecare resulted in prototypes that were well adapted to the intended user groups. Vital information in the points of intersection between different care professions was elicited and a holistic view of the entire care process was obtained. Health informatics-usability specialists and clinical domain experts are necessary to apply the method. Although user needs acquisition can be time-consuming, MdTS was perceived to efficiently identify in-context user needs, and transformed these directly into requirements specifications. Consequently the method was perceived to expedite the entire ICT implementation process.

  20. Rapid Prototyping: A Survey and Evaluation of Methodologies and Models

    DTIC Science & Technology

    1990-03-01

    possibility of program coding errors or design differences from the actual prototype the user validated. The methodology should result in a production...behavior within the problem domain to be defined. "Each method has a different approach towards developing the set of symbols with which to define the...investigate prototyping as a viable alternative to the conventional method of software development. By the mid 1980's, it was evident that the traditional

  1. Finite-time synchronization control of a class of memristor-based recurrent neural networks.

    PubMed

    Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun

    2015-03-01

    This paper presents a global and local finite-time synchronization control law for memristor neural networks. By utilizing the drive-response concept, differential inclusions theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural network with the designed controller. In comparison with the existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method. Copyright © 2014 Elsevier Ltd. All rights reserved.
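
    The drive-response setup can be illustrated generically as below, synchronizing a small recurrent network to a master copy with a linear-plus-sign feedback controller of the kind used in finite-time designs; the weights and gains are invented, and the memristive state dependence is omitted.

        import numpy as np

        rng = np.random.default_rng(6)
        n, dt, steps = 3, 1e-3, 20000
        W = rng.uniform(-1, 1, (n, n))     # fixed connection weights (memristance ignored)

        def f(x):                          # network dynamics dx = -x + W*tanh(x)
            return -x + W @ np.tanh(x)

        x = rng.standard_normal(n)         # drive (master) state
        y = rng.standard_normal(n)         # response (slave) state
        k, eta = 4.0, 0.5                  # invented controller gains

        for _ in range(steps):
            e = y - x
            u = -k * e - eta * np.sign(e)  # linear + sign feedback controller
            x += dt * f(x)
            y += dt * (f(y) + u)

        # Error settles near zero, up to discretization chatter from the sign term.
        print("final sync error:", np.linalg.norm(y - x))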

  2. Single-incision Laparoscopic Surgery (SILS) in general surgery: a review of current practice.

    PubMed

    Froghi, Farid; Sodergren, Mikael Hans; Darzi, Ara; Paraskeva, Paraskevas

    2010-08-01

    Single-incision laparoscopic surgery (SILS) aims to eliminate multiple port incisions. Although general operative principles of SILS are similar to conventional laparoscopic surgery, operative techniques are not standardized. This review aims to evaluate the current use of SILS published in the literature by examining the types of operations performed, techniques employed, and relevant complications and morbidity. This review considered a total of 94 studies reporting 1889 patients evaluating 17 different general surgical operations. There were 8 different access techniques reported using conventional laparoscopic instruments and specifically designed SILS ports. There is extensive heterogeneity associated with operating methods and in particular ways of overcoming problems with retraction and instrumentation. Published complications, morbidity, and hospital length of stay are comparable to conventional laparoscopy. Although SILS provides excellent cosmetic results and morbidity seems similar to conventional laparoscopy, larger randomized controlled trials are needed to assess the safety and efficacy of this novel technique.

  3. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and complicated. In this paper, based on the conventional EER model, the authors analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, which gives it good upward compatibility, but also effectively supports the modelling of valid time and transaction time at the same time. In addition, BTEER can be transformed to EER easily and automatically. As practice has shown, this method models temporal information well.

  4. A continuous-flow capillary mixing method to monitor reactions on the microsecond time scale.

    PubMed Central

    Shastry, M C; Luck, S D; Roder, H

    1998-01-01

    A continuous-flow capillary mixing apparatus, based on the original design of Regenfuss et al. (Regenfuss, P., R. M. Clegg, M. J. Fulwyler, F. J. Barrantes, and T. M. Jovin. 1985. Rev. Sci. Instrum. 56:283-290), has been developed with significant advances in mixer design, detection method and data analysis. To overcome the problems associated with the free-flowing jet used for observation in the original design (instability, optical artifacts due to scattering, poor definition of the geometry), the solution emerging from the capillary is injected directly into a flow-cell joined to the tip of the outer capillary via a ground-glass joint. The reaction kinetics are followed by measuring fluorescence versus distance downstream from the mixer, using an Hg(Xe) arc lamp for excitation and a digital camera with a UV-sensitized CCD detector for detection. Test reactions involving fluorescent dyes indicate that mixing is completed within 15 μs of its initiation and that the dead time of the measurement is 45 ± 5 μs, which represents a >30-fold improvement in time resolution over conventional stopped-flow instruments. The high sensitivity and linearity of the CCD camera have been instrumental in obtaining artifact-free kinetic data over the time window from approximately 45 μs to a few milliseconds with signal-to-noise levels comparable to those of conventional methods. The scope of the method is discussed and illustrated with an example of a protein folding reaction. PMID:9591695

  5. Lightning protection: challenges, solutions and questionable steps in the 21st century

    NASA Astrophysics Data System (ADS)

    Berta, István

    2011-06-01

    Besides the special primary lightning protection of extremely high towers, huge office and governmental buildings, large industrial plants, and residential parks, most of the challenges are connected to the secondary lightning protection of sensitive devices in information and communication technology. The 70-year history of the Budapest School of Lightning Protection plays an important role in the research and education of lightning and the development of lightning protection. Among results and solutions, the Rolling Sphere designing method (RS) and the Probability Modulated Attraction Space (PMAS) theory are detailed. As a new field, Preventive Lightning Protection (PLP) has been introduced; the PLP method means the use of special preventive actions only for the duration of the thunderstorm. Recently, several non-conventional lightning protection techniques have appeared as competitors of air termination systems formed of conventional Franklin rods. The questionable steps, the non-conventional lightning protection systems reported in the literature, are the radioactive lightning rods, Early Streamer Emission (ESE) rods, and Dissipation Arrays (sometimes called Charge Transfer Systems).

  6. An induction reactor for studying crude-oil oxidation relevant to in situ combustion.

    PubMed

    Bazargan, Mohammad; Lapene, Alexandre; Chen, Bo; Castanier, Louis M; Kovscek, Anthony R

    2013-07-01

    In a conventional ramped-temperature oxidation kinetics cell experiment, an electrical furnace is used to ramp temperature at a prescribed rate. Thus, the heating rate of a kinetics cell experiment is limited by furnace performance to about 0.5-3 °C/min. A new reactor has been designed to overcome this limit. It uses an induction heating method to ramp temperature. Induction heating is fast and easily controlled. The new reactor covers heating rates from 1 to 30 °C/min. This is the first time that the oxidation profiles of a crude oil have been available over such a wide range of heating rates. The results from an induction reactor and a conventional kinetics cell at roughly 2 °C/min are compared to illustrate consistency between the two reactors; the results at low heating rate are the same as for the conventional kinetics cell. As presented in the paper, the new reactor couples well with the isoconversional method for the interpretation of reaction kinetics.
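
    The isoconversional interpretation mentioned at the end can be sketched with the Friedman method: at a fixed conversion, ln(rate) versus 1/T across heating rates has slope -E/R. The kinetic parameters below are invented, and the integration is deliberately crude.

        import numpy as np

        R, A, E = 8.314, 1e10, 120e3   # gas constant; invented Arrhenius parameters
        T0 = 300.0

        def run(beta, dt=0.1):
            """Integrate first-order conversion under a linear temperature ramp beta (K/s)."""
            t, alpha, out = 0.0, 0.0, []
            while alpha < 0.999:
                T = T0 + beta * t
                rate = A * np.exp(-E / (R * T)) * (1 - alpha)
                out.append((alpha, T, rate))
                alpha += rate * dt
                t += dt
            return np.array(out)

        # Friedman step: pick a fixed conversion and regress ln(rate) on 1/T.
        target = 0.5
        invT, lnrate = [], []
        for beta in (1 / 60, 5 / 60, 15 / 60, 30 / 60):   # 1-30 degC/min expressed in K/s
            a = run(beta)
            i = np.argmin(np.abs(a[:, 0] - target))
            invT.append(1.0 / a[i, 1])
            lnrate.append(np.log(a[i, 2]))

        slope = np.polyfit(invT, lnrate, 1)[0]
        print(f"recovered E = {-slope * R / 1000:.1f} kJ/mol (true 120.0)")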

  7. Conventional and Accelerated-Solvent Extractions of Green Tea (Camellia sinensis) for Metabolomics-based Chemometrics

    PubMed Central

    Kellogg, Joshua J.; Wallace, Emily D.; Graf, Tyler N.; Oberlies, Nicholas H.; Cech, Nadja B.

    2018-01-01

    Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. PMID:28787673

  8. Development of Innovative Group Work Practice Using the Intervention Research Paradigm

    ERIC Educational Resources Information Center

    Comer, Edna; Meier, Andrea; Galinsky, Maeda J.

    2004-01-01

    Rothman and Thomas' intervention research (IR) paradigm provides an alternative, developmental research method that is appropriate for practice research, especially at the early stages. It is more flexible than conventional experimental designs, capitalizes on the availability of small samples, accommodates the dynamism and variation in practice…

  9. Lift-Shape Construction, An EFL Project Report.

    ERIC Educational Resources Information Center

    Evans, Ben H.

    Research development of a construction system is detailed in terms of--(1) design and analysis, (2) construction methods, (3) testing, (4) cost analysis, and (5) architectural potentials. The system described permits construction of unusual shapes without the use of conventional concrete formwork. The concrete involves development of a structural…

  10. INVESTIGATION OF ORGANIC WEED CONTROL METHODS, PESTICIDE SPECIAL STUDY, COLORADO STATE UNIVERSITY

    EPA Science Inventory

    The project is proposed for the 2003 and 2004 growing seasons. Corn gluten meal (CGM), treated paper mulch and plastic mulch, along with conventional herbicide, will be applied to fields of drip irrigated broccoli in a randomized complete block design with 6 replicates. Due to ...

  11. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, ``non-intrusive'' LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  12. Using Finite Element and Eigenmode Expansion Methods to Investigate the Periodic and Spectral Characteristic of Superstructure Fiber Bragg Gratings

    PubMed Central

    He, Yue-Jing; Hung, Wei-Chih; Lai, Zhe-Ping

    2016-01-01

    In this study, a numerical simulation method was employed to investigate and analyze superstructure fiber Bragg gratings (SFBGs) with five duty cycles (50%, 33.33%, 14.28%, 12.5%, and 10%). This study focuses on demonstrating the relationship between the design period and the spectral characteristics of SFBGs (in the form of graphics) for SFBGs of all duty cycles. Compared with the complicated and hard-to-learn conventional coupled-mode theory, the results of the present study may assist beginner and expert designers in understanding the basic application aspects, optical characteristics, and design techniques of SFBGs, thereby lowering the physical-concept and mathematical-skill barrier to entering the design field. To effectively improve the accuracy of the overall computational performance and numerical calculations and to shorten the gap between simulation results and actual production, this study integrated a perfectly matched layer (PML), perfectly reflecting boundary (PRB), object meshing method (OMM), and boundary meshing method (BMM) into the finite element method (FEM) and eigenmode expansion method (EEM). The integrated method enables designers to easily and flexibly design optical fiber communication systems that conform to a specific spectral characteristic by using the simulation data in this paper, which include bandwidth, number of channels, and band gap size. PMID:26861322

  13. Design of transmission-type phase holograms for a compact radar-cross-section measurement range at 650 GHz.

    PubMed

    Noponen, Eero; Tamminen, Aleksi; Vaaja, Matti

    2007-07-10

    A design formalism is presented for transmission-type phase holograms for use in a submillimeter-wave compact radar-cross-section (RCS) measurement range. The design method is based on rigorous electromagnetic grating theory combined with conventional hologram synthesis. Hologram structures consisting of a curved groove pattern on a 320 mm x 280 mm Teflon plate are designed to transform an incoming spherical wave at 650 GHz into an output wave generating a 100 mm diameter planar field region (quiet zone) at a distance of 1 m. The reconstructed quiet-zone field is evaluated by a numerical simulation method. The uniformity of the quiet-zone field is further improved by reoptimizing the goal field. Measurement results are given for a test hologram fabricated on Teflon.

  14. Evaluation of a global algorithm for wavefront reconstruction for Shack-Hartmann wave-front sensors and thick fundus reflectors.

    PubMed

    Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha

    2014-01-01

    Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover by two methods the known wavefront aberrations expected from a population of normally-aberrated human eyes and bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with standard deviations (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, while the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.

  15. Comparison of the Debonding Characteristics of Conventional and New Debonding Instrument used for Ceramic, Composite and Metallic Brackets – An Invitro Study

    PubMed Central

    Gill, Vikas; Reddy, Y. N. N.; Sanadhya, Sudhanshu; Aapaliya, Pankaj; Sharma, Nidhi

    2014-01-01

    Background: Debonding is time-consuming and can damage the enamel if performed with an improper technique. Debonding methods include the conventional methods that use pliers or wrenches, an ultrasonic method, electrothermal devices, air-pressure impulse devices, diamond burs that grind the brackets off the tooth surface, and lasers. Among all these methods, debonding pliers are the most convenient and effective but have been reported to cause damage to the teeth. Recently, a new debonding instrument designed specifically for ceramic and composite brackets has been introduced. As this is a new instrument, little information is available on its efficacy. The purpose of this study was to evaluate the debonding characteristics of both the conventional debonding pliers and the new debonding instrument when removing ceramic, composite, and metallic brackets. Materials and Methods: One hundred thirty-eight extracted maxillary premolar teeth were collected and divided into two groups, Group A and Group B (n = 69 each), which were further divided into three subgroups (n = 23) according to the type of bracket to be bonded: stainless steel in subgroups A1 and B1, ceramic in A2 and B2, and composite in A3 and B3, all using adhesive-precoated maxillary premolar brackets. All teeth were etched using 37% phosphoric acid for 15 seconds, and the brackets were bonded using Transbond XT primer. Brackets were debonded using the conventional debonding pliers (Group A) and the new debonding instrument (Group B). After debonding, the enamel surface of each tooth was examined under a stereomicroscope (10X magnification). A modified adhesive remnant index (ARI) was used to quantify the amount of adhesive remaining on each tooth. Results: The results of the new debonding instrument for debonding metal, ceramic, and composite brackets were statistically significantly different from (p = 0.04), and superior to, the results of the conventional debonding pliers. Conclusion: The debonding efficiency of the new debonding instrument is better than that of the conventional debonding pliers for metal, ceramic, and composite brackets, respectively. PMID:25177639

  16. Design-based and model-based inference in surveys of freshwater mollusks

    USGS Publications Warehouse

    Dorazio, R.M.

    1999-01-01

    Well-known concepts in statistical inference and sampling theory are used to develop recommendations for planning and analyzing the results of quantitative surveys of freshwater mollusks. Two methods of inference commonly used in survey sampling (design-based and model-based) are described and illustrated using examples relevant in surveys of freshwater mollusks. The particular objectives of a survey and the type of information observed in each unit of sampling can be used to help select the sampling design and the method of inference. For example, the mean density of a sparsely distributed population of mollusks can be estimated with higher precision by using model-based inference or by using design-based inference with adaptive cluster sampling than by using design-based inference with conventional sampling. More experience with quantitative surveys of natural assemblages of freshwater mollusks is needed to determine the actual benefits of different sampling designs and inferential procedures.

  17. Estimation of the failure risk of a maxillary premolar with different crack depths with endodontic treatment by computer-aided design/computer-aided manufacturing ceramic restorations.

    PubMed

    Lin, Chun-Li; Chang, Yen-Hsiang; Hsieh, Shih-Kai; Chang, Wen-Jen

    2013-03-01

    This study evaluated the risk of failure for an endodontically treated premolar with cracks of different depths extending toward the pulp chamber, restored by using 3 different computer-aided design/computer-aided manufacturing ceramic restoration configurations. Three 3-dimensional finite element models designed with computer-aided design/computer-aided manufacturing ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with finite element analysis to calculate the long-term failure probability relative to different load conditions. The results indicated that the stress values on the enamel, dentin, and luting cement for endocrown restorations exhibited the lowest values relative to the other 2 restoration methods. Weibull analysis revealed that the overall failure probabilities in a shallowly cracked premolar were 27%, 2%, and 1% for the onlay, endocrown, and conventional crown restorations, respectively, in the normal occlusal condition. The corresponding values were 70%, 10%, and 2% for the deeply cracked premolar. This numeric investigation suggests that the endocrown provides sufficient fracture resistance only in a shallowly cracked premolar with endodontic treatment. The conventional crown treatment can immobilize the premolar for different crack depths with lower failure risk. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
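
    The Weibull step can be sketched in a few lines: given a representative stress from the finite element solution, a two-parameter Weibull strength model yields a failure probability. The stresses and Weibull parameters below are invented, not the paper's values.

        import numpy as np

        def weibull_pof(stress, sigma0, m):
            """Probability of failure for a stress level under a two-parameter Weibull
            strength distribution with scale sigma0 and modulus m."""
            return 1.0 - np.exp(-(np.asarray(stress) / sigma0) ** m)

        # Invented peak tensile stresses (MPa) for three restoration designs under normal load:
        stresses = {"onlay": 110.0, "endocrown": 55.0, "crown": 45.0}
        for name, s in stresses.items():
            print(f"{name:10s} P(failure) = {weibull_pof(s, sigma0=120.0, m=8.0):.2f}")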

  18. Optimal design approach for heating irregular-shaped objects in three-dimensional radiant furnaces using a hybrid genetic algorithm-artificial neural network method

    NASA Astrophysics Data System (ADS)

    Darvishvand, Leila; Kamkari, Babak; Kowsary, Farshad

    2018-03-01

    In this article, a new hybrid method based on the combination of the genetic algorithm (GA) and an artificial neural network (ANN) is developed to optimize the design of three-dimensional (3-D) radiant furnaces. A 3-D irregularly shaped design body (DB) heated inside a 3-D radiant furnace is considered as a case study. Uniform thermal conditions on the DB surfaces are obtained by minimizing an objective function. An ANN, trained on data produced by applying the Monte Carlo method, is developed to predict the objective function value. The trained ANN is used in conjunction with the GA to find the optimal design variables. The results show that the computational time of the GA-ANN approach is significantly less than that of the conventional method. It is concluded that the integration of the ANN with the GA is an efficient technique for the optimization of radiant furnaces.
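
    A toy sketch of the hybrid scheme follows: an ANN surrogate is trained on a batch of "expensive" objective evaluations (standing in for the Monte Carlo radiation solves), and a simple GA then searches the cheap surrogate. The objective, network size, and GA settings are all invented.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)

        def expensive_objective(x):        # stand-in for the Monte Carlo radiation solve
            return np.sum((x - 0.3) ** 2, axis=-1)

        # 1) Train the surrogate on a modest sample of "expensive" evaluations.
        X = rng.random((300, 2)); y = expensive_objective(X)
        ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0).fit(X, y)

        # 2) Run a simple GA against the cheap surrogate instead of the real model.
        pop = rng.random((60, 2))
        for gen in range(80):
            fit = ann.predict(pop)
            parents = pop[np.argsort(fit)[:30]]              # truncation selection
            kids = (parents[rng.integers(0, 30, 60)] +
                    parents[rng.integers(0, 30, 60)]) / 2    # arithmetic crossover
            kids += rng.normal(0.0, 0.05, kids.shape)        # Gaussian mutation
            pop = np.clip(kids, 0.0, 1.0)

        best = pop[np.argmin(ann.predict(pop))]
        print("surrogate optimum:", best, "true objective:", expensive_objective(best))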

  19. Unstructured Grids for Sonic Boom Analysis and Design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Nayani, Sudheer N.

    2015-01-01

    An evaluation of two methods for improving the process for generating unstructured CFD grids for sonic boom analysis and design has been conducted. The process involves two steps: the generation of an inner core grid using a conventional unstructured grid generator such as VGRID, followed by the extrusion of a sheared and stretched collar grid through the outer boundary of the core grid. The first method evaluated, known as COB, automatically creates a cylindrical outer boundary definition for use in VGRID that makes the extrusion process more robust. The second method, BG, generates the collar grid by extrusion in a very efficient manner. Parametric studies have been carried out and new options evaluated for each of these codes with the goal of establishing guidelines for best practices for maintaining boom signature accuracy with as small a grid as possible. In addition, a preliminary investigation examining the use of the CDISC design method for reducing sonic boom utilizing these grids was conducted, with initial results confirming the feasibility of a new remote design approach.

  20. The Design of a Primary Flight Trainer using Concurrent Engineering Concepts

    NASA Technical Reports Server (NTRS)

    Ladesic, James G.; Eastlake, Charles N.; Kietzmann, Nicholas H.

    1993-01-01

    Concurrent Engineering (CE) concepts seek to coordinate the expertise of various disciplines from initial design configuration selection through product disposal so that cost-efficient design solutions may be achieved. Integrating this methodology into an undergraduate design course sequence may provide a needed enhancement to engineering education. The Advanced Design Program (ADP) project at Embry-Riddle Aeronautical University (ERAU) is focused on developing recommendations for the general aviation Primary Flight Trainer (PFT) of the twenty-first century using methods of CE. This project, over the next two years, will continue synthesizing the collective knowledge of teams composed of engineering students along with students from other degree programs, their faculty, and key industry representatives. During the past year (Phase I), conventional trainer configurations that comply with current regulations and existing technologies have been evaluated. Phase I efforts have resulted in two baseline concepts: a high-wing, conventional design named Triton and a low-wing, mid-engine configuration called Viper. In the second and third years (Phases II and III), applications of advanced propulsion, advanced materials, and unconventional airplane configurations, along with military and commercial technologies that are anticipated to be within the economic range of general aviation by the year 2000, will be considered.

  1. Numerical investigation of three-dimensional pupil model impact on the relative illumination in panomorph lenses

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhenfeng; Thibault, Simon

    2017-11-01

    One of the key issues in conventional wide-angle lenses is the well-known cosine-fourth-power law, which causes illumination falloff in the image space. This paper explores methods of improving illumination in the image space of panomorph lenses. By tracing skew rays within the defined field of view and pupil diameter, we obtain the actual position of the three-dimensional pupil model of the entrance pupil (EP) and exit pupil (XP). Based on the law of irradiance transport conservation, the relation between the area of the EP projection and the illumination in the image space is derived to investigate the factors affecting the illumination at the peripheral field. A panomorph lens has been optimized as an example by providing a self-defined operation in the optimization process. The characteristics of the EP and XP in panomorph lenses are qualitatively analyzed. Compared with the conventional design method, the proposed design strategy can enhance the illumination, with and without polarized light, based on qualitatively evaluating the area of the projected EP. It is demonstrated that this method enables enhancement of the illumination without an additional film coating.
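
    For reference, the conventional baseline the paper works against can be computed directly; the sketch below evaluates the cosine-fourth falloff of relative illumination versus field angle.

        import numpy as np

        # Cosine-fourth law: for a conventional lens, relative illumination at field
        # angle theta falls as cos(theta)^4 of the on-axis value.
        theta_deg = np.array([0, 20, 40, 60, 80])
        rel_illum = np.cos(np.radians(theta_deg)) ** 4
        for t, ri in zip(theta_deg, rel_illum):
            print(f"theta = {t:2d} deg -> relative illumination = {ri:.3f}")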

  2. Nonlinear Optical Characterization of Membrane Protein Microcrystals and Nanocrystals.

    PubMed

    Newman, Justin A; Simpson, Garth J

    2016-01-01

    Nonlinear optical methods such as second harmonic generation (SHG) and two-photon excited UV fluorescence (TPE-UVF) imaging are promising approaches to address bottlenecks in the membrane protein structure determination pipeline. The general principles of SHG and TPE-UVF are discussed here along with instrument design considerations. Comparisons to conventional methods in high throughput crystallization condition screening and crystal quality assessment prior to X-ray diffraction are also discussed.

  3. Defense Small Business Innovation Research Program (SBIR) FY 1984.

    DTIC Science & Technology

    1984-01-12

    nuclear submarine non-metallic, light weight, high strength piping. Includes the development of adequate fabrication procedures for attaching pipe ...waste heat economizer methods, require development. Improved conventional and hybrid heat pipes and/or two phase transport devices are required... DESCRIPTION: A need exists to conceive, design, fabricate and test a method of adjusting the length of the individual legs of nylon or Kevlar rope sling

  4. Development and application of a rapid and visual loop-mediated isothermal amplification for the detection of Sporisorium scitamineum in sugarcane

    PubMed Central

    Su, Yachun; Yang, Yuting; Peng, Qiong; Zhou, Dinggang; Chen, Yun; Wang, Zhuqing; Xu, Liping; Que, Youxiong

    2016-01-01

    Smut is a fungal disease with widespread prevalence in sugarcane planting areas. Early detection and proper identification of Sporisorium scitamineum are essential in smut management practices. In the present study, four specific primers targeting the core effector Pep1 gene of S. scitamineum were designed. Optimal concentrations of Mg2+, primer and Bst DNA polymerase, the three important components of the loop-mediated isothermal amplification (LAMP) reaction system, were screened using a single-factor experiment method and the L16(4^5) orthogonal experimental design. Hence, a LAMP system suitable for detection of S. scitamineum was established. The high specificity of the LAMP method was confirmed by assaying S. scitamineum, Fusarium moniliforme, Pestalotia ginkgo, Helminthosporium sacchari, Fusarium oxysporum and endophytes of Yacheng05-179 and ROC22. The sensitivity of the LAMP method was equal to that of the conventional PCR targeting the Pep1 gene and was 100 times higher than that of the conventional PCR assay targeting the bE gene in S. scitamineum. The results suggest that this novel LAMP system has strong specificity and high sensitivity. This method not only provides technological support for the epidemic monitoring of sugarcane smut, but also provides a good case for the development of similar detection technology for other plant pathogens. PMID:27035751

  5. Short-focus and ultra-wide-angle lens design in wavefront coding

    NASA Astrophysics Data System (ADS)

    Zhang, Jiyan; Huang, Yuanqing; Xiong, Feibing

    2016-10-01

    Wavefront coding (WFC) is a hybrid technology designed to increase the depth of field of conventional optics. The goal of our research is to apply this technology to a short-focus, ultra-wide-angle lens, which suffers from aberrations related to its large field of view (FOV), such as coma and astigmatism. WFC can also be used to compensate for other aberrations that are sensitive to the FOV. An ultra-wide-angle lens has a shallow depth of focus because of its small F-number and short focal length. We design a hybrid lens combining WFC with the ultra-wide-angle lens. The full FOV and relative aperture of the final design reach 170° and 1/1.8, respectively, with a focal length of 2 mm. We adopt a cubic phase mask (CPM) in the design. A conventional design exhibits wide variation of the point spread function (PSF) across the FOV and is very sensitive to field angle, whereas in the new design the PSF is nearly invariant over the whole FOV. The result does, however, show a slight difference between the horizontal and vertical extents of the PSF. We attribute this to the CPM being a non-symmetric phase mask combined with the very large FOV, which generates variation in the final image quality. For that reason, we apply a new method to avoid this effect: we make the rays incident on the CPM at small angles, which decreases the deformation of the PSF. The experimental results show that this new method of optimizing the CPM is suitable for the ultra-wide-angle lens. This research offers helpful guidance for designing ultra-wide-angle lenses with WFC.
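
    A small sketch of the wavefront-coding idea behind this design: a cubic phase mask makes the PSF nearly insensitive to defocus. The pupil sampling, mask strength alpha, and defocus coefficient below are illustrative assumptions, not the paper's parameters:

    ```python
    import numpy as np

    # Pupil function with a cubic phase mask (CPM): phi(x, y) = alpha*(x**3 + y**3).
    N, alpha = 256, 30.0
    x = np.linspace(-1.0, 1.0, N)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0                      # circular pupil
    pupil = aperture * np.exp(1j * alpha * (X**3 + Y**3))

    # The PSF is the squared magnitude of the Fourier transform of the pupil.
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    psf /= psf.sum()

    # Defocus adds a quadratic phase; with the CPM present the PSF barely
    # changes, which is the core property wavefront coding exploits.
    defocused = aperture * np.exp(1j * (alpha * (X**3 + Y**3) + 5.0 * (X**2 + Y**2)))
    psf_defocus = np.abs(np.fft.fftshift(np.fft.fft2(defocused)))**2
    psf_defocus /= psf_defocus.sum()
    print("PSF correlation, in-focus vs. defocused:",
          np.corrcoef(psf.ravel(), psf_defocus.ravel())[0, 1])
    ```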

  6. 77 FR 69568 - Special Conditions: Bombardier Aerospace, Model BD-500-1A10 and BD-500-1A11 Airplanes; Sidestick...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-20

    ... sidestick controller instead of a conventional control column and wheel. This kind of controller is designed... conventional control column and wheel. This kind of controller is designed for one-hand operation. Discussion... controller instead of a conventional wheel or control stick. This kind of controller is designed to be...

  7. 78 FR 11089 - Special Conditions: Bombardier Aerospace, Model BD-500-1A10 and BD-500-1A11 Airplanes; Sidestick...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ... controller instead of a conventional control column and wheel. This kind of controller is designed for only... following novel or unusual design feature: A sidestick controller instead of a conventional control column... conventional wheel or control stick. This kind of controller is designed to be operated using only one hand...

  8. A Simple Method for High-Lift Propeller Conceptual Design

    NASA Technical Reports Server (NTRS)

    Patterson, Michael; Borer, Nick; German, Brian

    2016-01-01

    In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.
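
    The momentum-theory relation underlying this design trade can be sketched as follows; the density, airspeed, radius, and induced-velocity values are illustrative placeholders, not SCEPTOR specifications:

    ```python
    import numpy as np

    # Actuator-disk (momentum theory) estimate of the thrust and ideal power
    # needed to add a uniform axial velocity increment v_i across the disk.
    rho   = 1.225        # air density, kg/m^3
    V_inf = 30.0         # freestream velocity, m/s
    R     = 0.3          # propeller radius, m
    v_i   = 10.0         # target uniform induced axial velocity at the disk, m/s

    A = np.pi * R**2                          # disk area
    T = 2.0 * rho * A * v_i * (V_inf + v_i)   # thrust from momentum theory
    P = T * (V_inf + v_i)                     # ideal (induced) power

    print(f"thrust T = {T:7.1f} N")
    print(f"power  P = {P/1000:7.2f} kW")
    # The paper's point: for lift augmentation the design target is the induced
    # velocity itself, so a rotor tailored to a uniform v_i can deliver the same
    # average v_i with less thrust and power than a minimum-induced-loss design.
    ```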

  9. Energy Productivity of the High Velocity Algae Raceway Integrated Design (ARID-HV)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attalah, Said; Waller, Peter M.; Khawam, George

    The original Algae Raceway Integrated Design (ARID) raceway was an effective method to increase algae culture temperature in open raceways. However, the energy input was high and flow mixing was poor. Thus, the High Velocity Algae Raceway Integrated Design (ARID-HV) raceway was developed to reduce energy input requirements and improve flow mixing in a serpentine flow path. A prototype ARID-HV system was installed in Tucson, Arizona. Based on algae growth simulation and hydraulic analysis, an optimal ARID-HV raceway was designed, and the electrical energy input requirement (kWh ha⁻¹ d⁻¹) was calculated. An algae growth model was used to compare the productivity of ARID-HV and conventional raceways. The model uses a pond surface energy balance to calculate water temperature as a function of environmental parameters. Algae growth and biomass loss are calculated based on rate constants during day and night, respectively. A 10-year simulation of DOE strain 1412 (Chlorella sorokiniana) showed that the ARID-HV raceway had significantly higher production than a conventional raceway for all months of the year in Tucson, Arizona. It should be noted that this difference is species and climate specific and is not observed in other climates and with other algae species. The algae growth model results and electrical energy input evaluation were used to compare the energy productivity (algae production rate/energy input) of the ARID-HV and conventional raceways for Chlorella sorokiniana in Tucson, Arizona. The energy productivity of the ARID-HV raceway was significantly greater than that of a conventional raceway for all months of the year.

  10. The effects of low impact development on urban flooding under different rainfall characteristics.

    PubMed

    Qin, Hua-peng; Li, Zhuo-xi; Fu, Guangtao

    2013-11-15

    Low impact development (LID) is generally regarded as a more sustainable solution for urban stormwater management than conventional urban drainage systems. However, its effects on urban flooding at the scale of urban drainage systems are not fully understood, particularly when different rainfall characteristics are considered. In this paper, using an urbanizing catchment in China as a case study, the effects of three LID techniques (swale, permeable pavement and green roof) on urban flooding are analyzed and compared with the conventional drainage system design. A range of storm events with different rainfall amounts, durations and locations of peak intensity are considered for holistic assessment of the LID techniques. The effects are measured by the total flood volume reduction during a storm event compared to the conventional drainage system design. The results obtained indicate that all three LID scenarios are more effective in flood reduction during heavier and shorter storm events. Their performance, however, varies significantly according to the location of peak intensity: swales perform best during a storm event with an early peak, permeable pavements with a middle peak, and green roofs with a late peak. The trends of flood reduction can be explained using a newly proposed water balance method, i.e., by comparing the effective storage depth of the LID designs with the accumulative rainfall amounts at the beginning and end of flooding in the conventional drainage system. This paper provides an insight into the performance of LID designs under different rainfall characteristics, which is essential for effective urban flood management. Copyright © 2013 Elsevier Ltd. All rights reserved.
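
    A toy illustration of the proposed water-balance comparison, with an invented hyetograph, flooding window, and storage depth (the paper's catchment data are not reproduced here):

    ```python
    # Water-balance reasoning in miniature: a LID measure helps most when its
    # effective storage depth exceeds the rainfall accumulated by the time the
    # conventional system starts (and stops) flooding.
    rain_mm_per_step = [2, 5, 12, 20, 15, 6, 2]   # hyetograph, mm per time step
    flood_start, flood_end = 2, 4                 # flooding window of the
                                                  # conventional system (indices)
    storage_depth_mm = 30.0                       # effective storage of the LID design

    cumulative, cum = [], 0.0
    for r in rain_mm_per_step:
        cum += r
        cumulative.append(cum)

    # Compare storage with accumulated rainfall at the start/end of flooding.
    if storage_depth_mm >= cumulative[flood_end]:
        print("LID storage absorbs the whole flood-producing volume")
    elif storage_depth_mm >= cumulative[flood_start]:
        print("LID delays flooding but is exhausted before the event ends")
    else:
        print("LID storage fills before flooding would start; little benefit")
    ```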

  11. Upscaling the pollutant emission from mixed recycled aggregates under compaction for civil applications.

    PubMed

    Galvín, Adela P; Ayuso, Jesús; Barbudo, Auxi; Cabrera, Manuel; López-Uceda, Antonio; Rosales, Julia

    2017-12-27

    In general terms, plant managers of sites producing construction wastes assess materials according to concise, legally recommended leaching tests that do not consider the compaction stage of the materials when they are applied on-site. Thus, the tests do not account for the real on-site physical conditions of the recycled aggregates used in civil works (e.g., roads or embankments). This leads to errors in estimating the pollutant potential of these materials. For that reason, in the present research, an experimental procedure is designed as a leaching test for construction materials under compaction. The aim of this laboratory test (designed specifically for the granular materials used in civil engineering infrastructures) is to evaluate the release of pollutant elements when the recycled aggregate is tested at its commercial grain-size distribution and when the material is compacted under on-site conditions. Two recycled aggregates with different gypsum contents (0.95 and 2.57%) were used in this study. In addition to the designed leaching laboratory test, the conventional compliance leaching test and the Dutch percolation test were performed. After analysis, the levels of chromium and sulphate, the elements considered the most serious pollutants, were lower in the newly designed test than in the conventional leaching test. This result confirms that when leaching behaviour is evaluated by crushing the aggregate and using only the finest fraction, without the density alteration of compaction, as is done in the conventional test (an unrealistic situation for aggregates applied under on-site conditions), the leaching behaviour is not accurately assessed.

  12. Effect of corn residue harvest method with ruminally undegradable protein supplementation on performance of growing calves and fiber digestibility.

    PubMed

    King, T M; Bondurant, R G; Jolly-Breithaupt, M L; Gramkow, J L; Klopfenstein, T J; MacDonald, J C

    2017-12-01

    Two experiments evaluated the effects of corn residue harvest method on animal performance and diet digestibility. Experiment 1 was designed as a 2 × 2 + 1 factorial arrangement of treatments using 60 individually fed crossbred steers (280 kg [SD 32] initial BW; n = 12). Factors were the corn residue harvest method (high-stem and conventional) and supplemental RUP at 2 concentrations (0 and 3.3% diet DM). A third harvest method (low-stem) was also evaluated, but only in diets containing supplemental RUP at 3.3% diet DM because of limitations in the amount of available low-stem residue. Therefore, the 3 harvest methods were compared only in diets containing supplemental RUP. In Exp. 2, 9 crossbred wethers were blocked by BW (42.4 kg [SD 7] initial BW) and randomly assigned to diets containing corn residue harvested 1 of 3 ways (low-stem, high-stem, and conventional). In Exp. 1, steers fed the low-stem residue diet had greater ADG compared with the steers fed conventionally harvested corn residue (P = 0.03; 0.78 vs. 0.63 kg), whereas steers fed high-stem residue were intermediate (P > 0.17; 0.69 kg), not differing from either conventional or low-stem residues. Results from in vitro OM digestibility suggest that low-stem residue had the greatest (P < 0.01) amount of digestible OM compared with the other 2 residue harvest methods, which did not differ (P = 0.32; 55.0, 47.8, and 47.1% for low-stem, high-stem, and conventional residues, respectively). There were no differences in RUP content (40% of CP) and RUP digestibility (60%) among the 3 residues (P ≥ 0.35). No interactions were observed between harvest method and the addition of RUP (P ≥ 0.12). The addition of RUP tended to result in improved ADG (0.66 ± 0.07 vs. 0.58 ± 0.07 kg for supplemental RUP and no RUP, respectively; P = 0.08) and G:F (0.116 ± 0.006 vs. 0.095 ± 0.020 for supplemental RUP and no RUP, respectively; P = 0.02) compared with similar diets without the additional RUP. In Exp. 2, low-stem residue had greater DM and OM digestibility and DE (P < 0.01) than high-stem and conventional residues, which did not differ (P ≥ 0.63). Low-stem residue also had the greatest NDF digestibility (NDFD; P < 0.01), whereas high-stem residue had greater NDFD than conventional residue (P < 0.01). Digestible energy was greatest for low-stem residue (P < 0.05) and did not differ between high-stem and conventional residues (P = 0.50). Reducing the proportion of stem in the bale through changes in the harvest method increased the nutritive quality of corn residue.

  13. Fault tolerant control laws

    NASA Technical Reports Server (NTRS)

    Ly, U. L.; Ho, J. K.

    1986-01-01

    A systematic procedure for the synthesis of control laws tolerant of actuator failures is presented. Two design methods were used to synthesize fault tolerant controllers: the conventional LQ design method and a direct feedback controller design method, SANDY. The latter method is used primarily to streamline the full-state LQ feedback design into a practical, implementable output feedback controller structure. To achieve robustness to control actuator failure, the redundant surfaces are properly balanced according to their control effectiveness. A simple gain schedule based on the landing-gear up/down logic, involving only three gains, was developed to handle three design flight conditions: Mach 0.25 and Mach 0.60 at 5,000 ft and Mach 0.90 at 20,000 ft. The fault tolerant control law developed in this study provides good stability augmentation and performance for the relaxed static stability aircraft. The augmented aircraft responses are found to be invariant to the presence of a failure. Furthermore, single-loop stability margins of +6 dB in gain and +30 deg in phase were achieved, along with -40 dB/decade rolloff at high frequency.
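
    As a reference point, a minimal sketch of the conventional LQ full-state design step mentioned above, using SciPy's Riccati solver on a placeholder second-order model; the A, B, Q, R matrices are assumptions, and the SANDY output-feedback streamlining is not reproduced:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Conventional LQ full-state design on a toy short-period-like model.
    A = np.array([[-0.5,  1.0],
                  [-4.0, -1.2]])
    B = np.array([[0.0],
                  [2.5]])
    Q = np.diag([10.0, 1.0])    # state weighting
    R = np.array([[1.0]])       # control weighting

    P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^(-1)B'P + Q = 0
    K = np.linalg.solve(R, B.T @ P)        # optimal gain, control law u = -Kx

    print("LQ gain K =", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```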

  14. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans

    NASA Astrophysics Data System (ADS)

    Cheng, Jie-Zhi; Ni, Dong; Chou, Yi-Hong; Qin, Jing; Tiu, Chui-Mei; Chang, Yeun-Chung; Huang, Chiun-Sheng; Shen, Dinggang; Chen, Chung-Ming

    2016-04-01

    This paper performs a comprehensive study on the deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions by avoiding the potential errors caused by inaccurate image processing results (e.g., boundary segmentation), as well as the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited on the two CADx applications for the differentiation of breast ultrasound lesions and lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be suitable for dealing with the intrinsically noisy property of medical image data from various imaging modalities. To demonstrate the superiority of SDAE-based CADx over the conventional scheme, two of the latest conventional CADx algorithms are implemented for comparison. Ten repetitions of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.
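
    For readers unfamiliar with the building block, a minimal denoising auto-encoder in PyTorch; the layer sizes, noise level, and stand-in training data are illustrative, and a full SDAE would stack several such layers with greedy layer-wise pre-training:

    ```python
    import torch
    import torch.nn as nn

    class DenoisingAE(nn.Module):
        """One denoising auto-encoder layer: corrupt input, reconstruct clean input."""
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
            self.decoder = nn.Linear(n_hidden, n_in)

        def forward(self, x, noise_std=0.2):
            x_noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
            code = self.encoder(x_noisy)
            return self.decoder(code), code                 # reconstruction, features

    model = DenoisingAE(n_in=64, n_hidden=32)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(128, 64)                                 # stand-in image patches

    for step in range(200):
        recon, _ = model(x)
        loss = nn.functional.mse_loss(recon, x)             # target is the clean input
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final reconstruction loss:", loss.item())
    ```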

  15. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans.

    PubMed

    Cheng, Jie-Zhi; Ni, Dong; Chou, Yi-Hong; Qin, Jing; Tiu, Chui-Mei; Chang, Yeun-Chung; Huang, Chiun-Sheng; Shen, Dinggang; Chen, Chung-Ming

    2016-04-15

    This paper performs a comprehensive study on the deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions by avoiding the potential errors caused by inaccurate image processing results (e.g., boundary segmentation), as well as the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited on the two CADx applications for the differentiation of breast ultrasound lesions and lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be suitable for dealing with the intrinsically noisy property of medical image data from various imaging modalities. To demonstrate the superiority of SDAE-based CADx over the conventional scheme, two of the latest conventional CADx algorithms are implemented for comparison. Ten repetitions of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.

  16. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans

    PubMed Central

    Cheng, Jie-Zhi; Ni, Dong; Chou, Yi-Hong; Qin, Jing; Tiu, Chui-Mei; Chang, Yeun-Chung; Huang, Chiun-Sheng; Shen, Dinggang; Chen, Chung-Ming

    2016-01-01

    This paper performs a comprehensive study on the deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions by avoiding the potential errors caused by inaccurate image processing results (e.g., boundary segmentation), as well as the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited on the two CADx applications for the differentiation of breast ultrasound lesions and lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be suitable for dealing with the intrinsically noisy property of medical image data from various imaging modalities. To demonstrate the superiority of SDAE-based CADx over the conventional scheme, two of the latest conventional CADx algorithms are implemented for comparison. Ten repetitions of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features. PMID:27079888

  17. Cervical cancer patterns with automation-assisted and conventional cytological screening: a randomized study.

    PubMed

    Anttila, Ahti; Pokhrel, Arun; Kotaniemi-Talonen, Laura; Hakama, Matti; Malila, Nea; Nieminen, Pekka

    2011-03-01

    The purpose was to evaluate alternative cytological screening methods in population-based screening for cervical cancer, following up to cancer incidence and mortality outcomes. Automation-assisted screening was compared to conventional cytological screening in a randomized design. The study was based on follow-up of 503,391 women invited in the Finnish cervical cancer screening program during 1999-2003. The endpoints were incident cervical cancer, severe intraepithelial neoplasia and deaths from cervical cancer. One-third of the women had been randomly allocated to automation-assisted screening and two-thirds to conventional cytology. Information on cervical cancer and severe neoplasia was obtained for 1999-2007 from a linkage between screening and cancer registry files. There were altogether 3.2 million woman-years at risk, and the average follow-up time was 6.3 years. There was no difference in the risk of cervical cancer between the automation-assisted and conventional screening methods; the relative risk (RR) of cervical cancer between the study and control arm was 1.00 (95% confidence interval [CI] = 0.76-1.29) among all invited and 1.08 (95% CI = 0.76-1.51) among women who were test negative at entry. Comparing women who were test negative with the nonscreened, the RR of cervical cancer incidence was 0.26 (95% CI = 0.19-0.36) and of mortality 0.24 (0.13-0.43). Both methods were valid for screening. Because cervical cancer is rare in our country, we cannot rule out small differences between methods. Evidence on alternative methods for cervical cancer screening is increasing and it is thus feasible to evaluate new methods in large-scale population-based screening programs up to cancer outcome. Copyright © 2010 UICC.

  18. CNN based approach for activity recognition using a wrist-worn accelerometer.

    PubMed

    Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R

    2017-07-01

    In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, conventional methods have been dominated by feature engineering, which requires the difficult process of optimal feature selection. This problem can be mitigated by a methodology based on a deep learning framework, which automatically extracts useful features and reduces the computational cost. As a proof of concept, we design a generalized model for recognizing three fundamental movements of the human forearm performed in daily life, with data collected from four different subjects using a single wrist-worn accelerometer sensor. The proposed model is validated under different pre-processing and noisy data conditions, evaluated using three possible methods. The results show that our methodology achieves an average recognition rate of 99.8%, as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machines.
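
    A minimal sketch of the kind of 1-D convolutional classifier described; the window length, channel counts, and layer structure are assumptions, not the authors' exact network:

    ```python
    import torch
    import torch.nn as nn

    class ActivityCNN(nn.Module):
        """1-D CNN classifying windows of tri-axial accelerometer data into 3 classes."""
        def __init__(self, n_classes=3, window=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            )
            self.classifier = nn.Linear(32 * (window // 4), n_classes)

        def forward(self, x):                  # x: (batch, 3 axes, window samples)
            z = self.features(x)
            return self.classifier(z.flatten(1))

    model = ActivityCNN()
    x = torch.randn(8, 3, 128)                 # a batch of stand-in sensor windows
    logits = model(x)
    print(logits.shape)                        # -> torch.Size([8, 3])
    ```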

  19. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
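
    To see why cross-section modeling accuracy translates directly into delay accuracy, a first-order RC estimate; the geometry, the lumped capacitance, and the 10% thickness perturbation are illustrative, not the paper's test structures:

    ```python
    # First-order RC estimate of an interconnect's contribution to delay.
    rho_cu = 1.72e-8       # resistivity of copper, ohm*m
    L      = 1e-3          # line length, m (1 mm)
    W      = 0.15e-6       # drawn width, m
    T      = 0.25e-6       # nominal thickness, m
    C      = 200e-15       # total line capacitance, F (assumed)

    def delay(thickness):
        R = rho_cu * L / (W * thickness)   # line resistance from the cross-section
        return 0.69 * R * C                # Elmore-style RC delay estimate

    t_nom  = delay(T)
    t_thin = delay(0.9 * T)                # e.g., CMP dishing thins the line by 10%
    print(f"nominal delay {t_nom*1e12:.1f} ps, thinned {t_thin*1e12:.1f} ps "
          f"({(t_thin/t_nom - 1)*100:.0f}% slower)")
    ```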

  20. High Aspect-Ratio Neural Probes using Conventional Blade Dicing

    NASA Astrophysics Data System (ADS)

    Goncalves, S. B.; Ribeiro, J. F.; Silva, A. F.; Correia, J. H.

    2016-10-01

    Exploring deep neural circuits has triggered the development of long penetrating neural probes. Moreover, driven by concerns about brain displacement, long neural probes also require a high-aspect-ratio shaft design. In this paper, a simple and reproducible method of manufacturing long-shaft neural probes using blade dicing technology is presented. Results show shafts up to 8 mm long and 200 µm wide, features competitive with the current state of the art, with the outline accomplished by a single blade-dicing program. Therefore, conventional blade dicing presents itself as a viable option for manufacturing long neural probes.

  1. Luting of CAD/CAM ceramic inlays: direct composite versus dual-cure luting cement.

    PubMed

    Kameyama, Atsushi; Bonroy, Kim; Elsen, Caroline; Lührs, Anne-Katrin; Suyama, Yuji; Peumans, Marleen; Van Meerbeek, Bart; De Munck, Jan

    2015-01-01

    The aim of this study was to investigate the bonding effectiveness of luting CAD/CAM ceramic inlays with a two-step self-etch adhesive and a light-cure resin composite, compared with luting with a conventional dual-cure resin cement and a two-step etch-and-rinse adhesive. Class-I box-type cavities were prepared. Identical ceramic inlays were designed and fabricated with a computer-aided design/computer-aided manufacturing (CAD/CAM) device. The inlays were seated with Clearfil SE Bond/Clearfil AP-X (Kuraray Medical) or ExciTE F DSC/Variolink II (Ivoclar Vivadent), each by two operators (five teeth per group). The inlays were stored in water for one week at 37°C, after which micro-tensile bond strength testing was conducted. The micro-tensile bond strength of the direct composite was significantly higher than that of conventional luting, and was independent of the operator (P < 0.0001). Pre-testing failures were only observed with the conventional method. High-power light-curing of a direct composite may be a viable alternative for luting lithium disilicate glass-ceramic CAD/CAM restorations.

  2. NASA's Aeroacoustic Tools and Methods for Analysis of Aircraft Noise

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Lopes, Leonard V.; Burley, Casey L.

    2015-01-01

    Aircraft community noise is a significant concern due to continued growth in air traffic, increasingly stringent environmental goals, and operational limitations imposed by airport authorities. The ability to quantify aircraft noise at the source and ultimately at observers is required to develop low-noise aircraft designs and flight procedures. Predicting noise at the source, accounting for scattering and propagation through the atmosphere to the observer, and assessing the perception and impact on a community requires physics-based aeroacoustics tools. Along with the analyses for aero-performance, weights and fuel burn, these tools can provide the acoustic component for aircraft MDAO (Multidisciplinary Design Analysis and Optimization). Over the last decade significant progress has been made in advancing the aeroacoustic tools such that acoustic analyses can now be performed during the design process. One major and enabling advance has been the development of the system noise framework known as Aircraft NOise Prediction Program 2 (ANOPP2). ANOPP2 is NASA's aeroacoustic toolset and is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. The toolset includes a framework that integrates noise prediction and propagation methods into a unified system for use within general aircraft analysis software. This includes acoustic analyses, signal processing and interfaces that allow for the assessment of the perception of noise in a community. ANOPP2's capability to incorporate medium-fidelity shielding predictions and wind tunnel experiments into a design environment is presented. An assessment of noise from a conventional and a Hybrid Wing Body (HWB) aircraft, using medium-fidelity scattering methods combined with noise measurements from a model-scale HWB recently placed in NASA's 14x22 wind tunnel, is presented. The results are in the form of community noise metrics and auralizations.

  3. Analysis and design of on-grade reinforced concrete track support structures

    NASA Technical Reports Server (NTRS)

    Mclean, F. G.; Williams, R. D.; Greening, L. R.

    1972-01-01

    For the improvement of rail service, the Department of Transportation, Federal Rail Administration, is sponsoring a test track on the Atchison, Topeka, and Santa Fe Railway. The test track will contain nine separate rail support structures, including one conventional section for control and three reinforced concrete structures on grade, one slab and two beam sections. The analysis and design of these latter structures were accomplished by means of the finite element method, NASTRAN, and are presented.

  4. A novel test cage with an air ventilation system as an alternative to conventional cages for the efficacy testing of mosquito repellents.

    PubMed

    Obermayr, U; Rose, A; Geier, M

    2010-11-01

    We have developed a novel test cage and improved method for the evaluation of mosquito repellents. The method is compatible with the United States Environmental Protection Agency, 2000 draft OPPTS 810.3700 Product Performance Test Guidelines for Testing of Insect Repellents. The Biogents cages (BG-cages) require fewer test mosquitoes than conventional cages and are more comfortable for the human volunteers. The novel cage allows a section of treated forearm from a volunteer to be exposed to mosquito probing through a window. This design minimizes residual contamination of cage surfaces with repellent. In addition, an air ventilation system supplies conditioned air to the cages after each single test, to flush out and prevent any accumulation of test substances. During biting activity tests, the untreated skin surface does not receive bites because of a screen placed 150 mm above the skin. Compared with the OPPTS 810.3700 method, the BG-cage is smaller (27 liters, compared with 56 liters) and contains 30 rather than hundreds of blood-hungry female mosquitoes. We compared the performance of a proprietary repellent formulation containing 20% KBR3023 with four volunteers on Aedes aegypti (L.) (Diptera: Culicidae) in BG- and conventional cages. Repellent protection time was shorter in tests conducted with conventional cages. The average 95% protection time was 4.5 +/- 0.4 h in conventional cages and 7.5 +/- 0.6 h in the novel BG-cages. The protection times measured in BG-cages were more similar to the protection times determined with these repellents in field tests.

  5. User-centered design in clinical handover: exploring post-implementation outcomes for clinicians.

    PubMed

    Wong, Ming Chao; Cummings, Elizabeth; Turner, Paul

    2013-01-01

    This paper examines the outcomes for clinicians from their involvement in the development of an electronic clinical handover tool developed using principles of user-centered design. Conventional e-health post-implementation evaluations tend to emphasize technology-related (mostly positive) outcomes. More recently, unintended (mostly negative) consequences arising from the implementation of e-health technologies have also been reported. There remains limited focus on the post-implementation outcomes for users, particularly those directly involved in e-health design processes. This paper presents detailed analysis and insights into the outcomes experienced post-implementation by a cohort of junior clinicians involved in developing an electronic clinical handover tool in Tasmania, Australia. The qualitative methods used included observations, semi-structured interviews and analysis of clinical handover notes. Significantly, a number of unanticipated flow-on effects were identified that mitigated some of the challenges arising during the design and implementation of the tool. The paper concludes by highlighting the importance of identifying post-implementation user outcomes beyond conventional system adoption and use, and also points to the need for more comprehensive evaluative frameworks to encapsulate these broader socio-technical user outcomes.

  6. Combination microwave ovens: an innovative design strategy.

    PubMed

    Tinga, Wayne R; Eke, Ken

    2012-01-01

    Reducing the sensitivity of microwave oven heating and cooking performance to load volume, load placement and load properties has been a long-standing challenge for microwave and microwave-convection oven designers. Conventional design problem and solution methods are reviewed to provide greater insight into the challenge and optimum operation of a microwave oven after which a new strategy is introduced. In this methodology, a special load isolating and energy modulating device called a transducer-exciter is used containing an iris, a launch box, a phase, amplitude and frequency modulator and a coupling plate designed to provide spatially distributed coupling to the oven. This system, when applied to a combined microwave-convection oven, gives astounding performance improvements to all kinds of baked and roasted foods including sensitive items such as cakes and pastries, with the only compromise being a reasonable reduction in the maximum available microwave power. Large and small metal utensils can be used in the oven with minimal or no performance penalty on energy uniformity and cooking results. Cooking times are greatly reduced from those in conventional ovens while maintaining excellent cooking performance.

  7. Inverse design of centrifugal compressor vaned diffusers in inlet shear flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zangeneh, M.

    1996-04-01

    A three-dimensional inverse design method in which the blade (or vane) geometry is designed for specified distributions of circulation and blade thickness is applied to the design of centrifugal compressor vaned diffusers. Two generic diffusers are designed, one with uniform inlet flow (equivalent to a conventional design) and the other with a sheared inlet flow. The inlet shear flow effects are modeled in the design method by using the so-called "Secondary Flow Approximation," in which the Bernoulli surfaces are convected by the tangentially mean inviscid flow field. The difference between the vane geometry of the uniform inlet flow and nonuniform inlet flow diffusers is found to be most significant from 50 percent chord to the trailing edge region. The flows through both diffusers are computed by using Denton's three-dimensional inviscid Euler solver and Dawes' three-dimensional Navier-Stokes solver under sheared inflow conditions. The predictions indicate improved pressure recovery and internal flow field for the diffuser designed for sheared inlet flow conditions.

  8. The PDS4 Data Dictionary Tool - Metadata Design for Data Preparers

    NASA Astrophysics Data System (ADS)

    Raugh, A.; Hughes, J. S.

    2017-12-01

    One of the major design goals of the PDS4 development effort was to create an extendable Information Model (IM) for the archive, and to allow mission data designers/preparers to create extensions for metadata definitions specific to their own contexts. This capability is critical for the Planetary Data System - an archive that deals with a data collection that is diverse along virtually every conceivable axis. Amid such diversity in the data itself, it is in the best interests of the PDS archive and its users that all extensions to the IM follow the same design techniques, conventions, and restrictions as the core implementation itself. But it is unrealistic to expect mission data designers to acquire expertise in information modeling, model-driven design, ontology, schema formulation, and PDS4 design conventions and philosophy in order to define their own metadata. To bridge that expertise gap and bring the power of information modeling to the data label designer, the PDS Engineering Node has developed the data dictionary creation tool known as "LDDTool". This tool incorporates the same software used to maintain and extend the core IM, packaged with an interface that enables a developer to create an extension to the IM using the same standards-based metadata framework PDS itself uses. Through this interface, the novice dictionary developer has immediate access to the common set of data types and unit classes for defining attributes, and a straightforward method for constructing classes. The more experienced developer, using the same tool, has access to more sophisticated modeling methods such as abstraction and extension, and can define context-specific validation rules. We present the key features of the PDS Local Data Dictionary Tool, which both supports the development of extensions to the PDS4 IM and ensures their compatibility with the IM.

  9. Cylindrical geometry hall thruster

    DOEpatents

    Raitses, Yevgeny; Fisch, Nathaniel J.

    2002-01-01

    An apparatus and method for thrusting plasma, utilizing a Hall thruster with a cylindrical geometry, wherein ions are accelerated in substantially the axial direction. The apparatus is suitable for operation at low power. It employs small thruster components, including a ceramic channel, with the center pole piece of the conventional annular design thruster eliminated or greatly reduced. Efficient operation is accomplished through magnetic fields with a substantial radial component. The propellant gas is ionized at an optimal location in the thruster. A further improvement is accomplished by segmented electrodes, which produce localized voltage drops within the thruster at optimally prescribed locations. The apparatus differs from a conventional Hall thruster, whose annular geometry is not well suited to scaling down, because at small size an annular design has a great deal of surface area relative to its volume.

  10. Supervised exercises for adults with acute lateral ankle sprain: a randomised controlled trial

    PubMed Central

    van Rijn, Rogier M; van Os, Anton G; Kleinrensink, Gert-Jan; Bernsen, Roos MD; Verhaar, Jan AN; Koes, Bart W; Bierma-Zeinstra, Sita MA

    2007-01-01

    Background During the recovery period after acute ankle sprain, it is unclear whether conventional treatment should be supported by supervised exercise. Aim To evaluate the short- and long-term effectiveness of conventional treatment combined with supervised exercises compared with conventional treatment alone in patients with an acute ankle sprain. Design Randomised controlled clinical trial. Setting A total of 32 Dutch general practices and the hospital emergency department. Method Adults with an acute lateral ankle sprain consulting general practices or the hospital emergency department were allocated to either conventional treatment combined with supervised exercises or conventional treatment alone. Primary outcomes were subjective recovery (0–10 point scale) and the occurrence of a re-sprain. Measurements were carried out at intake, 4 weeks, 8 weeks, 3 months, and 1 year after injury. Data were analysed using intention-to-treat analyses. Results A total of 102 patients were enrolled and randomised to either conventional treatment alone or conventional treatment combined with supervised exercise. There was no significant difference between treatment groups concerning subjective recovery or occurrence of re-sprains after 3 months and 1-year of follow-up. Conclusion Conventional treatment combined with supervised exercises compared to conventional treatment alone during the first year after an acute lateral ankle sprain does not lead to differences in the occurrence of re-sprains or in subjective recovery. PMID:17925136

  11. Design of a compact disk-like microfluidic platform for enzyme-linked immunosorbent assay.

    PubMed

    Lai, Siyi; Wang, Shengnian; Luo, Jun; Lee, L James; Yang, Shang-Tian; Madou, Marc J

    2004-04-01

    This paper presents an integrated microfluidic device on a compact disk (CD) that performs an enzyme-linked immunosorbent assay (ELISA) for rat IgG from a hybridoma cell culture. Centrifugal and capillary forces were used to control the flow sequence of the different solutions involved in the ELISA process. The microfluidic device was fabricated on a plastic CD. Each step of the ELISA process was carried out automatically by controlling the rotation speed of the CD. The analysis of rat IgG from hybridoma culture showed that the microchip-based ELISA has the same detection range as the conventional method on a 96-well microtiter plate, but offers advantages over the conventional method such as lower reagent consumption and shorter assay time.
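
    The flow-sequencing principle on such a CD platform can be sketched with the usual burst-valve balance between centrifugal and capillary pressure; all geometry and surface-property values below are assumptions for illustration:

    ```python
    import numpy as np

    # A liquid plug bursts past a capillary valve when the centrifugal pressure
    # rho * omega^2 * r_mean * dr exceeds the capillary barrier pressure.
    rho    = 1000.0          # liquid density, kg/m^3
    sigma  = 0.072           # surface tension of water, N/m
    theta  = np.radians(110) # effective contact angle at the valve (hydrophobic)
    d_h    = 100e-6          # hydraulic diameter of the valve channel, m
    r_mean = 0.03            # mean radial position of the liquid plug, m
    dr     = 0.005           # radial extent of the liquid plug, m

    p_capillary = -4.0 * sigma * np.cos(theta) / d_h      # barrier pressure, Pa
    omega_burst = np.sqrt(p_capillary / (rho * r_mean * dr))
    print(f"burst speed ~ {omega_burst / (2*np.pi) * 60:.0f} rpm")
    # Spinning below this speed holds the liquid in its reservoir; ramping above
    # it releases the next reagent, which is how the CD sequences the ELISA steps.
    ```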

  12. Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2016-01-01

    Communication systems are described that use geometrically shaped PSK constellations that have increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding and the location of points within the geometrically shaped constellation changes as the code rate changes.

  13. A broad range assay for rapid detection and etiologic characterization of bacterial meningitis: performance testing in samples from sub-Sahara.

    PubMed

    Won, Helen; Yang, Samuel; Gaydos, Charlotte; Hardick, Justin; Ramachandran, Padmini; Hsieh, Yu-Hsiang; Kecojevic, Alexander; Njanpop-Lafourcade, Berthe-Marie; Mueller, Judith E; Tameklo, Tsidi Agbeko; Badziklou, Kossi; Gessner, Bradford D; Rothman, Richard E

    2012-09-01

    This study aimed to conduct a pilot evaluation of broad-based multiprobe polymerase chain reaction (PCR) in clinical cerebrospinal fluid (CSF) samples compared to local conventional PCR/culture methods used for bacterial meningitis surveillance. A previously described PCR consisting of initial broad-based detection of Eubacteriales by a universal probe, followed by Gram typing, and pathogen-specific probes was designed targeting variable regions of the 16S rRNA gene. The diagnostic performance of the 16S rRNA assay in 127 CSF samples was evaluated in samples from patients from Togo, Africa, by comparison to conventional PCR/culture methods. Our probes detected Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Uniprobe sensitivity and specificity versus conventional PCR were 100% and 54.6%, respectively. Sensitivity and specificity of the uniprobe versus culture methods were 96.5% and 52.5%, respectively. Gram-typing probes correctly typed 98.8% (82/83) and pathogen-specific probes identified 96.4% (80/83) of the positives. This broad-based PCR algorithm successfully detected and provided species-level information for multiple bacterial meningitis agents in clinical samples. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. A broad range assay for rapid detection and etiologic characterization of bacterial meningitis: performance testing in samples from sub-Sahara

    PubMed Central

    Won, Helen; Yang, Samuel; Gaydos, Charlotte; Hardick, Justin; Ramachandran, Padmini; Hsieh, Yu-Hsiang; Kecojevic, Alexander; Njanpop-Lafourcade, Berthe-Marie; Mueller, Judith E.; Tameklo, Tsidi Agbeko; Badziklou, Kossi; Gessner, Bradford D.; Rothman, Richard E.

    2012-01-01

    This study aimed to conduct a pilot evaluation of broad-based multiprobe polymerase chain reaction (PCR) in clinical cerebrospinal fluid (CSF) samples compared to local conventional PCR/culture methods used for bacterial meningitis surveillance. A previously described PCR consisting of initial broad-based detection of Eubacteriales by a universal probe, followed by Gram typing, and pathogen-specific probes was designed targeting variable regions of the 16S rRNA gene. The diagnostic performance of the 16S rRNA assay in 127 CSF samples was evaluated in samples from patients from Togo, Africa, by comparison to conventional PCR/culture methods. Our probes detected Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. Uniprobe sensitivity and specificity versus conventional PCR were 100% and 54.6%, respectively. Sensitivity and specificity of the uniprobe versus culture methods were 96.5% and 52.5%, respectively. Gram-typing probes correctly typed 98.8% (82/83) and pathogen-specific probes identified 96.4% (80/83) of the positives. This broad-based PCR algorithm successfully detected and provided species-level information for multiple bacterial meningitis agents in clinical samples. PMID:22809694

  15. PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES BASED ON THE ΔB METHOD

    NASA Astrophysics Data System (ADS)

    Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao

    Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes it easier to understand stability. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. This analysis can be conducted using conventional spreadsheets or two-dimensional slope stability computational software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above-stated analysis. This paper also presents the transverse distributive method of restraining force used for planning ground stabilization, on the basis of an example analysis.

  16. Gaming via Computer Simulation Techniques for Junior College Economics Education. Final Report.

    ERIC Educational Resources Information Center

    Thompson, Fred A.

    A study designed to answer the need for more attractive and effective economics education involved the teaching of one junior college economics class by the conventional (lecture) method and an experimental class by computer simulation techniques. Econometric models approximating the "real world" were computer programmed to enable the experimental…

  17. Does Cognitive Impairment Influence Quality of Life among Nursing Home Residents?

    ERIC Educational Resources Information Center

    Abrahamson, Kathleen; Clark, Daniel; Perkins, Anthony; Arling, Greg

    2012-01-01

    Purpose: We investigated the relationship between cognitive status and quality of life (QOL) of Minnesota nursing home (NH) residents and the relationship between conventional or Alzheimer's special care unit (SCU) placement and QOL. The study may inform development of dementia-specific quality measures. Design and Methods: Data for analyses came…

  18. 50 CFR 300.104 - Scientific research.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to be used. (vi) Survey design and methods of data analyses. (vii) Data to be collected. (3) A... for Finfish Surveys in the Convention Area when the Total Catch is Expected to be More Than 50 Tons to... Administrator. (2) The format requires: (i) The name of the CCAMLR Member. (ii) Survey details. (iii...

  19. 50 CFR 300.104 - Scientific research.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to be used. (vi) Survey design and methods of data analyses. (vii) Data to be collected. (3) A... for Finfish Surveys in the Convention Area when the Total Catch is Expected to be More Than 50 Tons to... Administrator. (2) The format requires: (i) The name of the CCAMLR Member. (ii) Survey details. (iii...

  20. 50 CFR 300.104 - Scientific research.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to be used. (vi) Survey design and methods of data analyses. (vii) Data to be collected. (3) A... for Finfish Surveys in the Convention Area when the Total Catch is Expected to be More Than 50 Tons to... Administrator. (2) The format requires: (i) The name of the CCAMLR Member. (ii) Survey details. (iii...

  1. 50 CFR 300.104 - Scientific research.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to be used. (vi) Survey design and methods of data analyses. (vii) Data to be collected. (3) A... for Finfish Surveys in the Convention Area when the Total Catch is Expected to be More Than 50 Tons to... Administrator. (2) The format requires: (i) The name of the CCAMLR Member. (ii) Survey details. (iii...

  2. 50 CFR 300.104 - Scientific research.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to be used. (vi) Survey design and methods of data analyses. (vii) Data to be collected. (3) A... for Finfish Surveys in the Convention Area when the Total Catch is Expected to be More Than 50 Tons to... Administrator. (2) The format requires: (i) The name of the CCAMLR Member. (ii) Survey details. (iii...

  3. Fillet Weld Stress Using Finite Element Methods

    NASA Technical Reports Server (NTRS)

    Lehnhoff, T. F.; Green, G. W.

    1985-01-01

    Average elastic Von Mises equivalent stresses were calculated along the throat of a single lap fillet weld. The average elastic stresses were compared to initial yield and to plastic instability conditions, and a factor to modify conventional design formulas is presented. The factor is a linear function of the thicknesses of the parent plates attached by the fillet weld.
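
    For context, the elementary throat-stress calculation that such design formulas start from; the load and weld sizes are illustrative:

    ```python
    import numpy as np

    # Average shear stress on the throat plane of a 45-degree fillet weld
    # in a single lap joint.
    F   = 40e3      # applied load, N
    leg = 6e-3      # weld leg size, m
    Lw  = 0.10      # total weld length, m

    throat = leg / np.sqrt(2.0)          # throat thickness of the fillet
    tau    = F / (throat * Lw)           # average stress on the throat plane
    print(f"average throat stress = {tau/1e6:.0f} MPa")
    # The paper's contribution is a correction factor applied to formulas like
    # this, varying linearly with the thicknesses of the joined plates.
    ```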

  4. Interactive Learning Environment for Bio-Inspired Optimization Algorithms for UAV Path Planning

    ERIC Educational Resources Information Center

    Duan, Haibin; Li, Pei; Shi, Yuhui; Zhang, Xiangyin; Sun, Changhao

    2015-01-01

    This paper describes the development of BOLE, a MATLAB-based interactive learning environment, that facilitates the process of learning bio-inspired optimization algorithms, and that is dedicated exclusively to unmanned aerial vehicle path planning. As a complement to conventional teaching methods, BOLE is designed to help students consolidate the…

  5. Additive Manufactured Superconducting Cavities

    NASA Astrophysics Data System (ADS)

    Holland, Eric; Rosen, Yaniv; Woolleet, Nathan; Materise, Nicholas; Voisin, Thomas; Wang, Morris; Mireles, Jorge; Carosi, Gianpaolo; Dubois, Jonathan

    Superconducting radio frequency cavities provide an ultra-low dissipative environment, which has enabled fundamental investigations in quantum mechanics, materials properties, and the search for new particles in and beyond the standard model. However, resonator designs are constrained by limitations in conventional machining techniques. For example, current through a seam is a limiting factor in performance for many waveguide cavities. The development of highly reproducible methods for producing metallic parts through additive manufacturing, referred to colloquially as "3D printing", opens the possibility for novel cavity designs which cannot be implemented through conventional methods. We present preliminary investigations of superconducting cavities made through a selective laser melting process, which compacts a granular powder via a high-power laser according to a digitally defined geometry. Initial work suggests that assuming a loss model and numerically optimizing a geometry to minimize dissipation results in modest improvements in device performance. Furthermore, a subset of titanium alloys, particularly a titanium-aluminum-vanadium alloy (Ti-6Al-4V), exhibits properties indicative of a high kinetic inductance material. This work is supported by LDRD 16-SI-004.

  6. Cost-effective rapid prototyping and assembly of poly(methyl methacrylate) microfluidic devices.

    PubMed

    Matellan, Carlos; Del Río Hernández, Armando E

    2018-05-03

    The difficulty in translating conventional microfluidics from laboratory prototypes to commercial products has shifted research efforts towards thermoplastic materials for their higher translational potential and amenability to industrial manufacturing. Here, we present an accessible method to fabricate and assemble polymethyl methacrylate (PMMA) microfluidic devices in a "mask-less" and cost-effective manner that can be applied to manufacture a wide range of designs due to its versatility. Laser micromachining offers high flexibility in channel dimensions and morphology by controlling the laser properties, while our two-step surface treatment based on exposure to acetone vapour and low-temperature annealing enables improvement of the surface quality without deformation of the device. Finally, we demonstrate a capillarity-driven adhesive delivery bonding method that can produce an effective seal between PMMA devices and a variety of substrates, including glass, silicon and LiNbO3. We illustrate the potential of this technique with two microfluidic devices, an H-filter and a droplet generator. The technique proposed here offers a low entry barrier for the rapid prototyping of thermoplastic microfluidics, enabling iterative design for laboratories without access to conventional microfabrication equipment.

  7. A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2017-01-01

    We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.
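
    To see the sampling-error problem the algorithm targets, consider a naive central finite difference of a finite-time average in the Lorenz system, a standard chaotic test case; this sketch is not the authors' shadowing algorithm:

    ```python
    import numpy as np

    def lorenz_avg_z(rho, T=200.0, dt=0.002):
        """Time-average of z over a trajectory of length T (forward Euler)."""
        x, y, z = 1.0, 1.0, 25.0
        n = int(T / dt)
        acc = 0.0
        for _ in range(n):
            dx = 10.0 * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - (8.0 / 3.0) * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            acc += z
        return acc / n

    # Central finite difference of the statistic w.r.t. rho: the estimate
    # fluctuates with trajectory length because of chaotic sampling error,
    # which is exactly what shadowing-based sensitivity analysis suppresses.
    drho = 1.0
    for T in (50.0, 200.0, 800.0):
        grad = (lorenz_avg_z(28.0 + drho, T) - lorenz_avg_z(28.0 - drho, T)) / (2 * drho)
        print(f"T = {T:5.0f}:  d<z>/drho ~ {grad:.3f}")   # the true value is near 1
    ```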

  8. Non-contact radio frequency shielding and wave guiding by multi-folded transformation optics method

    PubMed Central

    Madni, Hamza Ahmad; Zheng, Bin; Yang, Yihao; Wang, Huaping; Zhang, Xianmin; Yin, Wenyan; Li, Erping; Chen, Hongsheng

    2016-01-01

    Compared with conventional radio frequency (RF) shielding methods, in which a conductive coating material encloses the circuit design and leakage occurs through gaps in the conductive material, non-contact RF shielding at a distance is very promising but has so far remained out of reach. In this paper, a multi-folded transformation optics method is proposed to design a non-contact device for RF shielding. This "open-shielded" device can shield any object at a distance from electromagnetic waves at the operating frequency, while the object remains physically open to the outer space. Based on this, an open-carpet cloak is proposed and its functionality is demonstrated. Furthermore, we investigate a scheme of non-contact wave guiding to remotely control the propagation of surface waves over any obstacles. The flexibility of this multi-folded transformation optics method demonstrates its power in the design of novel remote devices with impressive new functionalities. PMID:27841358

  9. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power-grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and that exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at different array sizes (areas), performance curves are obtained for the optimal design of the SAPV system with a high degree of reliability, in terms of autonomy, at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and the levelized energy cost (LEC) obtained through life-cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data, and is more reliable when compared with conventional design using monthly average daily load and insolation.
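
    A minimal sizing sketch in the spirit of the approach described above, using textbook relations and illustrative parameter values (the paper's specific empirical formulae are not reproduced in the abstract, so everything below is an assumption):

    ```python
    # Standalone-PV sizing from daily load, insolation, and autonomy days.
    # Autonomy days act here as a proxy for designing to a target loss of
    # load probability (LOLP); all efficiencies are illustrative.

    def size_sapv(daily_load_wh, insolation_kwh_m2_day, autonomy_days,
                  panel_eff=0.15, system_derate=0.75,
                  battery_dod=0.5, battery_eff=0.85):
        """Return (array_area_m2, battery_wh) for a standalone PV system."""
        # Array area: daily demand / usable insolation per m^2 of panel.
        array_area = daily_load_wh / (insolation_kwh_m2_day * 1000.0
                                      * panel_eff * system_derate)
        # Storage sized to carry the load through the autonomy period.
        battery_wh = daily_load_wh * autonomy_days / (battery_dod * battery_eff)
        return array_area, battery_wh

    area, batt = size_sapv(daily_load_wh=2000, insolation_kwh_m2_day=5.0,
                           autonomy_days=3)
    print(f"array ~ {area:.1f} m^2, battery ~ {batt / 1000:.1f} kWh")
    ```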

  10. Design synthesis and optimization of joined-wing transports

    NASA Technical Reports Server (NTRS)

    Gallman, John W.; Smith, Stephen C.; Kroo, Ilan M.

    1990-01-01

    A computer program for aircraft synthesis using a numerical optimizer was developed to study the application of the joined-wing configuration to transport aircraft. The structural design algorithm included the effects of secondary bending moments to investigate the possibility of tail buckling and to design joined wings resistant to buckling. The structural weight computed using this method was combined with a statistically-based method to obtain realistic estimates of total lifting surface weight and aircraft empty weight. A variety of 'optimum' joined-wing and conventional aircraft designs were compared on the basis of direct operating cost, gross weight, and cruise drag. The most promising joined-wing designs were found to have a joint location at about 70 percent of the wing semispan. The optimum joined-wing transport is shown to save 1.7 percent in direct operating cost and 11 percent in drag for a 2000 nautical mile transport mission.

  11. The order and priority of research and design method application within an assistive technology new product development process: a summative content analysis of 20 case studies.

    PubMed

    Torrens, George Edward

    2018-01-01

    Summative content analysis was used to define methods and heuristics from each case study. The review process was in two parts: (1) a literature review to identify conventional research methods and (2) a summative content analysis of published case studies, based on the identified methods and heuristics, to suggest an order and priority of where and when they were used. Over 200 research and design methods and design heuristics were identified. From the review of the 20 case studies, 42 were identified as having been applied. The majority of methods and heuristics were applied in phase two, market choice. There appeared to be a disparity between the limited number of methods frequently used (under 10 within the 20 case studies) and the hundreds available. Implications for Rehabilitation The communication highlights a number of issues that have implications for those involved in assistive technology new product development: •The study defined over 200 well-established research and design methods and design heuristics that are available for use by those who specify and design assistive technology products, providing a comprehensive reference list for practitioners in the field; •The review within the study suggests only a limited number of research and design methods are regularly used by industrial-design-focused assistive technology new product developers; and, •Debate is required among practitioners working in this field to reflect on how a wider range of potentially more effective methods and heuristics may be incorporated into daily working practice.

  12. Comparison of retention between maxillary milled and conventional denture bases: A clinical study.

    PubMed

    AlHelal, Abdulaziz; AlRumaih, Hamad S; Kattadiyil, Mathew T; Baba, Nadim Z; Goodacre, Charles J

    2017-02-01

    Clinical studies comparing the retention values of milled denture bases with those of conventionally processed denture bases are lacking. The purpose of this clinical study was to compare the retention values of conventional heat-polymerized denture bases with those of digitally milled maxillary denture bases. Twenty individuals with completely edentulous maxillary arches participated in this study. Definitive polyvinyl siloxane impressions were scanned (iSeries; Dental Wings), and the standard tessellation language files were sent to Global Dental Science for the fabrication of a computer-aided design and computer-aided manufacturing (CAD-CAM) milled denture base (group MB) (AvaDent). The impression was then poured to obtain a definitive cast that was used to fabricate a heat-polymerized acrylic resin denture base (group HB). A custom-designed testing device was used to measure denture retention (N). Each denture base was subjected to a vertical pulling force by using an advanced digital force gauge 3 times at 10-minute intervals. The average retention of the 2 fabrication methods was compared using repeated-measures ANOVA (α=.05). Significantly increased retention was observed for the milled denture bases compared with the conventional heat-polymerized denture bases (P<.001). The retention offered by milled complete denture bases fabricated from prepolymerized poly(methyl methacrylate) resin was significantly higher than that offered by conventional heat-polymerized denture bases.

  13. DNA analysis using an integrated microchip for multiplex PCR amplification and electrophoresis for reference samples.

    PubMed

    Le Roux, Delphine; Root, Brian E; Reedy, Carmen R; Hickey, Jeffrey A; Scott, Orion N; Bienvenue, Joan M; Landers, James P; Chassagne, Luc; de Mazancourt, Philippe

    2014-08-19

    A system that automatically performs PCR amplification and microchip electrophoretic (ME) separation for rapid forensic short tandem repeat (STR) profiling in a single disposable plastic chip is demonstrated. The microchip subassays were optimized to deliver results comparable to conventional benchtop methods. The microchip process was accomplished in under 90 min, compared with more than 2.5 h for the conventional approach. An infrared laser with a noncontact temperature-sensing system was optimized for a 45 min PCR, compared with the conventional 90 min amplification time. The separation conditions were optimized using LPA-co-dihexylacrylamide block copolymers specifically designed for microchip separations to achieve accurate DNA size calling in an effective length of 7 cm in a plastic microchip. This effective separation length is less than half that of other reports for integrated STR analysis and allows a compact, inexpensive microchip design. This separation quality was maintained when integrated with microchip PCR. Thirty samples were analyzed conventionally and then compared with data generated by the microfluidic chip system. The microfluidic system's allele calling was 100% concordant with the conventional process. This study also investigated allelic ladder consistency over time. The PCR-ME genetic profiles were analyzed using binning palettes generated from two sets of allelic ladders run three and six months apart. Using these binning palettes, no allele-calling errors were detected in the 30 samples, demonstrating that a microfluidic platform can be highly consistent over long periods of time.

  14. Taguchi's off line method and Multivariate loss function approach for quality management and optimization of process parameters -A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered to be an effective approach to improving product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage with one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratio. The results reveal the advantages of this approach: the optimal parameter design coincides with that of the traditional Taguchi method for a single quality characteristic, and it maximizes the reduction of total quality loss when there are multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
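
    For reference, the three textbook SN ratios that recur throughout this literature can be computed directly (standard definitions; the review does not commit to a single formula). A short sketch with illustrative data:

    ```python
    # Taguchi signal-to-noise ratios; y holds replicated responses for
    # one row of an orthogonal array.
    import numpy as np

    def sn_larger_the_better(y):
        return -10.0 * np.log10(np.mean(1.0 / np.asarray(y, float) ** 2))

    def sn_smaller_the_better(y):
        return -10.0 * np.log10(np.mean(np.asarray(y, float) ** 2))

    def sn_nominal_the_best(y):
        y = np.asarray(y, float)
        return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

    # Pick the parameter combination with the best SN ratio
    # (illustrative responses for four factor settings).
    runs = {"A1B1": [52.1, 50.8, 51.5], "A1B2": [55.0, 54.2, 55.9],
            "A2B1": [49.7, 48.9, 50.2], "A2B2": [56.4, 57.1, 55.8]}
    best = max(runs, key=lambda k: sn_larger_the_better(runs[k]))
    print("best setting:", best)
    ```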

  15. Multi-Agent Methods for the Configuration of Random Nanocomputers

    NASA Technical Reports Server (NTRS)

    Lawson, John W.

    2004-01-01

    As computational devices continue to shrink, the cost of manufacturing such devices is expected to grow exponentially. One alternative to the costly, detailed design and assembly of conventional computers is to place the nano-electronic components randomly on a chip. The price for such a trivial assembly process is that the resulting chip would not be programmable by conventional means. In this work, we show that such random nanocomputers can be adaptively programmed using multi-agent methods. This is accomplished through the optimization of an associated high-dimensional error function. By representing each of the independent variables as a reinforcement-learning agent, we are able to achieve convergence much faster than with other methods, including simulated annealing. Standard combinational logic circuits such as adders and multipliers are implemented in a straightforward manner. In addition, we show that the intrinsic flexibility of these adaptive methods allows the random computers to be reconfigured easily, making them reusable. Recovery from faults is also demonstrated.
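
    A minimal sketch of the idea as described, under stated assumptions (each variable becomes an independent epsilon-greedy value learner sharing one global error signal; the paper's actual agent design is not reproduced):

    ```python
    # Multi-agent configuration of a high-dimensional error function:
    # agent i owns variable i, keeps a value estimate per discrete
    # setting, and updates it from the shared (negated) error.
    import random

    def configure(error_fn, n_vars, n_values=2, steps=3000, eps=0.1, lr=0.1):
        q = [[0.0] * n_values for _ in range(n_vars)]
        for _ in range(steps):
            choice = [random.randrange(n_values) if random.random() < eps
                      else max(range(n_values), key=lambda v: q[i][v])
                      for i in range(n_vars)]
            reward = -error_fn(choice)          # shared global signal
            for i, v in enumerate(choice):
                q[i][v] += lr * (reward - q[i][v])
        return [max(range(n_values), key=lambda v: q[i][v])
                for i in range(n_vars)]

    # Toy error: Hamming distance from a hidden target configuration.
    target = [random.randrange(2) for _ in range(20)]
    err = lambda c: sum(a != b for a, b in zip(c, target))
    print("target recovered:", configure(err, n_vars=20) == target)
    ```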

  16. Aerodynamic and structural studies of joined-wing aircraft

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Smith, Stephen; Gallman, John

    1991-01-01

    A method for rapidly evaluating the structural and aerodynamic characteristics of joined-wing aircraft was developed and used to study the fundamental advantages attributed to this concept. The technique involves a rapid turnaround aerodynamic analysis method for computing minimum trimmed drag combined with a simple structural optimization. A variety of joined-wing designs are compared on the basis of trimmed drag, structural weight, and, finally, trimmed drag with fixed structural weight. The range of joined-wing design parameters resulting in best cruise performance is identified. Structural weight savings and net drag reductions are predicted for certain joined-wing configurations compared with conventional cantilever-wing configurations.

  17. Fluctuating residual limb volume accommodated with an adjustable, modular socket design: A novel case report.

    PubMed

    Mitton, Kay; Kulkarni, Jai; Dunn, Kenneth William; Ung, Anthony Hoang

    2017-10-01

    This novel case report describes the problems of prescribing a prosthetic socket for a left transfemoral amputee whose amputation was secondary to chronic patellofemoral instability compounded by complex regional pain syndrome. Case Description and Methods: Following the amputation, complex regional pain syndrome symptoms recurred in the residual limb, presenting mainly with oedema. Due to extreme daily volume fluctuations of the residual limb, a conventional, laminated thermoplastic socket fitting was not feasible. Findings and Outcomes: An adjustable, modular socket design was trialled. The residual limb volume fluctuations were accommodated within the socket. Amputee rehabilitation could be continued, and the rehabilitation goals were achieved. The patient was able to wear the prosthesis for 8 h daily and to walk unaided indoors and outdoors. An adjustable, modular socket design accommodated the daily residual limb volume fluctuations and provided a successful outcome in this case. It demonstrates the complexities of socket fitting and design with volume fluctuations. Clinical relevance Ongoing complex regional pain syndrome symptoms within the residual limb can lead to fitting difficulties with a conventional, laminated thermoplastic socket due to volume fluctuations. An adjustable, modular socket design can accommodate this and provide a successful outcome.

  18. Comparison of the Debonding Characteristics of Conventional and New Debonding Instrument used for Ceramic, Composite and Metallic Brackets - An In Vitro Study.

    PubMed

    Choudhary, Garima; Gill, Vikas; Reddy, Y N N; Sanadhya, Sudhanshu; Aapaliya, Pankaj; Sharma, Nidhi

    2014-07-01

    Debonding is time consuming and damaging to the enamel if performed with improper technique. Various debonding methods include the conventional methods that use pliers or wrenches, an ultrasonic method, electrothermal devices, air-pressure impulse devices, diamond burs that grind the brackets off the tooth surface, and lasers. Among these methods, using debonding pliers is the most convenient and effective, but it has been reported to cause damage to the teeth. Recently, a New Debonding Instrument designed specifically for ceramic and composite brackets has been introduced. As this is a new instrument, little information is available on its efficacy. The purpose of this study was to evaluate the debonding characteristics of both the conventional debonding pliers and the New Debonding Instrument when removing ceramic, composite and metallic brackets. One hundred thirty-eight extracted maxillary premolar teeth were collected and divided into two groups, Group A and Group B (n = 69 each). These were further divided into three subgroups (n = 23 each) according to the type of bracket to be bonded: stainless steel in subgroups A1 and B1, ceramic in A2 and B2, and composite in A3 and B3; the ceramic and composite brackets were adhesive-precoated maxillary premolar brackets. All the teeth were etched using 37% phosphoric acid for 15 seconds and the brackets were bonded using Transbond XT primer. Brackets were debonded using the conventional debonding pliers (Group A) and the New Debonding Instrument (Group B). After debonding, the enamel surface of each tooth was examined under a stereo microscope (10X magnification). A modified adhesive remnant index (ARI) was used to quantify the amount of adhesive remaining on each tooth. The observations demonstrate that the results of the New Debonding Instrument for debonding of metal, ceramic and composite brackets were statistically significantly different from (p = 0.04), and superior to, the results of the conventional debonding pliers. The debonding efficiency of the New Debonding Instrument is better than that of the conventional debonding pliers for metal, ceramic and composite brackets.

  19. Development and comparison of a real-time PCR assay for detection of Dichelobacter nodosus with culturing and conventional PCR: harmonisation between three laboratories

    PubMed Central

    2012-01-01

    Background Ovine footrot is a contagious disease with worldwide occurrence in sheep. The main causative agent is the fastidious bacterium Dichelobacter nodosus. In Scandinavia, footrot was first diagnosed in Sweden in 2004 and later also in Norway and Denmark. Clinical examination of sheep feet is fundamental to the diagnosis of footrot, but D. nodosus should also be detected to confirm the diagnosis. PCR-based detection using conventional PCR has been used at our institutes, but the method was laborious and there was a need for a faster, easier-to-interpret method. The aim of this study was to develop a TaqMan-based real-time PCR assay for detection of D. nodosus and to compare its performance with culturing and conventional PCR. Methods A D. nodosus-specific TaqMan-based real-time PCR assay targeting the 16S rRNA gene was designed. The inclusivity and exclusivity (specificity) of the assay were tested using 55 bacterial and two fungal strains. To evaluate the sensitivity and harmonisation of results between different laboratories, aliquots of a single DNA preparation were analysed at three Scandinavian laboratories. The developed real-time PCR assay was compared to culturing by analysing 126 samples, and to a conventional PCR method by analysing 224 samples. A selection of PCR products was cloned and sequenced in order to verify that they had been identified correctly. Results The developed assay had a detection limit of 3.9 fg of D. nodosus genomic DNA. This result was obtained at all three laboratories and corresponds to approximately three copies of the D. nodosus genome per reaction. The assay showed 100% inclusivity and 100% exclusivity for the strains tested. The real-time PCR assay found 54.8% more positive samples than culturing and 8% more than conventional PCR. Conclusions The developed real-time PCR assay has good specificity and sensitivity for the detection of D. nodosus, and its results are easy to interpret. The method is less time-consuming than either culturing or conventional PCR. PMID:22293440
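
    The stated equivalence between 3.9 fg and roughly three genome copies checks out with a back-of-the-envelope calculation (assuming a D. nodosus genome of about 1.4 Mb and an average of 650 g/mol per base pair):

    $$ m_{\mathrm{genome}} \approx \frac{1.4\times10^{6}\,\mathrm{bp} \times 650\,\mathrm{g\,mol^{-1}\,bp^{-1}}}{6.022\times10^{23}\,\mathrm{mol^{-1}}} \approx 1.5\,\mathrm{fg}, \qquad \frac{3.9\,\mathrm{fg}}{1.5\,\mathrm{fg}} \approx 2.6 \approx 3\ \mathrm{copies}. $$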

  20. Performance analysis of cross-layer design with average PER constraint over MIMO fading channels

    NASA Astrophysics Data System (ADS)

    Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin

    2015-12-01

    In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is also evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the different thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by the conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for CLD to improve system performance. Numerical simulations for average PER and SE are shown to be consistent with the theoretical analysis, and the developed CLD with an average PER constraint meets the target PER requirement and shows better performance than the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method appreciably increases the system SE and greatly reduces the impact of feedback delay.

  1. Advanced overlay analysis through design based metrology

    NASA Astrophysics Data System (ADS)

    Ji, Sunkeun; Yoo, Gyun; Jo, Gyoyeon; Kang, Hyunwoo; Park, Minwoo; Kim, Jungchan; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Maruyama, Kotaro; Park, Byungjun; Yamamoto, Masahiro

    2015-03-01

    As design rules shrink, overlay has become a critical factor in semiconductor manufacturing. However, the overlay error determined by a conventional measurement with an overlay mark, based on IBO or DBO, often does not represent the physical placement error in the cell area. The mismatch may arise from the size or pitch difference between the overlay mark and the cell pattern. Pattern distortion caused by etching or CMP can also be a source of the mismatch. In 2014, we demonstrated that measuring overlay in the cell area with a DBM (Design Based Metrology) tool gives more accurate overlay values than the conventional method using an overlay mark. We verified the reproducibility by measuring repeatable patterns in the cell area, and also demonstrated the reliability by comparing with CD-SEM data. Until now we have focused on the overlay mismatch between the overlay mark and the cell area; furthermore, we have been concerned with cell areas having different pattern density and etch loading. A phenomenon appears in which overlay values differ across cells with diverse patterning environments. In this paper, the overlay error was investigated from cell edge to center. For this experiment, we verified several critical layers in DRAM by using an improved DBM tool (better resolution and speed), the NGR3520.

  2. Aerodynamic shape optimization using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James

    1996-01-01

    Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD) it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body-fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.
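
    The efficiency claim rests on the standard adjoint construction, written here in generic notation: for a cost $I(w,\mathcal{F})$ depending on the flow solution $w$ and the shape $\mathcal{F}$, with the flow equations $R(w,\mathcal{F})=0$ as a constraint, one adjoint solve

    $$ \left[\frac{\partial R}{\partial w}\right]^{T}\psi = \left[\frac{\partial I}{\partial w}\right]^{T} $$

    yields the complete gradient

    $$ \frac{dI}{d\mathcal{F}} = \frac{\partial I}{\partial \mathcal{F}} - \psi^{T}\,\frac{\partial R}{\partial \mathcal{F}}, $$

    so one flow solve plus one adjoint solve provides sensitivities with respect to all design variables, whereas finite differencing needs roughly one flow solve per variable.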

  3. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern

    PubMed Central

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-01-01

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (signal-to-noise ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher than those of conventional CFAs, especially in low-light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs is perceptible in the images reconstructed with the proposed method. PMID:28657602

  4. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    PubMed

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (signal-to-noise ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher than those of conventional CFAs, especially in low-light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs is perceptible in the images reconstructed with the proposed method.
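
    A minimal sketch of the colorization-as-interpolation idea (Levin-style weighted-neighbour propagation is assumed; the paper's exact affinity weights and solver are not reproduced). The dense W channel plays the role of luminance, the sparse RGB samples play the role of colour seeds, and each missing value is constrained to be a luminance-weighted average of its neighbours, giving a sparse linear system:

    ```python
    # Propagate sparse colour seeds over a dense W (luminance) channel.
    # Suitable for small images; seeds maps (row, col) -> channel value.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def colorize_channel(W, seeds):
        H, Wd = W.shape
        idx = lambda r, c: r * Wd + c
        A = sp.lil_matrix((H * Wd, H * Wd))
        b = np.zeros(H * Wd)
        for r in range(H):
            for c in range(Wd):
                i = idx(r, c)
                if (r, c) in seeds:               # pin seed pixels
                    A[i, i], b[i] = 1.0, seeds[(r, c)]
                    continue
                nbrs = [(r + dr, c + dc)
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < H and 0 <= c + dc < Wd]
                # affinity: neighbours with similar W get larger weight
                w = np.array([np.exp(-(W[r, c] - W[rr, cc]) ** 2 / 0.01)
                              for rr, cc in nbrs])
                w /= w.sum()
                A[i, i] = 1.0
                for (rr, cc), wk in zip(nbrs, w):
                    A[i, idx(rr, cc)] = -wk       # weighted-average constraint
        return spla.spsolve(A.tocsr(), b).reshape(H, Wd)

    W = np.random.rand(32, 32)                    # dense white channel
    seeds = {(0, 0): 0.2, (31, 31): 0.8}          # sparse colour samples
    chroma = colorize_channel(W, seeds)           # one colour channel
    ```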

  5. Design of sewage treatment system by applying fuzzy adaptive PID controller

    NASA Astrophysics Data System (ADS)

    Jin, Liang-Ping; Li, Hong-Chan

    2013-03-01

    In the sewage treatment system, the dissolved oxygen concentration is difficult to control: because the process is nonlinear and time-varying, with large time delays and uncertainty, an exact mathematical model is difficult to establish. The conventional PID controller works well only in a nearly linear region close to its operating point, and control becomes difficult when the system moves far from that point. To solve these problems, this paper proposes a method that combines fuzzy control with PID methods and designs a fuzzy adaptive PID controller based on an S7-300 PLC. It employs fuzzy inference to tune the PID parameters online. Simulation and practical application of the control algorithm show that the system has stronger robustness and better adaptability.
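
    A minimal sketch of fuzzy-adaptive PID gain tuning (the two soft rules below stand in for a full fuzzy rule base; the paper's PLC implementation and membership functions are not reproduced):

    ```python
    # PID controller whose gains are nudged online by fuzzy-style rules
    # on the error e and its derivative de.
    import numpy as np

    class FuzzyPID:
        def __init__(self, kp, ki, kd):
            self.kp0, self.ki0, self.kd0 = kp, ki, kd
            self.integral, self.prev_e = 0.0, 0.0

        def _big(self, x, scale):
            # membership degree of "|x| is large", saturating in [0, 1]
            return np.tanh(abs(x) / scale)

        def step(self, e, dt):
            de = (e - self.prev_e) / dt
            big_e, big_de = self._big(e, 1.0), self._big(de, 5.0)
            kp = self.kp0 * (1.0 + 0.5 * big_e)   # large error -> raise Kp
            ki = self.ki0 * (1.0 - 0.8 * big_e)   # ...and cut Ki (anti-windup)
            kd = self.kd0 * (1.0 + 0.5 * big_de)  # fast error -> more damping
            self.integral += e * dt
            self.prev_e = e
            return kp * e + ki * self.integral + kd * de

    pid, y = FuzzyPID(2.0, 0.5, 0.1), 0.0
    for _ in range(100):                          # crude first-order plant demo
        u = pid.step(1.0 - y, dt=0.1)             # setpoint = 1.0
        y += 0.1 * (u - y)
    ```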

  6. User-composable Electronic Health Record Improves Efficiency of Clinician Data Viewing for Patient Case Appraisal: A Mixed-Methods Study.

    PubMed

    Senathirajah, Yalini; Kaufman, David; Bakken, Suzanne

    2016-01-01

    Challenges in the design of electronic health records (EHRs) include designing usable systems that must meet the complex, rapidly changing, and high-stakes information needs of clinicians. The ability to move and assemble elements together on the same page has significant human-computer interaction (HCI) and efficiency advantages, and can mitigate the problems of negotiating multiple fixed screens and the associated cognitive burdens. We compare MedWISE, a novel EHR that supports user-composable displays, with a conventional EHR in terms of the number of repeat views of data elements for patient case appraisal. The study used mixed methods to examine clinical data viewing in four patient cases, comparing use of an experimental user-composable EHR with use of a conventional EHR for case appraisal. Eleven clinicians used the user-composable EHR in a case appraisal task in the laboratory setting. This was compared with log file analysis of the same patient cases in the conventional EHR. We investigated the number of repeat views of the same clinical information during a session across these two contexts, and compared them using Fisher's exact test. There was a significant difference (p<.0001) in the proportion of cases with repeat data element viewing between the user-composable EHR (14.6 percent) and the conventional EHR (72.6 percent). Users of conventional EHRs repeatedly viewed the same information elements in the same session, as revealed by log files. Our findings are consistent with the hypothesis that conventional systems require the user to view many screens and remember information between screens, causing the user to forget information and to have to access it a second time. Other mechanisms (such as reduction in navigation over a population of users due to interface sharing, and information selection) may also contribute to increased efficiency in the experimental system. Systems that allow a composable approach, enabling the user to gather any desired information elements on the same screen, may confer cognitive-support benefits that increase productive use of systems by reducing fragmented information. By reducing cognitive overload, such an approach can also enhance the user experience.

  7. Total Dose Effects in Conventional Bipolar Transistors

    NASA Technical Reports Server (NTRS)

    Johnston, A. H.; Swift, G. W.; Rax, B. G.

    1994-01-01

    This paper examines various factors in bipolar device construction and design, and discusses their impact on radiation hardness. The intent of the paper is to improve understanding of the underlying mechanisms for practical devices without special test structures, and to provide (1) guidance in ways to select transistor designs that are more resistant to radiation damage, and (2) methods to estimate the maximum amount of damage that might be expected from a basic transistor design. The latter factor is extremely important in assessing the risk that future lots of devices will be substantially below design limits, which are usually based on test data for older devices.

  8. Hotspot detection using image pattern recognition based on higher-order local auto-correlation

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki

    2011-04-01

    Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stage of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs are printed on silicon without any error. However, the lithography simulation approach is very time-consuming, and under time-to-market pressure, repetitive redesign by this approach may result in missing the market window. This paper proposes a fast hotspot-detection support method using flexible and intelligent image pattern recognition based on higher-order local autocorrelation (HLAC). Our method learns the geometrical properties of given design data without any defects as normal patterns, and automatically detects design patterns with hotspots in the test data as abnormal patterns. The HLAC method can extract features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design-pattern polygons. This approach reduces turnaround time (TAT) dramatically even on a single CPU, compared with the conventional simulation-based approach, and distributed processing has proven to deliver linear scalability with each additional CPU.
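
    A minimal sketch of HLAC feature extraction with a representative subset of the displacement masks (the standard formulation enumerates all distinct patterns up to second order within a 3x3 neighbourhood; only a few are shown):

    ```python
    # HLAC features: image-wide sums of products of pixel values at small
    # displacement patterns. Borders are cropped so np.roll wraparound
    # does not contaminate the sums.
    import numpy as np

    def hlac_features(img):
        f = img.astype(float)
        s = lambda dr, dc: np.roll(np.roll(f, dr, 0), dc, 1)
        masks = [
            f,                       # 0th order: total intensity
            f * s(0, 1),             # 1st order: horizontal pair
            f * s(1, 0),             # 1st order: vertical pair
            f * s(1, 1),             # 1st order: diagonal pair
            f * s(0, 1) * s(0, -1),  # 2nd order: horizontal triple
            f * s(1, 0) * s(-1, 0),  # 2nd order: vertical triple
        ]
        return np.array([m[1:-1, 1:-1].sum() for m in masks])

    # Screening idea: fit a one-class model on features of known-good
    # layout clips, then flag clips whose features fall outside it.
    clip = np.random.rand(64, 64)
    print(hlac_features(clip))
    ```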

  9. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
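
    A minimal sketch of the forward-model-plus-backpropagation loop described above (network size, spectrum length, bounds, and the loss are illustrative assumptions, not the authors' code):

    ```python
    # Inverse design by back-propagating through a frozen forward model:
    # the network maps 5 layer thicknesses -> a 200-point spectrum, and
    # gradient descent runs on the *input* thicknesses.
    import torch

    model = torch.nn.Sequential(          # stand-in for a trained model
        torch.nn.Linear(5, 128), torch.nn.ReLU(),
        torch.nn.Linear(128, 200))
    for p in model.parameters():
        p.requires_grad_(False)           # freeze the trained weights

    target = torch.rand(200)              # desired spectrum (placeholder)
    x = torch.full((5,), 50.0, requires_grad=True)   # thicknesses in nm
    opt = torch.optim.Adam([x], lr=0.5)
    for _ in range(500):
        opt.zero_grad()
        loss = torch.mean((model(x) - target) ** 2)
        loss.backward()                   # analytical gradient w.r.t. design
        opt.step()
        x.data.clamp_(20.0, 100.0)        # keep the design physical
    ```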

  10. The association between short interpregnancy interval and preterm birth in Louisiana: a comparison of methods.

    PubMed

    Howard, Elizabeth J; Harville, Emily; Kissinger, Patricia; Xiong, Xu

    2013-07-01

    There is growing interest in the application of propensity scores (PSs) in epidemiologic studies, especially within the field of reproductive epidemiology. This retrospective cohort study assesses the impact of a short interpregnancy interval (IPI) on preterm birth and compares the results of a conventional logistic regression analysis with analyses utilizing a PS. The study included 96,378 singleton infants from Louisiana birth certificate data (1995-2007). Five regression models designed for methods comparison are presented. Ten percent (10.17%) of all births were preterm; 26.83% of births followed a short IPI. The PS-adjusted model produced a more conservative estimate of the exposure variable compared to the conventional logistic regression method (β-coefficient: 0.21 vs. 0.43), as well as a smaller standard error (0.024 vs. 0.028) and a smaller odds ratio with narrower 95% confidence intervals [1.15 (1.09, 1.20) vs. 1.23 (1.17, 1.30)]. The inclusion of more covariate and interaction terms in the PS did not change the estimates of the exposure variable. This analysis indicates that PS-adjusted regression may be appropriate for validation of conventional methods in a large dataset with a fairly common outcome. PSs may be beneficial in producing more precise estimates, especially for models with many confounders and effect modifiers and where conventional adjustment with logistic regression is unsatisfactory. Short intervals between pregnancies are associated with preterm birth in this population, according to either technique. Birth spacing is an issue that women have some control over. Educational interventions, including birth control, should be applied during prenatal visits and following delivery.
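
    A minimal sketch of the two estimation strategies being compared, on synthetic data (all variable names, coefficients, and sample sizes below are illustrative; this is not the study's dataset):

    ```python
    # Conventional covariate adjustment vs. propensity-score adjustment
    # for a binary exposure (short IPI) and binary outcome (preterm).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    confounders = rng.normal(size=(n, 3))               # e.g. age, parity, ...
    p_exp = 1 / (1 + np.exp(-(confounders @ [0.5, -0.3, 0.4])))
    short_ipi = rng.binomial(1, p_exp)                  # exposure
    logit_y = -2.2 + 0.3 * short_ipi + confounders @ [0.2, 0.1, -0.2]
    preterm = rng.binomial(1, 1 / (1 + np.exp(-logit_y)))

    # Conventional: confounders enter the outcome model directly.
    X1 = sm.add_constant(np.column_stack([short_ipi, confounders]))
    conventional = sm.Logit(preterm, X1).fit(disp=0)

    # PS approach: model exposure first, then adjust for the score alone.
    ps = sm.Logit(short_ipi, sm.add_constant(confounders)).fit(disp=0).predict()
    X2 = sm.add_constant(np.column_stack([short_ipi, ps]))
    ps_adjusted = sm.Logit(preterm, X2).fit(disp=0)

    print("conventional beta:", conventional.params[1])
    print("PS-adjusted beta: ", ps_adjusted.params[1])
    ```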

  11. Modelling of teeth of a gear transmission for modern manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Monica, Z.; Banaś, W.; Ćwikla, G.; Topolska, S.

    2017-08-01

    The technological process of manufacturing gear wheels is influenced by many factors. It is determined by the type of material from which the gear is to be produced, its heat-treatment parameters, the required accuracy, the geometrical form and the modifications of the tooth. The parameter selection process is therefore neither easy nor unambiguous. Another important stage of the technological process is the selection of appropriate tools to properly machine the teeth in both roughing and finishing operations. The presented work focuses primarily on modern production methods for gears using technologically advanced tools, in comparison with conventional tools. Conventional processing tools such as gear hobbing cutters or Fellows gear-shaper cutters have been used for the production of gear wheels since the beginning of machine gear cutting. With the development of technology and the creation of CNC machines designed for the machining of gear wheels, the manufacturing technology developed as well, along with the design knowledge concerning the technological tools. Leading manufacturers of cutting tools have extended the range of tools designed for gear machining with so-called hobbing cutters with inserted cemented-carbide tips; the same has been introduced for Fellows gear-shaper cutters. The results of tests show that it is advantageous to use hobbing cutters with inserted cemented-carbide tips for milling gear wheels with a high number of teeth, where the time gains are very high relative to the use of conventional milling cutters.

  12. Spectral-spatial classification of hyperspectral image using three-dimensional convolution network

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu

    2018-01-01

    Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
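
    A minimal sketch of a spectral-spatial 3-D CNN of this kind, in PyTorch (layer counts, kernel sizes, and the input patch geometry are illustrative assumptions, not the paper's exact architecture):

    ```python
    # 3-D convolutions slide jointly over adjacent spectral bands and
    # spatial pixels; input patches have shape (batch, 1, bands, h, w).
    import torch
    import torch.nn as nn

    class HSI3DCNN(nn.Module):
        def __init__(self, n_classes, bands=103, patch=9):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=(3, 3, 3)), nn.ReLU(),
            )
            with torch.no_grad():   # infer the flattened size from a dummy
                n_flat = self.features(
                    torch.zeros(1, 1, bands, patch, patch)).numel()
            self.classifier = nn.Linear(n_flat, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = HSI3DCNN(n_classes=9)
    logits = model(torch.randn(4, 1, 103, 9, 9))   # 4 labeled patches
    print(logits.shape)                            # torch.Size([4, 9])
    ```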

  13. Determination of discharge during pulsating flow

    USGS Publications Warehouse

    Thompson, T.H.

    1968-01-01

    Pulsating flow in an open channel is a manifestation of unstable-flow conditions in which a series of translatory waves of perceptible magnitude develops and moves rapidly downstream. Pulsating flow is a matter of concern in the design and operation of steep-gradient channels. If it should occur at high stages in a channel designed for stable flow, the capacity of the channel may be inadequate at a discharge that is much smaller than that for which the channel was designed. If the overriding translatory wave carries an appreciable part of the total flow, conventional stream-gaging procedures cannot be used to determine the discharge; neither the conventional instrumentation nor conventional methodology is adequate. A method of determining the discharge during pulsating flow was tested in the Santa Anita Wash flood control channel in Arcadia, Calif., on April 16, 1965. Observations of the dimensions and velocities of translatory waves were made during a period of controlled reservoir releases of about 100, 200, and 300 cfs (cubic feet per second). The method of computing discharge was based on (1) computation of the discharge in the overriding waves and (2) computation of the discharge in the shallow-depth, or overrun, part of the flow. Satisfactory results were obtained by this method. However, the procedure used, separating the flow into two components and then treating the shallow-depth component as though it were steady, has no theoretical basis. It is simply an expedient for use until laboratory investigation can provide a satisfactory analytical solution to the problem of computing discharge during pulsating flow. Sixteen months prior to the test in Santa Anita Wash, a robot camera had been designed and programmed to obtain the data needed to compute discharge by the method described above. The photographic equipment had been installed in Haines Creek flood control channel in Los Angeles, Calif., but it had not been completely tested because of the infrequency of flow in that channel. Because the Santa Anita Wash tests afforded excellent data for analysis, further development of the photographic technique at Haines Creek was discontinued. Three methods for obtaining the data needed to compute discharge during pulsating flow are proposed. In two of the methods, the photographic method and the depth-recorder method, the dimensions and velocities of translatory waves are recorded, and discharge is then computed by the procedure developed in this report. The third method, the constant-rate-dye-dilution method, yields the discharge more directly: the discharge is computed from the dye-injection rate and the ratio of the concentration of dye in the injected solution to the concentration of dye in the water sampled at a site downstream. The three methods should be developed and tested in the Santa Anita Wash flood control channel under controlled conditions similar to those in the test of April 1965.
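
    The constant-rate dye-dilution relation referred to above takes the standard mass-balance form (the notation here is assumed):

    $$ Q = q_{\mathrm{inj}}\,\frac{C_{\mathrm{inj}} - C_{\mathrm{ds}}}{C_{\mathrm{ds}} - C_{\mathrm{bg}}} \;\approx\; q_{\mathrm{inj}}\,\frac{C_{\mathrm{inj}}}{C_{\mathrm{ds}}}, $$

    where $q_{\mathrm{inj}}$ is the dye-injection rate, $C_{\mathrm{inj}}$ the injected concentration, $C_{\mathrm{ds}}$ the concentration sampled downstream after full mixing, and $C_{\mathrm{bg}}$ the background concentration; the approximation holds when $C_{\mathrm{inj}} \gg C_{\mathrm{ds}} \gg C_{\mathrm{bg}}$.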

  14. Design of two-dimensional zero reference codes with cross-entropy method.

    PubMed

    Chen, Jung-Chieh; Wen, Chao-Kai

    2010-06-20

    We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs), used to generate a zero reference signal for a grating measurement system and thereby establish an absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation problem. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combinatorial optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desired zero reference signal. Computer simulation results indicate reductions of 15.38% and 14.29% in the second-maximum value for a 16x16 grating system with n1 = 64 and a 100x100 grating system with n1 = 300, respectively, where n1 is the number of transparent pixels, compared with the results of the conventional genetic algorithm.
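
    A compact sketch of the CE loop for this combinatorial problem (the objective below, the normalised second maximum of the 2D autocorrelation, and the constraint handling by rejection are illustrative assumptions; the paper's exact formulation is not reproduced):

    ```python
    # Cross-entropy search for a binary 2-D code with a small
    # autocorrelation second maximum, at a fixed number n1 of
    # transparent pixels.
    import numpy as np

    def second_max_autocorr(code):
        n = 2 * code.shape[0]                       # zero-padded FFT size
        ac = np.fft.irfft2(np.abs(np.fft.rfft2(code, s=(n, n))) ** 2,
                           s=(n, n))
        peak = code.sum()                           # zero-shift value (= n1)
        ac[0, 0] = -1.0                             # mask the main peak
        return ac.max() / peak                      # normalised 2nd maximum

    def cross_entropy_zrc(size=16, n1=64, iters=60, pop=200, elite=20):
        p = np.full(size * size, n1 / (size * size))   # Bernoulli params
        rng = np.random.default_rng(1)
        for _ in range(iters):
            cand = rng.random((pop, size * size)) < p
            cand = [s for s in cand if s.sum() == n1]  # keep exactly n1 ones
            scores = [second_max_autocorr(s.reshape(size, size).astype(float))
                      for s in cand]
            elite_set = np.array([s for _, s in sorted(
                zip(scores, cand), key=lambda t: t[0])][:elite])
            if len(elite_set):
                p = 0.7 * p + 0.3 * elite_set.mean(axis=0)  # smoothed update
        return (p > 0.5).reshape(size, size)
    ```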

  15. Covalent layer-by-layer films: chemistry, design, and multidisciplinary applications.

    PubMed

    An, Qi; Huang, Tao; Shi, Feng

    2018-05-16

    Covalent layer-by-layer (LbL) assembly is a powerful method used to construct functional ultrathin films that enables nanoscopic structural precision, componential diversity, and flexible design. Compared with conventional LbL films built using multiple noncovalent interactions, LbL films prepared using covalent crosslinking offer the following distinctive characteristics: (i) enhanced film endurance or rigidity; (ii) improved componential diversity when uncharged species or small molecules are stably built into the films by forming covalent bonds; and (iii) increased structural diversity when covalent crosslinking is employed in componentially, spatially, or temporally (labile bonds) selective manners. In this review, we document the chemical methods used to build covalent LbL films as well as the film properties and applications achievable using various film design strategies. We expect to translate the achievements in the discipline of chemistry (film-building methods) into readily available techniques for materials engineers and thus provide diverse functional material design protocols to address the energy, biomedical, and environmental challenges faced by the entire scientific community.

  16. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved.

  17. A comparison between conventional and LANDSAT based hydrologic modeling: The Four Mile Run case study

    NASA Technical Reports Server (NTRS)

    Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.

    1976-01-01

    Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT based methods for estimating the land cover based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT based approach is highly cost effective for planning model studies. The conventional approach to define inputs was based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000. The LANDSAT based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT based models gave similar results relative to discharges and estimated annual damages expected from no flood control, channelization, and detention storage alternatives.

  18. Development of a new method for the noninvasive measurement of deep body temperature without a heater.

    PubMed

    Kitamura, Kei-Ichiro; Zhu, Xin; Chen, Wenxi; Nemoto, Tetsu

    2010-01-01

    The conventional zero-heat-flow thermometer, which measures the deep body temperature from the skin surface, is widely used at present. However, this thermometer requires considerable electricity to power the electric heater that compensates for heat loss from the probe; thus, AC power is indispensable for its use, making the conventional thermometer inconvenient for unconstrained monitoring. We have developed a new dual-heat-flux method that can measure the deep body temperature from the skin surface without a heater. Our method is convenient for unconstrained and long-term measurement because the instrument is battery-driven and its design promotes energy conservation. Its probe consists of dual heat-flow channels with different thermal resistances, and each heat-flow channel has a pair of IC sensors attached at its top and bottom. The average deep body temperatures measured with the dual-heat-flux and zero-heat-flow thermometers on the foreheads of 17 healthy subjects were 37.08 degrees C and 37.02 degrees C, respectively. In addition, the correlation coefficient between the values obtained by the 2 methods was 0.970 (p<0.001). These results show that our method can monitor the deep body temperature as accurately as the conventional method, while overcoming the disadvantage of requiring an AC power supply.
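
    A plausible reconstruction of the heat-balance algebra behind the dual-heat-flux probe (the variable names are assumptions, not taken from the paper): let the two channels have known thermal resistances $R_1 \neq R_2$, let $(T_1, T_2)$ and $(T_3, T_4)$ be the bottom/top temperatures of channels 1 and 2, and let $R_s$ be the unknown tissue resistance between the deep body at $T_b$ and the skin. Equating the steady-state flux through the tissue with the flux through each channel,

    $$ \frac{T_b - T_1}{R_s} = \frac{T_1 - T_2}{R_1}, \qquad \frac{T_b - T_3}{R_s} = \frac{T_3 - T_4}{R_2}, $$

    and eliminating $R_s$ gives

    $$ T_b = T_1 + \frac{(T_1 - T_2)(T_1 - T_3)}{K\,(T_3 - T_4) - (T_1 - T_2)}, \qquad K = \frac{R_1}{R_2}, $$

    so the deep temperature follows from four passive temperature readings and the known resistance ratio, with no heater and hence no AC supply required.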

  19. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication-process variations on the overall performance of the neural network chip.

  20. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical-physics energy-minimization methods are directly applicable to signal constellation design. We demonstrate that statistical-physics-inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of a D-dimensional transceiver and the corresponding EE polarization-division multiplexed system.
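
    A minimal sketch of the energy-minimization idea (a Coulomb-like pairwise potential and a unit-average-power constraint are assumed; the paper's exact potential and optimizer are not reproduced): treat the M constellation points as mutually repelling particles and renormalize the average power after every step, so the points spread as far apart as the power budget allows.

    ```python
    # Physics-inspired constellation design by repulsive "energy" descent.
    import numpy as np

    def design_constellation(M=16, dim=2, steps=2000, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        pts = rng.normal(size=(M, dim))
        for _ in range(steps):
            diff = pts[:, None, :] - pts[None, :, :]       # pairwise vectors
            d2 = (diff ** 2).sum(-1) + np.eye(M)           # avoid self-division
            force = (diff / d2[..., None] ** 1.5).sum(1)   # Coulomb-like push
            pts += lr * force
            pts *= np.sqrt(M / (pts ** 2).sum())           # unit average power
        return pts

    pts = design_constellation()
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print("minimum distance:", d[d > 0].min())
    ```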

  1. Numerical and Experimental Validation of the Optimization Methodologies for a Wing-Tip Structure Equipped with Conventional and Morphing Ailerons

    NASA Astrophysics Data System (ADS)

    Koreanschi, Andreea

    In order to answer the problem of 'how to reduce the aerospace industry's environmental footprint?', new morphing technologies were developed. These technologies were aimed at reducing the aircraft's fuel consumption through reduction of the wing drag. The morphing concept used in the present research consists of replacing the conventional aluminium upper surface of the wing with a flexible composite skin for morphing abilities. For the ATR-42 'Morphing wing' project, the wing models were manufactured entirely from composite materials and the morphing region was optimized for flexibility. In this project two rigid wing models and an active morphing wing model were designed, manufactured and wind tunnel tested. For the CRIAQ MDO 505 project, a full-scale wing-tip equipped with two types of ailerons, conventional and morphing, was designed, optimized, manufactured, bench tested and wind tunnel tested. The morphing concept was applied to a real wing internal structure and incorporated aerodynamic, structural and control constraints specific to a multidisciplinary approach. Numerical optimization, aerodynamic analysis and experimental validation were performed for both the CRIAQ MDO 505 full-scale wing-tip demonstrator and the ATR-42 reduced-scale wing models. In order to improve the aerodynamic performance of the ATR-42 and CRIAQ MDO 505 wing airfoils, three global optimization algorithms were developed, tested and compared: the genetic algorithm, the artificial bee colony and gradient descent. The algorithms were coupled with the two-dimensional aerodynamic solver XFoil, which is known for its rapid convergence, robustness and use of the semi-empirical e^n method for determining the position of the flow transition from laminar to turbulent. Based on the performance comparison between the algorithms, the genetic algorithm was chosen for the optimization of the ATR-42 and CRIAQ MDO 505 wing airfoils. The optimization algorithm's convergence speed was improved during the CRIAQ MDO 505 project by introducing a two-step cross-over function. Structural constraints were introduced into the algorithm at each aero-structural optimization iteration, allowing better control of the algorithm and broadening the range of morphing combinations it could produce. The CRIAQ MDO 505 project envisioned a morphing aileron concept for the morphing upper-surface wing. For this morphing aileron concept, two optimization methods were developed. Both methods used the already developed genetic algorithm, and each had a different design concept. The first method was based on the morphing upper-surface concept, using actuation points to achieve the desired shape. The second method was based on the hinge-rotation concept of the conventional aileron, but applied at multiple nodes along the aileron camber to achieve the desired shape. Both methods were constrained by manufacturing and aerodynamic requirements. The purpose of the morphing aileron methods was to obtain an aileron shape with a smoother pressure-distribution gradient during deflection than the conventional aileron. The aerodynamic optimization results were used for the structural optimization and design of the wing, particularly the flexible composite skin. Due to the structural changes performed on the initial wing-tip structure, an aeroelastic behaviour analysis, focused on the flutter phenomenon, was performed. The analyses were done to ensure the structural integrity of the wing-tip demonstrator during wind tunnel tests.
Three wind tunnel tests were performed for the CRIAQ MDO 505 wing-tip demonstrator at the IAR-NRC subsonic wind tunnel facility in Ottawa. The first two tests were performed on the wing-tip equipped with the conventional aileron. Their purpose was to validate the control system designed for the morphing upper surface, to validate the numerical optimization and aerodynamic analysis, and to evaluate the effect of the optimization on the boundary layer behaviour and the wing drag. The third set of wind tunnel tests was performed on the wing-tip equipped with the morphing aileron. Its purpose was to evaluate the performance of the morphing aileron, in conjunction with the active morphing upper surface, and their effect on the lift, drag and boundary layer behaviour. Transition data, obtained from infrared thermography, and pressure data, extracted from Kulite sensor and pressure tap recordings, were used to validate the numerical optimization and aerodynamic performance of the wing-tip demonstrator. A set of wind tunnel tests was performed on the ATR-42 rigid wing models in the Price-Païdoussis subsonic wind tunnel at École de technologie supérieure. The results from the pressure tap recordings were used to validate the numerical optimization. A method based on the second derivative of the pressure distribution was applied to evaluate the transition region on the upper surface of the wing models for comparison with the numerical transition values. (Abstract shortened by ProQuest.).
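
    As a rough illustration of the transition-evaluation step described above, the sketch below locates a candidate transition band from the second derivative of a surface pressure distribution. The synthetic Cp curve, the peak-fraction threshold, and the function name are illustrative assumptions, not the thesis's actual procedure.

```python
# Hedged sketch: estimate a laminar-to-turbulent transition band from the
# second derivative of the pressure coefficient Cp along the chord.
# The toy data and the 60%-of-peak threshold are invented for illustration.
import numpy as np

def transition_region(x, cp, frac=0.6):
    """Chordwise stations where |d2Cp/dx2| exceeds frac * its peak value."""
    d2cp = np.gradient(np.gradient(cp, x), x)   # second derivative of Cp
    return x[np.abs(d2cp) > frac * np.max(np.abs(d2cp))]

# toy pressure distribution with a rapid recovery near mid-chord
x = np.linspace(0.0, 1.0, 200)
cp = -1.2 * np.exp(-8.0 * x) + 0.3 * np.tanh(25.0 * (x - 0.55))
band = transition_region(x, cp)
print(f"candidate transition band: {band.min():.2f}-{band.max():.2f} x/c")
```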

  2. A comparison of problem-based learning and conventional teaching in nursing ethics education.

    PubMed

    Lin, Chiou-Fen; Lu, Meei-Shiow; Chung, Chun-Chih; Yang, Che-Ming

    2010-05-01

    The aim of this study was to compare the learning effectiveness of peer tutored problem-based learning and conventional teaching of nursing ethics in Taiwan. The study adopted an experimental design. The peer tutored problem-based learning method was applied to an experimental group and the conventional teaching method to a control group. The study sample consisted of 142 senior nursing students who were randomly assigned to the two groups. All the students were tested for their nursing ethical discrimination ability both before and after the educational intervention. A learning satisfaction survey was also administered to both groups at the end of each course. After the intervention, both groups showed a significant increase in ethical discrimination ability. There was a statistically significant difference between the ethical discrimination scores of the two groups (P < 0.05), with the experimental group on average scoring higher than the control group. There were significant differences in satisfaction with self-motivated learning and critical thinking between the groups. Peer tutored problem-based learning and lecture-type conventional teaching were both effective for nursing ethics education, but problem-based learning was shown to be more effective. Peer tutored problem-based learning has the potential to enhance the efficacy of teaching nursing ethics in situations in which there are personnel and resource constraints.

  3. Low-cost, high-resolution, single-structure array telescopes for imaging of low-earth-orbit satellites

    NASA Technical Reports Server (NTRS)

    Massie, N. A.; Oster, Yale; Poe, Greg; Seppala, Lynn; Shao, Mike

    1992-01-01

    Telescopes that are designed for the unconventional imaging of near-earth satellites must follow unique design rules. The costs must be reduced substantially over those of the conventional telescope designs, and the design must accommodate a technique to circumvent atmospheric distortion of the image. Apertures of 12 m and more along with altitude-altitude mounts that provide high tracking rates are required. A novel design for such a telescope, optimized for speckle imaging, has been generated. Its mount closely resembles a radar mount, and it does not use the conventional dome. Costs for this design are projected to be considerably lower than those for the conventional designs. Results of a design study are presented with details of the electro-optical and optical designs.

  4. Conventional and accelerated-solvent extractions of green tea (camellia sinensis) for metabolomics-based chemometrics.

    PubMed

    Kellogg, Joshua J; Wallace, Emily D; Graf, Tyler N; Oberlies, Nicholas H; Cech, Nadja B

    2017-10-25

    Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. Copyright © 2017. Published by Elsevier B.V.

  5. From public health to international law: possible protocols for inclusion in the Framework Convention on Tobacco Control.

    PubMed Central

    Joossens, L.

    2000-01-01

    Faced with a difficult business environment in the United States and the falling demand for cigarettes in industrialized countries, multinational tobacco companies have been competing fiercely to expand their sales in developing countries. Because of the worldwide threat posed by smoking to health and the emphasis being placed by international tobacco companies on marketing in developing countries, an international regulatory strategy, such as the WHO proposed Framework Convention on Tobacco Control, is needed. This review describes from a public health perspective the possible scope and key considerations of protocols that should be included in the convention. The key international areas that should be considered in tobacco control are: prices; smuggling; tax-free tobacco products; advertising and sponsorship; the Internet; testing methods; package design and labelling; agriculture; and information sharing. PMID:10994267

  6. New method to design stellarator coils without the winding surface

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-01-01

    Finding an easy-to-build coil set has been a critical issue in stellarator design for decades. Conventional approaches assume a toroidal ‘winding’ surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.
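
    A minimal sketch of the FOCUS parameterization idea follows: each coil is a closed three-dimensional curve described by a handful of coefficients, and a scalar target function mixing a physics term with an engineering penalty (coil length) is minimized over those coefficients. The geometric "field error" stand-in, the target points, and the weight below are assumptions for illustration; the actual code minimizes genuine field errors with analytically computed derivatives.

```python
# Hedged sketch of coil design as unconstrained optimization over the
# parameters of a closed space curve (a crude stand-in for FOCUS).
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def coil_xyz(c):
    """Closed curve from three illustrative shape coefficients [R, r, z1]."""
    R, r, z1 = c
    return np.stack([R + r * np.cos(t), r * np.sin(t), z1 * np.sin(t)], axis=1)

def target(c, pts, w_len=0.1):
    xyz = coil_xyz(c)
    # stand-in "physics" term: mean distance from the coil to prescribed points
    err = np.mean([np.min(np.linalg.norm(xyz - p, axis=1)) for p in pts])
    seg = np.diff(np.vstack([xyz, xyz[:1]]), axis=0)
    return err + w_len * np.sum(np.linalg.norm(seg, axis=1))  # + length penalty

pts = np.array([[1.0, 0.0, 0.2], [0.0, 1.0, -0.2], [-1.0, 0.0, 0.2]])
res = minimize(target, x0=[1.0, 0.3, 0.1], args=(pts,), method="Nelder-Mead")
print("optimized [R, r, z1]:", np.round(res.x, 3))
```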

  7. Effectiveness of two synthetic fiber filters for removing white cells from AS-1 red cells.

    PubMed

    Pikul, F J; Farrar, R P; Boris, M B; Estok, L; Marlo, D; Wildgen, M; Chaplin, H

    1989-09-01

    Two commercially available synthetic fiber filters were studied for their effectiveness at removing white cells (WBCs) from AS-1-preserved red cells (RBCs) stored less than or equal to 14 days. In all, 65 filtrations were performed. An automated microprocessor-controlled hydraulic system designed for use with cellulose acetate fiber filters was employed to prepare filtered RBCs before release for transfusion. Studies were also carried out on polyester fiber filters, which are designed to be used in-line during transfusion. Residual WBCs were below the accurate counting range of Coulter counters and of conventional manual chamber counts. An isosmotic ammonium chloride RBC lysis method, plus a modified chamber counting technique, permitted a 270-fold increase over the number of WBCs counted by the conventional manual method. For the polyester fiber-filtered products, residual WBCs per unit were not affected by speed of filtration, prior length of storage, or mechanical tapping during filtration. The effectiveness of WBC removal (mean 99.7%), total residual WBCs (means, 4.8 and 5.5 × 10⁶), and RBC recovery (mean, 93%) was the same for both filters. The majority of residual WBCs were lymphocytes. WBC removal and RBC recovery were strikingly superior to results reported with nonfiltration methods.

  8. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
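
    The core of such an assessment can be pictured with a few lines of Monte Carlo: uncertain boundary conditions are sampled, propagated through a plant response model, and the fraction of samples violating a safety margin is the failure probability estimate. The response function, distributions, and limit below are invented stand-ins, not the RELAP5-3D model or the study's actual figures.

```python
# Hedged Monte Carlo sketch of a passive-system reliability estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
air_temp = rng.normal(35.0, 5.0, n)      # uncertain ambient temperature (C)
decay_power = rng.normal(1.0, 0.05, n)   # normalized decay-heat uncertainty

# stand-in response: peak temperature reached during a station blackout
peak_temp = 600.0 + 3.0 * (air_temp - 35.0) + 150.0 * (decay_power - 1.0)

limit = 650.0                            # illustrative safety limit (C)
print("estimated failure probability:", np.mean(peak_temp > limit))
```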

  9. Effects of Computer-Assisted Instruction on Performance of Senior High School Biology Students in Ghana

    ERIC Educational Resources Information Center

    Owusu, K. A.; Monney, K. A.; Appiah, J. Y.; Wilmot, E. M.

    2010-01-01

    This study investigated the comparative efficiency of computer-assisted instruction (CAI) and conventional teaching method in biology on senior high school students. A science class was selected in each of two randomly selected schools. The pretest-posttest non equivalent quasi experimental design was used. The students in the experimental group…

  10. Logging damage in thinned, young-growth true fir stands in California and recommendations for prevention.

    Treesearch

    Paul E. Aho; Gary Fiddler; Mike. Srago

    1983-01-01

    Logging-damage surveys and tree-dissection studies were made in commercially thinned, naturally established young-growth true fir stands in the Lassen National Forest in northern California. Significant damage occurred to residual trees in stands logged by conventional methods. Logging damage was substantially lower in stands thinned using techniques designed to reduce...

  11. The Working Postures among Schoolchildren--Controlled Intervention Study on the Effects of Newly Designed Workstations

    ERIC Educational Resources Information Center

    Saarni, Lea; Nygård, Clas-Håkan; Rimpelä, Arja; Nummi, Tapio; Kaukiainen, Anneli

    2007-01-01

    Background: School workstations are often inappropriate in not offering an optimal sitting posture. The aim of this study was to investigate the effects of individually adjustable saddle-type chairs with wheels and desks with comfort curve and arm support on schoolchildren's working postures compared to conventional workstations. Methods:…

  12. Mixing Problem Based Learning and Conventional Teaching Methods in an Analog Electronics Course

    ERIC Educational Resources Information Center

    Podges, J. M.; Kommers, P. A. M.; Winnips, K.; van Joolingen, W. R.

    2014-01-01

    This study, undertaken at the Walter Sisulu University of Technology (WSU) in South Africa, describes how problem-based learning (PBL) affects the first year 'analog electronics course', when PBL and the lecturing mode is compared. Problems were designed to match real-life situations. Data between the experimental group and the control group that…

  13. Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbabi, Amir; Horie, Yu; Ball, Alexander J.

    2015-05-07

    Flat optical devices thinner than a wavelength promise to replace conventional free-space components for wavefront and polarization control. Transmissive flat lenses are particularly interesting for applications in imaging and on-chip optoelectronic integration. Several designs based on plasmonic metasurfaces, high-contrast transmitarrays and gratings have been recently implemented but have not provided a performance comparable to conventional curved lenses. Here we report polarization-insensitive, micron-thick, high-contrast transmitarray micro-lenses with focal spots as small as 0.57 λ. The measured focusing efficiency is up to 82%. A rigorous method for ultrathin lens design, and the trade-off between high efficiency and small spot size (or large numerical aperture), are discussed. The micro-lenses, composed of silicon nano-posts on glass, are fabricated in one lithographic step that could be performed with high-throughput photo or nanoimprint lithography, thus enabling widespread adoption.

  14. Design principles for shift current photovoltaics

    DOE PAGES

    Cook, Ashley M.; Fregoso, Benjamin M.; de Juan, Fernando; ...

    2017-01-25

    While the basic principles of conventional solar cells are well understood, little attention has gone towards maximizing the efficiency of photovoltaic devices based on shift currents. Furthermore, by analysing effective models, here we outline simple design principles for the optimization of shift currents for frequencies near the band gap. This method allows us to express the band edge shift current in terms of a few model parameters and to show it depends explicitly on wavefunctions in addition to standard band structure. We use our approach to identify two classes of shift current photovoltaics, ferroelectric polymer films and single-layer orthorhombic monochalcogenides such as GeS, which display the largest band edge responsivities reported so far. Moreover, exploring the parameter space of the tight-binding models that describe them we find photoresponsivities that can exceed 100 mA W⁻¹. Our results illustrate the great potential of shift current photovoltaics to compete with conventional solar cells.

  15. Three-dimensional modeling of light rays on the surface of a slanted lenticular array for autostereoscopic displays.

    PubMed

    Jung, Sung-Min; Kang, In-Byeong

    2013-08-10

    In this paper, we developed an optical model describing the behavior of light at the surface of a slanted lenticular array for autostereoscopic displays in three dimensions and simulated the optical characteristics of autostereoscopic displays using the Monte Carlo method under actual design conditions. The behavior of light is analyzed by light rays for selected inclination and azimuthal angles; numerical aberrations and conditions of total internal reflection for the lenticular array were found. The intensity and the three-dimensional crosstalk distributions calculated from our model coincide very well with those from conventional design software, and our model shows highly enhanced calculation speed that is 67 times faster than that of the conventional software. From the results, we think that the optical model is very useful for predicting the optical characteristics of autostereoscopic displays with enhanced calculation speed.
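
    The geometric kernel of such a ray model is Snell's law in vector form with a total-internal-reflection check, which the sketch below implements. The refractive indices and the incident ray are illustrative; the actual model additionally handles the slanted lenticular geometry and Monte Carlo sampling.

```python
# Hedged sketch: refract (or totally internally reflect) a ray at a surface.
import numpy as np

def refract(d, n_hat, n1, n2):
    """Vector Snell's law; d is the unit ray, n_hat the unit normal into medium 1."""
    d = d / np.linalg.norm(d)
    n_hat = n_hat / np.linalg.norm(n_hat)
    cos_i = -np.dot(d, n_hat)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:                          # total internal reflection
        return d + 2.0 * cos_i * n_hat   # mirror reflection instead
    return r * d + (r * cos_i - np.sqrt(k)) * n_hat

ray = np.array([0.0, np.sin(0.3), -np.cos(0.3)])   # incident direction
normal = np.array([0.0, 0.0, 1.0])                 # lens surface normal
print(refract(ray, normal, n1=1.5, n2=1.0))        # exiting the lens material
```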

  16. Design principles for shift current photovoltaics

    PubMed Central

    Cook, Ashley M.; Fregoso, Benjamin M.; de Juan, Fernando; Coh, Sinisa; Moore, Joel E.

    2017-01-01

    While the basic principles of conventional solar cells are well understood, little attention has gone towards maximizing the efficiency of photovoltaic devices based on shift currents. By analysing effective models, here we outline simple design principles for the optimization of shift currents for frequencies near the band gap. Our method allows us to express the band edge shift current in terms of a few model parameters and to show it depends explicitly on wavefunctions in addition to standard band structure. We use our approach to identify two classes of shift current photovoltaics, ferroelectric polymer films and single-layer orthorhombic monochalcogenides such as GeS, which display the largest band edge responsivities reported so far. Moreover, exploring the parameter space of the tight-binding models that describe them we find photoresponsivities that can exceed 100 mA W−1. Our results illustrate the great potential of shift current photovoltaics to compete with conventional solar cells. PMID:28120823

  17. Design principles for shift current photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Ashley M.; Fregoso, Benjamin M.; de Juan, Fernando

    While the basic principles of conventional solar cells are well understood, little attention has gone towards maximizing the efficiency of photovoltaic devices based on shift currents. Furthermore, by analysing effective models, here we outline simple design principles for the optimization of shift currents for frequencies near the band gap. This method allows us to express the band edge shift current in terms of a few model parameters and to show it depends explicitly on wavefunctions in addition to standard band structure. We use our approach to identify two classes of shift current photovoltaics, ferroelectric polymer films and single-layer orthorhombic monochalcogenides such as GeS, which display the largest band edge responsivities reported so far. Moreover, exploring the parameter space of the tight-binding models that describe them we find photoresponsivities that can exceed 100 mA W⁻¹. Our results illustrate the great potential of shift current photovoltaics to compete with conventional solar cells.

  18. Experimental aeroelastic control using adaptive wing model concepts

    NASA Astrophysics Data System (ADS)

    Costa, Antonio P.; Moniz, Paulo A.; Suleman, Afzal

    2001-06-01

    The focus of this study is to evaluate the aeroelastic performance and control of adaptive wings. Ailerons and flaps have been designed and implemented into 3D wings for comparison with adaptive structures and active aerodynamic surface control methods. The adaptive structures concept, the experimental setup and the control design are presented. The wind-tunnel tests of the wing models are presented for the open- and closed-loop systems. The wind tunnel testing has allowed for quantifying the effectiveness of the piezoelectric vibration control of the wings, and also provided performance data for comparison with conventional aerodynamic control surfaces. The results indicate that a wing utilizing skins as active structural elements with embedded piezoelectric actuators can be effectively used to improve the aeroelastic response of aeronautical components. It was also observed that the control authority of adaptive wings is much greater than wings using conventional aerodynamic control surfaces.

  19. Spacecraft command verification: The AI solution

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Smith, Brian K.

    1990-01-01

    Recently, a knowledge-based approach was used to develop a system called the Command Constraint Checker (CCC) for TRW. CCC was created to automate the process of verifying spacecraft command sequences. To check command files by hand for timing and sequencing errors is a time-consuming and error-prone task. Conventional software solutions were rejected when it was estimated that it would require 36 man-months to build an automated tool to check constraints by conventional methods. Using rule-based representation to model the various timing and sequencing constraints of the spacecraft, CCC was developed and tested in only three months. By applying artificial intelligence techniques, CCC designers were able to demonstrate the viability of AI as a tool to transform difficult problems into easily managed tasks. The design considerations used in developing CCC are discussed and the potential impact of this system on future satellite programs is examined.
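
    The flavor of such a rule-based checker can be conveyed with a toy example: constraints are data, and the checker simply evaluates each rule against the command sequence. The commands, rules, and time units below are hypothetical, not TRW's actual constraint set.

```python
# Hedged sketch of declarative timing constraints over a command sequence.
from dataclasses import dataclass

@dataclass
class Command:
    name: str
    t: float          # seconds from sequence start

# each rule: (earlier command, later command, minimum separation in seconds)
RULES = [("HEATER_ON", "THRUSTER_FIRE", 30.0),
         ("ANTENNA_DEPLOY", "TRANSMIT", 5.0)]

def check(sequence):
    times = {c.name: c.t for c in sequence}
    return [(a, b, dt) for a, b, dt in RULES
            if a in times and b in times and times[b] - times[a] < dt]

seq = [Command("HEATER_ON", 0.0), Command("THRUSTER_FIRE", 12.0)]
print(check(seq))     # -> [('HEATER_ON', 'THRUSTER_FIRE', 30.0)]
```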

  20. TUNNEL LINING DESIGN METHOD BY FRAME STRUCTURE ANALYSIS USING GROUND REACTION CURVE

    NASA Astrophysics Data System (ADS)

    Sugimoto, Mitsutaka; Sramoon, Aphichat; Okazaki, Mari

    Both the NATM and the shield tunnelling method can be applied to the diluvial and Neogene deposits on which Japanese megacities are located. Since the lining design methods for the two tunnelling methods differ considerably, a unified concept for tunnel lining design is desirable. In this research, therefore, a frame structure analysis model for tunnel lining design using the ground reaction curve was developed. The model can take into account the earth pressure due to displacement of the excavated surface toward the active side, including the effect of ground self-stabilization, as well as the excavated surface displacement before lining installation. Based on the developed model, a parameter study was carried out with the coefficient of subgrade reaction and the grouting rate as parameters, and the earth pressure acting on the lining measured at a site was compared with that calculated by the developed model and by the conventional model. As a result, it was confirmed that the developed model can represent the earth pressure acting on the lining, the lining displacement, and the lining sectional forces for ground conditions ranging from soft to stiff.
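
    A toy version of the ground reaction curve idea is sketched below: the earth pressure at a lining node varies with excavated-surface displacement, dropping toward a floor on the active side (the self-stabilization effect) and rising toward a cap on the passive side. The curve shape and all constants are illustrative assumptions, not the paper's calibrated model.

```python
# Hedged sketch: bilinear ground reaction curve with active/passive limits.
import numpy as np

def earth_pressure(delta, p0=200.0, k=50.0, p_active=60.0, p_passive=500.0):
    """Earth pressure (kPa) vs. radial displacement delta (mm, + toward ground)."""
    return np.clip(p0 + k * delta, p_active, p_passive)

for d in (-5.0, -2.0, 0.0, 2.0):
    print(f"delta = {d:+.1f} mm -> p = {earth_pressure(d):.0f} kPa")
```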

  1. Gram staining apparatus for space station applications

    NASA Technical Reports Server (NTRS)

    Molina, T. C.; Brown, H. D.; Irbe, R. M.; Pierson, D. L.

    1990-01-01

    A self-contained, portable Gram staining apparatus (GSA) has been developed for use in the microgravity environment on board the Space Station Freedom. Accuracy and reproducibility of this apparatus compared with the conventional Gram staining method were evaluated by using gram-negative and gram-positive controls and different species of bacteria grown in pure cultures. A subsequent study was designed to assess the performance of the GSA with actual specimens. A set of 60 human and environmental specimens was evaluated with the GSA and the conventional Gram staining procedure. Data obtained from these studies indicated that the GSA will provide the Gram staining capability needed for the microgravity environment of space.

  2. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments

    PubMed Central

    2014-01-01

    Background Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. Results We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Conclusions Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality. PMID:24980787
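
    The nonparanormal step itself is compact: each variable is mapped through its ranks to Gaussian scores (a Gaussian copula transform), after which Gaussian conditional-independence tests such as those in the PC-algorithm can be applied. The sketch below follows the common truncated-empirical-CDF recipe; the truncation constant is standard practice but an assumption here, and this is not the authors' code.

```python
# Hedged sketch of the nonparanormal (Gaussian copula) transform.
import numpy as np
from scipy.stats import norm, rankdata

def npn_transform(X):
    """Columnwise ranks -> truncated Gaussian scores; X is (n samples, p vars)."""
    n = X.shape[0]
    u = rankdata(X, axis=0) / (n + 1.0)                 # empirical CDF in (0, 1)
    delta = 1.0 / (4.0 * n ** 0.25 * np.sqrt(np.pi * np.log(n)))
    z = norm.ppf(np.clip(u, delta, 1.0 - delta))        # Winsorized normal scores
    return z / z.std(axis=0)                            # unit-variance columns

X = np.random.default_rng(1).lognormal(size=(200, 5))   # strongly non-Gaussian
print(np.round(npn_transform(X).std(axis=0), 3))        # ~1.0 after transform
```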

  3. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments.

    PubMed

    Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi

    2014-06-30

    Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.

  4. Research methods to change clinical practice for patients with rare cancers.

    PubMed

    Billingham, Lucinda; Malottki, Kinga; Steven, Neil

    2016-02-01

    Rare cancers are a growing group as a result of reclassification of common cancers by molecular markers. There is therefore an increasing need to identify methods to assess interventions that are sufficiently robust to potentially affect clinical practice in this setting. Methods advocated for clinical trials in rare diseases are not necessarily applicable in rare cancers. This Series paper describes research methods that are relevant for rare cancers in relation to the range of incidence levels. Strategies that maximise recruitment, minimise sample size, or maximise the usefulness of the evidence could enable the application of conventional clinical trial design to rare cancer populations. Alternative designs that address specific challenges for rare cancers with the aim of potentially changing clinical practice include Bayesian designs, uncontrolled n-of-1 trials, and umbrella and basket trials. Pragmatic solutions must be sought to enable some level of evidence-based health care for patients with rare cancers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    NASA Astrophysics Data System (ADS)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms possess noteworthy shortcomings, such as trapping of solutions at local extrema, a limited number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computational tool for the optimal design of MRBs, this paper proposes an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA). The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of the MRBs are treated as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
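
    For concreteness, a compact version of the classic DE/rand/1/bin loop is sketched below with a stand-in objective; the paper couples this kind of loop to finite element evaluations of braking torque. The population size, F, CR, and the objective are illustrative choices, not the study's settings.

```python
# Hedged sketch of differential evolution (DE/rand/1/bin) over box bounds.
import numpy as np

def de(f, bounds, pop=20, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, len(bounds)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # mutation
            trial = np.where(rng.random(len(bounds)) < CR, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                               # greedy selection
                X[i], fit[i] = trial, f_trial
    return X[np.argmin(fit)], fit.min()

# stand-in objective over two geometric variables (e.g., two radii in metres)
obj = lambda x: -(x[0] ** 2 * x[1]) + 0.5 * (x[0] + x[1])
print(de(obj, bounds=[(0.02, 0.10), (0.005, 0.03)]))
```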

  6. Optimization of monopiles for offshore wind turbines.

    PubMed

    Kallehave, Dan; Byrne, Byron W; LeBlanc Thilsted, Christian; Mikkelsen, Kristian Kousgaard

    2015-02-28

    The offshore wind industry currently relies on subsidy schemes to be competitive with fossil-fuel-based energy sources. For the wind industry to survive, it is vital that costs are significantly reduced for future projects. This can be partly achieved by introducing new technologies and partly through optimization of existing technologies and design methods. One of the areas where costs can be reduced is in the support structure, where better designs, cheaper fabrication and quicker installation might all be possible. The prevailing support structure design is the monopile structure, where the simple design is well suited to mass-fabrication, and the installation approach, based on conventional impact driving, is relatively low-risk and robust for most soil conditions. The range of application of the monopile for future wind farms can be extended by using more accurate engineering design methods, specifically tailored to offshore wind industry design. This paper describes how state-of-the-art optimization approaches are applied to the design of current wind farms and monopile support structures and identifies the main drivers where more accurate engineering methods could impact on a next generation of highly optimized monopiles. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. Improved conventional and microwave-assisted silylation protocols for simultaneous gas chromatographic determination of tocopherols and sterols: Method development and multi-response optimization.

    PubMed

    Poojary, Mahesha M; Passamonti, Paolo

    2016-12-09

    This paper reports on improved conventional thermal silylation (CTS) and microwave-assisted silylation (MAS) methods for the simultaneous determination of tocopherols and sterols by gas chromatography. Reaction parameters in each of the methods developed were systematically optimized using a full factorial design followed by a central composite design. Initially, experimental conditions for CTS were optimized using a block heater. A rapid MAS method was then developed and optimized. To understand the microwave heating mechanisms, MAS was optimized under two distinct modes of microwave heating, temperature-controlled MAS and power-controlled MAS, using dedicated instruments in which the reaction temperature and microwave power level were controlled and monitored online. The developed methods were compared with routine overnight derivatization. Overall, while both CTS and MAS were found to be efficient derivatization techniques, MAS significantly reduced the reaction time. The optimal derivatization temperature and time for CTS were found to be 55 °C and 54 min, while they were 87 °C and 1.2 min for temperature-controlled MAS. Further, a microwave power of 300 W and a derivatization time of 0.5 min were found to be optimal for power-controlled MAS. The use of an appropriate derivatization solvent, such as pyridine, was found to be critical for successful determination. Catalysts, such as potassium acetate and 4-dimethylaminopyridine, slightly enhanced the efficiency. The developed methods showed excellent analytical performance in terms of linearity, accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.
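
    The screening stage mentioned above can be illustrated by generating the design points themselves: a two-level full factorial over the silylation factors, to which a central composite design later adds center and axial points. Factor names and levels below are invented for illustration, not the paper's settings.

```python
# Hedged sketch: enumerate a 2-level full factorial over three factors.
from itertools import product

factors = {"temp_C": (40, 80), "time_min": (10, 60), "catalyst_mg": (0, 5)}
runs = list(product(*factors.values()))
print(len(runs), "factorial runs; first run:", dict(zip(factors, runs[0])))

# a central composite design would add this center point plus axial points
center = tuple((lo + hi) / 2 for lo, hi in factors.values())
print("center point:", dict(zip(factors, center)))
```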

  8. Design of a practical model-observer-based image quality assessment method for CT imaging systems

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana

    2014-03-01

    The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) were estimated by a shuffle method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
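
    The CHO computation at the heart of this pipeline is short when written out: channelize the images, pool the class covariances, solve for the Hotelling template, and read off detectability. The random channels and synthetic "signal" below are placeholders rather than CT data, and the LOOL covariance estimator itself is not reproduced here.

```python
# Hedged sketch of a channelized Hotelling observer on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ch, n_img = 256, 8, 100
U = rng.standard_normal((n_pix, n_ch))              # stand-in channel matrix

absent = rng.standard_normal((n_img, n_pix))        # signal-absent images
present = absent + 0.3                              # weak uniform "signal"

va, vp = absent @ U, present @ U                    # channelized data
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))             # pooled channel covariance
dmean = vp.mean(axis=0) - va.mean(axis=0)
w = np.linalg.solve(S, dmean)                       # Hotelling template

print("observer SNR:", np.sqrt(dmean @ w))          # detectability index
```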

  9. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
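
    Stripped of the aerodynamics, each design cycle is a linearized inverse problem: the sensitivity matrix maps geometry perturbations to potential changes, so the geometry update is a least-squares solve and the new solution is a linear extrapolation. The random operators below are stand-ins for an actual panel-method code.

```python
# Hedged sketch of one linearized inverse-design iteration.
import numpy as np

rng = np.random.default_rng(2)
n_cp, n_g = 30, 10                       # control points, geometry parameters
J = rng.standard_normal((n_cp, n_g))     # d(potential)/d(geometry) sensitivities
phi0 = rng.standard_normal(n_cp)         # baseline potential distribution
phi_target = phi0 + J @ (0.05 * rng.standard_normal(n_g))   # reachable target

dg = np.linalg.lstsq(J, phi_target - phi0, rcond=None)[0]   # geometry update
phi_new = phi0 + J @ dg                  # linear extrapolation of the solution
print("residual after one cycle:", np.linalg.norm(phi_new - phi_target))
```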

  10. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have generally assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. It is impossible to solve the problem exactly because of its complexity. Therefore, an alternating minimization method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
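
    One common way to realize this ETF-motivated alternating minimization (in the spirit of Elad's shrinkage method, not necessarily the paper's exact algorithm) is sketched below: form the Gram matrix of the effective dictionary, shrink off-diagonal entries that exceed the Welch bound, project back to rank m, and recover the projection matrix. The sizes and shrink factor are illustrative.

```python
# Hedged sketch: reduce mutual coherence of Phi*Psi by Gram-matrix shrinkage.
import numpy as np

rng = np.random.default_rng(3)
m, n = 16, 64
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]    # sparsifying basis
Phi = rng.standard_normal((m, n))                     # initial random projection

mu_welch = np.sqrt((n - m) / (m * (n - 1)))           # ETF (Welch) bound
for _ in range(50):
    D = Phi @ Psi
    D /= np.linalg.norm(D, axis=0)                    # unit-norm columns
    G = D.T @ D
    mask = np.abs(G) > mu_welch
    np.fill_diagonal(mask, False)
    G[mask] *= 0.9                                    # shrink large coherences
    vals, vecs = np.linalg.eigh(G)                    # project back to rank m
    D_new = (vecs[:, -m:] * np.sqrt(np.clip(vals[-m:], 0.0, None))).T
    Phi = D_new @ np.linalg.pinv(Psi)

D = Phi @ Psi
D /= np.linalg.norm(D, axis=0)
G = np.abs(D.T @ D) - np.eye(n)
print("final mutual coherence:", G.max(), "vs Welch bound", mu_welch)
```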

  11. Design of a control configured tanker aircraft

    NASA Technical Reports Server (NTRS)

    Walker, S. A.

    1976-01-01

    The benefits that accrue from using control configured vehicle (CCV) concepts were examined along with the techniques for applying these concepts to an advanced tanker aircraft design. Reduced static stability (RSS) and flutter mode control (FMC) were the two primary CCV concepts used in the design. The CCV tanker was designed to the same mission requirements specified for a conventional tanker design. A seven degree of freedom mathematical model of the flexible aircraft was derived and used to synthesize a lateral stability augmentation system (SAS), a longitudinal control augmentation system (CAS), and a FMC system. Fatigue life and cost analyses followed the control system synthesis, after which a comparative evaluation of the CCV and conventional tankers was made. This comparison indicated that the CCV weight and cost were lower but that, for this design iteration, the CCV fatigue life was shorter. Also, the CCV crew station acceleration was lower, but the acceleration at the boom operator station was higher relative to the corresponding conventional tanker. Comparison of the design processes used in the CCV and conventional design studies revealed that they were basically the same.

  12. Deep ECGNet: An Optimal Deep Learning Framework for Monitoring Mental Stress Using Ultra Short-Term ECG Signals.

    PubMed

    Hwang, Bosun; You, Jiwoo; Vaessen, Thomas; Myin-Germeys, Inez; Park, Cheolsoo; Zhang, Byoung-Tak

    2018-02-08

    Stress recognition using electrocardiogram (ECG) signals ordinarily requires an intractable long-term heart rate variability (HRV) parameter extraction process. This study proposes a novel deep learning framework, the Deep ECGNet, that recognizes stressful states from ultra short-term raw ECG signals without any feature engineering. The Deep ECGNet was developed through various experiments and analysis of ECG waveforms. We propose an optimal recurrent and convolutional neural network architecture, together with the optimal convolution filter length (related to the P, Q, R, S, and T wave durations of the ECG) and pooling length (related to the heart beat period), based on optimization experiments and analysis of the waveform characteristics of ECG signals. Experiments were also conducted with conventional methods using HRV parameters and frequency features as a benchmark test. The data used in this study were obtained from Kwangwoon University in Korea (13 subjects, Case 1) and KU Leuven University in Belgium (9 subjects, Case 2). Experiments were designed according to various experimental protocols to elicit stressful conditions. The proposed framework outperformed the conventional approaches with the highest accuracies of 87.39% for Case 1 and 73.96% for Case 2, that is, improvements of 16.22% and 10.98% over the conventional HRV method. We propose an optimal deep learning architecture and its parameters for stress recognition, together with theoretical considerations on how to design the deep learning structure based on the periodic patterns of raw ECG data. The experimental results in this study demonstrate that the proposed deep learning model, the Deep ECGNet, is an optimal structure for recognizing stress conditions using ultra short-term ECG data.
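
    A schematic re-creation of the described architecture is sketched below in PyTorch: one convolution whose kernel spans roughly a QRS-scale interval, pooling over roughly one beat period, a recurrent layer, and a binary stress output. The sampling rate, layer widths, and lengths are assumptions for illustration, not the paper's tuned values.

```python
# Hedged sketch of a conv + recurrent stress classifier for raw ECG windows.
import torch
import torch.nn as nn

FS = 256                      # assumed ECG sampling rate (Hz)

class ECGNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 32, kernel_size=FS // 4)   # ~0.25 s: QRS scale
        self.pool = nn.MaxPool1d(kernel_size=FS)            # ~1 s: beat period
        self.gru = nn.GRU(32, 64, batch_first=True)
        self.fc = nn.Linear(64, 2)                          # stress / no stress

    def forward(self, x):                                   # x: (batch, 1, samples)
        h = self.pool(torch.relu(self.conv(x)))
        h, _ = self.gru(h.transpose(1, 2))                  # (batch, time, 32)
        return self.fc(h[:, -1])                            # last time step

net = ECGNetSketch()
print(net(torch.randn(4, 1, 10 * FS)).shape)   # 10 s windows -> (4, 2)
```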

  13. Correction of sampling bias in a cross-sectional study of post-surgical complications.

    PubMed

    Fluss, Ronen; Mandel, Micha; Freedman, Laurence S; Weiss, Inbal Salz; Zohar, Anat Ekka; Haklai, Ziona; Gordon, Ethel-Sherry; Simchen, Elisheva

    2013-06-30

    Cross-sectional designs are often used to monitor the proportion of infections and other post-surgical complications acquired in hospitals. However, conventional methods for estimating incidence proportions when applied to cross-sectional data may provide estimators that are highly biased, as cross-sectional designs tend to include a high proportion of patients with prolonged hospitalization. One common solution is to use sampling weights in the analysis, which adjust for the sampling bias inherent in a cross-sectional design. The current paper describes in detail a method to build weights for a national survey of post-surgical complications conducted in Israel. We use the weights to estimate the probability of surgical site infections following colon resection, and validate the results of the weighted analysis by comparing them with those obtained from a parallel study with a historically prospective design. Copyright © 2012 John Wiley & Sons, Ltd.
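
    The mechanics of such weights can be seen in a toy simulation: when a one-day census samples patients with probability proportional to length of stay, weighting each sampled patient by the inverse of that length recovers the incidence proportion. The data-generating model below is invented; only the weighting logic is the point.

```python
# Hedged sketch: correcting length-biased sampling with inverse-LOS weights.
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
los = rng.exponential(5.0, n).clip(1)            # length of stay (days)
infected = rng.random(n) < 0.10                  # baseline incidence 10%
infected |= (los > 15) & (rng.random(n) < 0.3)   # infections prolong stay

p_sampled = los / los.sum()                      # census favors long stays
idx = rng.choice(n, 2_000, p=p_sampled)          # cross-sectional sample

naive = infected[idx].mean()                     # biased upward
w = 1.0 / los[idx]                               # sampling weights
weighted = np.sum(w * infected[idx]) / w.sum()
print(f"naive {naive:.3f}  weighted {weighted:.3f}  truth {infected.mean():.3f}")
```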

  14. A finite element-boundary integral method for conformal antenna arrays on a circular cylinder

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.; Volakis, John L.; Woo, Alex C.; Yu, C. Long

    1992-01-01

    Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design. This is due to the lack of rigorous mathematical models for conformal antenna arrays, and as a result the design of conformal arrays is primarily based on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. Herewith we extend this formulation to conformal arrays on large metallic cylinders and develop the mathematical formulation. In particular, we discuss the finite element equations, the shape elements, and the boundary integral evaluation, and we show how this formulation can be applied with minimal computation and memory requirements. The implementation will be discussed in a later report.

  15. A finite element-boundary integral method for conformal antenna arrays on a circular cylinder

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.; Volakis, John L.

    1992-01-01

    Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. In the past, antenna designers have had to resort to expensive measurements in order to develop a conformal array design. This was due to the lack of rigorous mathematical models for conformal antenna arrays. As a result, the design of conformal arrays was primarily based on planar antenna design concepts. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We are extending this formulation to conformal arrays on large metallic cylinders. In doing so, we will develop a mathematical formulation. In particular, we discuss the finite element equations, the shape elements, and the boundary integral evaluation. It is shown how this formulation can be applied with minimal computation and memory requirements.

  16. An intelligent switch with back-propagation neural network based hybrid power system

    NASA Astrophysics Data System (ADS)

    Perdana, R. H. Y.; Fibriana, F.

    2018-03-01

    The consumption of conventional energy such as fossil fuels plays a critical role in global warming. Carbon dioxide, methane, nitrous oxide, and other emissions drive the greenhouse effect and change climate patterns. In fact, 77% of electrical energy is generated from fossil fuel combustion. It is therefore necessary to use renewable energy sources to reduce conventional energy consumption for electricity generation. This paper presents an intelligent switch that combines two energy resources: solar panels as the renewable source and conventional energy from the State Electricity Enterprise (PLN). A back-propagation neural network was designed to control the flow of energy, which is distributed dynamically based on renewable energy generation. With continuous monitoring of each load and source, the dynamic pattern of the intelligent switch performed better than the conventional switching method. The first experiment, with a 60 W solar panel, showed a trial standard deviation of 0.7 and an experimental standard deviation of 0.28. The second, with a 900 W solar panel, obtained a trial standard deviation of 0.05 and an experimental standard deviation of 0.18. Moreover, the accuracy reached 83% using this method. By combining the back-propagation neural network with observation of the energy usage of each load through a wireless sensor network, the loads can be evenly distributed, reducing conventional energy usage.

  17. Direct PCR - A rapid method for multiplexed detection of different serotypes of Salmonella in enriched pork meat samples.

    PubMed

    Chin, Wai Hoe; Sun, Yi; Høgberg, Jonas; Quyen, Than Linh; Engelsmann, Pia; Wolff, Anders; Bang, Dang Duong

    2017-04-01

    Salmonellosis, an infectious disease caused by Salmonella spp., is one of the most common foodborne diseases. Isolation and identification of Salmonella by conventional bacterial culture methods is time consuming. In response to the demand for rapid online or at-site detection of pathogens, in this study we developed a multiplex Direct PCR method for rapid detection of different Salmonella serotypes directly from pork meat samples without any DNA purification steps. An inhibitor-resistant Phusion Pfu DNA polymerase was used to overcome PCR inhibition. Four pairs of primers, including a pair of newly designed primers targeting Salmonella spp. at the subtype level, were incorporated in the multiplex Direct PCR. To maximize the efficiency of the Direct PCR, the ratio between sample and dilution buffer was optimized. The sensitivity and specificity of the multiplex Direct PCR were tested using naturally contaminated pork meat samples for detection and subtyping of Salmonella spp. Conventional bacterial culture methods were used as the reference to evaluate the performance of the multiplex Direct PCR. A relative accuracy, sensitivity and specificity of 98.8%, 97.6% and 100%, respectively, were achieved by the method. Application of the multiplex Direct PCR to detect Salmonella in pork meat at slaughter reduces the time of detection from 5-6 days by conventional bacterial culture and serotyping methods to 14 h (including 12 h of enrichment time). Furthermore, the method offers the possibility of miniaturization and integration into a point-of-need lab-on-a-chip system for rapid online pathogen detection. Copyright © 2016 Elsevier Ltd. All rights reserved.
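
    The reported figures follow the usual confusion-matrix definitions against the culture reference, as the toy computation below makes explicit; the counts are invented to reproduce percentages of the same form, not the study's actual 2x2 table.

```python
# Hedged check of the metric definitions with illustrative counts.
tp, fp, tn, fn = 81, 0, 83, 2   # PCR vs. culture reference (invented counts)

sensitivity = tp / (tp + fn)                   # fraction of positives detected
specificity = tn / (tn + fp)                   # fraction of negatives cleared
accuracy = (tp + tn) / (tp + fp + tn + fn)     # overall agreement
print(f"accuracy {accuracy:.1%}, sensitivity {sensitivity:.1%}, "
      f"specificity {specificity:.1%}")
```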

  18. Evaluation of a Method Using Three Genomic Guided Escherichia coli Markers for Phylogenetic Typing of E. coli Isolates of Various Genetic Backgrounds

    PubMed Central

    Hamamoto, Kouta; Ueda, Shuhei; Yamamoto, Yoshimasa

    2015-01-01

    Genotyping and characterization of bacterial isolates are essential steps in the identification and control of antibiotic-resistant bacterial infections. Recently, one novel genotyping method using three genomic guided Escherichia coli markers (GIG-EM), dinG, tonB, and dipeptide permease (DPP), was reported. Because GIG-EM has not been fully evaluated using clinical isolates, we assessed this typing method with 72 E. coli collection of reference (ECOR) environmental E. coli reference strains and 63 E. coli isolates of various genetic backgrounds. In this study, we designated 768 bp of dinG, 745 bp of tonB, and 655 bp of DPP target sequences for use in the typing method. Concatenations of the processed marker sequences were used to draw GIG-EM phylogenetic trees. E. coli isolates with identical sequence types as identified by the conventional multilocus sequence typing (MLST) method were localized to the same branch of the GIG-EM phylogenetic tree. Sixteen clinical E. coli isolates were utilized as test isolates without prior characterization by conventional MLST and phylogenetic grouping before GIG-EM typing. Of these, 14 clinical isolates were assigned to a branch including only isolates of a pandemic clone, E. coli B2-ST131-O25b, and these results were confirmed by conventional typing methods. Our results suggested that the GIG-EM typing method and its application to phylogenetic trees might be useful tools for the molecular characterization and determination of the genetic relationships among E. coli isolates. PMID:25809972
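
    Computationally, the typing pipeline reduces to concatenating the three marker sequences per isolate, computing pairwise distances, and clustering into a tree, as the toy sketch below shows with stand-in sequences. Real use would take the actual dinG, tonB, and DPP loci and a proper phylogenetic method rather than simple average-linkage clustering.

```python
# Hedged sketch: concatenated-marker distances -> hierarchical tree.
from scipy.cluster.hierarchy import linkage, dendrogram

isolates = {"A": ("ACGT", "TTGA", "CCAT"),       # toy dinG/tonB/DPP stand-ins
            "B": ("ACGA", "TTGA", "CCAT"),
            "C": ("GCGT", "TAGA", "CGAT")}
concat = {k: "".join(v) for k, v in isolates.items()}
names = list(concat)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

n = len(names)
cond = [hamming(concat[names[i]], concat[names[j]])   # condensed distance list
        for i in range(n) for j in range(i + 1, n)]
tree = dendrogram(linkage(cond, method="average"), no_plot=True, labels=names)
print("leaf order:", tree["ivl"])                     # A and B cluster together
```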

  19. Evaluation of a Method Using Three Genomic Guided Escherichia coli Markers for Phylogenetic Typing of E. coli Isolates of Various Genetic Backgrounds.

    PubMed

    Hamamoto, Kouta; Ueda, Shuhei; Yamamoto, Yoshimasa; Hirai, Itaru

    2015-06-01

    Genotyping and characterization of bacterial isolates are essential steps in the identification and control of antibiotic-resistant bacterial infections. Recently, one novel genotyping method using three genomic guided Escherichia coli markers (GIG-EM), dinG, tonB, and dipeptide permease (DPP), was reported. Because GIG-EM has not been fully evaluated using clinical isolates, we assessed this typing method with 72 E. coli collection of reference (ECOR) environmental E. coli reference strains and 63 E. coli isolates of various genetic backgrounds. In this study, we designated 768 bp of dinG, 745 bp of tonB, and 655 bp of DPP target sequences for use in the typing method. Concatenations of the processed marker sequences were used to draw GIG-EM phylogenetic trees. E. coli isolates with identical sequence types as identified by the conventional multilocus sequence typing (MLST) method were localized to the same branch of the GIG-EM phylogenetic tree. Sixteen clinical E. coli isolates were utilized as test isolates without prior characterization by conventional MLST and phylogenetic grouping before GIG-EM typing. Of these, 14 clinical isolates were assigned to a branch including only isolates of a pandemic clone, E. coli B2-ST131-O25b, and these results were confirmed by conventional typing methods. Our results suggested that the GIG-EM typing method and its application to phylogenetic trees might be useful tools for the molecular characterization and determination of the genetic relationships among E. coli isolates. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  20. A hybrid microfluidic-vacuum device for direct interfacing with conventional cell culture methods

    PubMed Central

    Chung, Bong Geun; Park, Jeong Won; Hu, Jia Sheng; Huang, Carlos; Monuki, Edwin S; Jeon, Noo Li

    2007-01-01

    Background Microfluidics is an enabling technology with a number of advantages over traditional tissue culture methods when precise control of cellular microenvironment is required. However, there are a number of practical and technical limitations that impede wider implementation in routine biomedical research. Specialized equipment and protocols required for fabrication and setting up microfluidic experiments present hurdles for routine use by most biology laboratories. Results We have developed and validated a novel microfluidic device that can directly interface with conventional tissue culture methods to generate and maintain controlled soluble environments in a Petri dish. It incorporates separate sets of fluidic channels and vacuum networks on a single device that allows reversible application of microfluidic gradients onto wet cell culture surfaces. Stable, precise concentration gradients of soluble factors were generated using simple microfluidic channels that were attached to a perfusion system. We successfully demonstrated real-time optical live/dead cell imaging of neural stem cells exposed to a hydrogen peroxide gradient and chemotaxis of metastatic breast cancer cells in a growth factor gradient. Conclusion This paper describes the design and application of a versatile microfluidic device that can directly interface with conventional cell culture methods. This platform provides a simple yet versatile tool for incorporating the advantages of a microfluidic approach to biological assays without changing established tissue culture protocols. PMID:17883868

  1. Computer modeling of a two-junction, monolithic cascade solar cell

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.; Abbott, D.

    1979-01-01

    The theory and design criteria for monolithic, two-junction cascade solar cells are described. The departure from the conventional solar cell analytical method and the reasons for using the integral form of the continuity equations are briefly discussed. The results of design optimization are presented. The energy conversion efficiency that is predicted for the optimized structure is greater than 30% at 300 K, AMO and one sun. The analytical method predicts device performance characteristics as a function of temperature. The range is restricted to 300 to 600 K. While the analysis is capable of determining most of the physical processes occurring in each of the individual layers, only the more significant device performance characteristics are presented.

  2. Design of catheter radio frequency coils using coaxial transmission line resonators for interventional neurovascular MR imaging.

    PubMed

    Zhang, Xiaoliang; Martin, Alastair; Jordan, Caroline; Lillaney, Prasheel; Losey, Aaron; Pang, Yong; Hu, Jeffrey; Wilson, Mark; Cooke, Daniel; Hetts, Steven W

    2017-04-01

    It is technically challenging to design compact yet sensitive miniature catheter radio frequency (RF) coils for endovascular interventional MR imaging. In this work, a new design method for catheter RF coils is proposed based on the coaxial transmission line resonator (TLR) technique. Due to its distributed circuit, the TLR catheter coil does not need any lumped capacitors to support its resonance, which simplifies the practical design and construction and provides a straightforward technique for designing miniature catheter-mounted imaging coils that are appropriate for interventional neurovascular procedures. The outer conductor of the TLR serves as an RF shield, which prevents electromagnetic energy loss, and improves coil Q factors. It also minimizes interaction with surrounding tissues and signal losses along the catheter coil. To investigate the technique, a prototype catheter coil was built using the proposed coaxial TLR technique and evaluated with standard RF testing and measurement methods and MR imaging experiments. Numerical simulation was carried out to assess the RF electromagnetic field behavior of the proposed TLR catheter coil and the conventional lumped-element catheter coil. The proposed TLR catheter coil was successfully tuned to 64 MHz for proton imaging at 1.5 T. B1 fields were numerically calculated, showing improved magnetic field intensity of the TLR catheter coil over the conventional lumped-element catheter coil. MR images were acquired from a dedicated vascular phantom using the TLR catheter coil and also the system body coil. The TLR catheter coil is able to provide a significant signal-to-noise ratio (SNR) increase (a factor of 200 to 300) over its imaging volume relative to the body coil. Catheter imaging RF coil design using the proposed coaxial TLR technique is feasible and advantageous in endovascular interventional MR imaging applications.
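
    The reason no lumped capacitors are needed can be seen from the quarter-wave resonance condition of a shorted transmission line, f = c / (4 L sqrt(eps_r)); the back-of-envelope sketch below evaluates it at 1.5 T. The PTFE dielectric and the implied single straight quarter-wave geometry are illustrative assumptions (practical catheter TLRs are folded or wound to fit).

```python
# Hedged back-of-envelope: quarter-wave coaxial resonator length at 64 MHz.
C = 299_792_458.0            # speed of light in vacuum (m/s)
f = 64e6                     # proton Larmor frequency at 1.5 T (Hz)
eps_r = 2.1                  # assumed PTFE dielectric of the coax

L = C / (4.0 * f * eps_r ** 0.5)
print(f"electrical quarter-wave length: {L:.2f} m")    # ~0.81 m
```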

  3. Modified surface testing method for large convex aspheric surfaces based on diffraction optics.

    PubMed

    Zhang, Haidong; Wang, Xiaokun; Xue, Donglin; Zhang, Xuejun

    2017-12-01

    Large convex aspheric optical elements are widely used in advanced optical systems and present a challenging metrology problem; as apertures grow, conventional testing methods can no longer satisfy the demand. A modified method is proposed in this paper, which uses a relatively small computer-generated hologram together with a practical illumination lens to measure large convex aspherics. Two example systems are designed to demonstrate the applicability, and a sensitivity analysis shows that the accuracy of the configuration can be better than 6 nm with careful alignment and prior calibration of the illumination lens. The design examples and analysis show that this configuration is applicable to the measurement of large convex aspheric surfaces.

  4. Design Space Toolbox V2: Automated Software Enabling a Novel Phenotype-Centric Modeling Strategy for Natural and Synthetic Biological Systems

    PubMed Central

    Lomnitz, Jason G.; Savageau, Michael A.

    2016-01-01

    Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predicts values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. In one example, inspection of the basins of attraction reveals that the circuit can step among three stable states under transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
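
    The phenotype-centric step of enumerating stable steady states can be illustrated on a toy model. The sketch below is not the Design Space Toolbox itself; the circuit equation and all parameters are hypothetical. It counts the attractors of a self-activating gene by integrating from a sweep of initial conditions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy self-activating gene circuit (hypothetical parameters):
#   dx/dt = a + b*x^n / (K^n + x^n) - g*x
a, b, K, g, n = 0.1, 4.0, 1.0, 1.0, 4

def rhs(t, x):
    return a + b * x**n / (K**n + x**n) - g * x

attractors = set()
for x0 in np.linspace(0.0, 6.0, 60):
    sol = solve_ivp(rhs, (0.0, 200.0), [x0], rtol=1e-8)
    attractors.add(round(float(sol.y[0, -1]), 3))

print("stable steady states:", sorted(attractors))  # two for these parameters
```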

  5. Design and Analysis of Bionic Cutting Blades Using Finite Element Method.

    PubMed

    Li, Mo; Yang, Yuwang; Guo, Li; Chen, Donghui; Sun, Hongliang; Tong, Jin

    2015-01-01

    The praying mantis is one of the most efficient predators in the insect world, possessing a pair of powerful tools: two sharp and strong forelegs. Its femur and tibia are both armed with a double row of strong spines along their posterior edges, which firmly grasp the prey when the femur and tibia fold on each other during capture. These spines are so sharp that they can easily and quickly cut into the prey. The geometrical characteristics of the praying mantis's foreleg, especially its tibia, have important reference value for the design of agricultural soil-cutting tools. Learning from the profile and arrangement of these spines, cutting blades with a toothed profile were designed in this work. Two different sizes of tooth structure and arrangement were used on the cutting edge. A conventional smooth-edge blade was used for comparison with the bionic serrate-edge blades. To compare the working efficiency of the conventional and bionic blades, 3D finite element simulations and experimental measurements were performed. Both the simulation and experimental results indicated that the bionic serrate-edge blades showed better cutting efficiency.

  7. Advanced millimeter-wave security portal imaging techniques

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.

    2012-03-01

    Millimeter-wave (mm-wave) imaging is rapidly gaining acceptance as a security tool to augment conventional metal detectors and baggage x-ray systems for passenger screening at airports and other secured facilities. This acceptance indicates that the technology has matured; however, many potential improvements can yet be realized. The authors have developed a number of techniques over the last several years, including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, and high-frequency high-bandwidth techniques. All of these may improve the performance of new systems; however, some of these techniques will increase the cost and complexity of the mm-wave security portal imaging systems. Reducing this cost may require the development of novel array designs. In particular, RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems. High-frequency, high-bandwidth designs are difficult to achieve with conventional mm-wave electronic devices, and RF photonic devices may be a practical alternative. In this paper, the mm-wave imaging techniques developed at PNNL are reviewed and the potential for implementing RF photonic mm-wave array designs is explored.

  8. Short torch design for direct liquid sample introduction using conventional and micro-nebulizers for plasma spectrometry

    DOEpatents

    Montaser, Akbar [Potomac, MD; Westphal, Craig S [Landenberg, PA; Kahen, Kaveh [Montgomery Village, MD; Rutkowski, William F [Arlington, VA

    2008-01-08

    An apparatus and method for providing direct liquid sample introduction using a nebulizer are provided. The apparatus and method include a short torch having an inner tube and an outer tube, and an elongated adapter having a cavity for receiving the nebulizer and positioning a nozzle tip of the nebulizer a predetermined distance from a tip of the outer tube of the short torch. The predetermined distance is preferably about 2-5 mm.

  9. Compaction managed mirror bend achromat

    DOEpatents

    Douglas, David [Yorktown, VA

    2005-10-18

    A method for controlling the momentum compaction in a beam of charged particles. The method includes a compaction-managed mirror bend achromat (CMMBA) that provides a beamline design that retains the large momentum acceptance of a conventional mirror bend achromat. The CMMBA also provides the ability to tailor the system momentum compaction spectrum as desired for specific applications. The CMMBA enables magnetostatic management of the longitudinal phase space in Energy Recovery Linacs (ERLs) thereby alleviating the need for harmonic linearization of the RF waveform.

  10. Complete Dentures Fabricated with CAD/CAM Technology and a Traditional Clinical Recording Method.

    PubMed

    Janeva, Nadica; Kovacevska, Gordana; Janev, Edvard

    2017-10-15

    The introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) technology into complete denture (CD) fabrication ushered in a new era in removable prosthodontics. Commercially available CAD/CAM denture systems are expected to improve upon the disadvantages associated with conventional fabrication. The purpose of this report is to present the workflow involved in fabricating a CD with a traditional clinical recording method and CAD/CAM technology and to summarize the advantages to the dental practitioner and the patient.

  11. Evaluation of the Use of Supercritical Fluids for the Extraction of Explosives and Their Degradation Products from Soil

    DTIC Science & Technology

    1994-04-01

    ...and nontoxic is a major advantage. The accepted analytical method for explosives is SW846 Method... The basic equipment required to conduct SFE is... a theoretical advantage of SFE compared to conventional solvent extraction (18-hour sonic extraction with ACN). [Figure 1: phase diagram of CO2, temperature 31 C. Figure 2: design for a basic SFE apparatus (after Hawthorne 1993).]

  12. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, and the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
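
    For illustration, a minimal sketch of the classical first-order safety index referenced above, assuming independent, normally distributed resistive and applied stresses; the numerical values are invented:

```python
from math import sqrt
from statistics import NormalDist

def safety_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order safety index for independent, normally distributed
    resistive (r) and applied (s) stresses:
        beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
    """
    return (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)

# Illustrative stress statistics (e.g., in ksi), not from the report
beta = safety_index(mu_r=60.0, sigma_r=3.0, mu_s=40.0, sigma_s=4.0)
reliability = NormalDist().cdf(beta)  # P(resistance > applied stress)
print(f"beta = {beta:.2f}, reliability = {reliability:.6f}")  # beta = 4.00
```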

  13. Criteria for assessment of bridge aesthetic and visual quality

    NASA Astrophysics Data System (ADS)

    Rozentale, I.; Paeglitis, A.

    2017-10-01

    Bridge designers should find an ideal balance between structure, economy, buildability, aesthetics, durability, and harmony with the industrial or natural landscape. In recent years, society has adopted documents providing procedures for evaluating the impact of a structure's appearance on the surrounding landscape. The European Landscape Convention defines the landscape as an area perceived by people, whose character is the result of the action and interaction of natural and/or human factors. The Convention indicates methods for clear and objective assessment of the landscape's visual qualities. The aesthetic qualities, appearance, and attraction of bridge structures should satisfy not only the technicians (engineers and architects) but above all the surrounding population, and each of these groups has a different perception of the aesthetic qualities of a structure. Many authors have used different methods and criteria for assessing bridge aesthetics. The aim of this paper is to provide an overview of bridge aesthetic and visual quality assessment methods and criteria.

  14. Off-axis mirror fabrication from spherical surfaces under mechanical stress

    NASA Astrophysics Data System (ADS)

    Izazaga-Pérez, R.; Aguirre-Aguirre, D.; Percino-Zacarías, M. E.; Granados-Agustín, Fermin-Salomon

    2013-09-01

    Preliminary results on the fabrication of off-axis optical surfaces are presented. We propose using the conventional polishing method with the surface under mechanical stress at its edges. Fabrication starts with a spherical surface of ZERODUR® optical glass polished by the conventional method; the surface is then deformed by applying tension and/or compression at its edges using a specially designed mechanical mount. To determine the necessary deformation, the interferogram of the deformed surface is analyzed in real time with a ZYGO® Mark II Fizeau-type interferometer, and the mechanical stress is increased until the interferogram is the inverse of the one associated with the off-axis surface to be fabricated. Polishing is then carried out again until a spherical surface is obtained; the mechanical stress at the edges is removed, and the resulting interferogram is compared with the theoretical one associated with the off-axis surface. The resulting interferograms were analyzed by the phase-shifting method, using a piezoelectric phase shifter and Durango® interferometry software from Diffraction International™.

  15. A study of environmental characterization of conventional and advanced aluminum alloys for selection and design. Phase 2: The breaking load test method

    NASA Technical Reports Server (NTRS)

    Sprowls, D. O.; Bucci, R. J.; Ponchel, B. M.; Brazill, R. L.; Bretz, P. E.

    1984-01-01

    A technique is demonstrated for accelerated stress corrosion testing of high-strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass-fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures the degradation in the test specimen's load-carrying ability due to environmental attack. Analysis of breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies the depth of attack in the stress-corroded specimen by an effective flaw size calculated from the breaking stress and the material strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
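
    The report's fracture mechanics model is not reproduced in the abstract. As a hedged sketch, the standard LEFM surface-crack relation K_Ic = Y*sigma*sqrt(pi*a) can be inverted to estimate an effective flaw depth from the breaking stress; the material values below are assumed, not taken from the study:

```python
import math

def effective_flaw_size(k_ic, sigma_break, y=1.12):
    """Effective surface flaw depth (m) from the standard LEFM relation
    K_Ic = Y * sigma * sqrt(pi * a), solved for a. Y = 1.12 is the usual
    shallow surface-crack geometry factor (an assumption)."""
    return (k_ic / (y * sigma_break)) ** 2 / math.pi

# Illustrative values for a 7075-type aluminum alloy (assumed):
# K_Ic = 25 MPa*sqrt(m), breaking stress = 350 MPa
a = effective_flaw_size(25e6, 350e6)
print(f"effective flaw depth ~ {a * 1000:.2f} mm")  # ~1.30 mm
```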

  16. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. Because sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, fast and accurate computational methods for shape optimization are always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate the rapid shape changes needed in the design stage of a consumer product or machinery, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of "volumetric" and "pixel") is essentially a volume element defined by the uniform lattice and, unlike a mesh element, requires no separate connectivity information. In the presented method, geometry is represented with voxels that easily adapt to shape changes, making it more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures with simple and complex geometries and complex mode shapes. Matching volume velocity was shown to be a key component of an accurate analysis. A sensitivity study showed that at least 6 elements per acoustic wavelength are required, and a complexity study showed a minimal reduction in computational time.
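
    At its core, wave superposition replaces the radiating body by equivalent interior monopole sources whose free-space fields are summed at the field points. A minimal sketch of that superposition step follows (source positions, strengths, and frequency are illustrative; the voxel-based geometry handling described in the thesis is not shown):

```python
import numpy as np

RHO, C = 1.21, 343.0  # air density (kg/m^3), speed of sound (m/s)

def pressure(sources, strengths, field_pts, f):
    """Complex acoustic pressure from superposed monopoles:
    p = sum over sources of j*omega*rho*q * exp(-j*k*R) / (4*pi*R)."""
    omega = 2 * np.pi * f
    k = omega / C
    R = np.linalg.norm(field_pts[:, None, :] - sources[None, :, :], axis=2)
    g = np.exp(-1j * k * R) / (4 * np.pi * R)  # free-space Green's function
    return 1j * omega * RHO * (g @ strengths)

srcs = np.array([[0.0, 0.0, -0.05], [0.0, 0.0, 0.05]])  # two interior sources
q = np.array([1e-5, 1e-5])                              # volume velocities, m^3/s
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])      # field points, m
print(np.abs(pressure(srcs, q, pts, f=500.0)))          # |p| at each field point
```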

  17. Alternatives for randomization in lifestyle intervention studies in cancer patients were not better than conventional randomization.

    PubMed

    Velthuis, Miranda J; May, Anne M; Monninkhof, Evelyn M; van der Wall, Elsken; Peeters, Petra H M

    2012-03-01

    Assessing the effects of lifestyle interventions in cancer patients poses specific challenges. Although randomization is needed for evidence-based knowledge, it is sometimes difficult to apply conventional randomization (i.e., consent preceding randomization and intervention) in daily settings. Randomization before seeking consent was proposed by Zelen, and additional modifications have been proposed since. We discuss four alternatives to conventional randomization: the single and double randomized consent designs, the two-stage randomized consent design, and the design with consent to postponed information. We considered these designs when planning a study to assess the impact of physical activity on cancer-related fatigue and quality of life. We tested the modified Zelen design with consent to postponed information in a pilot. The design was chosen to prevent dropout of participants in the control group due to disappointment about the allocation. The result was a low overall participation rate, most likely because of a perceived lack of information by eligible patients, and a relatively high dropout in the intervention group. We conclude that the alternatives were not better than conventional randomization.

  18. Design and analysis of frequency-selective surface enabled microbolometers

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Qu, Chuang; Almasri, Mahmoud; Kinzel, Edward

    2016-05-01

    Frequency Selective Surfaces (FSSs) are periodic arrays of sub-wavelength antenna elements. They allow the absorptance and reflectance of a surface to be engineered with respect to wavelength, polarization, and angle of incidence. This paper applies the technique to microbolometers for uncooled infrared sensing applications. Both narrowband and broadband near-perfect absorbing surfaces are synthesized and applied to engineer the response of microbolometers. The paper focuses on simple FSS geometries (hexagonal close-packed disk arrays) that can be fabricated using conventional lithographic tools for use at thermal infrared wavelengths (feature sizes > 1 μm). The effects of geometry and material selection are described in detail. In the microbolometer application, the FSS controls the absorption rather than a conventional Fabry-Perot cavity, which permits an improved thermal design. A coupled full-wave electromagnetic/transient thermal model of the entire microbolometer is presented and analyzed using the finite element method. The absence of the cavity also permits more flexibility in the design of the support arms/contacts. This combined modeling permits prediction of the overall device sensitivity, time constant, and specific detectivity.

  19. Modified femoral pressuriser generates a longer lasting high pressure during cement pressurisation

    PubMed Central

    2011-01-01

    Background The strength of the cement-bone interface in hip arthroplasty is strongly related to cement penetration into the bone. A modified femoral pressuriser, designed to fit more closely into the femoral opening, was investigated to generate higher and more constant cement pressure than a commercial (conventional) design. Methods Femoral cementation was performed in 10 Sawbones® models, five using the modified pressuriser and five using a current commercial pressuriser as a control. Pressure during cementation was recorded at the proximal and distal regions of the femoral implant. Peak pressures and pressure-time curves were analysed by Student's t-test and two-way ANOVA. Results The modified pressuriser showed significantly and substantially longer durations at higher cementation pressures and slightly, though not statistically significantly, higher peak pressures compared with the conventional pressuriser. The modified pressuriser also produced more controlled cement leakage. Conclusion The modified pressuriser generates longer durations of high pressure in the femoral model. This design modification may enhance cement penetration into cancellous bone and could improve femoral cementation. PMID:22004662

  20. Analysis of the rectangular resonator with butterfly MMI coupler using SOI

    NASA Astrophysics Data System (ADS)

    Kim, Sun-Ho; Park, Jun-Hee; Kim, Eudum; Jeon, Su-Jin; Kim, Ji-Hoon; Choi, Young-Wan

    2018-02-01

    We propose a rectangular resonator sensor structure with a butterfly MMI coupler on SOI. It consists of the rectangular resonator, a total internal reflection (TIR) mirror, and the butterfly MMI coupler. The rectangular resonator is expected to be useful for bio- and chemical sensing because of the advantages of the MMI coupler and the absence of bending loss, unlike ring resonators. The butterfly MMI coupler miniaturizes the device compared with a conventional MMI by using a linear butterfly shape instead of a rectangle in the MMI section. The width, height, and slab height of the rib-type waveguide are designed to be 1.5 μm, 1.5 μm, and 0.9 μm, respectively, and the structure is designed for single-mode operation. The TIR mirror design accounts for the Goos-Hänchen shift and the critical angle. A 3:1 MMI coupler was designed because the rectangular resonator has no bending loss. The width of the MMI is designed to be 4.5 μm, and the length of the butterfly MMI coupler is optimized using the finite-difference time-domain (FDTD) method for a higher Q-factor. It achieves performance equal to a conventional MMI even though the length is reduced by 1/3. Simulation shows that a Q-factor of 7381 can be obtained for the rectangular resonator.

  1. Three-Dimensional Microvascular Fiber-Reinforced Composites

    DTIC Science & Technology

    2011-03-01

    are varied to meet the desired design criteria. The interstitial pore space between fibers is infiltrated with a low-viscosity thermosetting resin...depolymerization and monomer vaporization results in a 3D microvascular network integrated into a structural composite; d) fluid (yellow) fills...VaSC method uses commercially available materials and can be seamlessly integrated with conventional fiber-reinforced composite manufacturing

  2. Designing Meaning with Multiple Media Sources: A Case Study of an Eight-Year-Old Student's Writing Processes

    ERIC Educational Resources Information Center

    Ranker, Jason

    2007-01-01

    This case study closely examines how John (a former student of mine, age eight, second grade) composed during an informal writing group at school. Using qualitative research methods, I found that John selectively took up conventions, characters, story grammars, themes, and motifs from video games, television, Web pages, and comics. Likening his…

  3. Proceedings of Selected Research and Development Presentations at the 1996 National Convention of the Association for Educational Communications and Technology Sponsored by the Research and Theory Division (18th, Indianapolis, IN, 1996).

    ERIC Educational Resources Information Center

    Simonson, Michael R., Ed.; And Others

    1996-01-01

    This proceedings volume contains 77 papers. Subjects addressed include: image processing; new faculty research methods; preinstructional activities for preservice teacher education; computer "window" presentation styles; interface design; stress management instruction; cooperative learning; graphical user interfaces; student attitudes,…

  4. Statistical, Graphical, and Learning Methods for Sensing, Surveillance, and Navigation Systems

    DTIC Science & Technology

    2016-06-28

    harsh propagation environments. Conventional filtering techniques fail to provide satisfactory performance in many important nonlinear or non-Gaussian scenarios. In addition, there is a lack of a unified methodology for the design and analysis of different filtering techniques. To address these problems, we have proposed a new filtering methodology called belief condensation (BC).

  5. DOE/DOT Crude Oil Characterization Research Study, Task 2 Test Report on Evaluating Crude Oil Sampling and Analysis Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, David; Allen, Ray; Rudeen, David

    The Crude Oil Characterization Research Study is designed to evaluate whether crude oils currently transported in North America, including those produced from "tight" formations, exhibit physical or chemical properties that are distinct from conventional crudes, and how these properties relate to combustion hazards which may be realized during transportation and handling.

  6. Non-conventional rule of making a periodically varying different-pole magnetic field in low-power alternating current electrical machines with using ring coils in multiphase armature winding

    NASA Astrophysics Data System (ADS)

    Plastun, A. T.; Tikhonova, O. V.; Malygin, I. V.

    2018-02-01

    The paper presents methods of producing a periodically varying different-pole magnetic field in low-power electrical machines. The authors consider classical designs of electrical machines and machines with ring windings in the armature, their structural features, and the calculated parameters of the magnetic circuit for these machines.

  7. Measuring the Impact of a Residential Learning Community on the Mental Health and Well-Being of Art Students in Higher Education

    ERIC Educational Resources Information Center

    Martin, Deborah Katharine

    2012-01-01

    College students are experiencing mental health concerns at an alarming rate. Art students are a particularly vulnerable sub-population, as artists appear to be more susceptible to mental illness than the general population. Many students do not seek assistance through conventional methods designed by colleges and universities to address their…

  8. Assessment of an ePortfolio: Developing a Taxonomy to Guide the Grading and Feedback for Personal Development Planning

    ERIC Educational Resources Information Center

    Clark, Wendy; Adamson, Jackie

    2009-01-01

    This paper describes the rationale for, and the design, implementation and preliminary evaluation of a taxonomy to guide the grading and feedback of ePortfolio assessment of personal development planning (PDP) in a module where PDP is integrated into the curriculum. Conventional higher education assessment methods do not adequately address the…

  9. Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours

    NASA Astrophysics Data System (ADS)

    Persico, Giacomo

    2017-03-01

    This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by a non-intrusive, gradient-free technique specifically developed for shape optimization of turbomachinery profiles. The method combines a geometrical parametrization technique based on B-splines, a high-fidelity and experimentally validated computational fluid dynamics solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour featured by the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is rigorously applied throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. Applying a systematic and automatic design method to such a non-conventional configuration highlights the character of centrifugal cascades: the blades require a specific and non-trivial shape definition, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal blade curvature that both provides a relevant increase in cascade performance and reduces downstream gradients.
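
    A minimal sketch of the B-spline parametrization step: evaluating a cubic B-spline camber line from control points with SciPy. The control points are illustrative, not an actual ORC nozzle profile; an optimizer would perturb them to reshape the blade:

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline camber-line parametrization (illustrative control points)
degree = 3
ctrl = np.array([[0.00, 0.00],
                 [0.25, 0.08],
                 [0.55, 0.12],
                 [0.85, 0.06],
                 [1.00, 0.00]])
# Clamped knot vector: n_ctrl + degree + 1 = 9 knots
knots = np.concatenate(([0.0] * (degree + 1),
                        np.linspace(0.0, 1.0, len(ctrl) - degree + 1)[1:-1],
                        [1.0] * (degree + 1)))
curve = BSpline(knots, ctrl, degree)

u = np.linspace(0.0, 1.0, 101)
xy = curve(u)  # (101, 2) points along the camber line
print(xy[0], xy[-1])  # endpoints coincide with first/last control points
```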

  10. Conceptual design of single turbofan engine powered light aircraft

    NASA Technical Reports Server (NTRS)

    Snyder, F. S.; Voorhees, C. G.; Heinrich, A. M.; Baisden, D. N.

    1977-01-01

    The conceptual design of a four-place, single turbofan engine powered light aircraft was accomplished using contemporary light aircraft conventional design techniques as a means of evaluating the NASA-Ames General Aviation Synthesis Program (GASP) as a preliminary design tool. In certain areas, disagreements or exclusions were found between the results of the conventional design process and GASP. A detailed discussion of these points, along with the associated contemporary design methodology, is presented.

  11. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
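
    A conventional 3-2-1-1 input of the kind used for comparison here is straightforward to construct: alternating-sign pulses lasting 3, 2, 1, and 1 base pulse widths. A minimal sketch, with sample time, pulse width, and amplitude chosen arbitrarily:

```python
import numpy as np

def input_3211(dt, pulse, amp):
    """Build a 3-2-1-1 multistep input: alternating-sign pulses lasting
    3, 2, 1, and 1 base pulse widths, sampled every dt seconds."""
    signs, widths = [1, -1, 1, -1], [3, 2, 1, 1]
    u = np.concatenate([
        s * amp * np.ones(int(round(w * pulse / dt)))
        for s, w in zip(signs, widths)
    ])
    t = np.arange(u.size) * dt
    return t, u

# 1-s base pulse, 2-deg surface deflection, 50-Hz sampling (all illustrative)
t, u = input_3211(dt=0.02, pulse=1.0, amp=np.deg2rad(2.0))
print(t[-1], u.shape)  # 7 s total duration
```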

  12. A comparative study: Greener vs conventional synthesis of 4H-pyrimido[2,1-b]benzothiazoles via Biginelli reaction

    NASA Astrophysics Data System (ADS)

    Agarwal, Shikha; Agarwal, Dinesh Kr.; Kalal, Priyanka; Gandhi, Divyani

    2018-05-01

    Multicomponent reactions (MCRs) have emerged as a powerful method for the synthesis of organic molecules, since the products are formed in a single step and building blocks with a diverse range of complexity can be obtained from easily available precursors. This strategy has become important in drug design and discovery in the context of the synthesis of biologically active compounds. Today, greener conditions make MCRs a powerful alternative to conventional synthesis. In the last few years, a number of scientific publications have appeared in the literature describing the synthesis of pyrimidobenzothiazoles via greener routes, which clearly demonstrates their importance in pharmaceutical chemistry and drug development. Our article describes the synthesis of substituted pyrimidobenzothiazoles with structural diversity via a one-pot multicomponent reaction through conventional and greener pathways using different catalysts, ionic liquids, agar, resins, etc.

  13. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    PubMed

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-10-01

    Respiratory assessment can be carried out using motion capture systems. A geometrical model is required in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared with spirometry volumes and with volumes computed by a conventional method based on tetrahedral decomposition of the chest wall, as integrated in a commercial motion capture system. Eight healthy volunteers were enrolled, and 30 seconds of quiet breathing data were collected from each of them. Results show better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = 0.94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for further use of the new method in breathing assessment via motion capture systems.
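
    The tetrahedral-decomposition idea reduces to summing signed tetrahedra between each surface triangle and a reference point. A hedged sketch on a closed triangle mesh is below, checked against a unit cube; the step from marker trajectories to a triangulated chest surface is not shown:

```python
import numpy as np

def enclosed_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangle mesh,
    as the sum of signed tetrahedra spanned by each face and the origin:
        V = (1/6) * |sum_i dot(v0, cross(v1, v2))|
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0

# Unit cube as a sanity check (12 outward-oriented triangles); expect 1.0
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
F = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7],
              [0, 1, 5], [0, 5, 4], [1, 2, 6], [1, 6, 5],
              [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]])
print(enclosed_volume(V, F))  # ~1.0
```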

  14. Computational Design of a Krueger Flap Targeting Conventional Slat Aerodynamics

    NASA Technical Reports Server (NTRS)

    Akaydin, H. Dogus; Housman, Jeffrey A.; Kiris, Cetin C.; Bahr, Christopher J.; Hutcheson, Florence V.

    2016-01-01

    In this study, we demonstrate the design of a Krueger flap as a substitute for a conventional slat in a high-lift system. This notional design, with the objective of matching equivalent-mission performance on aircraft approach, was required for a comparative aeroacoustic study with computational and experimental components. We generated a family of high-lift systems with Krueger flaps based on a set of design parameters. Then, we evaluated the high-lift systems using steady 2D RANS simulations to find a good match for the conventional slat, based on total lift coefficients in free-air. Finally, we evaluated the mean aerodynamics of the high-lift systems with Krueger flap and conventional slat as they were installed in an open-jet wind tunnel flow. The surface pressures predicted with the simulations agreed well with experimental results.

  15. Neuropsychological Criteria for Mild Cognitive Impairment Improves Diagnostic Precision, Biomarker Associations, and Progression Rates

    PubMed Central

    Bondi, Mark W.; Edmonds, Emily C.; Jak, Amy J.; Clark, Lindsay R.; Delano-Wood, Lisa; McDonald, Carrie R.; Nation, Daniel A.; Libon, David J.; Au, Rhoda; Galasko, Douglas; Salmon, David P.

    2014-01-01

    We compared two methods of diagnosing mild cognitive impairment (MCI): conventional Petersen/Winblad criteria as operationalized by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and an actuarial neuropsychological method put forward by Jak and Bondi designed to balance sensitivity and reliability. 1,150 ADNI participants were diagnosed at baseline as cognitively normal (CN) or MCI via ADNI criteria (MCI: n = 846; CN: n = 304) or Jak/Bondi criteria (MCI: n = 401; CN: n = 749), and the two MCI samples were submitted to cluster and discriminant function analyses. Resulting cluster groups were then compared and further examined for APOE allelic frequencies, cerebrospinal fluid (CSF) Alzheimer’s disease (AD) biomarker levels, and clinical outcomes. Results revealed that both criteria produced a mildly impaired Amnestic subtype and a more severely impaired Dysexecutive/Mixed subtype. The neuropsychological Jak/Bondi criteria uniquely yielded a third Impaired Language subtype, whereas conventional Petersen/Winblad ADNI criteria produced a third subtype comprising nearly one-third of the sample that performed within normal limits across the cognitive measures, suggesting this method’s susceptibility to false positive diagnoses. MCI participants diagnosed via neuropsychological criteria yielded dissociable cognitive phenotypes, significant CSF AD biomarker associations, more stable diagnoses, and identified greater percentages of participants who progressed to dementia than conventional MCI diagnostic criteria. Importantly, the actuarial neuropsychological method did not produce a subtype that performed within normal limits on the cognitive testing, unlike the conventional diagnostic method. Findings support the need for refinement of MCI diagnoses to incorporate more comprehensive neuropsychological methods, with resulting gains in empirical characterization of specific cognitive phenotypes, biomarker associations, stability of diagnoses, and prediction of progression. Refinement of MCI diagnostic methods may also yield gains in biomarker and clinical trial study findings because of improvements in sample compositions of ‘true positive’ cases and removal of ‘false positive’ cases. PMID:24844687

  16. The optimal design of service level agreement in IAAS based on BDIM

    NASA Astrophysics Data System (ADS)

    Liu, Xiaochen; Zhan, Zhiqiang

    2013-03-01

    Cloud computing has become more and more prevalent over the past few years, and Infrastructure-as-a-Service (IaaS) has grown in importance. This kind of service enables scaling of bandwidth, memory, computing power, and storage, but SLAs in IaaS face complexity and variety, and users also consider the business value of the service. To meet most users' requirements, a methodology for designing optimal SLAs in IaaS from a business perspective is proposed. This method differs from conventional SLA design methods in that it considers not only the service provider's perspective but also the customer's. The methodology better captures the linkage between service provider and service client by minimizing the business loss originating from performance degradation and IT infrastructure failures and maximizing profits for the service provider and clients. An optimal design in an IaaS model is provided, and an example is analyzed to show that this approach obtains higher profit.

  17. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Patnaik, Surya N.

    2000-01-01

    A preliminary aircraft engine design methodology is being developed that utilizes a cascade optimization strategy together with neural network and regression approximation methods. The cascade strategy employs different optimization algorithms in a specified sequence. The neural network and regression methods are used to approximate solutions obtained from the NASA Engine Performance Program (NEPP), which implements engine thermodynamic cycle and performance analysis models. The new methodology is proving to be more robust and computationally efficient than the conventional optimization approach of using a single optimization algorithm with direct reanalysis. The methodology has been demonstrated on a preliminary design problem for a novel subsonic turbofan engine concept that incorporates a wave rotor as a cycle-topping device. Computations of maximum thrust were obtained for a specific design point in the engine mission profile. The results show a significant improvement in the maximum thrust obtained using the new methodology in comparison to benchmark solutions obtained using NEPP in a manual design mode.
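
    The regression-approximation ingredient can be sketched in a few lines: sample an expensive analysis sparsely, fit a cheap regression surrogate, and optimize the surrogate instead. Everything below (the stand-in objective, the sample count, the polynomial order) is hypothetical, not NEPP:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expensive_analysis(x):
    """Stand-in for a costly engine-cycle evaluation (hypothetical)."""
    return (x - 0.6) ** 2 + 0.1 * np.sin(8 * x)

# Sample the expensive model sparsely and fit a quartic regression surrogate
xs = np.linspace(0.0, 1.0, 9)
coeffs = np.polyfit(xs, expensive_analysis(xs), 4)

# Optimize the cheap surrogate instead of re-running the full analysis
res = minimize_scalar(lambda x: np.polyval(coeffs, x),
                      bounds=(0.0, 1.0), method='bounded')
print(f"surrogate optimum near x = {res.x:.3f}")
```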

  18. New method to design stellarator coils without the winding surface

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2017-11-06

    Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named FOCUS (flexible optimized coils using space curves), has been developed. Applications to a simple stellarator configuration, W7-X, and LHD vacuum fields are presented.

  20. Development of a Test Facility for Air Revitalization Technology Evaluation

    NASA Technical Reports Server (NTRS)

    Lu, Sao-Dung; Lin, Amy; Campbell, Melissa; Smith, Frederick

    2006-01-01

  1. Gain-Scheduled Fault Tolerance Control Under False Identification

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine (Technical Monitor)

    2006-01-01

    An active fault tolerant control (FTC) law is generally sensitive to false identification since the control gain is reconfigured for fault occurrence. In the conventional FTC law design procedure, dynamic variations due to false identification are not considered. In this paper, an FTC synthesis method is developed in order to consider possible variations of closed-loop dynamics under false identification into the control design procedure. An active FTC synthesis problem is formulated into an LMI optimization problem to minimize the upper bound of the induced-L2 norm which can represent the worst-case performance degradation due to false identification. The developed synthesis method is applied for control of the longitudinal motions of FASER (Free-flying Airplane for Subscale Experimental Research). The designed FTC law of the airplane is simulated for pitch angle command tracking under a false identification case.

  2. Design of a high-efficiency seven-port beam splitter using a dual duty cycle grating structure.

    PubMed

    Wen, Fung Jacky; Chung, Po Sheun

    2011-07-01

    In this paper, we propose a compact seven-port beam splitter constructed from a single-layer high-density grating with a dual duty cycle structure. The properties of this grating are investigated by a simplified modal method. The diffraction efficiency is around 10% higher than that of conventional Dammann gratings, while the uniformity is maintained at better than 1%. The effect of deviations from the design parameters on the performance of the grating is also presented.

  3. Model Predictive Flight Control System with Full State Observer using H∞ Method

    NASA Astrophysics Data System (ADS)

    Sanwale, Jitu; Singh, Dhan Jeet

    2018-03-01

    This paper presents the application of the model predictive approach to the design of a flight control system (FCS) for the longitudinal dynamics of a fixed-wing aircraft. The longitudinal dynamics are derived for a conventional aircraft, and an open-loop aircraft response analysis is carried out. Simulation studies illustrate the efficacy of the proposed model predictive controller using an H∞ state observer. The estimation criterion used in the H∞ observer design is to minimize the worst possible effects of the modelling errors and additive noise on the parameter estimation.

  4. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
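
    A toy version of the surrogate idea, with scikit-learn's MLPRegressor standing in for the authors' network and a fabricated spectrum function standing in for the scattering solver; the "physics" below is fake and for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Fabricated stand-in for a multilayer-nanoparticle scattering solver:
# map a "shell thickness" to a wavelength-sampled spectrum (fake physics)
rng = np.random.default_rng(0)
X = rng.uniform(30.0, 70.0, size=(2000, 1))   # shell thickness, nm
wl = np.linspace(400.0, 800.0, 50)            # wavelengths, nm
Y = np.sin(X / wl * 2 * np.pi) ** 2           # fake spectra, shape (2000, 50)

# Train a small multi-output network on the sampled data
net = MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=2000,
                   random_state=0)
net.fit(X, Y)

# Once trained, spectra are produced far faster than rerunning the solver
print(net.predict([[55.0]]).shape)            # (1, 50)
```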

  5. Design conceptuel d'un avion blended wing body de 200 passagers

    NASA Astrophysics Data System (ADS)

    Ammar, Sami

    The Blended Wing Body (BWB) builds on the flying-wing concept and offers performance improvements over conventional aircraft. Most studies, however, have focused on large aircraft, and it is not clear whether the gains carry over to smaller aircraft. The main objective of this work is to perform the conceptual design of a 200-passenger BWB and compare its performance with that of a conventional aircraft of equivalent payload and range. The design of the BWB was carried out in the CEASIOM environment. This design platform, suited to conventional aircraft, was modified and additional tools were integrated in order to carry out the aerodynamic, performance, and stability analyses of the blended-wing aircraft. An aircraft model is built in the AcBuilder geometry module of CEASIOM from the design variables of the wing. Mass estimates are made from semi-empirical formulas adapted to the BWB geometry, and center-of-gravity and inertia calculations are made possible through a BWB model developed in CATIA. Low-fidelity methods, such as TORNADO and semi-empirical formulas, are used to analyze the aerodynamic performance and stability of the aircraft, and the aerodynamic results are validated by a high-fidelity analysis using the FLUENT CFD software. An optimization process is implemented in order to improve performance while maintaining a feasible design, optimizing the planform of the BWB for a passenger count equivalent to that of an A320. The maximum-range performance of the optimized BWB is compared with that of an equally optimized A320, and significant gains are observed. An analysis of the longitudinal and lateral flight dynamics is carried out on the BWB optimized for lift-to-drag ratio and mass. This study identified the stable and unstable modes of the aircraft and highlighted the stability problems associated with the incidence oscillation and the Dutch roll in the absence of stabilizers.

  6. Robotic influence in the conceptual design of mechanical systems in space and vice versa - A survey

    NASA Technical Reports Server (NTRS)

    Sanger, George F.

    1988-01-01

    A survey of methods using robotic devices to construct structural elements in space is presented. Two approaches to robotic construction are considered: one in which the structural elements are designed using conventional aerospace techniques which tend to constrain the function aspects of robotics and one in which the structural elements are designed from the conceptual stage with built-in robotic features. Examples are presented of structural building concepts using robotics, including the construction of the SP-100 nuclear reactor power system, a multimirror large aperture IR space telescope concept, retrieval and repair in space, and the Flight Telerobotic Servicer.

  7. Plasmonic Fiber Optic Refractometric Sensors: From Conventional Architectures to Recent Design Trends

    PubMed Central

    Klantsataya, Elizaveta; Jia, Peipei; Ebendorff-Heidepriem, Heike; Monro, Tanya M.; François, Alexandre

    2016-01-01

    Surface Plasmon Resonance (SPR) fiber sensor research has grown since the first demonstration over 20 years ago into a rich and diverse field with a wide range of optical fiber architectures, plasmonic coatings, and excitation and interrogation methods. Yet the large diversity of SPR fiber sensor designs has made it difficult to understand the advantages of each approach. Here, we review SPR fiber sensor architectures, covering the latest developments from optical fiber geometries to plasmonic coatings. By developing a systematic approach to fiber-based SPR designs, we identify and discuss future research opportunities based on a performance comparison of the different approaches for sensing applications. PMID:28025532

  8. Modern digital flight control system design for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.; Berry, P. W.; Stengel, R. F.

    1979-01-01

    Methods for and results from the design and evaluation of a digital flight control system (DFCS) for a CH-47B helicopter are presented. The DFCS employed proportional-integral control logic to provide rapid, precise response to automatic or manual guidance commands while following conventional or spiral-descent approach paths. It contained altitude- and velocity-command modes, and it adapted to varying flight conditions through gain scheduling. Extensive use was made of linear systems analysis techniques. The DFCS was designed, using linear-optimal estimation and control theory, and the effects of gain scheduling are assessed by examination of closed-loop eigenvalues and time responses.

  9. A rapid leaf-disc sampler for psychrometric water potential measurements.

    PubMed

    Wullschleger, S D; Oosterhuis, D M

    1986-06-01

    An instrument was designed which facilitates faster and more accurate sampling of leaf discs for psychrometric water potential measurements. The instrument consists of an aluminum housing, a spring-loaded plunger, and a modified brass-plated cork borer. The leaf-disc sampler was compared with the conventional method of sampling discs for measurement of leaf water potential with thermocouple psychrometers on a range of plant material including Gossypium hirsutum L., Zea mays L., and Begonia rex-cultorum L. The new sampler permitted a leaf disc to be excised and inserted into the psychrometer sample chamber in less than 7 seconds, which was more than twice as fast as the conventional method. This resulted in more accurate determinations of leaf water potential due to reduced evaporative water losses. The leaf-disc sampler also significantly reduced sample variability between individual measurements. This instrument can be used for many other laboratory and field measurements that necessitate leaf disc sampling.

  10. On advanced configuration enhance adaptive system optimization

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei

    2017-10-01

    The aim is to find an effective method of structuring and enhancing adaptive systems with complex functions, and to establish a universally applicable approach to prototyping and optimization. As the most critical component in an adaptive system, the wavefront corrector is constrained by conventional techniques and components, suffering from polarization dependence and a narrow working waveband. An advanced configuration based on a polarizing beam splitter with an optimized energy-splitting method can overcome these problems effectively. With a global algorithm, the bandwidth is broadened more than fivefold compared with traditional designs. Simulation results show that the system can meet the application requirements in MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are the prototype selection and the system configuration; the results show their effectiveness.

  11. Ground and flight test program of a Stokes-flow parachute: Packaging, deployment, and sounding rocket integration

    NASA Technical Reports Server (NTRS)

    Niederer, P. G.; Mihora, D. J.

    1972-01-01

    The current design and hardware components of the patented 14-m² Stokes-flow parachute are described. The Stokes-flow parachute is a canopy of open mesh material that is kept deployed by braces. Because of the light weight of its mesh material, and the high drag on its mesh elements when they operate in the Stokes-flow flight regime, this parachute has an extremely low ballistic coefficient. It provides a stable aerodynamic platform superior to conventional nonporous billowed parachutes, is exceptionally packable, and is easily contained within the canister of the Sidewinder Arcas or the RDT and E rockets. Thus, it offers the potential for gathering more meteorological data, especially at high altitudes, than conventional billowed parachutes. Methods for packaging the parachute are also recommended, including schemes for folding the canopy and for automatically releasing the pressurizing fluid as the packaged parachute unfolds.

  12. A Stereological Method for the Quantitative Evaluation of Cartilage Repair Tissue

    PubMed Central

    Nyengaard, Jens Randel; Lind, Martin; Spector, Myron

    2015-01-01

    Objective To implement stereological principles to develop an easy applicable algorithm for unbiased and quantitative evaluation of cartilage repair. Design Design-unbiased sampling was performed by systematically sectioning the defect perpendicular to the joint surface in parallel planes providing 7 to 10 hematoxylin–eosin stained histological sections. Counting windows were systematically selected and converted into image files (40-50 per defect). The quantification was performed by two-step point counting: (1) calculation of defect volume and (2) quantitative analysis of tissue composition. Step 2 was performed by assigning each point to one of the following categories based on validated and easy distinguishable morphological characteristics: (1) hyaline cartilage (rounded cells in lacunae in hyaline matrix), (2) fibrocartilage (rounded cells in lacunae in fibrous matrix), (3) fibrous tissue (elongated cells in fibrous tissue), (4) bone, (5) scaffold material, and (6) others. The ability to discriminate between the tissue types was determined using conventional or polarized light microscopy, and the interobserver variability was evaluated. Results We describe the application of the stereological method. In the example, we assessed the defect repair tissue volume to be 4.4 mm3 (CE = 0.01). The tissue fractions were subsequently evaluated. Polarized light illumination of the slides improved discrimination between hyaline cartilage and fibrocartilage and increased the interobserver agreement compared with conventional transmitted light. Conclusion We have applied a design-unbiased method for quantitative evaluation of cartilage repair, and we propose this algorithm as a natural supplement to existing descriptive semiquantitative scoring systems. We also propose that polarized light is effective for discrimination between hyaline cartilage and fibrocartilage. PMID:26069715
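
    The point-counting step rests on the Cavalieri estimator, V = t * (a/p) * sum(P_i). A minimal sketch with invented counts (not data from the study):

```python
def cavalieri_volume(points_per_section, section_spacing, area_per_point):
    """Cavalieri point-counting estimator: V = t * (a/p) * sum(P_i),
    where t is the section spacing, a/p the area associated with one
    grid point, and P_i the points hitting the tissue on section i."""
    return section_spacing * area_per_point * sum(points_per_section)

# Illustrative numbers (not from the study): 8 sections 0.5 mm apart,
# 0.04 mm^2 of tissue area per grid point
counts = [9, 14, 18, 21, 19, 15, 10, 4]
print(f"{cavalieri_volume(counts, 0.5, 0.04):.2f} mm^3")  # 2.20 mm^3
```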

  13. A Meta-heuristic Approach for Variants of VRP in Terms of Generalized Saving Method

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki

    Global logistics design is attracting keen interest as an essential infrastructure for modern society. Examples include green and/or robust logistics in transportation systems, smart grids in electricity utilization systems, and qualified service in delivery systems. As a key technology for such deployments, we have engaged in practical vehicle routing problems on the basis of the conventional saving method. This paper extends that idea and gives a general framework applicable to various real-world applications. It covers not only delivery problems but also two kinds of pick-up problems, i.e., straight and drop-by routings. Moreover, the multi-depot problem is addressed by a hybrid approach with a graph algorithm, and its solution method is realized in a hierarchical manner. Numerical experiments were carried out to validate the effectiveness of the proposed method.
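
    For reference, a minimal sketch of the classical Clarke-Wright saving method that this framework generalizes. The saving for joining customers i and j on one route is s(i, j) = d(0, i) + d(0, j) - d(i, j), with node 0 the depot; pairs are merged in descending order of saving, subject to capacity. The toy distance matrix, demands, and capacity are illustrative assumptions, not the paper's data.

        def clarke_wright(d, demand, capacity):
            # Classical Clarke-Wright savings heuristic; node 0 is the depot.
            n = len(d)
            routes = [[i] for i in range(1, n)]   # start: one route per customer
            savings = sorted(((d[0][i] + d[0][j] - d[i][j], i, j)
                              for i in range(1, n) for j in range(i + 1, n)),
                             reverse=True)
            for s, i, j in savings:
                # merges are only allowed at route endpoints
                ri = next((r for r in routes if r[0] == i or r[-1] == i), None)
                rj = next((r for r in routes if r[0] == j or r[-1] == j), None)
                if ri is None or rj is None or ri is rj:
                    continue
                if sum(demand[c] for c in ri + rj) > capacity:
                    continue
                if ri[-1] != i:
                    ri.reverse()                  # put i at the tail of its route
                if rj[0] != j:
                    rj.reverse()                  # put j at the head of its route
                ri.extend(rj)                     # join ... -> i -> j -> ...
                routes.remove(rj)
            return routes

        # Toy symmetric instance: 3 customers, unit demands, vehicle capacity 2
        d = [[0, 4, 4, 6],
             [4, 0, 2, 5],
             [4, 2, 0, 5],
             [6, 5, 5, 0]]
        print(clarke_wright(d, [0, 1, 1, 1], 2))  # -> [[1, 2], [3]]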

  14. Economic feasibility of irradiation-composting plant of sewage sludge

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Nishimura, K.; Machi, S.

    Design and cost analyses were made for a sewage sludge treatment plant (capacity 25-200 tons of sludge/day) with an electron accelerator. Dewatered sludge is spread on a rolling drum through a flat nozzle and disinfected by electron irradiation with a dose of 5 kGy. Composting of the irradiated sludge is then carried out at the optimum temperature for 3 days. The accelerating voltage and capacity of the accelerator are 1.5 MV and 15 kW, respectively. The total volume of the fermentor is about one third of that of the conventional composting process, because irradiation shortens the composting time. The cost of sludge treatment is almost the same as that of the conventional method.

  15. Gram staining apparatus for space station applications.

    PubMed Central

    Molina, T C; Brown, H D; Irbe, R M; Pierson, D L

    1990-01-01

    A self-contained, portable Gram staining apparatus (GSA) has been developed for use in the microgravity environment on board the Space Station Freedom. Accuracy and reproducibility of this apparatus compared with the conventional Gram staining method were evaluated by using gram-negative and gram-positive controls and different species of bacteria grown in pure cultures. A subsequent study was designed to assess the performance of the GSA with actual specimens. A set of 60 human and environmental specimens was evaluated with the GSA and the conventional Gram staining procedure. Data obtained from these studies indicated that the GSA will provide the Gram staining capability needed for the microgravity environment of space. PMID:1690529

  16. Design and validation of a tissue bath 3-D printed with PLA for optically mapping suspended whole heart preparations.

    PubMed

    Entz, Michael; King, D Ryan; Poelzing, Steven

    2017-12-01

    With the sudden increase in affordable manufacturing technologies, the relationship between experimentalists and the design process for laboratory equipment is rapidly changing. While experimentalists are still dependent on engineers and manufacturers for precision electrical, mechanical, and optical equipment, in-house manufacturing of other laboratory equipment with less precise design requirements has become a realistic option. This is possible due to the decreasing costs and increasing functionality of desktop three-dimensional (3-D) printers and 3-D design software. With traditional manufacturing methods, iterative design processes are expensive and time consuming, and making more than one copy of a custom piece of equipment is prohibitively costly. Here, we provide an overview of the design of a tissue bath and stabilizer for a customizable, suspended, whole heart optical mapping apparatus that can be produced significantly faster and at lower cost than with conventional manufacturing techniques. This was accomplished through a series of design steps to prevent fluid leakage in the areas where the optical imaging glass was attached to the 3-D printed bath. A combination of an acetone dip along with adhesive was found to create a watertight bath. Optical mapping was used to quantify cardiac conduction velocity and action potential duration to compare 3-D printed baths to a bath that was designed and manufactured in a machine shop. Importantly, the manufacturing method did not significantly affect conduction, action potential duration, or contraction, suggesting that 3-D printed baths are equally effective for optical mapping experiments. NEW & NOTEWORTHY This article details three-dimensional printable equipment for use in suspended whole heart optical mapping experiments. This equipment is less expensive than conventionally manufactured equipment as well as easily customizable by the experimentalist. The baths can be waterproofed using only a three-dimensional printer, acetone, a glass microscope slide, c-clamps, and adhesive.

  17. The randomised controlled trial design: unrecognized opportunities for health sciences librarianship.

    PubMed

    Eldredge, Jonathan D

    2003-06-01

    Objectives: To describe the essential components of the Randomised Controlled Trial (RCT) and its major variations; to describe less conventional applications of the RCT design found in the health sciences literature with potential relevance to health sciences librarianship; and to discuss the limited number of RCTs within health sciences librarianship. Methods: A narrative review supported to a limited extent by PubMed and Library Literature database searches consistent with specific search parameters; in addition, more systematic methods, including handsearching of specific journals, were used to identify health sciences librarianship RCTs. Results: While many RCTs within the health sciences follow more conventional patterns, some RCTs assume certain unique features. Selected examples illustrate the adaptations of this experimental design to answering questions of possible relevance to health sciences librarians. The author offers several strategies for controlling bias in library and informatics applications of the RCT and acknowledges the potential of the electronic era in providing many opportunities to utilize the blinding aspects of RCTs. Conclusions: RCTs within health sciences librarianship inhabit a limited number of subject domains, such as education. This limited scope offers both advantages and disadvantages for making Evidence-Based Librarianship (EBL) a reality. The RCT design offers the potential to answer far more EBL questions than have been addressed by the design to date. Librarians need only extend their horizons through use of the versatile RCT design into new subject domains to facilitate making EBL a reality.

  18. Next Generation Non-Vacuum, Maskless, Low Temperature Nanoparticle Ink Laser Digital Direct Metal Patterning for a Large Area Flexible Electronics

    PubMed Central

    Yeo, Junyeob; Hong, Sukjoon; Lee, Daehoo; Hotz, Nico; Lee, Ming-Tsang; Grigoropoulos, Costas P.; Ko, Seung Hwan

    2012-01-01

    Flexible electronics has opened a new class of future electronics. The foldable, light, and durable nature of flexible electronics allows vast flexibility in applications such as displays, energy devices, and mobile electronics. Even though conventional electronics fabrication methods are well developed for rigid substrates, direct application or slight modification of conventional processes does not work for flexible electronics fabrication. Future flexible electronics fabrication requires totally new low-temperature process development optimized for flexible substrates, and it should be based on new materials as well. Here we present a simple approach to flexible electronics fabrication without conventional vacuum deposition and photolithography. We found that direct metal patterning based on laser-induced local melting of metal nanoparticle ink is a promising low-temperature alternative to vacuum deposition– and photolithography-based conventional metal patterning processes. The “digital” nature of the proposed direct metal patterning process removes the need for expensive photomasks and allows easy design modification and short turnaround times. This new process can be extremely useful for current small-volume, large-variety manufacturing paradigms. Moreover, this simple, scalable, fast, and low-temperature process can lead to cost-effective fabrication on large-area polymer substrates. The developed process was successfully applied to demonstrate high-quality Ag patterning (2.1 µΩ·cm) and high-performance flexible organic field effect transistor arrays. PMID:22900011

  19. Next generation non-vacuum, maskless, low temperature nanoparticle ink laser digital direct metal patterning for a large area flexible electronics.

    PubMed

    Yeo, Junyeob; Hong, Sukjoon; Lee, Daehoo; Hotz, Nico; Lee, Ming-Tsang; Grigoropoulos, Costas P; Ko, Seung Hwan

    2012-01-01

    Flexible electronics has opened a new class of future electronics. The foldable, light, and durable nature of flexible electronics allows vast flexibility in applications such as displays, energy devices, and mobile electronics. Even though conventional electronics fabrication methods are well developed for rigid substrates, direct application or slight modification of conventional processes does not work for flexible electronics fabrication. Future flexible electronics fabrication requires totally new low-temperature process development optimized for flexible substrates, and it should be based on new materials as well. Here we present a simple approach to flexible electronics fabrication without conventional vacuum deposition and photolithography. We found that direct metal patterning based on laser-induced local melting of metal nanoparticle ink is a promising low-temperature alternative to vacuum deposition- and photolithography-based conventional metal patterning processes. The "digital" nature of the proposed direct metal patterning process removes the need for expensive photomasks and allows easy design modification and short turnaround times. This new process can be extremely useful for current small-volume, large-variety manufacturing paradigms. Moreover, this simple, scalable, fast, and low-temperature process can lead to cost-effective fabrication on large-area polymer substrates. The developed process was successfully applied to demonstrate high-quality Ag patterning (2.1 µΩ·cm) and high-performance flexible organic field effect transistor arrays.

  20. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

    PubMed Central

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-01-01

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principle component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
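
    As a rough illustration of the low-rank-plus-sparse idea that RPCA formalizes, the following toy sketch uses a plain SVD: registered frames are stacked as columns, the background is approximated by a low-rank component, and the residual captures transient artifacts. This is a stand-in for intuition only, not the BRANF pipeline; frame sizes and noise statistics are assumed.

        import numpy as np

        def lowrank_background(frames, rank=1):
            # frames: (n_frames, H, W), assumed already background-registered
            n, h, w = frames.shape
            X = frames.reshape(n, h * w).T               # pixels x frames matrix
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # low-rank background
            S = X - L                                    # residual: motion + pattern noise
            return L.T.reshape(n, h, w), S.T.reshape(n, h, w)

        rng = np.random.default_rng(0)
        scene = rng.normal(100.0, 5.0, size=(32, 32))    # static background
        frames = scene + rng.normal(0.0, 1.0, size=(16, 32, 32))
        for t in range(16):
            frames[t, t % 32, 5] += 50.0                 # a moving hot spot
        L, S = lowrank_background(frames)
        print(abs(S).max())                              # residual isolates the hot spot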

  1. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications.

    PubMed

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-12-27

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.

  2. New method of 2-dimensional metrology using mask contouring

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka

    2008-10-01

    We have developed a new method for accurately profiling and measuring a mask shape by utilizing a Mask CD-SEM. The method is intended to realize the high accuracy, stability, and reproducibility of the Mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with a conventional image-processing method for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable with CD-SEM measurement of semiconductor devices. By utilizing high-precision contour profiles, the method realizes two-dimensional metrology for refined patterns that had been difficult to measure conventionally. In this report, we introduce the algorithm in general, the experimental results, and the application in practice. As design rules for semiconductor devices continue to shrink, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data-processing cost for advanced MDP (Mask Data Preparation) and the surge in mask-making cost have become a big concern to device manufacturers. That is, demands on quality are becoming stringent because of the enormous growth in data volume that accompanies increasingly refined patterns in photomask manufacture. As a result, massive numbers of simulated errors occur in mask inspection, which lengthens mask production and inspection periods, increases costs, and extends delivery times. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with the problem, we propose a DFM solution using two-dimensional metrology for refined patterns.

  3. Design and fabrication of wraparound contact silicon solar cells

    NASA Technical Reports Server (NTRS)

    Goodelle, G.

    1972-01-01

    Work is reported on the development and production of 1,000 N+/P wraparound solar cells of two different design configurations: Design 1, a bar-configuration wraparound, and Design 2, a corner-pad-configuration wraparound. The project goal consisted of determining which of the two designs was better with regard to production cost, where the typical cost of a conventional solar cell was considered the norm. Emphasis was also placed on obtaining the highest possible output efficiency, although a minimum efficiency of 10.5% was required. Five hundred cells of Design 1 and 500 cells of Design 2 were fabricated. Design 1, which used procedures similar to those used in the fabrication of conventional cells, was the less expensive, with a cost very close to that of a conventional cell. Design 2 was more expensive, mainly because its more exotic process procedures were less developed than those used for Design 1. However, Design 2 processing technology demonstrated a feasibility that should warrant future investigation toward improvement and refinement.

  4. Exploring the decision to disclose the use of natural products among outpatients: a mixed-method study

    PubMed Central

    2013-01-01

    Background There is little understanding of the reasons for the limited communication between patients and conventional healthcare professionals regarding patients’ use of complementary and alternative medicine (CAM). The purpose of this study is to explore the predictors of outpatients’ decision to disclose their use of natural products to conventional healthcare professionals. Methods A mixed method design was used. Quantitative data were obtained through a survey, and qualitative data were obtained from semi-structured interviews. A total of 257 outpatients who fulfilled the criterion of having used natural products prior to the interview were recruited for this study. Subsequently, 39 of the patients who completed the survey were selected to take part in an in-depth qualitative interview. Results Predictors of the decision to disclose the use of natural products to conventional healthcare professionals included age, frequency of clinic visits, knowledge of the natural products, and attitude towards the benefits of CAM use. The themes that emerged from the qualitative data included the safeness of the natural products, consulting alternative sources of information, apprehension regarding the development of negative relationships with healthcare professionals, and reactions from the healthcare professionals. Conclusions Understanding the factors and reasons affecting patients’ decision whether to disclose their use of natural products provides an opportunity for conventional healthcare professionals to communicate better with patients. It is important to encourage patients to disclose their use of natural products in order to provide responsible health care and to increase patient safety regarding medication usage. PMID:24245611

  5. Application of finite-element methods to dynamic analysis of flexible spatial and co-planar linkage systems, part 2

    NASA Technical Reports Server (NTRS)

    Dubowsky, Steven

    1989-01-01

    An approach to modeling the flexibility effects in spatial mechanisms and manipulator systems is described. The method is based on finite element representations of the individual links in the system. However, it should be noted that conventional finite element methods and software packages will not handle the highly nonlinear dynamic behavior of these systems, which results from their changing geometry. In order to design high-performance lightweight systems and their control systems, good models of their dynamic behavior that include the effects of flexibility are required.

  6. Interactive computer aided technology, evolution in the design/manufacturing process

    NASA Technical Reports Server (NTRS)

    English, C. H.

    1975-01-01

    A powerful computer-operated three-dimensional graphic system and associated auxiliary computer equipment used in advanced design, production design, and manufacturing is described. This system has made these activities more productive than the older and more conventional methods of designing and building aerospace vehicles. With this graphic system, designers are able to define parts using a wide variety of geometric entities, and to define parts as fully surfaced 3-dimensional models as well as "wire-frame" models. Once a part is geometrically defined, the designer is able to take section cuts of the surfaced model and automatically determine all of the section properties of the planar cut, light-pen detect all of the surface patches, and automatically determine the volume and weight of the part. Further, designs are defined mathematically at a degree of accuracy never before achievable.

  7. Interbody fusion cage design using integrated global layout and local microstructure topology optimization.

    PubMed

    Lin, Chia-Ying; Hsiao, Chun-Ching; Chen, Po-Quan; Hollister, Scott J

    2004-08-15

    An approach combining global layout and local microstructure topology optimization was used to create a new interbody fusion cage design that concurrently enhanced stability, biofactor delivery, and mechanical tissue stimulation for improved arthrodesis. The objectives were to develop a new interbody fusion cage design by topology optimization with porous internal architecture, and to compare the performance of this new design to conventional threaded cage designs regarding early stability and long-term stress-shielding effects on ingrown bone. Conventional interbody cage designs mainly fall into categories of cylindrical or rectangular shell shapes. These designs contribute to rigid stability and maintain disc height for successful arthrodesis but may also suffer mechanically mediated failures of dislocation or subsidence, as well as the possibility of bone resorption. The new optimization approach created a cage having a designed microstructure that achieved the desired mechanical performance while providing interconnected channels for biofactor delivery. The topology optimization algorithm determines the material layout under a desirable volume fraction (50%) and displacement constraints favorable to bone formation. A local microstructural topology optimization method was used to generate periodic microstructures for porous isotropic materials. The final topology was generated by integrating the two-scale structures according to segmented regions and the corresponding material density. Image-based finite element analysis was used to compare the mechanical performance of the topology-optimized cage and a conventional threaded cage. The final design can be fabricated by a variety of solid free-form systems directly from the image output. The new design exhibited a narrower, more uniform displacement range than the threaded cage design and lower stress at the cage-vertebra interface, suggesting a reduced risk of subsidence. Strain energy density analysis also indicated that a higher portion of total strain energy density was transferred into the new bone region inside the newly designed cage, indicating a reduced risk of stress shielding. The new design approach using integrated topology optimization demonstrated comparable or better stability, with limited displacement and reduced localized deformation related to the risk of subsidence. Less shielding of newly formed bone was predicted inside the newly designed cage. Using the present approach, it is also possible to tailor the cage design for specific materials, either titanium or polymer, to attain the desired balance between stability, reduced stress shielding, and porosity for biofactor delivery.
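
    In generic density-based (SIMP-style) notation, a layout problem with the 50% volume fraction mentioned above might be sketched as follows; the symbols are assumptions for illustration, and the authors' integrated global/local scheme is more elaborate than this:

        \begin{aligned}
        \min_{\boldsymbol{\rho}} \quad & c(\boldsymbol{\rho}) = \mathbf{u}^{\mathsf{T}}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} \\
        \text{s.t.} \quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}, \qquad
        \sum_{e}\rho_{e}v_{e} \le 0.5\,V_{0}, \qquad
        0 < \rho_{\min} \le \rho_{e} \le 1,
        \end{aligned}

    where each element stiffness is penalized as K_e(rho_e) = rho_e^p K_e^0 and V_0 is the design-domain volume; displacement constraints of the kind the abstract mentions would enter as additional bounds on selected components of u.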

  8. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    PubMed

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The aim was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from cases of sepsis were analyzed using two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth, and the lysis centrifugation method using saponin with centrifugation at 3000 g for 30 minutes. Overall recovery of bacteria from the 200 blood cultures was 17.5%. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the former method with respect to Gram-negative bacilli. The sensitivity of the lysis centrifugation method in comparison to the conventional blood culture method was 49.75% in this study, the specificity was 98.21%, and the diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, which was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was significantly shorter (P value 0.000) than that by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
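
    The reported figures follow from standard 2x2-table arithmetic. A minimal sketch is given below; the cell counts are hypothetical, back-calculated to approximate the percentages in the abstract, not the study's raw data.

        def diagnostic_metrics(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)                 # true positive rate
            specificity = tn / (tn + fp)                 # true negative rate
            accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
            return sensitivity, specificity, accuracy

        # Illustrative 2x2 table, roughly consistent with 200 cultures and
        # 17.5% overall recovery (35 positives by the reference method)
        sens, spec, acc = diagnostic_metrics(tp=17, fp=3, fn=18, tn=162)
        print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, accuracy={acc:.1%}")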

  9. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and almost 30 dB when using two barriers, is achieved compared with conventional sound barriers.
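
    A sketch of the underlying optimization problem, in notation common to this line of work (the symbols are assumptions based on the abstract's description): a design field xi interpolates the inverse density and inverse bulk modulus between air and solid material, and the squared pressure amplitude is minimized over the target region:

        \begin{aligned}
        \min_{0\le\xi(\mathbf{x})\le 1} \quad & \Phi = \int_{\Omega_{\mathrm{op}}} |p|^{2}\,\mathrm{d}\Omega \\
        \text{s.t.} \quad & \nabla\cdot\!\left(\frac{1}{\rho(\xi)}\nabla p\right) + \frac{\omega^{2}}{\kappa(\xi)}\,p = 0 \quad \text{in } \Omega, \\
        & \frac{1}{\rho(\xi)} = \frac{1}{\rho_{1}} + \xi\left(\frac{1}{\rho_{2}} - \frac{1}{\rho_{1}}\right), \qquad
        \frac{1}{\kappa(\xi)} = \frac{1}{\kappa_{1}} + \xi\left(\frac{1}{\kappa_{2}} - \frac{1}{\kappa_{1}}\right),
        \end{aligned}

    with subscript 1 denoting air and subscript 2 the reflecting or absorbing material.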

  10. A work study of the CAD/CAM method and conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Wong, M W; So, S F

    2005-04-01

    A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one-third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal, and pelvic regions, were involved. There were no significant dimensional differences (p < 0.05) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system can save time in the rectification process and offers a relatively high resemblance in cast rectification as compared with the conventional manual method.

  11. The Effect of Adaptive Nonlinear Frequency Compression on Phoneme Perception.

    PubMed

    Glista, Danielle; Hawkins, Marianne; Bohnert, Andrea; Rehmann, Julia; Wolfe, Jace; Scollie, Susan

    2017-12-12

    This study implemented a fitting method, developed for use with frequency lowering hearing aids, across multiple testing sites, participants, and hearing aid conditions to evaluate speech perception with a novel type of frequency lowering. A total of 8 participants, including children and young adults, participated in real-world hearing aid trials. A blinded crossover design, including posttrial withdrawal testing, was used to assess aided phoneme perception. The hearing aid conditions included adaptive nonlinear frequency compression (NFC), static NFC, and conventional processing. Enabling either adaptive NFC or static NFC improved group-level detection and recognition results for some high-frequency phonemes, when compared with conventional processing. Mean results for the distinction component of the Phoneme Perception Test (Schmitt, Winkler, Boretzki, & Holube, 2016) were similar to those obtained with conventional processing. Findings suggest that both types of NFC tested in this study provided a similar amount of speech perception benefit, when compared with group-level performance with conventional hearing aid technology. Individual-level results are presented with discussion around patterns of results that differ from the group average.

  12. Effects of conventional neurological treatment and a virtual reality training program on eye-hand coordination in children with cerebral palsy.

    PubMed

    Shin, Ji-Won; Song, Gui-Bin; Hwangbo, Gak

    2015-07-01

    [Purpose] The purpose of the study was to evaluate the effects of conventional neurological treatment and a virtual reality training program on eye-hand coordination in children with cerebral palsy. [Subjects] Sixteen children (9 males, 7 females) with spastic diplegic cerebral palsy were recruited and randomly assigned to the conventional neurological physical therapy group (CG) and virtual reality training group (VRG). [Methods] Eight children in the control group performed 45 minutes of therapeutic exercise twice a week for eight weeks. In the experimental group, the other eight children performed 30 minutes of therapeutic exercise and 15 minutes of a training program using virtual reality twice a week during the experimental period. [Results] After eight weeks of the training program, there were significant differences in eye-hand coordination and visual motor speed in the comparison of the virtual reality training group with the conventional neurological physical therapy group. [Conclusion] We conclude that a well-designed training program using virtual reality can improve eye-hand coordination in children with cerebral palsy.

  13. Laparoendoscopic single-site surgery varicocelectomy versus conventional laparoscopic varicocele ligation: A meta-analysis

    PubMed Central

    Li, Mingchao; Wang, Zhengyun

    2016-01-01

    Objective To perform a meta-analysis of data from available published studies comparing laparoendoscopic single-site surgery varicocelectomy (LESSV) with conventional transperitoneal laparoscopic varicocele ligation. Methods A comprehensive data search was performed in PubMed and Embase to identify randomized controlled trials and comparative studies that compared the two surgical approaches for the treatment of varicoceles. Results Six studies were included in the meta-analysis. LESSV required a significantly longer operative time than conventional laparoscopic varicocelectomy but was associated with significantly less postoperative pain at 6 h and 24 h, a shorter recovery time and greater patient satisfaction with the cosmetic outcome. There was no difference between the two surgical approaches in terms of postoperative semen quality or the incidence of complications. Conclusion These data suggest that LESSV offers a well tolerated and efficient alternative to conventional laparoscopic varicocelectomy, with less pain, a shorter recovery time and better cosmetic satisfaction. Further well-designed studies are required to confirm these findings and update the results of this meta-analysis. PMID:27688686

  14. Parametric study of a canard-configured transport using conceptual design optimization

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.

    1985-01-01

    Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.

  15. Update of the equations of the limit state of the structural material with the realization of their deformation

    NASA Astrophysics Data System (ADS)

    Zenkov, E. V.

    2018-01-01

    The article considers two methods that take account of the type of stressed-deformed state (SDS): one based on limit-state equations, and one based on analyzing laboratory tests of special mechanical-test specimens whose failure zone exhibits the same SDS as the potential failure zone of the structural member. The limited applicability of these methods with respect to a physically consistent strength criterion of the Pisarenko-Lebedev type is discussed. A revised design-experimental procedure for determining the strength of the structural material is proposed that combines elements of the two methods: the strength parameters of the structural material entering the Pisarenko-Lebedev criterion equation are determined taking into account the actual SDS in the region of interest of the structure. The procedure is implemented by selecting laboratory test specimens whose SDS in the working zone coincides with the SDS of the structure whose strength is being evaluated. The refinement of the limit-state equations is demonstrated by determining the strength parameters of 50CrV4 steel in a state of biaxial tension. The combined design-experimental approach shows that the limit stress for this steel is reduced by almost a quarter compared with the conventional tensile strength.
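
    For context, the Pisarenko-Lebedev criterion referred to above is commonly written in the following form (a sketch from the general literature; the procedure described replaces the conventional strength values entering it with SDS-specific ones):

        \sigma_{\mathrm{eq}} = \chi\,\sigma_{i} + (1-\chi)\,\sigma_{1} \le \sigma_{t},
        \qquad \chi = \frac{\sigma_{t}}{\sigma_{c}},

    where sigma_i is the von Mises equivalent stress, sigma_1 the maximum principal stress, and sigma_t, sigma_c the tensile and compressive strength limits.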

  16. Novel Hybrid Operating Table for Neuroendovascular Treatment.

    PubMed

    Jong-Hyun, Park; Jonghyeon, Mun; Dong-Seung, Shin; Bum-Tae, Kim

    2017-03-25

    The integration of interventional and surgical techniques requires the development of a new working environment equipped for the needs of an interdisciplinary neurovascular team. However, conventional surgical and interventional tables have only a limited ability to provide for these needs. We have developed a concept mobile hybrid operating table that allows such a team to conduct both endovascular and surgical procedures in a single session. We developed methods that provide surgeons with angiography-guided surgery techniques for use in a conventional operating room environment. In order to design a convenient device ideal for practical use, we consulted with mechanical engineers. The mobile hybrid operating table consists of two modules: a floating tabletop and a mobile module. In brief, the basic principles of the mobile hybrid operating table are as follows. First, the length of the mobile hybrid operating table is longer than that of a conventional surgical table yet shorter than that of a conventional interventional table. It was designed with the goal of exhaustively meeting the intensive requirements of both endovascular and surgical procedures. Its mobile module allows the floating tabletop to be moved quickly and precisely; it is important that during a procedure a patient can be moved without being repositioned, particularly with a catheter in situ. Second, a slim-profile headrest facilitates the mounting of a radiolucent head clamp system for cranial stabilization and fixation. We have introduced a novel invention, a mobile hybrid operating table for use in an operating suite.

  17. Disappearance of T Cell-Mediated Rejection Despite Continued Antibody-Mediated Rejection in Late Kidney Transplant Recipients

    PubMed Central

    Chang, Jessica; Famulski, Konrad; Hidalgo, Luis G.; Salazar, Israel D.R.; Merino Lopez, Maribel; Matas, Arthur; Picton, Michael; de Freitas, Declan; Bromberg, Jonathan; Serón, Daniel; Sellarés, Joana; Einecke, Gunilla; Reeve, Jeff

    2015-01-01

    The prevalent renal transplant population presents an opportunity to observe the adaptive changes in the alloimmune response over time, but such studies have been limited by uncertainties in the conventional biopsy diagnosis of T cell-mediated rejection (TCMR) and antibody-mediated rejection (ABMR). To circumvent these limitations, we used microarrays and conventional methods to investigate rejection in 703 unselected biopsies taken 3 days to 35 years post-transplant from North American and European centers. Using conventional methods, we diagnosed rejection in 205 biopsy specimens (28%): 67 pure TCMR, 110 pure ABMR, and 28 mixed (89 designated borderline). Using microarrays, we diagnosed rejection in 228 biopsy specimens (32%): 76 pure TCMR, 124 pure ABMR, and 28 mixed (no borderline). Molecular assessment confirmed most conventional diagnoses (agreement was 90% for TCMR and 83% for ABMR) but revealed some errors, particularly in mixed rejection, and improved prediction of failure. ABMR was strongly associated with increased graft loss, but TCMR was not. ABMR became common in biopsy specimens obtained >1 year post-transplant and continued to appear in all subsequent intervals. TCMR was common early but progressively disappeared over time. In 108 biopsy specimens obtained 10.2–35 years post-transplant, TCMR defined by molecular and conventional features was never observed. We conclude that the main cause of kidney transplant failure is ABMR, which can present even decades after transplantation. In contrast, TCMR disappears by 10 years post-transplant, implying that a state of partial adaptive tolerance emerges over time in the kidney transplant population. PMID:25377077

  18. Design, fabrication, and experimental characterization of plasmonic photoconductive terahertz emitters.

    PubMed

    Berry, Christopher; Hashemi, Mohammad Reza; Unlu, Mehmet; Jarrahi, Mona

    2013-07-08

    In this video article we present a detailed demonstration of a highly efficient method for generating terahertz waves. Our technique is based on photoconduction, which has been one of the most commonly used techniques for terahertz generation (1-8). Terahertz generation in a photoconductive emitter is achieved by pumping an ultrafast photoconductor with a pulsed or heterodyned laser illumination. The induced photocurrent, which follows the envelope of the pump laser, is routed to a terahertz radiating antenna connected to the photoconductor contact electrodes to generate terahertz radiation. Although the quantum efficiency of a photoconductive emitter can theoretically reach 100%, the relatively long transport path lengths of photo-generated carriers to the contact electrodes of conventional photoconductors have severely limited their quantum efficiency. Additionally, the carrier screening effect and thermal breakdown strictly limit the maximum output power of conventional photoconductive terahertz sources. To address the quantum efficiency limitations of conventional photoconductive terahertz emitters, we have developed a new photoconductive emitter concept which incorporates a plasmonic contact electrode configuration to offer high quantum-efficiency and ultrafast operation simultaneously. By using nano-scale plasmonic contact electrodes, we significantly reduce the average photo-generated carrier transport path to photoconductor contact electrodes compared to conventional photoconductors (9). Our method also allows increasing photoconductor active area without a considerable increase in the capacitive loading to the antenna, boosting the maximum terahertz radiation power by preventing the carrier screening effect and thermal breakdown at high optical pump powers. By incorporating plasmonic contact electrodes, we demonstrate enhancing the optical-to-terahertz power conversion efficiency of a conventional photoconductive terahertz emitter by a factor of 50 (10).

  19. New valve and bonding designs for microfluidic biochips containing proteins.

    PubMed

    Lu, Chunmeng; Xie, Yubing; Yang, Yong; Cheng, Mark M-C; Koh, Chee-Guan; Bai, Yunling; Lee, L James; Juang, Yi-Je

    2007-02-01

    Two major concerns in the design and fabrication of microfluidic biochips are protein binding on the channel surface and protein denaturing during device assembly. In this paper, we describe new methods to solve these problems. A "fishbone" microvalve design based on the concept of superhydrophobicity was developed to replace the capillary valve in applications where the chip surface requires protein blocking to prevent nonspecific binding. Our experimental results show that the valve functions well in a CD-like ELISA device. The packaging of biochips containing pre-loaded proteins is also a challenging task since conventional sealing methods often require the use of high temperatures, electric voltages, or organic solvents that are detrimental to the protein activity. Using CO2 gas to enhance the diffusion of polymer molecules near the device surface can result in good bonding at low temperatures and low pressure. This bonding method has little influence on the activity of the pre-loaded proteins after bonding.

  20. Data decomposition method for parallel polygon rasterization considering load balancing

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun

    2015-12-01

    It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
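
    A minimal sketch of complexity-weighted decomposition in the spirit of DMPC follows; the weights and the exact cost metric are assumptions for illustration, not the paper's values. Each polygon's cost combines its boundary vertex count with the pixel count of its rasterized minimum bounding rectangle, and polygons are dealt greedily, largest first, to the currently least-loaded process.

        import heapq

        def polygon_cost(n_vertices, mbr_pixels, w_edge=1.0, w_pixel=0.2):
            # Illustrative complexity metric (weights are assumptions)
            return w_edge * n_vertices + w_pixel * mbr_pixels

        def decompose(polygons, n_procs):
            # polygons: list of (polygon_id, n_vertices, mbr_pixels)
            heap = [(0.0, p) for p in range(n_procs)]    # (load, process id)
            heapq.heapify(heap)
            assignment = {p: [] for p in range(n_procs)}
            # deal largest costs first so big polygons cannot unbalance the tail
            for pid, nv, px in sorted(polygons,
                                      key=lambda t: -polygon_cost(t[1], t[2])):
                load, proc = heapq.heappop(heap)         # least-loaded process
                assignment[proc].append(pid)
                heapq.heappush(heap, (load + polygon_cost(nv, px), proc))
            return assignment

        polys = [("a", 1200, 9000), ("b", 40, 300), ("c", 800, 7000), ("d", 90, 500)]
        print(decompose(polys, 2))   # balanced loads: {0: ['a', 'd'], 1: ['c', 'b']}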

  1. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of psychophysical experiments based on the human visual system. A color-rendering model that combines tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method addresses the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields improved color-rendering performance compared to conventional methods.
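
    As a point of reference, a minimal global tone-mapping step in the Reinhard style is sketched below. This is not the proposed operator; it merely illustrates the common pattern of compressing luminance while preserving chromaticity ratios, which is one simple way to limit the color distortions the abstract mentions.

        import numpy as np

        def tonemap(hdr_rgb, a=0.18, eps=1e-6):
            # hdr_rgb: float array (H, W, 3) of linear radiance values
            L = (0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1]
                 + 0.0722 * hdr_rgb[..., 2])             # luminance
            L_avg = np.exp(np.mean(np.log(L + eps)))     # log-average luminance
            L_scaled = a * L / L_avg                     # key-value scaling
            L_disp = L_scaled / (1.0 + L_scaled)         # compress to [0, 1)
            ratio = L_disp / (L + eps)                   # preserve chromaticity
            return np.clip(hdr_rgb * ratio[..., None], 0.0, 1.0)

        hdr = np.exp(np.random.default_rng(1).normal(0.0, 2.0, size=(4, 4, 3)))
        print(tonemap(hdr).max())                        # display-range output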

  2. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
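
    A 1D sketch of the conventional KB gridding (spreading) step discussed above is given below; the kernel-parameter heuristic follows a common literature choice (Beatty-style), and the parameters here are assumptions rather than the paper's settings. A full pipeline would follow this with an FFT and deapodization by the kernel's transform.

        import numpy as np
        from scipy.special import i0

        def kb_kernel(u, width, beta):
            # Kaiser-Bessel interpolation kernel, nonzero for |u| <= width/2
            arg = 1.0 - (2.0 * u / width) ** 2
            return np.where(arg > 0,
                            i0(beta * np.sqrt(np.clip(arg, 0, None))), 0.0) / width

        def grid_1d(k, data, n_grid, width=4, osf=2.0):
            # Spread nonuniform samples (k scaled to [-0.5, 0.5)) onto an
            # oversampled Cartesian grid.
            n_os = int(n_grid * osf)
            beta = np.pi * np.sqrt((width / osf) ** 2 * (osf - 0.5) ** 2 - 0.8)
            grid = np.zeros(n_os, dtype=complex)
            coords = (k + 0.5) * n_os
            for c, d in zip(coords, data):
                base = int(np.ceil(c - width / 2.0))
                for j in range(base, base + width + 1):
                    grid[j % n_os] += kb_kernel(j - c, width, beta) * d
            return grid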

  3. Applying reliability analysis to design electric power systems for More-electric aircraft

    NASA Astrophysics Data System (ADS)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
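
    As an illustration of the path-set analysis mentioned above: for independent components, system reliability can be computed from minimal path sets by inclusion-exclusion. The sketch below uses a hypothetical two-feeder network; the path sets and component probabilities are assumptions, not the thesis's data.

        from itertools import combinations

        def system_reliability(path_sets, p):
            # R = P(at least one minimal path set has all components working),
            # computed by inclusion-exclusion; p maps component -> reliability
            # (components assumed independent).
            n = len(path_sets)
            total = 0.0
            for k in range(1, n + 1):
                for combo in combinations(path_sets, k):
                    union = set().union(*combo)
                    prob = 1.0
                    for c in union:
                        prob *= p[c]
                    total += (-1) ** (k + 1) * prob
            return total

        # Hypothetical source-to-load network with two redundant feeders A-B and C-D
        paths = [{"A", "B"}, {"C", "D"}]
        p = {"A": 0.99, "B": 0.98, "C": 0.99, "D": 0.97}
        print(system_reliability(paths, p))   # ~0.9988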

  4. New protocol for construction of eyeglasses-supported provisional nasal prosthesis using CAD/CAM techniques.

    PubMed

    Ciocca, Leonardo; Fantini, Massimiliano; De Crescenzio, Francesca; Persiani, Franco; Scotti, Roberto

    2010-01-01

    A new protocol for making an immediate provisional eyeglasses-supported nasal prosthesis is presented that uses laser scanning, computer-aided design/computer-aided manufacturing procedures, and rapid prototyping techniques, reducing time and costs while increasing the quality of the final product. With this protocol, the eyeglasses were digitized, and the relative position of the nasal prosthesis was planned and evaluated in a virtual environment without any try-in appointment. This innovative method saves time, reduces costs, and restores the patient's aesthetic appearance after disfigurement caused by ablation of the nasal pyramid better than conventional restoration methods. Moreover, the digital model of the designed nasal epithesis can be used to develop a definitive prosthesis anchored to osseointegrated craniofacial implants.

  5. A knowledge-based design framework for airplane conceptual and preliminary design

    NASA Astrophysics Data System (ADS)

    Anemaat, Wilhelmus A. J.

    The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure that can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e., the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This leads to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: with less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners, and UAVs to fighters. Data from the various sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise in using the tool to design a new airplane. Using these tools will show an improvement in efficiency over using separate programs, due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML will lead to quicker resolution of problems than conventional methods allow.

  6. Conventional vs  invert-grayscale X-ray for diagnosis of pneumothorax in the emergency setting.

    PubMed

    Musalar, Ekrem; Ekinci, Salih; Ünek, Orkun; Arş, Eda; Eren, Hakan Şevki; Gürses, Bengi; Aktaş, Can

    2017-09-01

    Pneumothorax is a pathologic condition in which air accumulates between the visceral and parietal pleura. After clinical suspicion, imaging is necessary in order to diagnose the severity of the condition. With the help of Picture Archiving and Communication Systems (PACS), direct conventional X-rays can be converted to inverted gray-scale images, and this has become a preferred method among many physicians. Our study was a case-control study with a cross-over design. Posterior-anterior chest X-rays were evaluated for pneumothorax by 10 expert physicians, each with at least 3 years of experience and experience using inverted gray-scale posterior-anterior chest X-rays for diagnosing pneumothorax. The study included posterior-anterior chest X-ray images of 268 patients, of whom 106 were diagnosed with spontaneous pneumothorax and 162 served as a control group. The sensitivity of digital-conventional X-rays was found to be higher than that of inverted gray-scale images (95% CI (2.08-5.04), p<0.01). There was no statistically significant difference between the gold standard and digital-conventional images (95% CI (0.45-2.17), p=0.20), while the evaluations of the gray-scale images were found to be less sensitive for diagnosis (95% CI (3.16-5.67), p<0.01). Inverted gray-scale imaging is not superior to digital-conventional X-ray for the diagnosis of pneumothorax. Prospective studies should be performed in which the diagnostic potency of inverted gray-scale radiograms is tested against the gold standard, chest CT. Further research should compare inverted gray-scale to lung ultrasound to assess them as alternatives prior to CT.

  7. Rapid high performance liquid chromatography method development with high prediction accuracy, using 5cm long narrow bore columns packed with sub-2microm particles and Design Space computer modeling.

    PubMed

    Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin

    2009-11-06

    Many different strategies for reversed-phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm x 2.1 mm columns packed with sub-2 µm particles and computer simulation (the DryLab® package). Data on the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported, and an acceptable accuracy for the predictions of the computer models is presented. This work illustrates a method development strategy that reduces development time by a factor of 3-5 compared with conventional HPLC method development, and it exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8R1. Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.
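
    Retention-modeling software of this kind is typically built on the linear solvent strength (LSS) relationship, log10 k = log10 kw - S * phi, where phi is the organic modifier fraction. A minimal sketch follows (this is not the DryLab implementation, and the scouting data are hypothetical): two scouting runs determine kw and S, after which retention at any other phi can be predicted.

        import numpy as np

        def fit_lss(phi1, k1, phi2, k2):
            # Fit log10 k = log10 kw - S * phi from two isocratic scouting runs
            S = (np.log10(k1) - np.log10(k2)) / (phi2 - phi1)
            log_kw = np.log10(k1) + S * phi1
            return log_kw, S

        def predict_k(log_kw, S, phi):
            # Predicted retention factor at organic fraction phi
            return 10 ** (log_kw - S * phi)

        # Hypothetical scouting runs: k = 12.0 at 40% organic, k = 3.0 at 55%
        log_kw, S = fit_lss(0.40, 12.0, 0.55, 3.0)
        print(predict_k(log_kw, S, 0.50))   # predicted k at 50% organic (~4.8)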

  8. Slump sitting X-ray of the lumbar spine is superior to the conventional flexion view in assessing lumbar spine instability.

    PubMed

    Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2017-03-01

    Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This is a prospective study on radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. The measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) using the new slump sitting method compared with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001, and p=.001, respectively). There was no significant difference between the two methods when measuring lumbar displacements (p=.814). The new slump sitting dynamic radiograph method was shown to be superior to the conventional method in measuring angular ROM and non-inferior to the conventional method in the measurement of displacement.

  9. Aerodynamic Validation of Emerging Projectile and Missile Configurations

    DTIC Science & Technology

    2010-12-01

    Predicting aerodynamic behavior is critical for the design of new projectile shapes. The conventional approach to predicting this behavior is through wind tunnel testing; computational fluid dynamics is a tool to study fluid flows and complements empirical methods and wind tunnel testing. In this study, the computer program ANSYS CFX was used to...

  10. Effectiveness of Mutual Learning Approach in the Academic Achievement of B.Ed Students in Learning Optional II English

    ERIC Educational Resources Information Center

    Arulselvi, Evangelin

    2013-01-01

    The present study aims at finding out the effectiveness of the Mutual Learning approach over the conventional method in learning English Optional II among B.Ed students. The randomized pre-test, post-test, control group and experimental group design was employed. The B.Ed students of the same college formed the control and experimental groups. Each…

  11. The Effect of Process Intervention on the Attitudes and Learning in a College Freshman Composition Class.

    ERIC Educational Resources Information Center

    Wahlberg, William Auman

    This study was designed to explore one method of intervening in the process of a conventional academic classroom to affect student attitude and improve the learning climate. Two college freshman composition classes of 22 students each provided the subjects for the study. Each class was taught by the same instructor for three hours a week; one…

  12. Conceptual design of fast-ignition laser fusion reactor FALCON-D

    NASA Astrophysics Data System (ADS)

    Goto, T.; Someya, Y.; Ogawa, Y.; Hiwatari, R.; Asaoka, Y.; Okano, K.; Sunahara, A.; Johzaki, T.

    2009-07-01

    A new conceptual design of the laser fusion power plant FALCON-D (Fast-ignition Advanced Laser fusion reactor CONcept with a Dry wall chamber) has been proposed. The fast-ignition method can achieve sufficient fusion gain for a commercial operation (~100) with about 10 times smaller fusion yield than the conventional central ignition method. FALCON-D makes full use of this property and aims at designing with a compact dry wall chamber (5-6 m radius). 1D/2D simulations by hydrodynamic codes showed a possibility of achieving sufficient gain with a laser energy of 400 kJ, i.e. a 40 MJ target yield. The design feasibility of the compact dry wall chamber and the solid breeder blanket system was shown through thermomechanical analysis of the dry wall and neutronics analysis of the blanket system. Moderate electric output (~400 MWe) can be achieved with a high repetition (30 Hz) laser. This dry wall reactor concept not only reduces several difficulties associated with a liquid wall system but also enables a simple cask maintenance method for the replacement of the blanket system, which can shorten the maintenance period. The basic idea of the maintenance method for the final optics system has also been proposed. Some critical R&D issues required for this design are also discussed.
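
    The quoted numbers are internally consistent, as a quick power-balance check shows (this neglects blanket energy multiplication and recirculating laser power, so the resulting figure is only an effective net conversion):

```python
# Back-of-envelope check of the FALCON-D figures: 40 MJ yield at 30 Hz.
target_yield_MJ = 40.0
rep_rate_Hz = 30.0
electric_output_MW = 400.0     # quoted design value

fusion_power_MW = target_yield_MJ * rep_rate_Hz     # 1200 MW of fusion power
net_efficiency = electric_output_MW / fusion_power_MW
print(f"{fusion_power_MW:.0f} MW fusion -> {electric_output_MW:.0f} MWe "
      f"(effective net conversion ~{net_efficiency:.0%})")
```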

  13. Development and clinical validation of a multiplex real-time PCR assay for herpes simplex and varicella zoster virus.

    PubMed

    Tan, Thean Yen; Zou, Hao; Ong, Danny Chee Tiong; Ker, Khor Jia; Chio, Martin Tze Wei; Teo, Rachael Yu Lin; Koh, Mark Jean Aan

    2013-12-01

    Herpes simplex virus (HSV) and varicella zoster virus (VZV) are related members of the Herpesviridae family and are well-documented human pathogens causing a spectrum of diseases, from mucocutaneous disease to infections of the central nervous system. This study was carried out to evaluate and validate the performance of a multiplex real-time polymerase chain reaction (PCR) assay in detecting and differentiating HSV1, HSV2, and VZV from clinical samples. Consensus PCR primers for HSV were designed from the UL30 component of the DNA polymerase gene of HSV, with 2 separate hydrolysis probes designed to differentiate HSV1 and HSV2. Separate primers and a probe were also designed against the DNA polymerase gene of VZV. A total of 104 clinical samples were available for testing by real-time PCR, conventional PCR, and viral culture. The sensitivity and specificity of the real-time assay were calculated by comparing the multiplex PCR result with that of a combined standard of virus culture and conventional PCR. The sensitivity of the real-time assay was 100%, with specificity ranging from 98% to 100% depending on the target gene. Both PCR methods detected more positive samples for HSV or VZV than conventional virus culture. This multiplex PCR assay provides accurate and rapid diagnostic capabilities for the diagnosis and differentiation of HSV1, HSV2, and VZV infections, with an internal control to monitor for inhibition of the PCR.
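
    Sensitivity and specificity against a combined reference standard reduce to simple 2x2 counts. A sketch with hypothetical sample counts chosen to mimic the reported 100% sensitivity and 98-100% specificity:

```python
import numpy as np

def sens_spec(test_pos, truth_pos):
    """Sensitivity and specificity of a binary assay vs. a gold standard."""
    test_pos, truth_pos = np.asarray(test_pos, bool), np.asarray(truth_pos, bool)
    tp = np.sum(test_pos & truth_pos)    # true positives
    fn = np.sum(~test_pos & truth_pos)   # false negatives
    tn = np.sum(~test_pos & ~truth_pos)  # true negatives
    fp = np.sum(test_pos & ~truth_pos)   # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical: 104 samples, 40 positive by the combined culture/PCR standard
truth = np.r_[np.ones(40, dtype=bool), np.zeros(64, dtype=bool)]
assay = truth.copy()
assay[-1] = True                     # one false positive
sens, spec = sens_spec(assay, truth)
print(f"sensitivity {sens:.0%}, specificity {spec:.1%}")
```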

  14. A Single-Vector Force Calibration Method Featuring the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Morton, M.; Draper, N.; Line, W.

    2001-01-01

    This paper proposes a new concept in force balance calibration. An overview of the state-of-the-art in force balance calibration is provided with emphasis on both the load application system and the experimental design philosophy. Limitations of current systems are detailed in the areas of data quality and productivity. A unique calibration loading system integrated with formal experimental design techniques has been developed and designated as the Single-Vector Balance Calibration System (SVS). This new concept addresses the limitations of current systems. The development of a quadratic and cubic calibration design is presented. Results from experimental testing are compared and contrasted with conventional calibration systems. Analyses of data are provided that demonstrate the feasibility of this concept and provide new insights into balance calibration.

  15. A distributed finite-element modeling and control approach for large flexible structures

    NASA Technical Reports Server (NTRS)

    Young, K. D.

    1989-01-01

    An unconventional framework is described for the design of decentralized controllers for large flexible structures. In contrast to conventional control system design practice, which begins with a model of the open-loop plant, the controlled plant is assembled from controlled components in which the modeling phase and the control design phase are integrated at the component level. The developed framework is called controlled component synthesis (CCS) to reflect that it is motivated by the well-developed Component Mode Synthesis (CMS) methods, which have been demonstrated to be effective for solving large, complex structural analysis problems for almost three decades. The design philosophy behind CCS is also closely related to that of the subsystem decomposition approach in decentralized control.

  16. Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation

    NASA Astrophysics Data System (ADS)

    Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong

    2017-05-01

    Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually upsets such nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.

  17. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  18. Protocol for Determining Ultraviolet Light Emitting Diode (UV-LED) Fluence for Microbial Inactivation Studies.

    PubMed

    Kheyrandish, Ataollah; Mohseni, Madjid; Taghipour, Fariborz

    2018-06-15

    Determining fluence is essential to derive the inactivation kinetics of microorganisms and to design ultraviolet (UV) reactors for water disinfection. UV light emitting diodes (UV-LEDs) are emerging UV sources with various advantages compared to conventional UV lamps. Unlike for conventional mercury lamps, however, no standard method is available to determine the average fluence of UV-LEDs, and the conventional methods used to determine the fluence of UV mercury lamps are not applicable to UV-LEDs due to their relatively low power output, polychromatic wavelength, and specific radiation profile. In this study, a method was developed to determine the average fluence inside a water suspension in a UV-LED experimental setup. In this method, the average fluence is estimated by measuring the irradiance at a few points for a collimated and uniform radiation on a Petri dish surface. New correction parameters were defined and proposed, and several of the existing parameters for determining the fluence of the UV mercury lamp apparatus were revised to measure and quantify the collimation and uniformity of the radiation. To study the effect of the polychromatic output and radiation profile of UV-LEDs, two UV-LEDs with peak wavelengths of 262 and 275 nm and different radiation profiles were selected as representatives of typical UV-LEDs applied to microbial inactivation. The proper setup configuration for microorganism inactivation studies was also determined based on the defined correction factors.
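
    The paper's LED-specific correction parameters are not reproduced here, but the general structure of collimated-beam fluence determination (a measured irradiance map giving a uniformity "Petri factor", plus multiplicative corrections) can be sketched as follows; all numbers and factor values are hypothetical:

```python
import numpy as np

def average_fluence(irradiance_grid, center_irradiance, exposure_s,
                    water_factor=0.98, reflection_factor=0.975):
    """Average fluence (mJ/cm^2) delivered to a stirred suspension.

    irradiance_grid : irradiance map measured across the dish (mW/cm^2)
    center_irradiance : irradiance at the dish center (mW/cm^2)
    The water and reflection factors correct for absorption through the
    water depth and reflection at the air-water interface, respectively.
    """
    petri_factor = irradiance_grid.mean() / center_irradiance   # uniformity
    avg_irradiance = (center_irradiance * petri_factor
                      * reflection_factor * water_factor)       # mW/cm^2
    return avg_irradiance * exposure_s                          # mJ/cm^2

grid = np.full((5, 5), 0.95)   # hypothetical 5x5 irradiance map
grid[2, 2] = 1.00
print(f"{average_fluence(grid, 1.00, exposure_s=60.0):.1f} mJ/cm^2")
```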

  19. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, L; Department of Industrial Engineering, University of Houston, Houston, TX; Yu, J

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different phases of CT from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with the conventional non-robust plans. The resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and the ΔD95%, ΔD98%, ΔD5%, and ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to the IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV were 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.
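
    The worst-case dose distribution idea can be shown in miniature: optimize beamlet weights against the per-voxel worst dose across all breathing phases, so no single phase drives the plan off prescription. Everything below (matrix sizes, dose-influence values) is a toy stand-in for clinical data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_vox, n_beam, n_phase = 30, 12, 4
# One dose-influence matrix per breathing phase (hypothetical values)
D_phases = [rng.uniform(0.5, 1.5, (n_vox, n_beam)) for _ in range(n_phase)]
d_presc = np.ones(n_vox)                   # prescribed dose per voxel

def worst_case_cost(w):
    doses = np.stack([D @ w for D in D_phases])           # phases x voxels
    over = np.maximum(doses.max(axis=0) - d_presc, 0.0)   # worst overdose
    under = np.maximum(d_presc - doses.min(axis=0), 0.0)  # worst underdose
    return np.sum(over**2 + under**2)

w0 = np.full(n_beam, 1.0 / n_beam)
res = minimize(worst_case_cost, w0, bounds=[(0.0, None)] * n_beam)
print(f"residual worst-case penalty: {res.fun:.4f}")
```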

  20. Revolutionary Concepts for Helicopter Noise Reduction: SILENT Program

    NASA Technical Reports Server (NTRS)

    Edwards, Bryan; Cox, Charles; Booth, Earl R., Jr. (Technical Monitor)

    2002-01-01

    As part of a NASA initiative to reduce helicopter main rotor noise, a Phase 1 study has been performed of candidate noise reduction concepts. Both conventional and novel design technologies have been analyzed that reduce the community impact of helicopter operations. In this study the noise reduction potential and design implications are assessed for conventional means of noise reduction, e.g., tip speed reduction, tip shapes and airfoil tailoring, and for two innovative design concepts: modulated blade spacing and x-force control. Main rotor designs that incorporate modulated blade spacing are shown to have reduced peak noise levels in most flight operations. X-force control alters the helicopter's force balance whereby the miss distance between main rotor blades and shed vortices can be controlled. This control provides a high potential to mitigate BVI noise radiation. Each concept is evaluated using best practice design and analysis methods, achieving the study's aim to significantly reduce noise with minimal performance degradation and no vibration increase. It is concluded that a SILENT main rotor design, incorporating the modulated blade spacing concept, offers significantly reduced noise levels and the potential of a breakthrough in how a helicopter's sound is perceived and judged. The SILENT rotor represents a definite advancement in the state-of-the-art and is selected as the design concept for demonstration in Phase 2. A Phase 2 Implementation Plan is developed for whirl cage and wind tunnel evaluations of a scaled model SILENT rotor.

  1. An improved predictive functional control method with application to PMSM systems

    NASA Astrophysics Data System (ADS)

    Li, Shihua; Liu, Huixian; Fu, Wenshu

    2017-01-01

    In the common design of prediction model-based control methods, disturbances are usually considered in neither the prediction model nor the control design. For control systems with large-amplitude or strong disturbances, it is difficult to precisely predict the future outputs from the conventional prediction model, and thus the desired optimal closed-loop performance is degraded to some extent. To this end, an improved predictive functional control (PFC) method is developed in this paper by embedding disturbance information into the system model. A composite prediction model is obtained by embedding the estimated value of the disturbances, where a disturbance observer (DOB) is employed to estimate the lumped disturbances. The influence of disturbances on the system is thus taken into account in the optimisation procedure. Finally, considering the speed control problem for a permanent magnet synchronous motor (PMSM) servo system, a control scheme based on the improved PFC method is designed to ensure optimal closed-loop performance even in the presence of disturbances. Simulation and experimental results based on a hardware platform are provided to confirm the effectiveness of the proposed algorithm.
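
    The core idea, embedding a disturbance observer (DOB) estimate in the prediction so the controller compensates for the lumped disturbance, can be reduced to a one-step scalar example. The plant coefficients a, b and observer gain L below are hypothetical:

```python
# First-order plant x[k+1] = a*x[k] + b*u[k] + d[k] with lumped disturbance d.
a, b, L = 0.95, 0.4, 0.3   # plant coefficients and DOB gain (hypothetical)

def dob_update(d_hat, x_next, x, u):
    """Disturbance observer: correct d_hat by the prediction innovation."""
    return d_hat + L * (x_next - (a * x + b * u + d_hat))

def pfc_control(x, d_hat, x_ref):
    """Pick u so the *composite* prediction a*x + b*u + d_hat hits x_ref."""
    return (x_ref - a * x - d_hat) / b

x, d_hat = 0.0, 0.0
for k in range(60):
    u = pfc_control(x, d_hat, x_ref=1.0)
    d = 0.5 if k > 20 else 0.0    # step disturbance at k = 21
    x_next = a * x + b * u + d    # true plant response
    d_hat = dob_update(d_hat, x_next, x, u)
    x = x_next
print(f"x = {x:.3f}, d_hat = {d_hat:.3f}")   # x -> 1.0, d_hat -> 0.5
```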

  2. A case study on topology optimized design for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Gebisa, A. W.; Lemu, H. G.

    2017-12-01

    Topology optimization is an optimization method that employs mathematical tools to optimize the material distribution in a part to be designed. Earlier developments of topology optimization assumed conventional manufacturing techniques, which have limitations in producing complex geometries; this has kept topology optimization from being fully realized. With the emergence of additive manufacturing (AM) technologies, which build a part layer upon layer directly from three-dimensional (3D) model data, producing complex geometry is no longer an issue. Realization of topology optimization through AM provides full design freedom for design engineers. The article focuses on a topologically optimized design approach for additive manufacturing, with a case study on the lightweight design of a jet engine bracket. The study results show that topology optimization is a powerful design technique for reducing the weight of a product while maintaining the design requirements, if additive manufacturing is considered.

  3. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
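
    Under a Gaussian approximation, all three information metrics named in the abstract can be computed directly from prior and posterior ensembles. The formulas below are the standard Gaussian expressions (the DFS line uses one common definition), and the ensembles are synthetic:

```python
import numpy as np

def design_metrics(prior_ens, post_ens):
    """SD, DFS and RE between ensembles (rows = parameters, cols = members)."""
    C1, C2 = np.cov(prior_ens), np.cov(post_ens)
    m1, m2 = prior_ens.mean(axis=1), post_ens.mean(axis=1)
    n = C1.shape[0]
    C1_inv = np.linalg.inv(C1)
    log_det_ratio = np.log(np.linalg.det(C1) / np.linalg.det(C2))
    sd = 0.5 * log_det_ratio                 # Shannon entropy difference
    dfs = n - np.trace(C2 @ C1_inv)          # degrees of freedom for signal
    dm = m2 - m1                             # mean shift
    re = 0.5 * (log_det_ratio + np.trace(C2 @ C1_inv)
                + dm @ C1_inv @ dm - n)      # relative entropy (Gaussian KL)
    return sd, dfs, re

rng = np.random.default_rng(2)
prior = rng.normal(size=(3, 200))                 # broad prior ensemble
post = 0.5 * rng.normal(size=(3, 200)) + 0.2      # tighter, shifted posterior
print(design_metrics(prior, post))
```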

  4. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
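
    "Matrix-free" here means the optimizer only ever needs operator-vector products, never an assembled Jacobian or Hessian. The thesis's augmented Lagrangian machinery is not reproduced below; the sketch just shows the enabling primitive, a SciPy LinearOperator whose action is computed on the fly (the tridiagonal operator is a hypothetical stand-in for an adjoint-supplied product):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000

def matvec(v):
    """Apply a 1-D Laplacian without ever forming the n x n matrix."""
    av = 2.0 * v
    av[:-1] -= v[1:]
    av[1:] -= v[:-1]
    return av

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = cg(A, b)                 # Krylov solve using only matvec calls
print(info, np.linalg.norm(matvec(x) - b))   # info == 0 means converged
```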

  6. Multimodal system designed to reduce errors in recording and administration of drugs in anaesthesia: prospective randomised clinical evaluation.

    PubMed

    Merry, Alan F; Webster, Craig S; Hannam, Jacqueline; Mitchell, Simon J; Henderson, Robert; Reid, Papaarangi; Edwards, Kylie-Ellen; Jardim, Anisoara; Pak, Nick; Cooper, Jeremy; Hopley, Lara; Frampton, Chris; Short, Timothy G

    2011-09-22

    Objective: To clinically evaluate a new patented multimodal system (SAFERSleep) designed to reduce errors in the recording and administration of drugs in anaesthesia. Design: Prospective randomised open label clinical trial. Setting: Five designated operating theatres in a major tertiary referral hospital. Participants: Eighty-nine consenting anaesthetists managing 1075 cases in which there were 10,764 drug administrations. Intervention: Use of the new system (which includes customised drug trays and purpose designed drug trolley drawers to promote a well organised anaesthetic workspace and aseptic technique; pre-filled syringes for commonly used anaesthetic drugs; large legible colour coded drug labels; a barcode reader linked to a computer, speakers, and touch screen to provide automatic auditory and visual verification of selected drugs immediately before each administration; automatic compilation of an anaesthetic record; an on-screen and audible warning if an antibiotic has not been administered within 15 minutes of the start of anaesthesia; and certain procedural rules, notably scanning the label before each drug administration) versus conventional practice in drug administration with a manually compiled anaesthetic record. Main outcome measures: Primary: composite of errors in the recording and administration of intravenous drugs detected by direct observation and by detailed reconciliation of the contents of used drug vials against recorded administrations; and lapses in responding to an intermittent visual stimulus (vigilance latency task). Secondary: outcomes in patients; analyses of anaesthetists' tasks and assessments of workload; evaluation of the legibility of anaesthetic records; evaluation of compliance with the procedural rules of the new system; and questionnaire based ratings of the respective systems by participants. Results: The overall mean rate of drug errors per 100 administrations was 9.1 (95% confidence interval 6.9 to 11.4) with the new system (one in 11 administrations) and 11.6 (9.3 to 13.9) with conventional methods (one in nine administrations) (P = 0.045 for difference). Most were recording errors, and, though fewer drug administration errors occurred with the new system, the comparison with conventional methods did not reach significance. Rates of errors in drug administration were lower when anaesthetists consistently applied two key principles of the new system (scanning the drug barcode before administering each drug and keeping the voice prompt active) than when they did not: mean 6.0 (3.1 to 8.8) errors per 100 administrations v 9.7 (8.4 to 11.1) respectively (P = 0.004). Lapses in the vigilance latency task occurred in 12% (58/471) of cases with the new system and 9% (40/473) with conventional methods (P = 0.052). The records generated by the new system were more legible, and anaesthetists preferred the new system, particularly in relation to long, complex, and emergency cases. There were no differences between new and conventional systems in respect of outcomes in patients or anaesthetists' workload. Conclusions: The new system was associated with a reduction in errors in the recording and administration of drugs in anaesthesia, attributable mainly to a reduction in recording errors. Automatic compilation of the anaesthetic record increased legibility but also increased lapses in a vigilance latency task and decreased time spent watching monitors. Trial registration: Australian New Zealand Clinical Trials Registry No 12608000068369.

  7. Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting

    NASA Astrophysics Data System (ADS)

    Nanzad, Bolorchimeg

    This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas the logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality-of-fit correspondence with historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping", wherein the overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as the conventional multicyclic Hubbert model, and every two cycles are connected by a transition, described as a weighted coaddition of the two neighboring cycles and determined by three parameters: the transition year, the transition width, and a gamma parameter for weighting. The cycle-jumping method provides a superior model compared with the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically, by explicitly considering the form of the transitions between production cycles and thus better modeling the effects of technological transitions and socioeconomic factors that affect historical resource production.
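
    A single Hubbert cycle is the derivative of a logistic in cumulative production, so the conventional fit in Part I reduces to three parameters: the ultimately recoverable resource (URR), a steepness, and a peak year. A curve-fit sketch on synthetic production data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, urr, k, tm):
    """Annual production for one Hubbert cycle (logistic derivative)."""
    e = np.exp(-k * (t - tm))
    return urr * k * e / (1.0 + e) ** 2

t = np.arange(1900, 2011)
rng = np.random.default_rng(3)
true = hubbert(t, urr=5000.0, k=0.08, tm=1990.0)      # synthetic "history"
obs = true * (1.0 + 0.05 * rng.normal(size=t.size))   # 5% multiplicative noise

popt, _ = curve_fit(hubbert, t, obs, p0=(4000.0, 0.05, 1985.0))
print("URR, k, peak year:", np.round(popt, 2))
```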

  8. Heat sink structural design concepts for a hypersonic research airplane

    NASA Technical Reports Server (NTRS)

    Taylor, A. H.; Jackson, L. R.

    1977-01-01

    Hypersonic research aircraft design requires careful consideration of thermal stresses. This paper relates some of the problems in a heat sink structural design that can be avoided by appropriate selection of design options including material selection, design concepts, and load paths. Data on several thermal loading conditions are presented on various conventional designs including bulkheads, longerons, fittings, and frames. Results indicate that conventional designs are inadequate and that acceptable designs are possible by incorporating innovative design practices. These include nonintegral pressure compartments, ball-jointed links to distribute applied loads without restraining the thermal expansion, and material selections based on thermal compatibility.

  9. Applications of Evolutionary Technology to Manufacturing and Logistics Systems : State-of-the Art Survey

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin

    Many combinatorial optimization problems from industrial engineering and operations research in the real world are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state-of-the-art in the use of EAs in manufacturing and logistics systems. To demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
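
    As a concrete instance of the EAs surveyed, here is a minimal genetic algorithm (roulette selection, one-point crossover, bit-flip mutation) on a toy 0/1 knapsack, a stand-in for the combinatorial problems listed above; all data are random:

```python
import numpy as np

rng = np.random.default_rng(4)
values = rng.uniform(1.0, 10.0, 30)      # item values (hypothetical)
weights = rng.uniform(1.0, 10.0, 30)     # item weights (hypothetical)
cap = 60.0                               # knapsack capacity

def fitness(pop):
    v, w = pop @ values, pop @ weights
    return np.where(w <= cap, v, 0.0)    # infeasible members score zero

pop = (rng.random((100, 30)) < 0.3).astype(int)   # sparse initial population
for _ in range(200):
    f = fitness(pop)
    parents = pop[rng.choice(100, size=200, p=f / f.sum())]  # roulette wheel
    cut = int(rng.integers(1, 30))                           # one-point crossover
    children = np.hstack([parents[:100, :cut], parents[100:, cut:]])
    flip = rng.random(children.shape) < 0.01                 # bit-flip mutation
    pop = np.where(flip, 1 - children, children)
print("best value found:", fitness(pop).max().round(2))
```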

  10. Control of Systems With Slow Actuators Using Time Scale Separation

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vehram; Nguyen, Nhan

    2009-01-01

    This paper addresses the problem of controlling a nonlinear plant with a slow actuator using the singular perturbation method. For the known plant-actuator cascaded system, the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result from conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system, the adaptive counterpart is developed based on a prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.

  11. Design of Current Leads for the MICE Coupling Magnet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Li, L.K.; Wu, Hong

    2008-04-02

    A pair of superconducting coupling magnets will be part of the Muon Ionization Cooling Experiment (MICE). They were designed and will be constructed by the Institute of Cryogenics and Superconductivity Technology, Harbin Institute of Technology, in collaboration with Lawrence Berkeley National Laboratory. The coupling magnet is to be cooled using cryocoolers at 4.2 K. In order to reduce the heat leak to the 4.2 K cold mass from 300 K, a pair of current leads composed of conventional copper leads and high temperature superconductor (HTS) leads will be used to supply current to the magnet. This paper presents the optimization of the conventional conduction-cooled metal leads for the coupling magnet. Analyses of heat transfer down the leads were carried out using theoretical methods and numerical simulation. The stray magnetic field around the HTS leads has been calculated, and the effects of the magnetic field on the performance of the HTS leads have also been analyzed.

  12. A comparative study on the experimentally derived electron densities of three protease inhibitor model compounds.

    PubMed

    Grabowsky, Simon; Pfeuffer, Thomas; Morgenroth, Wolfgang; Paulmann, Carsten; Schirmeister, Tanja; Luger, Peter

    2008-07-07

    In order to contribute to the rational design of optimised protease inhibitors which can covalently block the nucleophilic amino acids of the proteases' active sites, we have chosen three model compounds (aziridine, oxirane and an acceptor-substituted olefin) for the examination of their electron-density distribution. Therefore, high-resolution low temperature (9, 27 and 100 K) X-ray diffraction experiments on single crystals were carried out with synchrotron and conventional X-radiation. It could be shown by the analysis of the electron density, using mainly Bader's Theory of Atoms in Molecules, Volkov's EPMM method for interaction energies, electrostatic potentials and Gatti's Source Function, that aziridine is most suitable for drug design in this field. A regioselective nucleophilic attack at carbon atom C1 could be predicted, and even hints about the reaction's stereoselectivity could be obtained. Moreover, the comparison between two data sets of aziridine (conventional X-ray source vs. synchrotron radiation) gave an estimate concerning the reproducibility of the quantitative results.

  13. Comparison of conventional vs. modular hydrogen refueling stations and on-site production vs. delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hecht, Ethan S.; Pratt, Joseph William

    To meet the needs of public and private stakeholders involved in the development, construction, and operation of hydrogen fueling stations needed to support the widespread roll-out of hydrogen fuel cell electric vehicles, this work presents publicly available station templates and analyses. These 'Reference Stations' help reduce the cost and speed the deployment of hydrogen stations by providing a common baseline with which to start a design, enable quick assessment of potential sites for a hydrogen station, identify contributors to poor economics, and suggest areas of research. This work presents layouts, bills of materials, piping and instrumentation diagrams, and detailed analyses of five new station designs. In the near term, delivered hydrogen results in a lower cost of hydrogen compared to on-site production via steam methane reforming or electrolysis, although the on-site production methods have other advantages. Modular station concepts including on-site production can reduce lot sizes from conventional assemble-on-site stations.

  14. Developing geogebra-assisted reciprocal teaching strategy to improve junior high school students’ abstraction ability, lateral thinking and mathematical persistence

    NASA Astrophysics Data System (ADS)

    Priatna, N.; Martadiputra, B. A. P.; Wibisono, Y.

    2018-05-01

    The development of science and technology requires reform in the utilization of various resources for the mathematics teaching and learning process. One effort that can be made is the implementation of a GeoGebra-assisted Reciprocal Teaching strategy in mathematics instruction as an effective strategy for improving students' cognitive, affective, and psychomotor abilities. This research implemented the GeoGebra-assisted Reciprocal Teaching strategy to improve the abstraction ability, lateral thinking, and mathematical persistence of junior high school students. It employed a quasi-experimental method with a non-random pre-test and post-test control design. More specifically, it used a 2x3 factorial design, with learning method (GeoGebra-assisted Reciprocal Teaching versus conventional learning) and level of early mathematical ability (high, middle, and low) as factors. The subjects were eighth-grade junior high school students, selected by purposive sampling. The results show that the abstraction and lateral thinking abilities of students taught with the GeoGebra-assisted Reciprocal Teaching strategy were significantly higher than those of students who received conventional learning. The mathematical persistence of students taught with the GeoGebra-assisted Reciprocal Teaching strategy was also significantly higher than that of students taught with conventional learning.

  15. Flow Characteristics and Robustness of an Inclined Quad-vortex Range Hood

    PubMed Central

    CHEN, Jia-Kun; HUANG, Rong Fung

    2014-01-01

    A novel design of range hood, which was termed the inclined quad-vortex (IQV) range hood, was examined for its flow and containment leakage characteristics under the influence of a plate sweeping across the hood face. A flow visualization technique was used to unveil the flow behavior. Three characteristic flow modes were observed: convex, straight, and concave modes. A tracer gas detection method using sulfur hexafluoride (SF6) was employed to measure the containment leakage levels. The results were compared with the test data reported previously in the literature for a conventional range hood and an inclined air curtain (IAC) range hood. The leakage SF6 concentration of the IQV range hood under the influence of the plate sweeping was 0.039 ppm at a suction flow rate of 9.4 m3/min. The leakage concentration of the conventional range hood was 0.768 ppm at a suction flow rate of 15.0 m3/min. For the IAC range hood, the leakage concentration was 0.326 ppm at a suction flow rate of 10.9 m3/min. The IQV range hood presented a significantly lower leakage level at a smaller suction flow rate than the conventional and IAC range hoods due to its aerodynamic design for flow behavior. PMID:24583513

  16. Arterial spin labeled perfusion imaging using three-dimensional turbo spin echo with a distributed spiral-in/out trajectory.

    PubMed

    Li, Zhiqiang; Schär, Michael; Wang, Dinghui; Zwart, Nicholas R; Madhuranthakam, Ananth J; Karis, John P; Pipe, James G

    2016-01-01

    The three-dimensional (3D) spiral turbo spin echo (TSE) sequence is one of the preferred readout methods for arterial spin labeled (ASL) perfusion imaging. Conventional spiral TSE collects the data using a spiral-out readout on a stack of spirals trajectory. However, it may result in suboptimal image quality and is not flexible in protocol design. The goal of this study is to provide a more robust readout technique without such limitation. The proposed technique incorporates a spiral-in/out readout into 3D TSE, and collects the data on a distributed spirals trajectory. The data set is split into the spiral-in and -out subsets that are reconstructed separately and combined after image deblurring. The volunteer results acquired with the proposed technique show no geometric distortion or signal pileup, as is present with GRASE, and no signal loss, as is seen with conventional spiral TSE. Examples also demonstrate the flexibility in changing the imaging parameters to satisfy various criteria. The 3D TSE with a distributed spiral-in/out trajectory provides a robust readout technique and allows for easy protocol design, thus is a promising alternative to GRASE or conventional spiral TSE for ASL perfusion imaging. © 2015 Wiley Periodicals, Inc.

  17. DESIGN REPORT: LOW-NOX BURNERS FOR PACKAGE BOILERS

    EPA Science Inventory

    The report describes a low-NOx burner design, presented for residual-oil-fired industrial boilers and boilers cofiring conventional fuels and nitrated hazardous wastes. The burner offers lower NOx emission levels for these applications than conventional commercial burners. The bu...

  18. Performance assessment of conventional and base-isolated nuclear power plants for earthquake and blast loadings

    NASA Astrophysics Data System (ADS)

    Huang, Yin-Nan

    Nuclear power plants (NPPs) and spent nuclear fuel (SNF) are required by code and regulations to be designed for a family of extreme events, including very rare earthquake shaking, loss of coolant accidents, and tornado-borne missile impacts. Blast loading due to malevolent attack became a design consideration for NPPs and SNF after the terrorist attacks of September 11, 2001. The studies presented in this dissertation assess the performance of sample conventional and base isolated NPP reactor buildings subjected to seismic effects and blast loadings. The response of the sample reactor building to tornado-borne missile impacts and internal events (e.g., loss of coolant accidents) will not change if the building is base isolated and so these hazards were not considered. The sample NPP reactor building studied in this dissertation is composed of containment and internal structures with a total weight of approximately 75,000 tons. Four configurations of the reactor building are studied, including one conventional fixed-base reactor building and three base-isolated reactor buildings using Friction Pendulum(TM), lead rubber and low damping rubber bearings. The seismic assessment of the sample reactor building is performed using a new procedure proposed in this dissertation that builds on the methodology presented in the draft ATC-58 Guidelines and the widely used Zion method, which uses fragility curves defined in terms of ground-motion parameters for NPP seismic probabilistic risk assessment. The new procedure improves the Zion method by using fragility curves that are defined in terms of structural response parameters since damage and failure of NPP components are more closely tied to structural response parameters than to ground motion parameters. Alternate ground motion scaling methods are studied to help establish an optimal procedure for scaling ground motions for the purpose of seismic performance assessment. The proposed performance assessment procedure is used to evaluate the vulnerability of the conventional and base-isolated NPP reactor buildings. The seismic performance assessment confirms the utility of seismic isolation at reducing spectral demands on secondary systems. Procedures to reduce the construction cost of secondary systems in isolated reactor buildings are presented. A blast assessment of the sample reactor building is performed for an assumed threat of 2000 kg of TNT explosive detonated on the surface with a closest distance to the reactor building of 10 m. The air and ground shock waves produced by the design threat are generated and used for performance assessment. The air blast loading to the sample reactor building is computed using a Computational Fluid Dynamics code Air3D and the ground shock time series is generated using an attenuation model for soil/rock response. Response-history analysis of the sample conventional and base isolated reactor buildings to external blast loadings is performed using the hydrocode LS-DYNA. The spectral demands on the secondary systems in the isolated reactor building due to air blast loading are greater than those for the conventional reactor building but much smaller than those spectral demands associated with Safe Shutdown Earthquake shaking. The isolators are extremely effective at filtering out high acceleration, high frequency ground shock loading.

  19. Imaging of conductivity distributions using audio-frequency electromagnetic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ki Ha; Morrison, H.F.

    1990-10-01

    The objective of this study has been to develop mathematical methods for mapping conductivity distributions between boreholes using low frequency electromagnetic (em) data. In relation to this objective this paper presents two recent developments in high-resolution crosshole em imaging techniques. These are (1) audio-frequency diffusion tomography, and (2) a transform method in which low frequency data is first transformed into a wave-like field. The idea in the second approach is that we can then treat the transformed field using conventional techniques designed for wave field analysis.

  20. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 3: Systems' manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1989-01-01

    The internal structure of the MHOST finite element program, designed for 3-D inelastic analysis of gas turbine hot section components, is discussed. The computer code is the first implementation of the mixed iterative solution strategy for improved efficiency and accuracy over the conventional finite element method. The control structure of the program is covered, along with the data storage scheme, the memory allocation procedure, and the file handling facilities, including the read and/or write sequences.

  1. A new modelling approach for zooplankton behaviour

    NASA Astrophysics Data System (ADS)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of Beer [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models—random walk and correlated walk models—as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
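
    The adaptive behaviour network itself is not specified in the abstract, but the two null models it is compared against are standard. A sketch of the correlated random walk (kappa = 0 recovers the plain random walk), with a simple straightness index as one behavioural summary; step length and concentration are hypothetical:

```python
import numpy as np

def correlated_walk(n_steps, step_len=1.0, kappa=4.0, seed=0):
    """2-D correlated random walk: turning angles are von Mises distributed
    about the previous heading; kappa = 0 gives an uncorrelated random walk."""
    rng = np.random.default_rng(seed)
    headings = np.cumsum(rng.vonmises(0.0, kappa, n_steps))
    steps = step_len * np.c_[np.cos(headings), np.sin(headings)]
    return np.cumsum(steps, axis=0)

track = correlated_walk(500)
straightness = np.linalg.norm(track[-1]) / 500.0   # net / gross displacement
print(f"straightness index: {straightness:.3f}")
```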

  2. Ptosis assessment spectacles: a new method of measuring lid position and movement in children.

    PubMed

    Khandwala, Mona; Dey, Sarju; Harcourt, Cassie; Wood, Clive; Jones, Carole A

    2011-01-01

    Accurate assessment of eyelid position and movement is vital in planning the surgical correction of ptosis. Conventional measurements taken using a millimeter ruler are considered the gold standard, although in young children this can be a difficult procedure. The authors have designed ptosis assessment spectacles with a measuring millimeter scale marked on the center of the lens to facilitate accurate assessment of eyelid position and function in children. The purpose of the study was to assess the accuracy and reproducibility of eyelid measurement using these ptosis assessment spectacles. Fifty-two children aged 2-12 years were recruited in this study. Each child underwent 2 sets of measurements. The first was undertaken by an ophthalmologist in the conventional manner using a ruler, and the second set made with ptosis assessment spectacles. On each occasion the palpebral aperture, skin crease, and levator function were recorded in millimeters. A verbal analog scale was used to assess parent satisfaction with each method. Clinically acceptable reproducibility was shown with the ruler and the spectacles for all measurements: palpebral aperture, skin crease, and levator function. Parents significantly preferred the glasses for measurement, as compared with the ruler (p < 0.05). The spectacles are as accurate as conventional methods of measurement, but are easier to use. Children tolerate these spectacles well, and most parents preferred them to the ruler.

  3. Introducing genetic testing for cardiovascular disease in primary care: a qualitative study

    PubMed Central

    Middlemass, Jo B; Yazdani, Momina F; Kai, Joe; Standen, Penelope J; Qureshi, Nadeem

    2014-01-01

    Background: While primary care systematically offers conventional cardiovascular risk assessment, genetic tests for coronary heart disease (CHD) are increasingly commercially available to patients. It is unclear how individuals may respond to these new sources of risk information. Aim: To explore how patients who have had a recent conventional cardiovascular risk assessment perceive additional information from genetic testing for CHD. Design and setting: Qualitative interview study in 12 practices in Nottinghamshire from both urban and rural settings. Method: Interviews were conducted with 29 adults, who consented to genetic testing after having had a conventional cardiovascular risk assessment. Results: Individuals’ principal motivation for genetic testing was their family history of CHD and a desire to convey the results to their children. After testing, however, there was limited recall of genetic test results and scepticism about the value of informing their children. Participants dealt with conflicting findings from the genetic test, family history, and conventional assessment by either focusing on genetic risk or environmental lifestyle factors. In some participants, genetic test results appeared to reinforce healthy behaviour but others were falsely reassured, despite having an ‘above-average’ conventional cardiovascular risk score. Conclusion: Although genetic testing was acceptable, participants were unclear how to interpret genetic risk results. To facilitate healthy behaviour, health professionals should explore patients’ understanding of genetic test results in light of their family history and conventional risk assessment. PMID:24771842

  4. Text feature extraction based on deep learning: a review.

    PubMed

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; for new applications, deep learning instead makes it possible to acquire effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and their applications, and finally forecasts the application of deep learning in feature extraction.

  5. Computational wave dynamics for innovative design of coastal structures

    PubMed Central

    GOTOH, Hitoshi; OKAYASU, Akio

    2017-01-01

    For innovative designs of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equations for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are reviewed. In the first half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the latter part of the article, recent improvements of the particle method, one of the cores of NWFs, are shown. Methods for attenuating unphysical pressure fluctuations and improving accuracy, such as the CMPS method for momentum conservation, the Higher-order Source of the Poisson Pressure Equation (PPE), the Higher-order Laplacian, the Error-Compensating Source in the PPE, and Gradient Correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of the accurate particle method, including Dynamic Stabilization for providing the minimum-required artificial repulsive force to improve the stability of computation, and the Space Potential Particle for describing the exact free-surface boundary condition, is described. PMID:29021506

  6. Analysis of light emitting diode array lighting system based on human vision: normal and abnormal uniformity condition.

    PubMed

    Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng

    2012-10-08

    In this paper, the conditions for uniform lighting generated by a light emitting diode (LED) array were systematically studied. To take human vision into consideration, the contrast sensitivity function (CSF) was newly adopted as the criterion for uniform lighting instead of the conventionally used Sparrow's Criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LED's spatial radiation distribution and the viewing condition can be combined analytically. In a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and batwing type) and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method can achieve more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon, in which uniformity varies non-monotonically with structural parameters in an LALS with non-Lambertian LEDs, was found and analyzed. Based on the analysis, a design method for LALS that can bring about better practicability, lower cost and a more attractive appearance is summarized.
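
    The CSF weighting itself is perceptual and is not reproduced here, but the physical illuminance distribution that either criterion evaluates follows from the generalised Lambertian model E ~ cos^m(theta) * cos(theta) / r^2. A sketch for a square array with hypothetical geometry (m = 1 is Lambertian; larger m gives a narrower emission, while a true batwing profile would need a different intensity function):

```python
import numpy as np

def illuminance_map(pitch, height, m=1.0, n_led=4, grid=101):
    """Relative illuminance on the target plane under an n_led x n_led array,
    using the generalised Lambertian model E = cos(theta)^(m+1) / r^2."""
    half = (n_led - 1) * pitch / 2.0
    xs = np.linspace(-half, half, n_led)       # LED positions along each axis
    px = np.linspace(-half, half, grid)        # evaluation grid on the plane
    X, Y = np.meshgrid(px, px)
    E = np.zeros_like(X)
    for lx in xs:
        for ly in xs:
            r2 = (X - lx) ** 2 + (Y - ly) ** 2 + height ** 2
            cos_t = height / np.sqrt(r2)
            E += cos_t ** (m + 1) / r2         # I0 normalised to 1
    return E

E = illuminance_map(pitch=20.0, height=25.0)   # hypothetical mm units
print(f"min/max uniformity ratio: {E.min() / E.max():.3f}")
```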

  7. High-speed aerodynamic design of space vehicle and required hypersonic wind tunnel facilities

    NASA Astrophysics Data System (ADS)

    Sakakibara, Seizou; Hozumi, Kouichi; Soga, Kunio; Nomura, Shigeaki

    Problems associated with the aerodynamic design of space vehicles are considered, with emphasis on the role of hypersonic wind tunnel facilities in the development of the vehicle. First, to identify wind tunnel and computational fluid dynamics (CFD) requirements, operational environments are postulated for hypervelocity vehicles. Typical flight corridors are shown with the associated flow regimes: real gas effects, low density flow, and non-equilibrium flow. Based on an evaluation of these flight regimes and consideration of the operational requirements, the wind tunnel testing requirements for aerodynamic design are examined. Then, the aerodynamic design logic and optimization techniques used to develop and refine configurations in a traditional phased approach, based on the programmatic design of the space vehicle, are considered. Current design methodology for determining the aerodynamic characteristics of the space vehicle, i.e., (1) ground test data, (2) numerical flow field solutions and (3) flight test data, is also discussed. Based on these considerations, and by identifying the capabilities and limits of experimental and computational methods, the role of a large conventional hypersonic wind tunnel and the high enthalpy tunnel, and the interrelationship of wind tunnels and CFD methods in actual aerodynamic design and analysis, are discussed.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gang, G; Siewerdsen, J; Stayman, J

    Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction. Conventional FFM strategies, however, are often based either on heuristics or on analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low-dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d', computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted in which the minimum d' over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification-cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d' with an 8-48% improvement, consistent with the maxi-min objective. In addition, d' was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
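
    The maxi-min construction can be illustrated with a toy one-dimensional stand-in (below, Python): fluence is a nonnegative combination of Gaussian bases, total dose is held fixed, and a surrogate detectability that merely grows with local fluence replaces the paper's MBIR-based d' prediction; a simple random search stands in for CMA-ES.

        import numpy as np

        rng = np.random.default_rng(0)
        U = np.linspace(-1.0, 1.0, 128)             # detector coordinate
        centers = np.linspace(-1.0, 1.0, 8)         # Gaussian basis centers
        B = np.exp(-0.5 * ((U[:, None] - centers) / 0.25) ** 2)
        locs = np.linspace(-0.8, 0.8, 5)            # task locations in the object

        def min_dprime(c):
            f = B @ np.abs(c)                       # nonnegative fluence profile
            f = f / f.mean()                        # enforce the same total dose
            # Surrogate d' ~ sqrt(local fluence); the real framework predicts MBIR
            # spatial resolution and noise and computes d' from them.
            return np.sqrt(np.interp(locs, U, f) + 1e-12).min()

        best_c = np.ones(8)
        best_v = min_dprime(best_c)
        for _ in range(2000):                       # maxi-min by random perturbation
            cand = np.abs(best_c + 0.1 * rng.standard_normal(8))
            v = min_dprime(cand)
            if v > best_v:
                best_c, best_v = cand, v
        print("optimized minimum d' (surrogate):", round(best_v, 4))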

  9. Design and Evaluation of a New Boundary-Layer Rake for Flight Testing

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Oates, David L.; Gonsalez, Jose C.

    2000-01-01

    A new boundary-layer rake has been designed and built for flight testing on the NASA Dryden Flight Research Center F-15B/Flight Test Fixture. A feature unique to this rake is its curved body, which allows pitot tubes to be more densely clustered in the near-wall region than conventional rakes allow. This curved rake design has a complex three-dimensional shape that requires innovative solid-modeling and machining techniques. Finite-element stress analysis of the new design shows high factors of safety. The rake has passed a ground test in which random vibration measuring 12 g rms was applied for 20 min in each of the three normal directions. Aerodynamic evaluation of the rake has been conducted in the NASA Glenn Research Center 8 x 6 Supersonic Wind Tunnel at Mach 0-2. The pitot pressures from the new rake agree with conventional rake data over the range of Mach numbers tested. The boundary-layer profiles computed from the rake data have been shown to have the standard logarithmic-law profile. Skin friction values computed from the rake data using the Clauser plot method agree with the Preston tube results and the van Driest II compressible skin friction correlation to approximately +/-5 percent.
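
    For readers unfamiliar with the Clauser-plot method mentioned above, the sketch below (synthetic data, typical log-law constants) shows the underlying idea: choose the friction velocity u_tau that best matches the measured profile to the log law u/u_tau = (1/kappa) ln(y u_tau / nu) + B, then form the wall shear stress.

        import numpy as np

        KAPPA, B_CONST = 0.41, 5.0           # typical log-law constants
        NU = 1.5e-5                          # kinematic viscosity of air [m^2/s]

        def loglaw_u(y, u_tau):
            return u_tau * (np.log(y * u_tau / NU) / KAPPA + B_CONST)

        # Synthetic "rake" profile generated with u_tau = 0.5 m/s plus 1% noise.
        y = np.geomspace(2e-3, 3e-2, 10)     # probe heights [m]
        u = loglaw_u(y, 0.5) * (1 + 0.01 * np.random.default_rng(1).standard_normal(10))

        # Clauser fit: scan u_tau and keep the value minimizing the misfit.
        u_taus = np.linspace(0.2, 1.0, 801)
        err = [np.sum((u - loglaw_u(y, ut)) ** 2) for ut in u_taus]
        u_tau = u_taus[int(np.argmin(err))]
        rho = 1.2                            # air density [kg/m^3]
        print(f"u_tau ~ {u_tau:.3f} m/s, wall shear stress ~ {rho * u_tau**2:.3f} Pa")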

  10. Robust approximation-free prescribed performance control for nonlinear systems and its application

    NASA Astrophysics Data System (ADS)

    Sun, Ruisheng; Na, Jing; Zhu, Bin

    2018-02-01

    This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of prescribed performance function (PPF) is incorporated into the control design, such that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which provides a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in the nonlinear control designs are not needed. The proposed control design leads to a robust yet simplified function approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are all rigorously proved. Finally, comparative simulations are conducted based on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
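
    A minimal sketch of the prescribed-performance idea, assuming a toy first-order plant and illustrative gains (this is the generic PPF error transformation, not the paper's missile model): the tracking error is normalized by a decaying envelope rho(t), mapped through a barrier-like transformation, and fed back proportionally, with no function approximator anywhere.

        import numpy as np

        # Performance envelope: -rho(t) < e(t) < rho(t), rho decaying to rho_inf.
        rho0, rho_inf, decay, k = 2.0, 0.05, 1.0, 2.0
        rho = lambda t: (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

        dt, T = 1e-3, 10.0
        x, inside = 0.0, True
        for i in range(int(T / dt)):
            t = i * dt
            r = np.sin(t)                            # reference trajectory
            e = x - r
            xi = np.clip(e / rho(t), -0.999, 0.999)  # normalized error
            eps = np.log((1 + xi) / (1 - xi))        # barrier transformation
            u = -k * eps                             # approximation-free control law
            f = -x + 0.5 * np.sin(x)                 # "unknown" plant dynamics (toy)
            x += dt * (f + u)
            inside &= abs(x - np.sin(t + dt)) < rho(t + dt)
        print("error stayed inside the prescribed envelope:", inside)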

  11. Nanosecond pulsed electric fields (nsPEFs) low cost generator design using power MOSFET and Cockcroft-Walton multiplier circuit as high voltage DC source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulaeman, M. Y.; Widita, R.

    2014-09-30

    Purpose: Non-ionizing radiation therapy for cancer using high-intensity pulsed electric fields has become an interesting new research topic. A method using nanosecond pulsed electric fields (nsPEFs) offers a novel means to treat cancer. Unlike conventional electroporation, nsPEFs are able to create nanopores in all membranes of the cell, including the membranes of cell organelles such as mitochondria and the nucleus. NsPEFs promote cell death in several cell types, including cancer cells, by an apoptosis mechanism. NsPEFs use pulses with electric field intensities higher than conventional electroporation, between 20 and 100 kV/cm, and with shorter pulse durations than conventional electroporation. NsPEFs require a generator to produce high-voltage pulses and to achieve a high-intensity electric field with the proper pulse width. However, the manufacturing cost of a generator that produces a high voltage with short duration for nsPEF purposes is very high. Hence, the aim of this research is to obtain a low-cost generator design that is able to produce a high-voltage pulse with nanosecond width for nsPEF purposes. Method: A Cockcroft-Walton multiplier circuit boosts the 220-volt AC input into a high-voltage DC supply of around 1500 volts, and it is combined with a series of power MOSFETs as a fast switch to obtain a high voltage with nanosecond pulse width. The motivation for using the Cockcroft-Walton multiplier is to acquire a low-cost high-voltage DC generator; it uses capacitors and diodes arranged in a ladder. Power MOSFETs connected in series are used as a voltage divider to share the high voltage so that no individual device is damaged. Results: This design is expected to deliver a high-voltage pulse of -1.5 kV with a fall time of 3 ns and a rise time of 15 ns into a 50 Ω load for nsPEF purposes. Further details of the circuit design will be explained in the presentation.
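
    The sizing logic of such a supply can be checked quickly with the textbook first-order formulas for a half-wave Cockcroft-Walton multiplier (no-load output 2nV_peak, load-induced droop, and peak-to-peak ripple); the sketch below uses illustrative values only (a 3-stage ladder on 220 V mains), not the component values of this design.

        def cw_output(v_peak, n, i_load, f, c):
            # Textbook half-wave Cockcroft-Walton estimates for n stages,
            # load current i_load, drive frequency f, stage capacitance c.
            v_noload = 2 * n * v_peak
            droop = i_load / (f * c) * (2 * n**3 / 3 + n**2 / 2 - n / 6)
            ripple = i_load * n * (n + 1) / (2 * f * c)
            return v_noload - droop, ripple

        # Illustrative: 220 Vrms mains (311 V peak), 3 stages, 1 mA load,
        # 50 Hz drive, 1 uF stage capacitors.
        v_out, v_rip = cw_output(311.0, 3, 1e-3, 50.0, 1e-6)
        print(f"DC output ~ {v_out:.0f} V, ripple ~ {v_rip:.0f} V peak-to-peak")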

  12. Composite Matrix Regenerator for Stirling Engines

    NASA Technical Reports Server (NTRS)

    Knowles, Timothy R.

    1997-01-01

    This project concerns the design, fabrication and testing of carbon regenerators for use in Stirling power convertors. Radial fiber design with nonmetallic components offers a number of potential advantages over conventional steel regenerators: reduced conduction and pressure drop losses, and the capability for higher temperature, higher frequency operation. Diverse composite fabrication methods are explored and lessons learned are summarized. A pulsed single-blow test rig has been developed that has been used for generating thermal effectiveness data for different flow velocities. Carbon regenerators have been fabricated by carbon vapor infiltration of electroflocked preforms. Performance data in a small Stirling engine are obtained. Prototype regenerators designed for the BP-1000 power convertor were fabricated and delivered to NASA-Lewis.

  13. Intelligent failure-tolerant control

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1991-01-01

    An overview of failure-tolerant control is presented, beginning with robust control, progressing through parallel and analytical redundancy, and ending with rule-based systems and artificial neural networks. By design or implementation, failure-tolerant control systems are 'intelligent' systems. All failure-tolerant systems require some degree of robustness to protect against catastrophic failure; failure tolerance often can be improved by adaptivity in decision-making and control, as well as by redundancy in measurement and actuation. Reliability, maintainability, and survivability can be enhanced by failure tolerance, although each objective poses different goals for control system design. Artificial intelligence concepts are helpful for integrating and codifying failure-tolerant control systems, not as alternatives but as adjuncts to conventional design methods.

  14. LUMIS Interactive graphics operating instructions and system specifications

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Yu, T. C.; Landini, A. J.

    1976-01-01

    The LUMIS program has designed an integrated geographic information system to assist program managers and planning groups in metropolitan regions. Described is a system designed to interactively interrogate a data base, graphically display a portion of the region enclosed in the data base, and perform cross-tabulations of variables within each city block, block group, or census tract. The system is designed to interface with U.S. Census DIME file technology, but can accept alternative districting conventions. The system is described on three levels: (1) an introduction to the system's concept and potential applications; (2) the method of operating the system on an interactive terminal; and (3) a detailed system specification for computer facility personnel.

  15. Absorber design for a compound parabolic concentrator collector without transmission loss.

    PubMed

    Suzuki, A; Kobayashi, S

    1994-10-01

    A new design method for a compound parabolic concentrator heat collector is described. The conventional design of the ideal compound parabolic concentrator collector has a touching point between a light absorber and the reflectors. This structure is not preferable from the standpoint of conductive heat leakage and thermal stress on reflector materials. On the other hand, if the absorber and the reflectors are separated from each other, the gap between them usually causes optical errors such as light transmission loss or an increase in the reflection number. We discuss the fact that ideal heat collection is possible, in spite of the gap, by introducing the idea of an effective heat concentration ratio.

  16. High-precision Non-Contact Measurement of Creep of Ultra-High Temperature Materials for Aerospace

    NASA Technical Reports Server (NTRS)

    Rogers, Jan R.; Hyers, Robert

    2008-01-01

    For high-temperature applications (greater than 2,000 C) such as solid rocket motors, hypersonic aircraft, nuclear electric/thermal propulsion for spacecraft, and more efficient jet engines, creep becomes one of the most important design factors to be considered. Conventional creep-testing methods, in which the specimen and test apparatus are in contact with each other, are limited to temperatures of approximately 1,700 C. Development of alloys for higher-temperature applications is limited by the availability of testing methods at temperatures above 2,000 C. Development of alloys for applications requiring a long service life at temperatures as low as 1,500 C, such as the next generation of jet turbine superalloys, is limited by the difficulty of accelerated testing at temperatures above 1,700 C. For these reasons, a new, non-contact creep-measurement technique is needed for higher-temperature applications. A new non-contact method for creep measurements of ultra-high-temperature metals and ceramics has been developed and validated. Using the electrostatic levitation (ESL) facility at NASA Marshall Space Flight Center, a spherical sample is rotated quickly enough to cause creep deformation due to centrifugal acceleration. Very accurate measurement of the deformed shape through digital image analysis allows the stress exponent n to be determined very precisely from a single test, rather than from numerous conventional tests. Validation tests on single-crystal niobium spheres showed excellent agreement with conventional tests at 1,985 C; however, the non-contact method provides much greater precision while using only about 40 milligrams of material. This method is being applied to materials including metals and ceramics for non-eroding throats in solid rockets and next-generation superalloys for turbine engines. Recent advances in the method and the current state of these new measurements will be presented.
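
    The stress exponent n referred to above is the exponent of the Norton power law, strain rate = A * sigma^n; conventionally it is recovered by log-log regression across several constant-stress tests, as in the sketch below (synthetic data), whereas the ESL method extracts it from the deformed shape of a single spinning specimen.

        import numpy as np

        rng = np.random.default_rng(2)
        sigma = np.array([10.0, 20.0, 40.0, 80.0])   # stress [MPa], synthetic tests
        true_A, true_n = 1e-10, 4.3
        rate = true_A * sigma**true_n * np.exp(0.05 * rng.standard_normal(4))

        # Slope of log(rate) vs. log(sigma) is the stress exponent n.
        n_fit, log_A = np.polyfit(np.log(sigma), np.log(rate), 1)
        print(f"fitted stress exponent n ~ {n_fit:.2f} (true value {true_n})")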

  17. Analysis of International Space Station Materials on MISSE-3 and MISSE-4

    NASA Technical Reports Server (NTRS)

    Finckenor, Miria M.; Golden, Johnny L.; O'Rourke, Mary Jane

    2008-01-01

  18. Comparison of the effect of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume on midwifery students: A randomized clinical trial

    PubMed Central

    Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud

    2016-01-01

    Introduction: Delay in the diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students at the Mashhad School of Nursing and Midwifery in 2013. The participants were selected by convenience sampling and were randomly divided into three groups: web-based, simulation-based, and conventional training. All three groups completed an eight-station practical test before and 1 week after the training course. The students in the web-based group were trained online for 1 week, the students in the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students in the conventional group received a 4-h presentation by the researchers. The data-gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed by software version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in all three groups at all stations (stations 1, 2, 4, 5, 6, and 7, P = 0.001; station 8, P = 0.027) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly between the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage, but no significant difference was found among the three training groups. Web-based training can be used as a substitute for, or supplement to, the two more common simulation-based and conventional methods. PMID:27500175

  19. Precision Fit of Screw-Retained Implant-Supported Fixed Dental Prostheses Fabricated by CAD/CAM, Copy-Milling, and Conventional Methods.

    PubMed

    de França, Danilo Gonzaga; Morais, Maria Helena; das Neves, Flávio D; Carreiro, Adriana Fonte; Barbosa, Gustavo As

    The aim of this study was to evaluate the effectiveness of fabrication methods (computer-aided design/computer-aided manufacture [CAD/CAM], copy-milling, and conventional casting) on the fit accuracy of three-unit, screw-retained fixed dental prostheses. Sixteen three-unit implant-supported screw-retained frameworks were fabricated to fit an in vitro model. Eight frameworks were fabricated using the CAD/CAM system, four in zirconia and four in cobalt-chromium. Four zirconia frameworks were fabricated using the copy-milled system, and four were cast in cobalt-chromium using conventional casting with premachined abutments. The vertical and horizontal misfit at the implant-framework interface was measured using scanning electron microscopy at ×250 magnification. The results for vertical misfit were analyzed using Kruskal-Wallis and Mann-Whitney tests. The horizontal misfits were categorized as underextended, equally extended, or overextended. Statistical analysis established differences between groups according to the chi-square test (α = .05). The mean vertical misfit was 5.9 ± 3.6 μm for CAD/CAM-fabricated zirconia frameworks, 1.2 ± 2.2 μm for CAD/CAM-fabricated cobalt-chromium frameworks, 7.6 ± 9.2 μm for copy-milling-fabricated zirconia frameworks, and 11.8 ± 9.8 μm for conventionally fabricated frameworks. The Mann-Whitney test revealed significant differences between all but the zirconia-fabricated frameworks. A significant association was observed between horizontal misfit and fabrication method. The percentage of horizontal misfits that were underextended or overextended was highest in copy-milled zirconia (83.3%), followed by CAD/CAM cobalt-chromium (66.7%), cast cobalt-chromium (58.3%), and CAD/CAM zirconia (33.3%) frameworks. CAD/CAM-fabricated frameworks exhibit smaller vertical misfit and lower variability than copy-milled and conventionally fabricated frameworks. The percentage of interfaces equally extended was highest when CAD/CAM and zirconia were used.

  20. Different Strokes for Different Folks: Visual Presentation Design between Disciplines

    PubMed Central

    Gomez, Steven R.; Jianu, Radu; Ziemkiewicz, Caroline; Guo, Hua; Laidlaw, David H.

    2015-01-01

    We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard “chalk talks”. We found design differences in slideshows using two methods – coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant’s own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information. PMID:26357149

  2. Eddy Covariance Method: Overview of General Guidelines and Conventional Workflow

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Anderson, D. J.; Amen, J. L.

    2007-12-01

    Atmospheric flux measurements are widely used to estimate water, heat, carbon dioxide and trace gas exchange between the ecosystem and the atmosphere. The Eddy Covariance method is one of the most direct, defensible ways to measure and calculate turbulent fluxes within the atmospheric boundary layer. However, the method is mathematically complex, and requires significant care to set up and process data. These reasons may be why the method is currently used predominantly by micrometeorologists. Modern instruments and software can potentially expand the use of this method beyond micrometeorology and prove valuable for plant physiology, hydrology, biology, ecology, entomology, and other non-micrometeorological areas of research. The main challenge of the method for a non-expert is the complexity of system design, implementation, and processing of the large volume of data. In the past several years, efforts of the flux networks (e.g., FluxNet, Ameriflux, CarboEurope, Fluxnet-Canada, Asiaflux, etc.) have led to noticeable progress in unification of the terminology and general standardization of processing steps. The methodology itself, however, is difficult to unify, because various experimental sites and different purposes of studies dictate different treatments, and site-, measurement- and purpose-specific approaches. Here we present an overview of the theory and typical workflow of the Eddy Covariance method in a format specifically designed to (i) familiarize a non-expert with the general principles, requirements, applications, and processing steps of the conventional Eddy Covariance technique, (ii) assist in further understanding of the method through more advanced references such as textbooks, network guidelines, and journal papers, (iii) help technicians, students, and new researchers in the field deployment of the Eddy Covariance method, and (iv) assist in its use beyond micrometeorology. The overview is based, to a large degree, on the frequently asked questions received from new users of the Eddy Covariance method and relevant instrumentation, and employs non-technical language to be of practical use to those new to this field. Information is provided on the theory of the method (including the state of the methodology, basic derivations, practical formulations, major assumptions and sources of errors, error treatment, and use in non-traditional terrains), practical workflow (e.g., experimental design, implementation, data processing, and quality control), alternative methods and applications, and the most frequently overlooked details of the measurements. References and access to an extended 141-page Eddy Covariance Guideline in three electronic formats are also provided.
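
    At its core, the method computes a flux as the covariance of vertical wind speed and a scalar over an averaging interval; the minimal sketch below (synthetic 10 Hz data) shows that step only, and deliberately omits the coordinate rotation, detrending, spectral, and density (WPL) corrections that a real workflow requires.

        import numpy as np

        def eddy_flux(w, c):
            # F = mean(w' c'): covariance of fluctuations about the block means.
            w, c = np.asarray(w), np.asarray(c)
            return np.mean((w - w.mean()) * (c - c.mean()))

        # Synthetic 30-minute record at 10 Hz with correlated w and CO2.
        rng = np.random.default_rng(3)
        n = 30 * 60 * 10
        w = 0.3 * rng.standard_normal(n)                   # vertical wind [m/s]
        c = 400 + 5.0 * w + 2.0 * rng.standard_normal(n)   # CO2 [umol/mol], toy
        print(f"flux ~ {eddy_flux(w, c):.3f} (m/s)(umol/mol)")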

  3. Utilization of design data on conventional system to building information modeling (BIM)

    NASA Astrophysics Data System (ADS)

    Akbar, Boyke M.; Z. R., Dewi Larasati

    2017-11-01

    Nowadays, infrastructure development is one of the main priorities in developing countries such as Indonesia. The conventional design system is considered to no longer effectively support infrastructure projects, especially for highly complex building designs, due to its fragmented workflow. BIM offers a solution for managing projects in an integrated manner. Despite the well-known benefits of BIM, there are obstacles in the migration process. The two main obstacles are the unpreparedness of some project parties for BIM implementation, and concerns about leaving behind the existing database and creating a new one in the BIM system. This paper discusses the possibilities for utilizing existing CAD data from the conventional design system in a BIM system. The existing conventional CAD data and the output of the BIM design system were studied to examine compatibility issues between the two, followed by possible utilization strategies. The goal of this study is to increase project parties' eagerness to migrate to BIM by maximizing the utilization of existing data, which could also improve the quality of BIM-based project workflows.

  4. Movement quality of conventional prostheses and the DEKA Arm during everyday tasks

    PubMed Central

    Cowley, Jeffrey; Resnik, Linda; Wilken, Jason; Walters, Lisa Smurr; Gates, Deanna

    2017-01-01

    Background Conventional prosthetic devices fail to restore the function and characteristic movement quality of the upper limb. The DEKA Arm is a new, advanced prosthesis featuring a compound, powered wrist and multiple grip configurations. Objectives The purpose of this study was to determine if the DEKA Arm improved the movement quality of upper limb prosthesis users compared to conventional prostheses. Study design Case series. Methods Three people with transradial amputation completed tasks of daily life with their conventional prosthesis and with the DEKA Arm. A total of 10 healthy controls completed the same tasks. The trajectory of the wrist joint center was analyzed to determine how different prostheses affected movement duration, speed, smoothness, and curvature compared to patients’ own intact limbs and controls. Results Movement quality decreased with the DEKA Arm for two participants, and increased for the third. Prosthesis users made slower, less smooth, more curved movements with the prosthetic limb compared to the intact limb and controls, particularly when grasping and manipulating objects. Conclusion The effects of one month of training with the DEKA Arm on movement quality varied with participants’ skill and experience with conventional prostheses. Future studies should examine changes in movement quality after long-term use of advanced prostheses. PMID:26932980

  5. Laparoendoscopic single-site surgery (LESS) versus conventional laparoscopic surgery for adnexal preservation: a randomized controlled study

    PubMed Central

    Cho, Yeon Jean; Kim, Mi-La; Lee, Soo Yoon; Lee, Hee Suk; Kim, Joo Myoung; Joo, Kwan Young

    2012-01-01

    Objective To compare the operative outcomes, postoperative pain, and subsequent convalescence after laparoendoscopic single-site surgery (LESS) or conventional laparoscopic surgery for adnexal preservation. Study design From December 2009 to September 2010, 63 patients underwent LESS (n = 33) or conventional laparoscopic surgery (n = 30) for cyst enucleation. Overall operative outcomes, including postoperative pain measured on a visual analog scale (VAS) at serial postoperative time points, were evaluated. The convalescence data included questionnaire responses on the need for analgesics and on patient-reported time-to-recovery end points. Results The preoperative characteristics did not differ significantly between the two groups. The postoperative hemoglobin drop was higher in the LESS group than in the conventional laparoscopic surgery group (P = 0.048). Postoperative pain at each VAS time point, oral analgesic requirement, intramuscular analgesic requirement, and the number of days until return to work were similar in both groups. Conclusion In adnexa-preserving surgery performed in women of reproductive age, the operative outcomes, including patient satisfaction and convalescence after surgery, are comparable for LESS and conventional laparoscopy. LESS may be a feasible and promising alternative method for scarless abdominal surgery in the treatment of young women with adnexal cysts. PMID:22448110

  6. A study of environmental characterization of conventional and advanced aluminum alloys for selection and design. Phase 1: Literature review

    NASA Technical Reports Server (NTRS)

    Sprowls, D. O.

    1984-01-01

    A review of the literature is presented with the objectives of identifying relationships between various accelerated stress corrosion testing techniques, and for determining the combination of test methods best suited to selection and design of high strength aluminum alloys. The following areas are reviewed: status of stress-corrosion test standards, the influence of mechanical and environmental factors on stress corrosion testing, correlation of accelerated test data with in-service experience, and procedures used to avoid stress corrosion problems in service. Promising areas for further work are identified.

  7. Space Shuttle Orbiter windshield bird impact analysis

    NASA Technical Reports Server (NTRS)

    Edelstein, Karen S.; Mccarty, Robert E.

    1988-01-01

    The NASA Space Shuttle Orbiter's windshield employs three glass panes separated by air gaps. The brittleness of the glass offers much less birdstrike energy-absorption capability than the laminated polycarbonate windshields of more conventional aircraft; attention must accordingly be given to the risk of catastrophic bird impact, and to methods of strike prevention that address bird populations around landing sites rather than the modification of the window's design. Bird populations' direct reduction, as well as careful scheduling of Orbiter landing times, are suggested as viable alternatives. The question of birdstrike-resistant glass windshield design for hypersonic aerospacecraft is discussed.

  8. Matching technique yields optimum LNA performance. [Low Noise Amplifiers

    NASA Technical Reports Server (NTRS)

    Sifri, J. D.

    1986-01-01

    The present article is concerned with a case in which an optimum noise figure and unconditional stability have been designed into a 2.385-GHz low-noise preamplifier via an unusual method for matching the input with a suspended line. The results obtained with several conventional line-matching techniques were not satisfactory. Attention is given to the minimization of thermal noise, the design procedure, requirements for a high-impedance line, a sampling of four matching networks, the noise figure of the single-line matching network as a function of frequency, and the approaches used to achieve unconditional stability.

  9. Analysis and design of fiber-coupled high-power laser diode array

    NASA Astrophysics Data System (ADS)

    Zhou, Chongxi; Liu, Yinhui; Xie, Weimin; Du, Chunlei

    2003-11-01

    Based on the beam parameter product (BPP) of the laser beam, it is concluded that a single conventional optical system cannot couple a high-power laser diode array (LDA) into a fiber. According to the parameters of the coupling fiber, a method to couple LDA beams into a single multi-mode fiber, including beam collimating, shaping, focusing, and coupling, is presented. The divergence angles after collimation are calculated and analyzed, the shape equation of the collimating micro-lens array is derived, and the focusing lens is designed. A fiber-coupled LDA with a core diameter of 800 um and a numerical aperture of 0.37 is obtained.
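
    The feasibility argument rests on the beam parameter product: a fiber can accept a beam only if the beam's BPP in each axis does not exceed the fiber's core radius times its numerical aperture. A one-line check with the figures quoted above:

        def fiber_bpp(core_diameter_um, na):
            # Acceptance BPP of a step-index fiber: core radius x NA.
            # Note that 1 um*rad equals 1 mm*mrad.
            return (core_diameter_um / 2.0) * na

        print(f"fiber BPP ~ {fiber_bpp(800.0, 0.37):.0f} mm*mrad")   # ~148 mm*mrad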

  10. Industrializing Offshore Wind Power with Serial Assembly and Lower-cost Deployment - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempton, Willett

A team of engineers and contractors has developed a method to move offshore wind installation toward lower cost, faster deployment, and lower environmental impact. A combination of methods, some incremental and some breaks from past practice, interact to yield multiple improvements. Three designs were evaluated based on detailed engineering: 1) a 5 MW turbine on a jacket with pin piles (base case), 2) a 10 MW turbine on a conventional jacket with pin piles, assembled at sea, and 3) a 10 MW turbine on a tripod jacket with suction buckets (caissons) and with complete turbine assembly on-shore. The larger turbine, assembly ashore, and the use of suction buckets together substantially reduce the capital cost of offshore wind projects. Notable capital cost reductions are: changing from a 5 MW to a 10 MW turbine, a 31% capital cost reduction; and assembly on land followed by single-piece installation at sea, an additional 9% capital cost reduction. An estimated Design 4) projects further cost reduction when the equipment and processes of Design 3) are optimized, rather than adapted to existing equipment and processes. The cost of energy for each of the four designs is also calculated, yielding approximately the same percentage reductions. The methods of Design 3) analyzed here include accepted structures such as suction buckets used in new ways, innovations conceived but previously without engineering and economic validation, combined with new methods not previously proposed. The analyses of Designs 2) and 3) are based on extensive engineering calculations and detailed cost estimates. All design methods can be executed with existing equipment, including lift equipment, ports, and ships (except that Design 4 assumes a more optimized ship). The design team consists of experienced offshore structure designers, heavy-lift engineers, wind turbine designers, vessel operators, and marine construction contractors. Comparing the methods on criteria of cost and deployment speed, the study selected the third design. That design is, in brief: a conventional turbine and tubular tower is mounted on a tripod jacket, in turn atop three suction buckets. Blades are mounted on the tower, not on the hub. The entire structure is built in port, from the bottom up, and the assembled structures are queued in the port for deployment. During weather windows, the fully assembled structures are lifted off the quay, lashed to the vessel, and transported to the deployment site. The vessel analyzed is a shear-leg crane vessel with dynamic positioning like the existing Gulliver, or it could be a US-built crane barge. On site, the entire structure is lowered to the bottom by the crane vessel, and pumping of the suction buckets is then managed by smaller service vessels. Blades are lifted into place by small winches operated by workers in the nacelle, without lift-vessel support.
Advantages of the selected design include: cost and time at sea for the expensive lift vessel are significantly reduced; no jack-up vessel is required; the weather window required for each installation is shorter; turbine structure construction is continuous, with a queue feeding the weather-dependent installation process; pre-installation geotechnical work is faster and less expensive; there are no sound impacts on marine mammals, and thus minimal spotting and no work stoppage for mammal passage; the entire structure can be removed for decommissioning or major repairs; and the method has been validated for current turbines up to 10 MW, with a simple scaling calculation showing it usable up to 20 MW turbines.

  11. Design of voice coil motor dynamic focusing unit for a laser scanner

    NASA Astrophysics Data System (ADS)

    Lee, Moon G.; Kim, Gaeun; Lee, Chan-Woo; Lee, Soo-Hun; Jeon, Yongho

    2014-04-01

    Laser scanning systems have been used for material processing tasks such as welding, cutting, marking, and drilling. However, applications have been limited by the small range of motion and slow speed of the focusing unit, which carries the focusing optics. To overcome these limitations, a dynamic focusing system with a long travel range and high speed is needed. In this study, a dynamic focusing unit for a laser scanning system with a voice coil motor (VCM) mechanism is proposed to enable fast speed and a wide focusing range. The VCM has finer precision and higher speed than conventional step motors and a longer travel range than earlier lead zirconium titanate actuators. The system has a hollow configuration to provide a laser beam path. This also makes it compact and transmission-free and gives it low inertia. The VCM's magnetics are modeled using a permeance model. Its design parameters are determined by optimization using the Broyden-Fletcher-Goldfarb-Shanno method and a sequential quadratic programming algorithm. After the VCM is designed, the dynamic focusing unit is fabricated and assembled. The permeance model is verified by a magnetic finite element method simulation tool, Maxwell 2D and 3D, and by measurement data from a gauss meter. The performance is verified experimentally. The results show a resolution of 0.2 μm and travel range of 16 mm. These are better than those of conventional focusing systems; therefore, this focusing unit can be applied to laser scanning systems for good machining capability.
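
    The optimization step can be mimicked with SciPy, assuming a deliberately toy permeance-style objective (not the paper's magnetic model): air-gap flux density rises with pole area and falls with gap length, and the Lorentz force B*I*L is maximized within bound constraints, using an SQP solver in the spirit of the BFGS/SQP combination described above.

        import numpy as np
        from scipy.optimize import minimize

        def neg_force(x):
            # Toy permeance-style model, illustrative only.
            g, a = x                              # air gap [mm], pole area [cm^2]
            b = 1.2 * a / (a + 5.0 * g)           # flux-density proxy [T]
            wire_len = 40.0 * np.sqrt(a)          # coil length proxy [mm]
            return -(b * 2.0 * wire_len)          # Lorentz force with I = 2 A

        res = minimize(neg_force, x0=[1.0, 2.0], method="SLSQP",
                       bounds=[(0.5, 3.0), (0.5, 4.0)])
        print("gap, pole area:", np.round(res.x, 3), " force proxy:", -res.fun)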

  12. Design of the evolution of management strategies of heart failure patients with implantable defibrillators (EVOLVO) study to assess the ability of remote monitoring to treat and triage patients more effectively

    PubMed Central

    Marzegalli, Maurizio; Landolina, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Pappone, Alessia; Guenzati, Giuseppe; Campana, Carlo; Frigerio, Maria; Parati, Gianfranco; Curnis, Antonio; Colangelo, Irene; Valsecchi, Sergio

    2009-01-01

    Background Heart failure patients with implantable defibrillators (ICDs) frequently visit the clinic for routine device monitoring. Moreover, in the case of clinical events, such as ICD shocks or alert notifications for changes in cardiac status or safety issues, they often visit the emergency department or the clinic for an unscheduled visit. These planned and unplanned visits place a great burden on healthcare providers. Internet-based remote device interrogation systems, which give physicians remote access to patients' data, are being proposed in order to reduce routine and interim visits and to detect and notify alert conditions earlier. Methods The EVOLVO study is a prospective, randomized, parallel, unblinded, multicenter clinical trial designed to compare remote ICD management with the current standard of care, in order to assess its ability to treat and triage patients more effectively. Two hundred patients implanted with wireless-transmission-enabled ICDs will be enrolled and randomized to receive either the Medtronic CareLink® monitor for remote transmission or the conventional method of in-person evaluations. The purpose of this manuscript is to describe the design of the trial. The results, which are to be presented separately, will characterize healthcare utilization as a result of ICD follow-up by means of remote monitoring instead of conventional in-person evaluations. Trial registration ClinicalTrials.gov: NCT00873899 PMID:19538734

  14. Method of fabricating a flow device

    DOEpatents

    Hale, Robert L.

    1978-01-01

    This invention is a novel method for fabricating leak-tight tubular articles which have an interior flow channel whose contour must conform very closely with design specifications but which are composed of metal which tends to warp if welded. The method comprises designing two longitudinal half-sections of the article, the half-sections being contoured internally to cooperatively form the desired flow passageway. Each half-section is designed with a pair of opposed side flanges extending between the end flanges and integral therewith. The half-sections are positioned with their various flanges in confronting relation and with elongated metal gaskets extending between the confronting flanges for the length of the array. The gaskets are a deformable metal which is fusion-weldable to the end flanges. The mating side flanges are joined mechanically to deform the gaskets and provide a longitudinally sealed assembly. The portions of the end flanges contiguous with the ends of the gaskets then are welded to provide localized end welds which incorporate ends of the gaskets, thus transversely sealing the assembly. This method of fabrication provides leak-tight articles having the desired precisely contoured flow channels, whereas various conventional methods have been found unsatisfactory.

  15. Ion beam figuring of small optical components

    NASA Astrophysics Data System (ADS)

    Drueding, Thomas W.; Fawcett, Steven C.; Wilson, Scott R.; Bifano, Thomas G.

    1995-12-01

    Ion beam figuring provides a highly deterministic method for the final precision figuring of optical components, with advantages over conventional methods. The process involves bombarding a component with a stable beam of accelerated particles that selectively removes material from the surface. Figure corrections are achieved by rastering the fixed-current beam across the workpiece at appropriate, time-varying velocities. Unlike conventional methods, ion figuring is a noncontact technique and thus avoids such problems as edge rolloff, tool wear, and force loading of the workpiece. This work is directed toward the development of the precision ion machining system at NASA's Marshall Space Flight Center. This system is designed for processing small (approximately 10-cm-diameter) optical components. Initial experiments were successful in figuring 8-cm-diameter fused silica and chemical-vapor-deposited SiC samples. The experiments, procedures, and results of figuring the sample workpieces to shallow spherical, parabolic (concave and convex), and non-axially-symmetric shapes are discussed. Several difficulties and limitations encountered with the current system are discussed. The use of a 1-cm aperture for making finer corrections on optical components is also reported.
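
    Since material removal in ion figuring is the beam removal function convolved with the dwell-time map, a first-cut dwell solution can be obtained by regularized Fourier deconvolution, as in the hedged sketch below (Gaussian beam and toy target map, not the Marshall system's actual algorithm).

        import numpy as np

        n, px = 128, 1.0                         # grid size, mm per pixel
        x = (np.arange(n) - n // 2) * px
        X, Y = np.meshgrid(x, x)
        brf = np.exp(-(X**2 + Y**2) / (2 * 4.0**2))   # Gaussian beam removal function
        brf /= brf.sum()

        desired = 0.5 * (1 + np.cos(2 * np.pi * X / 64.0))  # toy removal target

        # removal = brf (*) dwell  =>  dwell ~ Wiener-regularized deconvolution.
        H = np.fft.fft2(np.fft.ifftshift(brf))
        D = np.fft.fft2(desired)
        eps = 1e-3                               # regularization against tiny |H|
        dwell = np.real(np.fft.ifft2(D * np.conj(H) / (np.abs(H)**2 + eps)))
        dwell = np.clip(dwell, 0.0, None)        # dwell times must be nonnegative
        print("dwell-time map range:", round(dwell.min(), 3), round(dwell.max(), 3))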

  16. Ultrasound-assisted extraction of natural antioxidants from the flower of Limonium sinuatum: Optimization and comparison with conventional methods.

    PubMed

    Xu, Dong-Ping; Zheng, Jie; Zhou, Yue; Li, Ya; Li, Sha; Li, Hua-Bin

    2017-02-15

    Natural antioxidants are widely used as dietary supplements or food additives. An optimized method of ultrasound-assisted extraction (UAE) was developed for the effective extraction of antioxidants from the flowers of Limonium sinuatum and evaluated by response surface methodology. Ethanol concentration, ratio of solvent to solid, ultrasonication time, and temperature were investigated and optimized using a central composite rotatable design. The optimum extraction conditions were as follows: ethanol concentration, 60%; ratio of solvent to solid, 56.9:1 mL/g; ultrasonication time, 9.8 min; and temperature, 40 °C. Under the optimal UAE conditions, the experimental value (483.01 ± 15.39 μmol Trolox/g DW) matched the predicted value (494.13 μmol Trolox/g DW) within a 95% confidence level. In addition, the antioxidant yields of UAE were compared with those of conventional maceration and Soxhlet extraction methods; ultrasound-assisted extraction gave a higher yield of antioxidants and markedly reduced the extraction time. Copyright © 2016 Elsevier Ltd. All rights reserved.
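
    The response-surface machinery used here is easy to reproduce in outline: the sketch below builds a two-factor central composite rotatable design (the study itself used four factors) and fits the full quadratic model by least squares to a made-up yield response.

        import numpy as np
        from itertools import product

        alpha = np.sqrt(2.0)                               # rotatable axial distance
        factorial = np.array(list(product([-1.0, 1.0], repeat=2)))
        axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
        center = np.zeros((3, 2))                          # center-point replicates
        X = np.vstack([factorial, axial, center])          # coded design matrix

        rng = np.random.default_rng(4)                     # toy "antioxidant yield"
        y = (480 - 20 * X[:, 0]**2 - 10 * X[:, 1]**2 + 5 * X[:, 0]
             + rng.normal(0, 2, len(X)))

        # Full quadratic model: y = b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2.
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] * X[:, 1], X[:, 0]**2, X[:, 1]**2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("fitted quadratic coefficients:", np.round(coef, 2))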

  17. A Three-Dimensional Hydrodynamic Focusing Method for Polyplex Synthesis

    PubMed Central

    Lu, Mengqian; Ho, Yi-Ping; Grigsby, Christopher L.; Nawaz, Ahmad Ahsan; Leong, Kam W.; Huang, Tony Jun

    2014-01-01

    Successful intracellular delivery of nucleic acid therapeutics relies on multi-aspect optimization, one of which is formulation. While there has been ample innovation on chemical design of polymeric gene carriers, the same cannot be said for physical processing of polymer-DNA nanocomplexes (polyplexes). Conventional synthesis of polyplexes by bulk mixing depends on the operators’ experience. The poorly controlled bulk-mixing process may also lead to batch-to-batch variation and consequent irreproducibility. Here, we synthesize polyplexes by using a three-dimensional hydrodynamic focusing (3D-HF) technique in a single-layered, planar microfluidic device. Without any additional chemical treatment or post processing, the polyplexes prepared by the 3D-HF method show smaller size, slower aggregation rate, and higher transfection efficiency, while exhibiting reduced cytotoxicity compared to the ones synthesized by conventional bulk mixing. In addition, by introducing external acoustic perturbation, mixing can be further enhanced, leading to even smaller nanocomplexes. The 3D-HF method provides a simple and reproducible process for synthesizing high-quality polyplexes, addressing a critical barrier in the eventual translation of nucleic acid therapeutics. PMID:24341632

  18. Stratified Sampling Design Based on Data Mining

    PubMed Central

    Kim, Yeonkook J.; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon

    2013-01-01

    Objectives To explore classification rules based on data mining methodologies to be used in defining strata in stratified sampling of healthcare providers, with improved sampling efficiency. Methods We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Results Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of the variance, respectively. Conclusions This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea. PMID:24175117
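
    The two-step procedure (cluster, then extract readable stratum rules) maps directly onto standard tooling; the sketch below uses scikit-learn on made-up provider profiles, with feature names echoing the variables the study found informative (a real analysis would standardize the features before clustering).

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(5)                 # toy provider profiles
        Xp = np.column_stack([
            rng.gamma(2.0, 50.0, 300),                 # inpatients per specialist
            rng.integers(0, 200, 300).astype(float),   # number of beds
            rng.lognormal(7.0, 1.0, 300),              # population density
        ])

        # Step 1: k-means groups providers with similar characteristics.
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Xp)

        # Step 2: a shallow tree on the cluster labels yields cut-point rules
        # that can serve directly as stratum definitions.
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xp, labels)
        print(export_text(tree, feature_names=[
            "inpatients_per_specialist", "beds", "pop_density"]))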

  19. Status of stretched-membrane heliostats

    NASA Astrophysics Data System (ADS)

    Alpert, D. J.; Houser, R. M.; Heckes, A. A.

    1990-01-01

    Since the early 1980s, Sandia National Laboratories has been developing stretched-membrane heliostats for solar central receiver power plants. They differ from conventional glass-mirror heliostats in that the optical surface is a stretched membrane -- a thin metal foil stretched over both sides of a large diameter ring. The reflective surface is provided by either a silvered-acrylic film or thin glass mirrors attached to the front membrane. Heliostats with single 14 m diameter (150 sq meter) stretched-membrane reflectors have been designed. Because of their simplicity and light weight, stretched-membrane heliostats are expected to cost up to one-third less than conventional glass-mirror designs. Two generations of 50 sq meter prototype stretched-membrane mirror modules have been built and evaluated at Sandia's Central Receiver Test Facility in Albuquerque, NM. They demonstrated that the optical performance of membrane heliostats rivals that of glass-mirror heliostats. The durability of the silvered-acrylic reflective film has improved so that a lifetime of at least 5 years is likely; methods of replacing the film in the field are being investigated. Sandia recently initiated the final phase of development: the design of fully integrated, market-ready heliostats. Field tests of these heliostats are planned to begin in FY90.

  20. Effects of mesh type on a non-premixed model in a flameless combustion simulation

    NASA Astrophysics Data System (ADS)

    Komonhirun, Seekharin; Yongyingsakthavorn, Pisit; Nontakeaw, Udomkiat

    2018-01-01

    Flameless combustion is a recently developed combustion mode that yields near-zero pollutant emissions. The phenomenon requires auto-ignition, achieved by supplying high-temperature air with a low oxygen concentration. The flame becomes invisible and colorless. The temperature of flameless combustion is lower than that of a conventional flame, so NOx-forming reactions are well suppressed. To design a flameless combustor, computational fluid dynamics (CFD) is employed, and the designed air-and-fuel injection method can be modeled with turbulent, non-premixed combustion models. Because turbulent non-premixed combustion is governed by molecular-level mixing randomness, an inappropriate mesh type can lead to significant numerical errors. Therefore, this research aims to numerically investigate the effects of mesh type on flameless combustion characteristics, as a primary step in the design process. Different mesh types, i.e., tetrahedral and hexahedral, are selected. The boundary conditions are 5% oxygen and 900 K air-inlet temperature for the flameless case, and 21% oxygen and 300 K air-inlet temperature for the conventional case. The results are presented and discussed in terms of velocity streamlines and contours of turbulent kinetic energy and viscosity, temperature, and combustion products.
