Sample records for robust multichip analysis

  1. Multichip module technology for automotive application

    NASA Astrophysics Data System (ADS)

    Johnson, R. Wayne; Evans, John L.; Bosley, Larry

    1995-01-01

    Advancements in multichip module technology are creating design freedoms previously unavailable to design engineers. These advancements are opening new markets for laminate-based multichip module products. In particular, material improvements in laminate printed wiring boards are allowing multichip module technology to meet more stringent environmental conditions. In addition, improvements in encapsulants and adhesives are enhancing the capabilities of multichip module technology to meet harsh environments. Furthermore, improvements in manufacturing techniques are providing the reliability improvements necessary for use in high-quality electronic systems. These advances are making multichip module technology viable for high-volume, harsh-environment applications like under-the-hood automotive electronics. This paper provides a brief review of multichip module technology, a discussion of specific research activities with Chrysler on the use of multichip modules in automotive engine controllers, and finally a discussion of prototype multichip modules fabricated and tested.

  2. Experimental Study on Active Cooling Systems Used for Thermal Management of High-Power Multichip Light-Emitting Diodes

    PubMed Central

    2014-01-01

    The objective of this study was to develop suitable cooling systems for high-power multichip LEDs. To this end, three different active cooling systems were investigated to control the heat generated by the powering of high-power multichip LEDs in two different configurations (30 and 2 × 15 W). The following cooling systems were used in the study: an integrated multi-fin heat sink design with a fan, a cooling system with a thermoelectric cooler (TEC), and a heat pipe cooling device. According to the results, all three systems were observed to be sufficient for cooling high-power LEDs. Furthermore, it was observed that the integrated multi-fin heat sink design with a fan was the most efficient cooling system for a 30 W high-power multichip LED. The cooling system with a TEC and 46 W input power was the most efficient cooling system for 2 × 15 W high-power multichip LEDs. PMID:25162058
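
    As a rough illustration of the sizing logic behind such cooling comparisons (not taken from the paper, and with hypothetical numbers throughout), the required heat-sink performance for a given LED power follows from a simple series thermal-resistance budget:

      # Hypothetical junction-temperature budget for a high-power multichip LED.
      # All numbers are illustrative assumptions, not values from the study.
      def max_sink_resistance(t_j_max, t_ambient, power_w, r_internal):
          """Largest sink-to-ambient thermal resistance (K/W) keeping T_j below t_j_max."""
          return (t_j_max - t_ambient) / power_w - r_internal

      # 30 W module, 85 C junction limit, 25 C ambient, 0.8 K/W junction-to-sink path:
      print(max_sink_resistance(85.0, 25.0, 30.0, 0.8))  # -> 1.2 K/W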

  3. Programmable Multi-Chip Module

    DOEpatents

    Kautz, David; Morgenstern, Howard; Blazek, Roy J.

    2005-05-24

    A multi-chip module comprising a low-temperature co-fired ceramic substrate having a first side on which are mounted active components and a second side on which are mounted passive components, wherein this segregation of components allows for hermetically sealing the active components with a cover while leaving accessible the passive components, and wherein the passive components are secured using a reflow soldering technique and are removable and replaceable so as to make the multi-chip module substantially programmable with regard to the passive components.

  4. Programmable Multi-Chip Module

    DOEpatents

    Kautz, David; Morgenstern, Howard; Blazek, Roy J.

    2004-11-16

    A multi-chip module comprising a low-temperature co-fired ceramic substrate having a first side on which are mounted active components and a second side on which are mounted passive components, wherein this segregation of components allows for hermetically sealing the active components with a cover while leaving accessible the passive components, and wherein the passive components are secured using a reflow soldering technique and are removable and replaceable so as to make the multi-chip module substantially programmable with regard to the passive components.

  5. Programmable multi-chip module

    DOEpatents

    Kautz, David; Morgenstern, Howard; Blazek, Roy J.

    2004-03-02

    A multi-chip module comprising a low-temperature co-fired ceramic substrate having a first side on which are mounted active components and a second side on which are mounted passive components, wherein this segregation of components allows for hermetically sealing the active components with a cover while leaving accessible the passive components, and wherein the passive components are secured using a reflow soldering technique and are removable and replaceable so as to make the multi-chip module substantially programmable with regard to the passive components.

  6. Compact Multimedia Systems in Multi-chip Module Technology

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Alkalaj, Leon

    1995-01-01

    This tutorial paper shows advanced multimedia system designs based on multi-chip module (MCM) technologies that provide essential computing, compression, communication, and storage capabilities for various large-scale information highway applications.

  7. FERMI: a digital Front End and Readout MIcrosystem for high resolution calorimetry

    NASA Astrophysics Data System (ADS)

    Alexanian, H.; Appelquist, G.; Bailly, P.; Benetta, R.; Berglund, S.; Bezamat, J.; Blouzon, F.; Bohm, C.; Breveglieri, L.; Brigati, S.; Cattaneo, P. W.; Dadda, L.; David, J.; Engström, M.; Genat, J. F.; Givoletti, M.; Goggi, V. G.; Gong, S.; Grieco, G. M.; Hansen, M.; Hentzell, H.; Holmberg, T.; Höglund, I.; Inkinen, S. J.; Kerek, A.; Landi, C.; Ledortz, O.; Lippi, M.; Lofstedt, B.; Lund-Jensen, B.; Maloberti, F.; Mutz, S.; Nayman, P.; Piuri, V.; Polesello, G.; Sami, M.; Savoy-Navarro, A.; Schwemling, P.; Stefanelli, R.; Sundblad, R.; Svensson, C.; Torelli, G.; Vanuxem, J. P.; Yamdagni, N.; Yuan, J.; Ödmark, A.; Fermi Collaboration

    1995-02-01

    We present a digital solution for the front-end electronics of high resolution calorimeters at future colliders. It is based on analogue signal compression, high speed A/D converters, a fully programmable pipeline and a digital signal processing (DSP) chain with local intelligence and system supervision. This digital solution is aimed at providing maximal front-end processing power by performing waveform analysis using DSP methods. For the system integration of the multichannel device a silicon-on-silicon multi-chip module (MCM) has been adopted. This solution allows a high level of integration of complex analogue and digital functions, with excellent flexibility in mixing technologies for the different functional blocks. This type of multichip integration provides a high degree of reliability and programmability at both the function and the system level, with the additional possibility of customising the microsystem to detector-specific requirements. For enhanced reliability in high radiation environments, fault tolerance strategies, i.e. redundancy, reconfigurability, majority voting and coding for error detection and correction, are integrated into the design.

  8. An analog silicon retina with multichip configuration.

    PubMed

    Kameda, Seiji; Yagi, Tetsuya

    2006-01-01

    The neuromorphic silicon retina is a novel analog very large scale integrated circuit that emulates the structure and the function of the retinal neuronal circuit. In a previous study [1], we fabricated a neuromorphic silicon retina in which sample/hold circuits were embedded to generate fluctuation-suppressed outputs. The applications of this silicon retina, however, are limited because of a low spatial resolution and computational variability. In this paper, we have fabricated a multichip silicon retina in which the functional network circuits are divided into two chips: the photoreceptor network chip (P chip) and the horizontal cell network chip (H chip). The output images of the P chip are transferred to the H chip as analog voltages through the line-parallel transfer bus. The sample/hold circuits embedded in the P and H chips compensate for the pattern noise generated on the circuits, including the analog communication pathway. Using the multichip silicon retina together with an off-chip differential amplifier, spatial filtering of the image with odd- and even-symmetric orientation-selective receptive fields was carried out in real time. The analog data transfer method in the present multichip silicon retina is useful for designing analog neuromorphic multichip systems that mimic the hierarchical structure of neuronal networks in the visual system.

  9. A 1-Gigabit Memory System on a multi-Chip Module for Space Applications

    NASA Technical Reports Server (NTRS)

    Louie, Marianne E.; Topliffe, Douglas A.; Alkalai, Leon

    1996-01-01

    Current spaceborne applications desire compact, low weight, and high capacity data storage systems along with the additional requirement of radiation tolerance. This paper discusses a memory system on a multi-chip module (MCM) that is designed for space applications.

  10. Designing an Electronics Data Package for Printed Circuit Boards (PCBs)

    DTIC Science & Technology

    2013-08-01

    finished PCB flatness deviation should be less than 0.010 inches per inch. The minimum copper wall thickness of plated-thru holes should be... Memory Card International Association); IPC-6015 MCM-L (Multi-Chip Module – Laminated); IPC-6016 HDI (High Density Interconnect); IPC-6018... Interconnect; ICT: In-Circuit Tester; IPC: Association Connecting Electronics Industries; MCM-L: Multi-Chip Module – Laminated; MIL: Military; NEMA: National...

  11. Thermal Characterization for a Modular 3-D Multichip Module

    NASA Technical Reports Server (NTRS)

    Fan, Mark S.; Plante, Jeannette; Shaw, Harry

    2000-01-01

    NASA Goddard Space Flight Center has designed a high-density modular 3-D multichip module (MCM) for future spaceflight use. This MCM features a complete modular structure, i.e., each stack can be removed from the package without damaging the structure. The interconnection to the PCB is through Column Grid Array (CGA) technology. Because of its high-density nature, large power dissipation from multiple layers of circuitry is anticipated, and CVD diamond films are used in the assembly for heat conduction enhancement. Since each stacked layer dissipates a certain amount of heat, designing effective heat conduction paths through each stack and balancing the heat dissipation within each stack for optimal thermal performance become challenging tasks. To effectively remove the dissipated heat from the package, extensive thermal analysis has been performed with finite element methods. Through these analyses, we are able to improve the thermal design and increase the total wattage of the package for maximum electrical performance. This paper provides details on the design-oriented thermal analysis and performance enhancement. It also addresses issues relating to contact thermal resistance between the diamond film and the metallic heat conduction paths.
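
    The heat-conduction role of the CVD diamond films can be illustrated with a one-dimensional series-resistance estimate, R = Σ t_i/(k_i·A); the layer stack and numbers below are hypothetical, not taken from the MCM design:

      # 1-D series conduction through a hypothetical die/diamond/adhesive stack.
      # R = thickness / (conductivity * area) per layer; illustrative values only.
      layers = [
          ("silicon die",      300e-6,  150.0),   # thickness (m), k (W/m-K)
          ("CVD diamond film", 100e-6, 1200.0),   # very high k -> negligible resistance
          ("adhesive",          25e-6,    1.5),
      ]
      area = 1e-4  # 1 cm^2 footprint
      r_total = sum(t / (k * area) for _, t, k in layers)
      print(f"stack resistance = {r_total:.3f} K/W; rise at 10 W = {10 * r_total:.2f} K")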

  12. An introduction to NASA's advanced computing program: Integrated computing systems in advanced multichip modules

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Alkalai, Leon

    1996-01-01

    Recent changes within NASA's space exploration program favor the design, implementation, and operation of low cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module.

  13. Three dimensional, multi-chip module

    DOEpatents

    Bernhardt, A.F.; Petersen, R.W.

    1993-08-31

    A plurality of multi-chip modules are stacked and bonded around the perimeter by solder-bump bonds to adjacent modules on, for instance, three sides of the perimeter. The fourth side can be used for coolant distribution, for more interconnect structures, or other features, depending on particular design considerations of the chip set. The multi-chip modules comprise a circuit board, having a planarized interconnect structure formed on a first major surface, and integrated circuit chips bonded to the planarized interconnect surface. Around the periphery of each circuit board, long, narrow "dummy chips" are bonded to the finished circuit board to form a perimeter wall. The wall is higher than any of the chips on the circuit board, so that the flat back surface of the board above will only touch the perimeter wall. Module-to-module interconnect is laser-patterned on the sides of the boards and over the perimeter wall in the same way and at the same time that chip-to-board interconnect may be laser-patterned.

  14. Three dimensional, multi-chip module

    DOEpatents

    Bernhardt, Anthony F.; Petersen, Robert W.

    1993-01-01

    A plurality of multi-chip modules are stacked and bonded around the perimeter by solder-bump bonds to adjacent modules on, for instance, three sides of the perimeter. The fourth side can be used for coolant distribution, for more interconnect structures, or other features, depending on particular design considerations of the chip set. The multi-chip modules comprise a circuit board, having a planarized interconnect structure formed on a first major surface, and integrated circuit chips bonded to the planarized interconnect surface. Around the periphery of each circuit board, long, narrow "dummy chips" are bonded to the finished circuit board to form a perimeter wall. The wall is higher than any of the chips on the circuit board, so that the flat back surface of the board above will only touch the perimeter wall. Module-to-module interconnect is laser-patterned on the sides of the boards and over the perimeter wall in the same way and at the same time that chip-to-board interconnect may be laser-patterned.

  15. Orientation-selective aVLSI spiking neurons.

    PubMed

    Liu, S C; Kramer, J; Indiveri, G; Delbrück, T; Burg, T; Douglas, R

    2001-01-01

    We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
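
    The event-driven communication described above is commonly realized as an address-event representation (AER), in which a spike is transmitted simply as the address of the neuron that fired, and a mapping table implements the virtual wiring. The following is a minimal schematic sketch of that idea, not the authors' implementation:

      # Minimal address-event-representation (AER) sketch: spikes travel as bare
      # addresses, and a routing table provides the "virtual wiring" between chips.
      from collections import defaultdict

      routing_table = defaultdict(list)
      routing_table[(3, 5)] = [("transceiver_chip", 12), ("transceiver_chip", 13)]

      def emit_spike(sender_addr, bus):
          bus.append(sender_addr)          # only the sender's address goes on the bus

      def deliver(bus):
          for addr in bus:
              for chip, synapse in routing_table[addr]:
                  print(f"event from {addr} -> {chip}, synapse {synapse}")

      bus = []
      emit_spike((3, 5), bus)              # retina pixel (3, 5) fires
      deliver(bus)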

  16. A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.

    PubMed

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2005-07-01

    In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides the orientation-selective (OS) outputs which are tuned to 0 degrees, 60 degrees, and 120 degrees. The feed-forward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.

  17. Selective attention in multi-chip address-event systems.

    PubMed

    Bartolozzi, Chiara; Indiveri, Giacomo

    2009-01-01

    Selective attention is the strategy used by biological systems to cope with the inherent limits in their available computational resources, in order to efficiently process sensory information. The same strategy can be used in artificial systems that have to process vast amounts of sensory data with limited resources. In this paper we present a neuromorphic VLSI device, the "Selective Attention Chip" (SAC), which can be used to implement these models in multi-chip address-event systems. We also describe a real-time sensory-motor system, which integrates the SAC with a dynamic vision sensor and a robotic actuator. We present experimental results from each component in the system, and demonstrate how the complete system implements a real-time stimulus-driven selective attention model.

  18. Gene expression profiles analysis identifies key genes for acute lung injury in patients with sepsis.

    PubMed

    Guo, Zhiqiang; Zhao, Chuncheng; Wang, Zheng

    2014-09-26

    To identify critical genes and biological pathways in acute lung injury (ALI), a comparative analysis of gene expression profiles of patients with ALI + sepsis versus patients with sepsis alone was performed with bioinformatic tools. GSE10474 was downloaded from Gene Expression Omnibus, comprising 13 whole blood samples with ALI + sepsis and 21 whole blood samples with sepsis alone. After pre-treatment with the robust multichip averaging (RMA) method, differential analysis was conducted using the simpleaffy package, based upon the t-test and fold change. Hierarchical clustering was also performed using the function hclust from the package stats. Besides, functional enrichment analysis was conducted using iGepros. Moreover, the gene regulatory network was constructed with information from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and then visualized by Cytoscape. A total of 128 differentially expressed genes (DEGs) were identified, including 47 up- and 81 down-regulated genes. The significantly enriched functions included negative regulation of cell proliferation, regulation of response to stimulus, and cellular component morphogenesis. A total of 27 DEGs were significantly enriched in 16 KEGG pathways, such as protein digestion and absorption, fatty acid metabolism, amoebiasis, etc. Furthermore, the regulatory network of these 27 DEGs was constructed, which involved several key genes, including protein tyrosine kinase 2 (PTK2), v-src avian sarcoma (SRC) and Caveolin 2 (CAV2). PTK2, SRC and CAV2 may be potential markers for diagnosis and treatment of ALI. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5865162912987143.
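
    The selection step described above (a per-gene t-test plus fold-change filter on RMA-normalized data) was done in R with simpleaffy; the following Python sketch reproduces the same logic on an already-normalized log2 matrix, with toy data and illustrative thresholds standing in for the study's actual cutoffs:

      # t-test / fold-change screen on an RMA-normalized log2 matrix (genes x samples).
      # Toy data; the thresholds are illustrative, not the study's exact cutoffs.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      expr = rng.normal(8.0, 1.0, size=(1000, 34))       # stand-in for GSE10474
      ali_sepsis, sepsis = expr[:, :13], expr[:, 13:]    # 13 vs. 21 samples

      t_stat, p = stats.ttest_ind(ali_sepsis, sepsis, axis=1)
      log2_fc = ali_sepsis.mean(axis=1) - sepsis.mean(axis=1)

      degs = np.where((p < 0.05) & (np.abs(log2_fc) > np.log2(1.5)))[0]
      print(f"{degs.size} candidate differentially expressed genes")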

  19. Leaving out control groups: an internal contrast analysis of gene expression profiles in atrial fibrillation patients--a systems biology approach to clinical categorization.

    PubMed

    Vanhoutte, Kurt; de Asmundis, Carlo; Francesconi, Anna; Figys, Jurgen; Steurs, Griet; Boussy, Tim; Roos, Markus; Mueller, Andreas; Massimo, Lucio; Paparella, Gaetano; Van Caelenberg, Kristien; Chierchia, Gian Battista; Sarkozy, Andrea; Terradellas, Pedro Brugada Y; Zizi, Martin

    2009-01-01

    Atrial fibrillation (AF) is a frequent chronic dysrhythmia with an incidence that increases with age (>40). Because of its medical and socio-economic impacts it is expected to become an increasing burden on most health care systems. AF is a multi-factorial disease for which the identification of subtypes is warranted. Novel approaches based on the broad concepts of systems biology may overcome the blurred notion of normal and pathological phenotype, which is inherent to high-throughput molecular array analysis. Here we apply an internal contrast algorithm to AF patient data with an analytical focus on potential entry pathways into the disease. We used an RMA (Robust Multichip Average) normalized Affymetrix micro-array data set from 10 AF patients (geo_accession #GSE2240). Four series of probes were selected based on physiopathogenic links with AF entryways: apoptosis (remodeling), MAP kinase (cell remodeling), OXPHOS (ability to sustain hemodynamic workload) and glycolysis (ischemia). Annotated probe lists were polled with Bioconductor packages in R (version 2.7.1). Genetic profile contrasts were analysed with hierarchical clustering and principal component analysis. The analysis revealed distinct patient groups for all probe sets. A substantial part (54% to 67%) of the variance is explained by the first 2 principal components. Genes in PC1/2 with high discriminatory value were selected and analyzed in detail. We aim for reliable molecular stratification of AF. We show that stratification is possible based on physiologically relevant gene sets. Genes with high contrast value are likely to give pathophysiological insight into permanent AF subtypes.
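
    The variance-explained figures quoted above come from principal component analysis of the normalized profiles; as a rough sketch of that computation (toy data, SVD in place of R's prcomp):

      # PCA of patient expression profiles via SVD (toy stand-in for prcomp).
      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.normal(size=(10, 200))        # 10 patients x selected probes
      Xc = X - X.mean(axis=0)               # center each probe across patients
      U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
      var_explained = s**2 / np.sum(s**2)
      scores = U[:, :2] * s[:2]             # patient coordinates in PC1/PC2
      print(f"PC1+PC2 explain {100 * var_explained[:2].sum():.1f}% of the variance")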

  20. Tunneling Accelerometer Multichip Module

    NASA Technical Reports Server (NTRS)

    Boyadzhyan, V. V.; Choma, J., Jr.

    1998-01-01

    This paper summarizes and reports on a radiation-hardened Spacecraft Integrated Sensor/Circuit Technology that has been delivered for flight applications to be flown on board STRV-2 (Space Technology Research Vehicle-2).

  1. Tunable All-Solid-State Local Oscillators to 1900 GHz

    NASA Technical Reports Server (NTRS)

    Ward, John; Chattopadhyay, Goutam; Maestrini, Alain; Schlecht, Erich; Gill, John; Javadi, Hamid; Pukala, David; Maiwald, Frank; Mehdi, Imran

    2004-01-01

    We present a status report of an ongoing effort to develop robust tunable all-solid-state sources up to 1900 GHz for the Heterodyne Instrument for the Far Infrared (HIFI) on the Herschel Space Observatory. GaAs-based multi-chip power amplifier modules at W-band are used to drive cascaded chains of multipliers. We have demonstrated performance from chains comprised of four doublers up to 1600 GHz as well as from a x2x3x3 chain to 1900 GHz. Measured peak output power of 23 µW at 1782 GHz and 2.6 µW at 1900 GHz has been achieved when the multipliers are cooled to 120 K. The 1900 GHz tripler was pumped with a four-anode tripler that produces a peak of 4 mW at 630 GHz when cooled to 120 K. We believe that these sources can now be used to pump hot electron bolometer (HEB) heterodyne mixers.
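
    The chain arithmetic behind those numbers is simply the W-band drive frequency times the product of the stage multiplication factors; a x2x3x3 chain multiplies by 18, so a drive near 105.6 GHz lands at 1900 GHz (the drive values below are back-calculated for illustration, not quoted from the paper):

      # Frequency-multiplier chain arithmetic: output = drive x product of stages.
      from math import prod

      def chain_output_ghz(drive_ghz, stages):
          return drive_ghz * prod(stages)

      print(chain_output_ghz(100.0, [2, 2, 2, 2]))   # four doublers -> 1600 GHz
      print(chain_output_ghz(105.6, [2, 3, 3]))      # x2x3x3 chain -> ~1900 GHz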

  2. Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program

    NASA Astrophysics Data System (ADS)

    Fayette, Daniel F.; Speicher, Patricia; Stoklosa, Mark J.; Evans, Jillian V.; Evans, John W.; Gentile, Mike; Pagel, Chuck A.; Hakim, Edward

    1993-08-01

    A joint military-commercial effort to evaluate multichip module (MCM) structures is discussed. The program, Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH), has been designed to identify the failure mechanisms that are possible in MCM structures. The RELTECH test vehicles, technical assessment task, product evaluation plan, reliability modeling task, accelerated and environmental testing, and post-test physical analysis and failure analysis are described. The information obtained through RELTECH can be used to address standardization issues, through development of cost effective qualification and appropriate screening criteria, for inclusion into a commercial specification and the MIL-H-38534 general specification for hybrid microcircuits.

  3. Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program

    NASA Technical Reports Server (NTRS)

    Fayette, Daniel F.; Speicher, Patricia; Stoklosa, Mark J.; Evans, Jillian V.; Evans, John W.; Gentile, Mike; Pagel, Chuck A.; Hakim, Edward

    1993-01-01

    A joint military-commercial effort to evaluate multichip module (MCM) structures is discussed. The program, Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH), has been designed to identify the failure mechanisms that are possible in MCM structures. The RELTECH test vehicles, technical assessment task, product evaluation plan, reliability modeling task, accelerated and environmental testing, and post-test physical analysis and failure analysis are described. The information obtained through RELTECH can be used to address standardization issues, through development of cost effective qualification and appropriate screening criteria, for inclusion into a commercial specification and the MIL-H-38534 general specification for hybrid microcircuits.

  4. An integrated circuit switch

    NASA Technical Reports Server (NTRS)

    Bonin, E. L.

    1969-01-01

    A multi-chip integrated circuit switch consists of a GaAs photon-emitting diode in close proximity to a Si phototransistor. A high current gain is obtained when the transistor has a high forward common-emitter current gain.

  5. Fabrication and demonstration of 1 × 8 silicon-silica multi-chip switch based on optical phased array

    NASA Astrophysics Data System (ADS)

    Katayose, Satomi; Hashizume, Yasuaki; Itoh, Mikitaka

    2016-08-01

    We experimentally demonstrated a 1 × 8 silicon-silica hybrid thermo-optic switch based on an optical phased array using a multi-chip integration technique. The switch consists of a silicon chip with optical phase shifters and two silica-based planar lightwave circuit (PLC) chips composed of optical couplers and fiber connections. We adopted a rib waveguide as the silicon waveguide to reduce the coupling loss and increase the alignment tolerance for coupling between silicon and silica waveguides. As a result, we achieved a fast switching response of 81 µs, a high extinction ratio of over 18 dB and a low insertion loss of 4.9-8.1 dB including a silicon-silica coupling loss of 0.5 ± 0.3 dB at a wavelength of 1.55 µm.

  6. The Use of Metal Filled Via Holes for Improving Isolation in LTCC RF and Wireless Multichip Packages

    NASA Technical Reports Server (NTRS)

    Ponchak, George E.; Chun, Donghoon; Yook, Jong-Gwan; Katehi, Linda P. B.

    1999-01-01

    LTCC (low-temperature cofired ceramic) multichip modules (MCMs) for RF and wireless systems often use metal-filled via holes to improve isolation between the stripline and microstrip interconnects. In this paper, results from a 3D-FEM electromagnetic characterization of microstrip and stripline interconnects with metal-filled via fences for isolation are presented. It is shown that placement of a via hole fence closer than three times the substrate height to the transmission lines increases radiation and coupling. Radiation loss and reflections are increased when a short via fence is used in areas suspected of having high radiation. Also, via posts should not be separated by more than three times the substrate height for low radiation loss, coupling, and suppression of higher order modes in a package.
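
    The two spacing guidelines reported here (keep the fence at least three substrate heights away from the line, and keep the via pitch within three substrate heights) can be captured as a simple design-rule check; this is a sketch of the rule of thumb, not the authors' tool:

      # Design-rule check for the via-fence guidelines above (all lengths in mm).
      def via_fence_ok(fence_distance, via_pitch, substrate_height):
          far_enough = fence_distance >= 3 * substrate_height   # avoid added coupling
          dense_enough = via_pitch <= 3 * substrate_height      # suppress leakage/modes
          return far_enough and dense_enough

      print(via_fence_ok(fence_distance=0.9, via_pitch=0.5, substrate_height=0.25))  # True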

  7. Leaving out control groups: an internal contrast analysis of gene expression profiles in atrial fibrillation patients ‐ A systems biology approach to clinical categorization

    PubMed Central

    Vanhoutte, Kurt; de Asmundis, Carlo; Francesconi, Anna; Figys, Jurgen; Steurs, Griet; Boussy, Tim; Roos, Markus; Mueller, Andreas; Massimo, Lucio; Paparella, Gaetano; Van Caelenberg, Kristien; Chierchia, Gian Battista; Sarkozy, Andrea; Y Terradellas, Pedro Brugada; Zizi, Martin

    2009-01-01

    Atrial fibrillation (AF) is a frequent chronic dysrhythmia with an incidence that increases with age (>40). Because of its medical and socio-economic impacts it is expected to become an increasing burden on most health care systems. AF is a multi-factorial disease for which the identification of subtypes is warranted. Novel approaches based on the broad concepts of systems biology may overcome the blurred notion of normal and pathological phenotype, which is inherent to high-throughput molecular array analysis. Here we apply an internal contrast algorithm to AF patient data with an analytical focus on potential entry pathways into the disease. We used an RMA (Robust Multichip Average) normalized Affymetrix micro-array data set from 10 AF patients (geo_accession #GSE2240). Four series of probes were selected based on physiopathogenic links with AF entryways: apoptosis (remodeling), MAP kinase (cell remodeling), OXPHOS (ability to sustain hemodynamic workload) and glycolysis (ischemia). Annotated probe lists were polled with Bioconductor packages in R (version 2.7.1). Genetic profile contrasts were analysed with hierarchical clustering and principal component analysis. The analysis revealed distinct patient groups for all probe sets. A substantial part (54% to 67%) of the variance is explained by the first 2 principal components. Genes in PC1/2 with high discriminatory value were selected and analyzed in detail. We aim for reliable molecular stratification of AF. We show that stratification is possible based on physiologically relevant gene sets. Genes with high contrast value are likely to give pathophysiological insight into permanent AF subtypes. PMID:19255648

  8. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform

    PubMed Central

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is however a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on time lag in the activation of nearby retinal neurons. Mimicking ganglion cells our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Hereby we describe the architectural aspects, discuss its latency, scalability, and robustness properties and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015

  9. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform.

    PubMed

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is however a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on time lag in the activation of nearby retinal neurons. Mimicking ganglion cells our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Hereby we describe the architectural aspects, discuss its latency, scalability, and robustness properties and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks.

  10. Thermal fatigue life evaluation of SnAgCu solder joints in a multi-chip power module

    NASA Astrophysics Data System (ADS)

    Barbagallo, C.; Malgioglio, G. L.; Petrone, G.; Cammarata, G.

    2017-05-01

    For power devices, thermal fatigue induced by thermal cycling is a key reliability concern. The main goal of this work is to apply a numerical procedure to assess the fatigue life of lead-free solder joints, which in general represent the weakest part of electronic modules. Starting from a real multi-chip power module, FE-based models were built up under different implementation conditions in order to simulate, on the one hand, the worst working condition for the module and, on the other, the module standing in a climatic test room performing thermal cycles. Simulations were carried out in both steady and transient conditions in order to estimate the module thermal maps, the stress-strain distributions, and the effective plastic strain distributions, and finally to assess the number of cycles to failure of the constitutive solder layers.
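
    A common way to convert a computed plastic strain range into a cycle count (though the paper's specific fatigue model and constants are not reproduced here) is a Coffin-Manson-type relation, sketched below with placeholder coefficients:

      # Coffin-Manson-type estimate: delta_eps_p / 2 = eps_f * (2 * N_f)**c.
      # eps_f and c below are placeholder values, not the paper's fitted constants.
      def cycles_to_failure(delta_eps_p, eps_f=0.325, c=-0.5):
          return 0.5 * (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)

      print(f"~{cycles_to_failure(0.01):.0f} cycles at a 1% plastic strain range")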

  11. Smart substrates: Making multi-chip modules smarter

    NASA Astrophysics Data System (ADS)

    Wunsch, T. F.; Treece, R. K.

    1995-05-01

    A novel multi-chip module (MCM) design and manufacturing methodology which utilizes active CMOS circuits in what is normally a passive substrate realizes the 'smart substrate' for use in highly testable, high-reliability MCMs. The active devices are used to test the bare substrate, diagnose assembly errors or integrated circuit (IC) failures that require rework, and improve the testability of the final MCM assembly. A static random access memory (SRAM) MCM has been designed and fabricated in Sandia's Microelectronics Development Laboratory in order to demonstrate the technical feasibility of this concept and to examine design and manufacturing issues which will ultimately determine the economic viability of this approach. The smart substrate memory MCM represents a first in MCM packaging. At the time the first modules were fabricated, no other company or MCM vendor had incorporated active devices in the substrate to improve manufacturability and testability, and thereby improve MCM reliability and reduce cost.

  12. Reliability assessment of Multichip Module technologies via the Triservice/NASA RELTECH program

    NASA Astrophysics Data System (ADS)

    Fayette, Daniel F.

    1994-10-01

    Multichip module (MCM) packaging/interconnect technologies have seen increased emphasis from both the commercial and military communities as a means of increasing capability and performance while providing a vehicle for reducing the cost, power, and weight of the end-item electronic application. This is accomplished through three basic multichip module technologies: MCM-L, which are laminates; MCM-C, which are ceramic-type substrates; and MCM-D, which are deposited substrates (e.g., polymer dielectric with thin-film metals). Three types of interconnect structures are also used with these substrates: wire bonds, tape automated bonds (TAB), and flip-chip ball bonds. Application, cost, producibility, and reliability are the drivers that will determine which MCM technology will best fit a respective need or requirement. With all the benefits and technologies cited, it would be expected that the use of, or the planned use of, MCMs would be more extensive in both military and commercial applications. However, two significant roadblocks exist to implementation of these new technologies: the absence of reliability data and of a single national standard for the procurement of reliable, quality MCMs. To address the preceding issues, the Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program has been established. This program, which began in May 1992, has endeavored to evaluate a cross section of MCM technologies covering all classes of MCMs previously cited. NASA and the Tri-Services (Air Force Rome Laboratory; Naval Surface Warfare Center, Crane, IN; and Army Research Laboratory) have teamed together with sponsorship from ARPA to evaluate the performance, reliability, and producibility of MCMs for both military and commercial usage. This is done in close cooperation with our industry partners, whose support is critical to the goals of the program. Several tasks are being performed by the RELTECH program, and data from this effort, in conjunction with information from our industry partners as well as discussions with industry organizations (IPC, EIA, ISHM, etc.), are being used to develop the qualification and screening requirements for MCMs. Specific tasks being performed by the RELTECH program include technical assessments, product evaluations, reliability modeling, environmental testing, and failure analysis. This paper will describe the various tasks associated with the RELTECH program, its status and progress, and the national dual-use specification being developed for MCM technologies.

  13. In situ 3D nanoprinting of free-form coupling elements for hybrid photonic integration

    NASA Astrophysics Data System (ADS)

    Dietrich, P.-I.; Blaicher, M.; Reuter, I.; Billah, M.; Hoose, T.; Hofmann, A.; Caer, C.; Dangel, R.; Offrein, B.; Troppenz, U.; Moehrle, M.; Freude, W.; Koos, C.

    2018-04-01

    Hybrid photonic integration combines complementary advantages of different material platforms, offering superior performance and flexibility compared with monolithic approaches. This applies in particular to multi-chip concepts, where components can be individually optimized and tested. The assembly of such systems, however, requires expensive high-precision alignment and adaptation of optical mode profiles. We show that these challenges can be overcome by in situ printing of facet-attached beam-shaping elements. Our approach allows precise adaptation of vastly dissimilar mode profiles and permits alignment tolerances compatible with cost-efficient passive assembly techniques. We demonstrate a selection of beam-shaping elements at chip and fibre facets, achieving coupling efficiencies of up to 88% between edge-emitting lasers and single-mode fibres. We also realize printed free-form mirrors that simultaneously adapt beam shape and propagation direction, and we explore multi-lens systems for beam expansion. The concept paves the way to automated assembly of photonic multi-chip systems with unprecedented performance and versatility.

  14. Characterization of visceral and subcutaneous adipose tissue transcriptome in pregnant women with and without spontaneous labor at term: implication of alternative splicing in the metabolic adaptations of adipose tissue to parturition.

    PubMed

    Mazaki-Tovi, Shali; Tarca, Adi L; Vaisbuch, Edi; Kusanovic, Juan Pedro; Than, Nandor Gabor; Chaiworapongsa, Tinnakorn; Dong, Zhong; Hassan, Sonia S; Romero, Roberto

    2016-10-01

    The aim of this study was to determine gene expression and splicing changes associated with parturition and regions (visceral vs. subcutaneous) of the adipose tissue of pregnant women. The transcriptome of visceral and abdominal subcutaneous adipose tissue from pregnant women at term with (n=15) and without (n=25) spontaneous labor was profiled with the Affymetrix GeneChip Human Exon 1.0 ST array. Overall gene expression changes and the differential exon usage rate were compared between patient groups (unpaired analyses) and adipose tissue regions (paired analyses). Selected genes were tested by quantitative reverse transcription-polymerase chain reaction. Four hundred and eighty-two genes were differentially expressed between visceral and subcutaneous fat of pregnant women with spontaneous labor at term (q-value <0.1; fold change >1.5). Biological processes enriched in this comparison included tissue and vasculature development as well as inflammatory and metabolic pathways. Differential splicing was found for 42 genes [q-value <0.1; differences in Finding Isoforms using Robust Multichip Analysis (FIRMA) scores >2] between adipose tissue regions of women not in labor. Differential exon usage associated with parturition was found for three genes (LIMS1, HSPA5, and GSTK1) in subcutaneous tissues. We show for the first time evidence implicating the mRNA splicing and processing machinery in the subcutaneous adipose tissue of women in labor compared to those without labor.
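
    The FIRMA scores used above build on the RMA-style additive probe model: a gene-level fit (median polish) leaves residuals, and an exon is scored by how its probes' residuals deviate in a given sample. A simplified toy sketch of that idea (omitting FIRMA's scale normalization):

      # Simplified FIRMA-like scoring: median-polish a gene's probe matrix, then
      # score each exon per sample by the median residual of its probes (toy data).
      import numpy as np

      def median_polish(y, n_iter=10):
          res = y.copy()
          for _ in range(n_iter):
              res -= np.median(res, axis=1, keepdims=True)   # remove probe effects
              res -= np.median(res, axis=0, keepdims=True)   # remove sample effects
          return res

      rng = np.random.default_rng(2)
      probes = rng.normal(size=(8, 6))                  # 8 probes x 6 samples, one gene
      exon_of_probe = np.array([0, 0, 0, 1, 1, 2, 2, 2])
      resid = median_polish(probes)
      scores = np.array([np.median(resid[exon_of_probe == e], axis=0)
                         for e in range(3)])            # exon x sample FIRMA-like scores
      print(scores.shape)                               # (3, 6)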

  15. Materials Research for GHz Multi-Chip Modules

    DTIC Science & Technology

    1993-09-30

    Publications: Laursen, K., Hertling, D., Berry, N., Bidstrup, S.A., Kohl, P., and Arroz, A., "Measurement of the Electrical Properties of High Performance... Materials," Fall 1992. Hertling, D.R., Laursen, K., Bidstrup, S.A., Kohl, P.A., Arroz, G.S., "Measurement of the Electrical Properties of High

  16. Transcriptional profiles in liver from rats treated with tumorigenic and non-tumorigenic triazole conazole fungicides: Propiconazole, triadimefon, and myclobutanil.

    PubMed

    Hester, Susan D; Wolf, Douglas C; Nesnow, Stephen; Thai, Sheau-Fung

    2006-01-01

    Conazoles are a class of fungicides used as pharmaceutical and agricultural agents. In chronic bioassays in rats, triadimefon was hepatotoxic and induced follicular cell adenomas in the thyroid gland, whereas propiconazole and myclobutanil were hepatotoxic but had no effect on the thyroid gland. These conazoles, administered in the feed to male Wistar/Han rats, were found to induce hepatomegaly, induce high levels of pentoxyresorufin-O-dealkylase, increase cell proliferation in the liver, increase serum cholesterol, decrease serum T3 and T4, and increase hepatic uridine diphosphoglucuronosyl transferase activity. The goal of the present study was to define pathways that explain the biologic outcomes. Male Wistar/Han rats (3 per group) were exposed to the 3 conazoles in the feed for 4, 30, or 90 days of treatment at tumorigenic and nontumorigenic doses. Hepatic gene expression was determined using high-density Affymetrix GeneChips (Rat 230_2). Differential gene expression was assessed at the probe level using Robust Multichip Average analysis. Principal component analysis by treatment and time showed within-group sample similarity and that the treatment groups were distinct from each other. The number of altered genes varied by treatment, dose, and time. The greatest number of altered genes was induced by triadimefon and propiconazole after 90 days of treatment, while myclobutanil had minimal effects at that time point. Pathway-level analyses revealed that after 90 days of treatment the most significant numbers of altered pathways were related to cell signaling, growth, and metabolism. Pathway-level analysis for triadimefon and propiconazole resulted in 71 altered pathways common to both chemicals. These pathways controlled cholesterol metabolism, activation of nuclear receptors, and N-ras and K-ras signaling. There were 37 pathways uniquely changed by propiconazole, and triadimefon uniquely altered 34 pathways. Pathway-level analysis of altered gene expression resulted in a more complete description of the associated toxicological effects that can distinguish triadimefon from propiconazole and myclobutanil.

  17. Two-Stage, 90-GHz, Low-Noise Amplifier

    NASA Technical Reports Server (NTRS)

    Samoska, Lorene A.; Gaier, Todd C.; Xenos, Stephanie; Soria, Mary M.; Kangaslahti, Pekka P.; Cleary, Kieran A.; Ferreira, Linda; Lai, Richard; Mei, Xiaobing

    2010-01-01

    A device has been developed for coherent detection of the polarization of the cosmic microwave background (CMB). A two-stage amplifier has been designed that covers 75-110 GHz. The device uses the emerging 35-nm InP HEMT technology recently developed at Northrop Grumman Corporation primarily for use at higher frequencies. The amplifier has more than 18 dB gain and less than 35 K noise figure across the band. These devices have noise less than 30 K at 100 GHz. The development started with design activities at JPL, as well as characterization of multichip modules using existing InP. Following processing, a test campaign was carried out using single-chip modules at 100 GHz. Successful development of the chips will lead to development of multichip modules, with simultaneous Q and U Stokes parameter detection. This MMIC (monolithic microwave integrated circuit) amplifier takes advantage of performance improvements intended for higher frequencies, but in this innovation are applied at 90 GHz. The large amount of available gain ultimately leads to lower possible noise performance at 90 GHz.

  18. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    PubMed Central

    Indiveri, Giacomo

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map based models of selective attention. PMID:27873818

  19. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems.

    PubMed

    Indiveri, Giacomo

    2008-09-03

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map based models of selective attention.

  20. Thin-film decoupling capacitors for multi-chip modules

    NASA Astrophysics Data System (ADS)

    Dimos, D.; Lockwood, S. J.; Schwartz, R. W.; Rogers, M. S.

    Thin-film decoupling capacitors based on ferroelectric lead lanthanum zirconate titanate (PLZT) films are being developed for use in advanced packages, such as multi-chip modules. These thin-film decoupling capacitors are intended to replace multi-layer ceramic capacitors for certain applications, since they can be more fully integrated into the packaging architecture. The increased integration that can be achieved should lead to decreased package volume and improved high-speed performance, due to a decrease in interconnect inductance. PLZT films are fabricated by spin coating using metal carboxylate/alkoxide solutions. These films exhibit very high dielectric constants (ε ≥ 900), low dielectric losses (tan δ = 0.01), excellent insulation resistances (ρ > 10^13 Ω·cm at 125 °C), and good breakdown field strengths (E_B = 900 kV/cm). For integrated circuit applications, the PLZT dielectric is less than 1 micron thick, which results in a large capacitance per area (8-9 nF/mm²). The thin-film geometry and processing conditions also make these capacitors suitable for direct incorporation onto integrated circuits and for packages that require embedded components.

  1. Compact microelectrode array system: tool for in situ monitoring of drug effects on neurotransmitter release from neural cells.

    PubMed

    Chen, Yu; Guo, Chunxian; Lim, Layhar; Cheong, Serchoong; Zhang, Qingxin; Tang, Kumcheong; Reboud, Julien

    2008-02-15

    This paper presents a compact microelectrode array (MEA) system to study potassium ion-induced dopamine release from PC12 neural cells without relying on a micromanipulator and a microscope. The MEA chip was integrated with a custom-made "test jig", which provides a robust electrical interfacing tool between the microchip and the macroenvironment, together with a potentiostat and a microfluidic syringe pump. This integrated system significantly simplifies the operation procedures, enhances sensing performance, and reduces fabrication costs. The achieved detection limit for dopamine is 3.8 × 10^-2 µM (signal/noise, S/N = 3) and the dopamine linear calibration range is up to 7.39 ± 0.06 µM (mean ± SE). The effects of the extracellular matrix collagen coating of the microelectrodes on dopamine sensing behaviors, as well as the influences of K+ and L-3,4-dihydroxyphenylalanine concentrations and incubation times on dopamine release, were extensively studied. The results show that our system is well suited for biologists to study chemical release from living cells as well as drug effects on secreting cells. The current system also shows potential for further improvements toward a multichip array system for drug screening applications.

  2. Low power signal processing research at Stanford

    NASA Technical Reports Server (NTRS)

    Burr, J.; Williamson, P. R.; Peterson, A.

    1991-01-01

    This paper gives an overview of the research being conducted at Stanford University's Space, Telecommunications, and Radioscience Laboratory in the area of low energy computation. It discusses the work we are doing in large scale digital VLSI neural networks, interleaved processor and pipelined memory architectures, energy estimation and optimization, multichip module packaging, and low voltage digital logic.

  3. Ultra low power CMOS technology

    NASA Technical Reports Server (NTRS)

    Burr, J.; Peterson, A.

    1991-01-01

    This paper discusses the motivation, opportunities, and problems associated with implementing digital logic at very low voltages, including the challenge of making use of the available real estate in 3D multichip modules, energy requirements of very large neural networks, energy optimization metrics and their impact on system design, modeling problems, circuit design constraints, possible fabrication process modifications to improve performance, and barriers to practical implementation.

  4. Advanced Interconnect Roadmap for Space Applications

    NASA Technical Reports Server (NTRS)

    Galbraith, Lissa

    1999-01-01

    This paper presents the NASA electronic parts and packaging program for space applications. The topics include: 1) Forecasts; 2) Technology Challenges; 3) Research Directions; 4) Research Directions for Chip on Board (COB); 5) Research Directions for HDPs: Multichip Modules (MCMs); 6) Research Directions for Microelectromechanical systems (MEMS); 7) Research Directions for Photonics; and 8) Research Directions for Materials. This paper is presented in viewgraph form.

  5. An ultra-compact processor module based on the R3000

    NASA Astrophysics Data System (ADS)

    Mullenhoff, D. J.; Kaschmitter, J. L.; Lyke, J. C.; Forman, G. A.

    1992-08-01

    Viable high-density packaging is of critical importance for future military systems, particularly spaceborne systems, which require minimum weight and size and high mechanical integrity. A leading, emerging technology for high-density packaging is multi-chip modules (MCM). During the 1980s, a number of different MCM technologies emerged. In support of Strategic Defense Initiative Organization (SDIO) programs, Lawrence Livermore National Laboratory (LLNL) has developed, utilized, and evaluated several different MCM technologies. Prior LLNL efforts include modules developed in 1986, using hybrid wafer scale packaging, which are still operational in an Air Force satellite mission. More recent efforts have included very high density cache memory modules, developed using laser pantography. As part of the demonstration effort, LLNL and Phillips Laboratory began collaborating in 1990 on the Phase 3 Multi-Chip Module (MCM) technology demonstration project. The goal of this program was to demonstrate the feasibility of General Electric's (GE) High Density Interconnect (HDI) MCM technology. The design chosen for this demonstration was the processor core for a MIPS R3000 based reduced instruction set computer (RISC), which has been described previously. It consists of the R3000 microprocessor, R3010 floating point coprocessor and 128 Kbytes of cache memory.

  6. Partition resampling and extrapolation averaging: approximation methods for quantifying gene expression in large numbers of short oligonucleotide arrays.

    PubMed

    Goldstein, Darlene R

    2006-10-01

    Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
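
    As a rough sketch of the partition idea described above (a toy per-subset summary stands in for a full multi-chip measure such as RMA, and the combination rule here is a plain average; the actual PR estimator's details may differ):

      # Partition-resampling-style approximation: summarize expression on random
      # chip subsets and average the per-gene estimates across rounds (toy summary).
      import numpy as np

      def partition_resampling(expr, subset_size, n_rounds, rng):
          n_genes, n_chips = expr.shape
          est = np.zeros((n_rounds, n_genes, n_chips))
          for r in range(n_rounds):
              order = rng.permutation(n_chips)
              for start in range(0, n_chips, subset_size):
                  chips = order[start:start + subset_size]
                  block = expr[:, chips]
                  # toy multi-chip summary: per-gene median-centering within the subset
                  est[r][:, chips] = block - np.median(block, axis=1, keepdims=True)
          return est.mean(axis=0)

      rng = np.random.default_rng(3)
      expr = rng.normal(size=(500, 120))                   # genes x chips
      print(partition_resampling(expr, 20, 5, rng).shape)  # (500, 120)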

  7. MM&T for VHSIC Multichip Packages

    DTIC Science & Technology

    1989-09-20

    broader the network bandwidth must be in order to preserve the integrity of the rising and falling edges of the signal. However, for very fast ... inch of path length adds approximately 100-300 ps of delay to the signal. System designers of fast ECL-type devices must incorporate these delays into ... intermittently in low temperature areas. Permanent changes in operating characteristics and physical damage produced during temperature shock occur

  8. Silicon Wafer Advanced Packaging (SWAP). Multichip Module (MCM) Foundry Study. Version 2

    DTIC Science & Technology

    1991-04-08

    Next Layer Dielectric Spacing - Additional Metal Thickness Impact on Dielectric Uniformity/Adhesion. The first step in the experimental design would be ... design, CAM - computer aided manufacturing, CAE - computer aided engineering, CALCE - computer aided life cycle engineering center, CARMA - computer aided ... expansion, CVD - chemical vapor deposition, DA - design automation, DEC - Digital Equipment Corporation, DFT - design for testability

  9. Parameter estimation for the exponential-normal convolution model for background correction of Affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
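
    For reference, the convolution model discussed above is usually written as PM = S + B, with the signal S ~ Exp(alpha) truncated at zero and the background B ~ N(mu, sigma^2); the background-corrected value is the conditional expectation, which in the standard RMA formulation (notation ours) is

      E[S \mid PM = pm] = a + \sigma \,
        \frac{\phi(a/\sigma) - \phi\!\left((pm - a)/\sigma\right)}
             {\Phi(a/\sigma) + \Phi\!\left((pm - a)/\sigma\right) - 1},
      \qquad a = pm - \mu - \sigma^{2}\alpha,

    where \phi and \Phi are the standard normal density and distribution function. The paper's concern is precisely how \alpha, \mu, and \sigma are estimated in practice before this expectation is evaluated.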

  10. Vertical-cavity surface-emitting lasers: the applications

    NASA Astrophysics Data System (ADS)

    Morgan, Robert A.; Lehman, John A.; Hibbs-Brenner, Mary K.; Liu, Yue; Bristow, Julian P. G.

    1997-05-01

    In this paper, we focus on how vertical-cavity surface-emitting lasers (VCSELs) and arrays have led to many feasible advanced technological applications. Their intrinsic characteristics, performance, and producibility offer substantial advantages over alternative sources. Demonstrated performance of `commercial-grade' VCSELs includes low operating powers (< 2 V, mAs), high speeds (3 dB BWs > 15 GHz), and wide operating temperature ranges (10 K to 400 K, -55 °C to 125 °C, and T > 200 °C). Moreover, their robustness is manifested by high reliability, with a mean time between failures in excess of 10^7 hours at room temperature and a tenfold improvement over existing rad-hard LEDs. Hence, even these `commercial-grade' VCSELs offer potential within cryogenic and avionics/military or space environments. We have also demonstrated submilliamp-threshold (ITH), stable, single-mode VCSELs utilized within bias-free 1-Gbit/s data links. These low-power VCSELs may also serve in applications from printers to low-cost atomic clocks. The greatest near-term VCSEL applications are upgrades to low-cost LEDs and high-grade copper wire in data links and sensors. Exploiting their surface-emitting geometry, VCSELs are also compatible with established multichip module packaging. Hence VCSELs and VCSEL arrays are ideal components for interconnect-intensive processing applications between and within computing systems.

  11. Novel Techniques for Millimeter-Wave Packages

    NASA Technical Reports Server (NTRS)

    Herman, Martin I.; Lee, Karen A.; Kolawa, Elzbieta A.; Lowry, Lynn E.; Tulintseff, Ann N.

    1995-01-01

    A new millimeter-wave package architecture, with supporting electrical, mechanical and materials science experiments and analysis, is presented. This package is well suited for discrete devices, monolithic microwave integrated circuits (MMICs) and multichip module (MCM) applications. It has low-loss, wide-band RF transitions, which are necessary to overcome manufacturing tolerances and lead to lower per-unit cost. Potential applications of this new packaging architecture which go beyond the standard requirements of device protection include integration of antennas, compatibility with photonic networks and direct transitions to waveguide systems. Techniques for electromagnetic analysis, thermal control and hermetic sealing were explored. Three-dimensional electromagnetic analysis was performed using a finite difference time-domain (FDTD) algorithm and experimentally verified for millimeter-wave package input and output transitions. New multi-material system concepts (AlN, Cu, and diamond thin films) which allow excellent surface finishes to be achieved with enhanced thermal management have been investigated. A new approach utilizing block copolymer coatings was employed to hermetically seal packages, which met MIL-STD-883.

  12. Development of beam leaded low power logic circuits

    NASA Technical Reports Server (NTRS)

    Smith, B. W.; Malone, F.

    1972-01-01

    The technologies of low power TTL and beam lead processing were merged into a single product family. This family offers the power and thermal advantages of low power TTL (54L), while providing the additional reliability advantages of beam leads. The reduction in power and heat levels also allows the system designer to take advantage, through beam lead multichip assemblies, of increased package density to reduce system size and weight.

  13. Technology transfer of military space microprocessor developments

    NASA Astrophysics Data System (ADS)

    Gorden, C.; King, D.; Byington, L.; Lanza, D.

    1999-01-01

    Over the past 13 years the Air Force Research Laboratory (AFRL) has led the development of microprocessors and computers for USAF space and strategic missile applications. As a result of these Air Force development programs, advanced computer technology is available for use by civil and commercial space customers as well. The Generic VHSIC Spaceborne Computer (GVSC) program began in 1985 at AFRL to fulfill a deficiency in the availability of space-qualified data and control processors. GVSC developed a radiation hardened multi-chip version of the 16-bit, Mil-Std 1750A microprocessor. The follow-on to GVSC, the Advanced Spaceborne Computer Module (ASCM) program, was initiated by AFRL to establish two industrial sources for complete, radiation-hardened 16-bit and 32-bit computers and microelectronic components. Development of the Control Processor Module (CPM), the first of two ASCM contract phases, concluded in 1994 with the availability of two sources for space-qualified, 16-bit Mil-Std-1750A computers, cards, multi-chip modules, and integrated circuits. The second phase of the program, the Advanced Technology Insertion Module (ATIM), was completed in December 1997. ATIM developed two single board computers based on 32-bit reduced instruction set computer (RISC) processors. GVSC, CPM, and ATIM technologies are flying or baselined into the majority of today's DoD, NASA, and commercial satellite systems.

  14. Panoramic Images Mapping Tools Integrated Within the ESRI ArcGIS Software

    NASA Astrophysics Data System (ADS)

    Guo, Jiao; Zhong, Ruofei; Zeng, Fanyang

    2014-03-01

    Panoramic images have attracted broad interest since the appearance of Google Street View, and they support many applications beyond 360-degree street viewing. This paper presents a toolkit, implemented as an ArcGIS plug-in, that displays street-level panoramic photographs directly in ArcMap and measures and captures visible elements such as frontages, trees and bridges. We use a series of panoramic images georeferenced with absolute coordinates from GPS and IMU. Two methods are used to measure objects from these panoramic images: one intersects the object position from a stereo pair; the other uses multichip matching over three or more images that all cover the object. When a user wants to measure an object, any two panoramic images that both contain it can be displayed in ArcMap; the correlation coefficient between the two chosen panoramic images is then computed in order to derive the coordinates of the object. We test different patterns of panoramic pairs and compare the measurements against the true object values so as to recommend the best pair selection. The article mainly elaborates the principles of correlation-coefficient computation and multichip matching.
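
    The exact matching procedure is not given in this abstract; as a hedged sketch of the correlation-coefficient step, normalized cross-correlation between a template patch from one panoramic image and candidate patches in the second image can be computed as below (array shapes and the exhaustive search are illustrative assumptions):

      import numpy as np

      def ncc(patch, window):
          # Normalized cross-correlation between two equally sized patches.
          p = patch - patch.mean()
          w = window - window.mean()
          denom = np.sqrt((p * p).sum() * (w * w).sum())
          return float((p * w).sum() / denom) if denom > 0 else 0.0

      def best_match(template, image):
          # Exhaustively find the patch in `image` maximizing NCC with
          # `template`; matched positions in two panoramas would then be
          # intersected geometrically to solve for object coordinates.
          th, tw = template.shape
          best, best_rc = -2.0, (0, 0)
          for r in range(image.shape[0] - th + 1):
              for c in range(image.shape[1] - tw + 1):
                  score = ncc(template, image[r:r + th, c:c + tw])
                  if score > best:
                      best, best_rc = score, (r, c)
          return best_rc, best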

  15. Miniature MMIC Low Mass/Power Radiometer Modules for the 180 GHz GeoSTAR Array

    NASA Technical Reports Server (NTRS)

    Kangaslahti, Pekka; Tanner, Alan; Pukala, David; Lambrigtsen, Bjorn; Lim, Boon; Mei, Xiaobing; Lai, Richard

    2010-01-01

    We have developed and demonstrated miniature 180 GHz Monolithic Microwave Integrated Circuit (MMIC) radiometer modules that have low noise temperature, low mass and low power consumption. These modules will enable the Geostationary Synthetic Thinned Aperture Radiometer (GeoSTAR) of the Precipitation and All-weather Temperature and Humidity (PATH) Mission for atmospheric temperature and humidity profiling. The GeoSTAR instrument has an array of hundreds of receivers. The technology developed included Indium Phosphide (InP) MMIC Low Noise Amplifiers (LNAs), second-harmonic MMIC mixers and I-Q mixers, surface mount Multi-Chip Module (MCM) packages at 180 GHz, and an interferometric array at 180 GHz. A complete MMIC chip set for the 180 GHz receiver modules (LNAs and an I-Q second-harmonic mixer) was developed. The MMIC LNAs had more than 50% lower noise temperature (NT = 300 K) than the previous state of the art, and the MMIC I-Q mixers demonstrated low LO power (3 dBm). Two lots of MMIC wafers were processed with very high DC transconductance, up to 2800 mS/mm for the 35 nm gate length devices. Based on these MMICs, a 180 GHz multichip module was developed with a factor of 100 lower mass and volume (16 x 18 x 4.5 mm^3, 3 g) than previous-generation 180 GHz receivers.

  16. JPRS Report, Science & Technology Europe

    DTIC Science & Technology

    1992-08-12

    Head on Chip Industry, Plans [Heinrich von Pierer Interview; Bonn DIE WELT, 15 Jun 92] Swiss Contraves Develops High-Density Multichip Module ... Investment costs are low because the method is based on low pressure, 0.1-1.0 MPa, during injection. This permits the use of simpler molds and ... For example, ABS Pumpen AG [ABS Pumps German Stock Corporation] in Lohmar needed 1.5 years to define the "Ceramic Components for Friction

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodenbeck, Christopher T; Girardi, Michael

    Internal nodes of a constituent integrated circuit (IC) package of a multichip module (MCM) are protected from excessive charge during plasma cleaning of the MCM. The protected nodes are coupled to an internal common node of the IC package by respectively associated discharge paths. The common node is connected to a bond pad of the IC package. During MCM assembly, and before plasma cleaning, this bond pad receives a wire bond to a ground bond pad on the MCM substrate.

  18. Read-In Integrated Circuits for Large-Format Multi-Chip Emitter Arrays

    DTIC Science & Technology

    2015-03-31

    A chip has been designed and fabricated using the ONSEMI C5N process to verify our approach. Keywords: Large scale arrays; Tiling; Mosaic; Abutment ... required. X and Y addressing is not a sustainable and easily expanded addressing architecture, nor will it work well with abutted RIICs. Abutment Method ... Abutting RIICs into an array is challenging because of the precise positioning required to achieve a uniform image. This problem is a new design

  19. Interpreter composition issues in the formal verification of a processor-memory module

    NASA Technical Reports Server (NTRS)

    Fura, David A.; Cohen, Gerald C.

    1994-01-01

    This report describes interpreter composition techniques suitable for the formal specification and verification of a processor-memory module using the HOL theorem proving system. The processor-memory module is a multichip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. Modeling and verification methods were developed that permit provably secure composition at the transaction level of specification, significantly reducing the complexity of the hierarchical verification of the system.

  20. High Flux Heat Exchanger

    DTIC Science & Technology

    1993-01-01

    maximum jet velocity (6.36 m/s), and maximum number of jets (nine). Wadsworth and Mudawar [49] describe the use of a single slotted nozzle to provide ... H00503 (ASME), pp. 121-128, 1989. 49. D. C. Wadsworth and I. Mudawar, "Cooling of a Multichip Electronic Module by Means of Confined Two-Dimensional Jets of Dielectric Liquid," HTD-Vol. 111, Heat Transfer in Electronics, Book No. H00503 (ASME), pp. 79-87, 1989. 50. D. C. Wadsworth and I. Mudawar

  1. Materials Processing and Manufacturing Technologies for Diamond Substrates Multichip Modules

    DTIC Science & Technology

    1994-10-14

    document are those of the authors and should not be interpreted as representing the official policies, either express or implied, of the Defense Advanced ... release of the diamond at the end of the deposition step, deposition of non-uniform films for stress/flatness control. 75 kW Reactor & Modelling Studies ... too strong (the film releases partially or not at all) to too weak (the film delaminates during the run from growth stresses), and are continuing to

  2. Methods of fabricating applique circuits

    DOEpatents

    Dimos, Duane B.; Garino, Terry J.

    1999-09-14

    Applique circuits suitable for advanced packaging applications are introduced. These structures are particularly suited for the simple integration of large amounts (many nanoFarads) of capacitance into conventional integrated circuit and multichip packaging technology. In operation, applique circuits are bonded to the integrated circuit or other appropriate structure at the point where the capacitance is required, thereby minimizing the effects of parasitic coupling. An immediate application is to problems of noise reduction and control in modern high-frequency circuitry.

  3. Medical telesensors

    NASA Astrophysics Data System (ADS)

    Ferrell, Trinidad L.; Crilly, P. B.; Smith, S. F.; Wintenberg, Alan L.; Britton, Charles L., Jr.; Morrison, Gilbert W.; Ericson, M. N.; Hedden, D.; Bouldin, Donald W.; Passian, A.; Downey, Todd R.; Wig, A. G.; Meriaudeau, Fabrice

    1998-05-01

    Medical telesensors are self-contained integrated circuits for measuring and transmitting vital signs over a distance of approximately 1-2 meters. The circuits are unhoused and contain a sensor, signal processing and modulation electronics, a spread-spectrum transmitter, an antenna and a thin-film battery. We report on a body-temperature telesensor, which is sufficiently small to be placed on a tympanic membrane in a child's ear. We also report on a pulse-oximeter telesensor and a micropack receiver/long-range transmitter unit, which receives from a telesensor array and analyzes and re-transmits the vital signs over a longer range. Signal analytics are presented for the pulse oximeter, which is currently in the form of a finger ring. A multichip module is presented as the basic signal-analysis component. The module contains a microprocessor, a field-programmable gate array, memory elements and other components necessary for determining trauma and reporting signals.

  4. High density electronic circuit and process for making

    DOEpatents

    Morgan, William P.

    1999-01-01

    High density circuits with posts that protrude beyond one surface of a substrate to provide easy mounting of devices such as integrated circuits. The posts also provide stress relief to accommodate differential thermal expansion. The process allows high interconnect density with fewer alignment restrictions and less wasted circuit area than previous processes. The resulting substrates can be test platforms for die testing and for multi-chip module substrate testing. The test platform can contain active components and emulate realistic operational conditions, replacing shorts/opens net testing.

  5. A novel miniaturized PCR multi-reactor array fabricated using flip-chip bonding techniques

    NASA Astrophysics Data System (ADS)

    Zou, Zhi-Qing; Chen, Xiang; Jin, Qing-Hui; Yang, Meng-Su; Zhao, Jian-Long

    2005-08-01

    This paper describes a novel miniaturized multi-chamber array capable of high throughput polymerase chain reaction (PCR). The structure of the proposed device was optimized for thermal performance using finite element analysis (FEA) and then implemented on a glass-silicon substrate using a standard MEMS process and post-processing. Each reactor cell is equipped with integrated Pt temperature sensors and heaters at the bottom of the reaction chamber for accurate real-time temperature sensing and control. The micro-chambers are thermally separated from each other and can be controlled independently. The multi-chip array was packaged on a printed circuit board (PCB) substrate using a conductive polymer flip-chip bonding technique, which enables effective heat dissipation and suppresses thermal crosstalk between the chambers. The system has demonstrated a temperature fluctuation of ±0.5 °C during thermal multiplexing of up to 2 × 2 chambers, a 30-cycle PCR completed in 30 min, and the capability of controlling each chamber digitally and independently.
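
    The control law itself is not specified in this abstract; a minimal per-chamber digital PID loop of the kind commonly used to hold a PCR chamber within ±0.5 °C might look like the following sketch (gains, time step, and the toy thermal model are invented for illustration):

      class PID:
          # Minimal discrete PID controller for one chamber heater.
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, setpoint, measured):
              error = setpoint - measured
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              return (self.kp * error + self.ki * self.integral
                      + self.kd * derivative)

      # Toy simulation: hold the denaturation step at 95 °C.
      pid, temp, dt = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1), 25.0, 0.1
      for _ in range(600):
          power = max(0.0, pid.update(95.0, temp))  # heater drive, >= 0
          temp += (0.05 * power - 0.01 * (temp - 25.0)) * dt  # crude heat balance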

  6. Repairable chip bonding/interconnect process

    DOEpatents

    Bernhardt, Anthony F.; Contolini, Robert J.; Malba, Vincent; Riddle, Robert A.

    1997-01-01

    A repairable, chip-to-board interconnect process which addresses cost and testability issues in multi-chip modules. This process can be carried out using a chip-on-sacrificial-substrate technique involving laser processing. This process avoids the curing/solvent evolution problems encountered in prior approaches, as well as resolving prior plating problems and the requirements for fillets. For repairable high speed chip-to-board connection, transmission lines can be formed on the sides of the chip from chip bond pads, ending in a gull wing at the bottom of the chip for subsequent soldering.

  7. High density electronic circuit and process for making

    DOEpatents

    Morgan, W.P.

    1999-06-29

    High density circuits with posts that protrude beyond one surface of a substrate to provide easy mounting of devices such as integrated circuits are disclosed. The posts also provide stress relief to accommodate differential thermal expansion. The process allows high interconnect density with fewer alignment restrictions and less wasted circuit area than previous processes. The resulting substrates can be test platforms for die testing and for multi-chip module substrate testing. The test platform can contain active components and emulate realistic operational conditions, replacing shorts/opens net testing. 8 figs.

  8. Miniaturized ultrasound imaging probes enabled by CMUT arrays with integrated frontend electronic circuits.

    PubMed

    Khuri-Yakub, B T; Oralkan, Omer; Nikoozadeh, Amin; Wygant, Ira O; Zhuang, Steve; Gencel, Mustafa; Choe, Jung Woo; Stephens, Douglas N; de la Rama, Alan; Chen, Peter; Lin, Feng; Dentinger, Aaron; Wildes, Douglas; Thomenius, Kai; Shivkumar, Kalyanam; Mahajan, Aman; Seo, Chi Hyung; O'Donnell, Matthew; Truong, Uyen; Sahn, David J

    2010-01-01

    Capacitive micromachined ultrasonic transducer (CMUT) arrays are conveniently integrated with frontend integrated circuits, either monolithically or in a hybrid multichip form. This integration helps reduce the number of active data processing channels for 2D arrays, and it also preserves signal integrity for arrays with small elements. CMUT arrays integrated with electronic circuits are therefore well suited for implementing the miniaturized probes required for many intravascular, intracardiac, and endoscopic applications. This paper presents examples of miniaturized CMUT probes utilizing 1D, 2D, and ring arrays with integrated electronics.

  9. Enhanced thermally managed packaging for III-nitride light emitters

    NASA Astrophysics Data System (ADS)

    Kudsieh, Nicolas

    In this dissertation, our work on enhanced thermally managed packaging of high-power semiconductor light sources for solid-state lighting (SSL) is presented. The motivation of this research and development is to design thermally stable, cost-efficient packaging for single-chip and multi-chip arrays of III-nitride wide-bandgap semiconductor light sources through mathematical modeling and simulation. The major issue with this technology is device overheating, which seriously degrades illumination intensity and shortens device lifetime. The introduction presents the basics of III-nitride WBG semiconductor light emitters, along with the thermal management needed for high-power single and multi-chip LEDs and laser diodes. The work starts at the chip level and extends to fully packaged lighting modules and devices. Different III-nitride structures of multi-quantum-well InGaN/GaN and AlGaN/GaN based LEDs and LDs were analyzed using advanced modeling and simulation for different packaging designs and high-thermal-conductivity materials. The study began with basic surface-mounted devices using conventional packaging strategies and concluded with the latest chip-on-plate (COP) thermal management method. Newly discovered high-thermal-conductivity materials have been incorporated into this work, and a new approach using 2D heat spreaders made of such materials for SSL and micro-LED array packaging is presented. Most of the work has appeared in international conference proceedings and peer-reviewed journals, and some of the latest work is under review for publication.

  10. Optical interconnection using polyimide waveguide for multichip module

    NASA Astrophysics Data System (ADS)

    Koyanagi, Mitsumasa

    1996-01-01

    We have developed a parallel processor system with 152 RISC processor chips specific for Monte-Carlo analysis. This system has the ring-bus architecture. The performance of several Gflops is expected in this system according to the computer simulation. However, it was revealed that the data transfer speed of the bus has to be increased more dramatically in order to further increase the performance. Then, we propose to introduce the optical interconnection into the parallel processor system to increase the data transfer speed of the buses. The double ring-bus architecture is employed in this new parallel processor system with optical interconnection. The free-space optical interconnection and the optical waveguide are used for the optical ring-bus. Thin polyimide film was used to form the optical waveguide. A relatively low propagation loss was achieved in the polyimide optical waveguide. In addition, it was confirmed that the propagation direction of signal light can be easily changed by using a micro-mirror.

  11. Optical interconnection using polyimide waveguide for multichip module

    NASA Astrophysics Data System (ADS)

    Koyanagi, Mitsumasa

    1996-01-01

    We have developed a parallel processor system with 152 RISC processor chips specific for Monte-Carlo analysis. This system has the ring-bus architecture. The performance of several Gflops is expected in this system according to the computer simulation. However, it was revealed that the data transfer speed of the bus has to be increased more dramatically in order to further increase the performance. Then, we propose to introduce the optical interconnection into the parallel processor system to increase the data transfer speed of the buses. The double ring-bus architecture is employed in this new parallel processor system with optical interconnection. The free-space optical interconnection and the optical waveguide are used for the optical ring-bus. Thin polyimide film was used to form the optical waveguide. A relatively low propagation loss was achieved in the polyimide optical waveguide. In addition, it was confirmed that the propagation direction of signal light can be easily changed by using a micro-mirror.

  12. Repairable chip bonding/interconnect process

    DOEpatents

    Bernhardt, A.F.; Contolini, R.J.; Malba, V.; Riddle, R.A.

    1997-08-05

    A repairable, chip-to-board interconnect process which addresses cost and testability issues in multi-chip modules is disclosed. This process can be carried out using a chip-on-sacrificial-substrate technique involving laser processing. This process avoids the curing/solvent evolution problems encountered in prior approaches, as well as resolving prior plating problems and the requirements for fillets. For repairable high speed chip-to-board connection, transmission lines can be formed on the sides of the chip from chip bond pads, ending in a gull wing at the bottom of the chip for subsequent soldering. 10 figs.

  13. Reconfigurable tree architectures using subtree oriented fault tolerance

    NASA Technical Reports Server (NTRS)

    Lowrie, Matthew B.

    1987-01-01

    An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees, and the approach is directly extensible to N-ary trees and to fault tolerance through performance degradation.

  14. Continuing evolution of in-vitro diagnostic instrumentation

    NASA Astrophysics Data System (ADS)

    Cohn, Gerald E.

    2000-04-01

    The synthesis of analytical instrumentation and analytical biochemistry technologies in modern in vitro diagnostic instrumentation continues to generate new systems with improved performance and expanded capability. Detection modalities have expanded to include multichip modes of fluorescence, scattering, luminescence and reflectance so as to accommodate increasingly sophisticated immunochemical and nucleic acid-based reagent systems. The timeline of system development now extends from the earliest automated clinical spectrophotometers through molecule recognition assays and biosensors to the new breakthroughs of biochip and DNA diagnostics. This brief review traces some of the major innovations in the evolution of system technologies and previews the conference program.

  15. Advanced flight computer. Special study

    NASA Technical Reports Server (NTRS)

    Coo, Dennis

    1995-01-01

    This report documents a special study to define a 32-bit radiation hardened, SEU tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined; each node may consist of a multi-chip processor as needed. The modular, building block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.

  16. Miniaturized Ultrasound Imaging Probes Enabled by CMUT Arrays with Integrated Frontend Electronic Circuits

    PubMed Central

    Khuri-Yakub, B. (Pierre) T.; Oralkan, Ömer; Nikoozadeh, Amin; Wygant, Ira O.; Zhuang, Steve; Gencel, Mustafa; Choe, Jung Woo; Stephens, Douglas N.; de la Rama, Alan; Chen, Peter; Lin, Feng; Dentinger, Aaron; Wildes, Douglas; Thomenius, Kai; Shivkumar, Kalyanam; Mahajan, Aman; Seo, Chi Hyung; O’Donnell, Matthew; Truong, Uyen; Sahn, David J.

    2010-01-01

    Capacitive micromachined ultrasonic transducer (CMUT) arrays are conveniently integrated with frontend integrated circuits, either monolithically or in a hybrid multichip form. This integration helps reduce the number of active data processing channels for 2D arrays, and it also preserves signal integrity for arrays with small elements. CMUT arrays integrated with electronic circuits are therefore well suited for implementing the miniaturized probes required for many intravascular, intracardiac, and endoscopic applications. This paper presents examples of miniaturized CMUT probes utilizing 1D, 2D, and ring arrays with integrated electronics. PMID:21097106

  17. Neuromorphic VLSI vision system for real-time texture segregation.

    PubMed

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real-time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina to produce Gabor-like receptive fields tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real-time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.
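
    As a purely software analogue of the two-orthogonal-orientation scheme described above (the actual system computes these responses in analog VLSI and an FPGA; kernel parameters here are invented), two orthogonal Gabor filters can be applied and their response energies compared per pixel:

      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(theta, wavelength=8.0, sigma=3.0, size=15):
          # Real-valued Gabor kernel at orientation theta (radians).
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
                  * np.cos(2 * np.pi * xr / wavelength))

      def orientation_map(image):
          # Label each pixel by which of two orthogonal orientations gives
          # the larger local response energy. (A quadrature filter pair
          # would give a proper complex-cell energy; a single squared
          # response is used here for brevity.)
          e0 = convolve2d(image, gabor_kernel(0.0), mode="same") ** 2
          e90 = convolve2d(image, gabor_kernel(np.pi / 2), mode="same") ** 2
          return e0 > e90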

  18. Photodiodes integration on a suspended ridge structure VOA using 2-step flip-chip bonding method

    NASA Astrophysics Data System (ADS)

    Kim, Seon Hoon; Kim, Tae Un; Ki, Hyun Chul; Kim, Doo Gun; Kim, Hwe Jong; Lim, Jung Woon; Lee, Dong Yeol; Park, Chul Hee

    2015-01-01

    In this work, we demonstrate a VOA integrated with mPDs, based on silica-on-silicon PLC and flip-chip bonding technologies. A suspended ridge structure was applied to reduce the power consumption; the device achieves an attenuation of 30 dB in open-loop operation with a power consumption below 30 W. We applied a two-step flip-chip bonding method using passive alignment to perform high-density multi-chip integration on the VOA with eutectic AuSn solder bumps. The average bonding strength of the two-step flip-chip bonding method was about 90 gf.

  19. Broad Frequency LTCC Vertical Interconnect Transition for Multichip Modules and System on Package Applications

    NASA Technical Reports Server (NTRS)

    Decrossas, Emmanuel; Glover, Michael D.; Porter, Kaoru; Cannon, Tom; Mantooth, H. Alan; Hamilton, M. C.

    2013-01-01

    Various stripline structures and flip-chip interconnect designs for high-speed digital communication systems implemented in low temperature co-fired ceramic (LTCC) substrates are studied in this paper. Specifically, two different transition designs from edge launch 2.4 mm connectors to stripline transmission lines embedded in LTCC are discussed. After characterizing the DuPont 9K7 green tape, different designs are proposed to improve signal integrity for high-speed digital data. The full-wave simulations and experimental data validate the presented designs over a broad frequency band from DC to 50 GHz and beyond.

  20. Experiences with integral microelectronics on smart structures for space

    NASA Astrophysics Data System (ADS)

    Nye, Ted; Casteel, Scott; Navarro, Sergio A.; Kraml, Bob

    1995-05-01

    One feature of a smart structure implies that some computational and signal processing capability can be performed at a local level, perhaps integral to the controlled structure. This requires electronics with minimal mechanical influence in terms of structural stiffening, heat dissipation, weight, and electrical interface connectivity. The Advanced Controls Technology Experiment II (ACTEX II) space-flight experiment implemented such a local control electronics scheme by utilizing composite smart members with integral processing electronics. These microelectronics, tested to MIL-STD-883B levels, were fabricated with conventional thick film on ceramic multichip module techniques. Kovar housings and aluminum-Kapton multilayer insulation were used to protect against harsh space radiation and thermal environments. Development and acceptance testing showed the electronics design was extremely robust, operating in vacuum and across the temperature range with minimal gain variations occurring just above room temperature. Four electronics modules, used for the flight hardware configuration, were connected by an RS-485 2 Mbit per second serial data bus. The data bus was controlled by Actel field programmable gate arrays arranged in a single-master, four-slave configuration. An Intel 80C196KD microprocessor was chosen as the digital compensator in each controller. It was used to apply a series of selectable biquad filters, implemented via Delta Transforms. Instability in any compensator was expected to appear as large amplitude oscillations in the deployed structure. Thus, over-vibration detection circuitry with automatic output isolation was incorporated into the design. This was not needed, however, since during experiment integration and test, intentionally induced compensator instabilities resulted in benign mechanical oscillation symptoms. Not too surprisingly, it was determined that instabilities were most detectable as large temperature increases in the electronics, typically noticeable within minutes of unstable operation.

  1. (abstract) Electronic Packaging for Microspacecraft Applications

    NASA Technical Reports Server (NTRS)

    Wasler, David

    1993-01-01

    The intent of this presentation is to give a brief look into the future of electronic packaging for microspacecraft applications. Advancements in electronic packaging technology have developed to the point where a system engineer's visions, concepts, and requirements for a microspacecraft can now be a reality. These new developments are ideal candidates for microspacecraft applications and are capable of bringing about major changes in how we design future spacecraft, with benefits in size, weight, power, performance, reliability, and cost. This presentation will also cover some advantages and limitations of surface mount technology (SMT), multichip modules (MCM), and wafer scale integration (WSI), and what is needed to implement these technologies in microspacecraft.

  2. Wireless Interconnects for Intra-chip & Inter-chip Transmission

    NASA Astrophysics Data System (ADS)

    Narde, Rounak Singh

    With the emergence of the Internet of Things and the information revolution, demand for high-performance computing systems is increasing. The copper interconnects inside computing chips have evolved into a sophisticated network of interconnects known as a Network on Chip (NoC), comprising routers, switches, and repeaters, much like computer networks. When a network on chip is implemented on a large scale, as in Multichip Multicore (MCMC) systems for High Performance Computing (HPC), interconnect lengths increase and so do problems such as power dissipation, interconnect delay, clock synchronization, and electrical noise. In this thesis, wireless interconnects are chosen as a substitute for wired copper interconnects. Wireless interconnects offer easy integration with CMOS fabrication and chip packaging, and operating in the unlicensed mm-wave band (57-64 GHz) they can achieve data rates of several Gbps. This thesis presents a study of transmission between zigzag antennas acting as wireless interconnects for MCMC systems and 3D ICs. For MCMC systems, a four-chip, 16-core model is analyzed with only four wireless interconnects in three configurations of antenna orientation and location. Return loss and transmission coefficients are simulated in ANSYS HFSS. Moreover, wireless interconnects are designed, fabricated, and tested on a 6-inch silicon wafer with a resistivity of 55 Ω-cm using a basic standard CMOS process. The wireless interconnects are designed in ANSYS HFSS to operate at 30 GHz; the fabricated antennas resonate around 20 GHz with a return loss below -10 dB. The transmission coefficient between an antenna pair within a 20 mm x 20 mm silicon die varies between -45 dB and -55 dB. Furthermore, the wireless interconnect approach is extended to 3D ICs, where the wireless interconnects are implemented as zigzag antennas and the return loss and transmission coefficients are simulated in ANSYS HFSS for different antenna orientations and coolants.
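
    For readers decoding the figures quoted above: return loss and transmission follow directly from the scattering parameters,

      \mathrm{RL} = -20\log_{10}|S_{11}|\ \mathrm{dB}, \qquad
      \mathrm{IL} = -20\log_{10}|S_{21}|\ \mathrm{dB},

    so an S11 below -10 dB means less than 10% of the incident power is reflected, and transmission coefficients of -45 dB to -55 dB correspond to voltage ratios of roughly 0.006 down to 0.002.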

  3. Update on Development of SiC Multi-Chip Power Modules

    NASA Technical Reports Server (NTRS)

    Lostetter, Alexander; Cilio, Edgar; Mitchell, Gavin; Schupbach, Roberto

    2008-01-01

    Progress has been made in a continuing effort to develop multi-chip power modules (SiC MCPMs). This effort at an earlier stage was reported in 'SiC Multi-Chip Power Modules as Power-System Building Blocks' (LEW-18008-1), NASA Tech Briefs, Vol. 31, No. 2 (February 2007), page 28. The following recapitulation of information from the cited prior article is prerequisite to a meaningful summary of the progress made since then: 1) SiC MCPMs are, more specifically, electronic power-supply modules containing multiple silicon carbide power integrated-circuit chips and silicon-on-insulator (SOI) control integrated-circuit chips. SiC MCPMs are being developed as building blocks of advanced expandable, reconfigurable, fault-tolerant power-supply systems. Exploiting the ability of SiC semiconductor devices to operate at temperatures, breakdown voltages, and current densities significantly greater than those of conventional Si devices, the designs of SiC MCPMs and of systems comprising multiple SiC MCPMs are expected to afford a greater degree of miniaturization through stacking of modules with reduced requirements for heat sinking; 2) The stacked SiC MCPMs in a given system can be electrically connected in series, parallel, or a series/parallel combination to increase the overall power-handling capability of the system. In addition to power connections, the modules have communication connections. The SOI controllers in the modules communicate with each other as nodes of a decentralized control network, in which no single controller exerts overall command of the system. Control functions effected via the network include synchronization of switching of power devices and rapid reconfiguration of power connections to enable the power system to continue to supply power to a load in the event of failure of one of the modules; and, 3) In addition to serving as building blocks of reliable power-supply systems, SiC MCPMs could be augmented with external control circuitry to make them perform additional power-handling functions as needed for specific applications. Because identical SiC MCPM building blocks could be utilized in such a variety of ways, the cost and difficulty of designing new, highly reliable power systems would be reduced considerably. This concludes the information from the cited prior article. The main activity since the previously reported stage of development was the design, fabrication, and testing of a 120-VDC-to-28-VDC modular power-converter system composed of eight SiC MCPMs in a 4 (parallel)-by-2 (series) matrix configuration, with normally-off controllable power switches. The SiC MCPM power modules include closed-loop control subsystems and are capable of operating at high power density or high temperature. The system was tested under various configurations, load conditions, load-transient conditions, and failure-recovery conditions. Planned future work includes refinement of the demonstrated modular system concept and development of a new converter hardware topology that would enable sharing of currents without the need for communication among modules. Toward these ends, it is also planned to develop a new converter control algorithm that would provide for improved sharing of current and power under all conditions, and to implement advanced packaging concepts that would enable operation at higher power density.
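
    As a back-of-the-envelope check on the matrix arrangement (simple ideal-sharing arithmetic under an assumed input-series, output-parallel reading of the 4-by-2 matrix; the article does not spell out the connection), two series-connected tiers would each ideally see half the bus voltage, and the four paralleled modules of a tier would each carry about a quarter of the load current:

      V_{\mathrm{in,module}} = \frac{120\ \mathrm{V}}{2} = 60\ \mathrm{V}, \qquad
      I_{\mathrm{out,module}} \approx \frac{I_{\mathrm{load}}}{4},

    which is the kind of balanced sharing the planned control algorithm is intended to maintain under all conditions.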

  4. Method for reworkable packaging of high speed, low electrical parasitic power electronics modules through gate drive integration

    DOEpatents

    Passmore, Brandon; Cole, Zach; Whitaker, Bret; Barkley, Adam; McNutt, Ty; Lostetter, Alexander

    2016-08-02

    A multichip power module directly connecting the busboard to a printed-circuit board that is attached to the power substrate, enabling extremely low loop inductance for extreme environments such as high temperature operation. Wire bond interconnections are taught from the power die directly to the busboard, further enabling low-parasitic interconnections. Integration of on-board high frequency bus capacitors provides extremely low loop inductance. An extreme environment gate driver board allows close physical proximity of the gate driver and power stage to reduce overall volume and reduce impedance in the control circuit. Parallel spring-loaded pin connections to the gate driver PCB allow reliable and reworkable power module to gate driver interconnections.

  5. Automated software configuration in the MONSOON system

    NASA Astrophysics Data System (ADS)

    Daly, Philip N.; Buchholz, Nick C.; Moore, Peter C.

    2004-09-01

    MONSOON is the next-generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems, ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair, which makes up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.
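
    The abstract leaves the configuration mechanism unspecified; as a generic sketch of self-configuration at startup (all names and the JSON layout are invented for illustration, not MONSOON's actual scheme), the PAN software might query the attached DHE for a detector identifier and load a matching configuration:

      import json
      from pathlib import Path

      def read_detector_id():
          # Placeholder: a real PAN would interrogate the attached DHE
          # for the identity of the connected detector or mosaic.
          return "single_chip_lab"

      def configure(config_dir="configs"):
          # Load the configuration matching the attached detector system.
          det_id = read_detector_id()
          path = Path(config_dir) / (det_id + ".json")
          if not path.exists():
              raise RuntimeError("no configuration for detector " + det_id)
          return json.loads(path.read_text())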

  6. T/R Multi-Chip MMIC Modules for 150 GHz

    NASA Technical Reports Server (NTRS)

    Samoska, Lorene A.; Pukala, David M.; Soria, Mary M.; Sadowy, Gregory A.

    2009-01-01

    Modules containing multiple monolithic microwave integrated-circuit (MMIC) chips have been built as prototypes of transmitting/receiving (T/R) modules for millimeter-wavelength radar systems, including phased-array radar systems to be used for purposes as diverse as guidance and hazard avoidance for landing spacecraft, imaging systems for detecting hidden weapons, and hazard-avoidance systems for automobiles. Whereas prior landing radar systems have operated at frequencies around 35 GHz, the integrated circuits in this module operate in a frequency band centered at about 150 GHz. The higher frequency (and, hence, shorter wavelength) is expected to make it possible to obtain finer spatial resolution while also using smaller antennas, thereby reducing the sizes and masses of the affected systems.

  7. A digital front-end and readout microsystem for calorimetry at LHC

    NASA Astrophysics Data System (ADS)

    Alippi, C.; Appelquist, G.; Berglund, S.; Bohm, C.; Breveglieri, L.; Brigati, S.; Carlson, P.; Cattaneo, P.; Dadda, L.; David, J.; Del Buono, L.; Dell'Acqua, A.; Engström, M.; Fumagalli, G.; Gatti, U.; Genat, J. F.; Goggi, G.; Hansen, M.; Hentzell, H.; Höglund, I.; Inkinen, S.; Kerek, A.; Lebbolo, H.; LeDortz, O.; Lofstedt, B.; Maloberti, F.; Nayman, P.; Persson, S.-T.; Piuri, V.; Salice, F.; Sami, M.; Savoy-Navarro, A.; Stefanelli, R.; Sundblad, R.; Svensson, C.; Torelli, G.; Vanuxem, J. P.; Yamdagni, N.; Yuan, J.; Zitoun, R.

    1994-04-01

    A digital solution to the front-end electronics for calorimetric detectors at future supercolliders is presented. The solution is based on high speed A/D converters, a fully programmable pipeline/digital filter chain and local intelligence. Questions of error correction, fault-tolerance and system redundancy are also being considered. A system integration of a multichannel device in a multichip, Silicon-on-Silicon Microsystem hybrid is used. This solution allows a new level of integration of complex analogue and digital functions, with excellent flexibility in mixing technologies for the different functional blocks. It also allows a high degree of programmability at both the function and the system level, and offers the possibility of customising the microsystem with detector-specific functions.
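
    As a software stand-in for the programmable pipeline/digital-filter stage (the actual chain runs in dedicated hardware at the detector sampling rate; tap values and sample data below are invented), a short FIR filter over a stream of ADC samples captures the idea:

      import numpy as np

      def fir_filter(samples, coeffs):
          # Apply a programmable FIR filter to a block of ADC samples.
          return np.convolve(samples, coeffs, mode="valid")

      # Example: 5-tap pulse-weighting filter on a simulated calorimeter pulse.
      adc = np.array([0, 2, 30, 120, 80, 25, 5, 1], dtype=float)
      coeffs = np.array([-0.1, 0.2, 0.8, 0.2, -0.1])
      estimate = fir_filter(adc, coeffs)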

  8. Dry soldering with hot filament produced atomic hydrogen

    DOEpatents

    Panitz, Janda K. G.; Jellison, James L.; Staley, David J.

    1995-01-01

    A system for chemically transforming metal surface oxides to metal that is especially, but not exclusively, suitable for preparing metal surfaces for dry soldering and solder reflow processes. The system employs one or more hot, refractory metal filaments, grids or surfaces to thermally dissociate molecular species in a low pressure of working gas such as a hydrogen-containing gas to produce reactive species in a reactive plasma that can chemically reduce metal oxides and form volatile compounds that are removed in the working gas flow. Dry soldering and solder reflow processes are especially applicable to the manufacture of printed circuit boards, semiconductor chip lead attachment and packaging multichip modules. The system can be retrofitted onto existing metal treatment ovens, furnaces, welding systems and wave soldering system designs.

  9. Design and implementation of GaAs HBT circuits with ACME

    NASA Technical Reports Server (NTRS)

    Hutchings, Brad L.; Carter, Tony M.

    1993-01-01

    GaAs HBT circuits offer high performance (5-20 GHz) and radiation hardness (500 Mrad) that is attractive for space applications. ACME is a CAD tool specifically developed for HBT circuits. ACME implements a novel physical schematic-capture design technique in which designers simultaneously view the structure and physical organization of a circuit. ACME's design interface is similar to schematic capture; however, unlike conventional schematic capture, designers can directly control the physical placement of both function and interconnect at the schematic level. In addition, ACME provides design-time parasitic extraction, complex wire models, and extensions to Multi-Chip Modules (MCMs). A GaAs HBT gate-array and semi-custom circuits have been developed with ACME; several circuits have been fabricated and found to be fully functional.

  10. Planned development of a 3D computer based on free-space optical interconnects

    NASA Astrophysics Data System (ADS)

    Neff, John A.; Guarino, David R.

    1994-05-01

    Free-space optical interconnection has the potential to provide upwards of a million data channels between planes of electronic circuits. This may result in the planar board and backplane structures of today giving way to 3-D stacks of wafers or multi-chip modules interconnected via channels running perpendicular to the processor planes, thereby eliminating much of the packaging overhead. Three-dimensional packaging is very appealing for tightly coupled fine-grained parallel computing, where the need for massive numbers of interconnections severely taxes the capabilities of planar structures. This paper describes a coordinated effort by four research organizations to demonstrate an operational fine-grained parallel computer that achieves global connectivity through the use of free-space optical interconnects.

  11. Dry-film polymer waveguide for silicon photonics chip packaging.

    PubMed

    Hsu, Hsiang-Han; Nakagawa, Shigeru

    2014-09-22

    Polymer waveguide made by a dry film process is demonstrated for silicon photonics chip packaging. With an 8 μm × 11.5 μm core waveguide, little penalty is observed up to 25 Gbps before or after the light propagates through a 10-km long single-mode fiber (SMF). Coupling loss to SMF is 0.24 dB and 1.31 dB at the polymer waveguide input and output ends, respectively. Alignment tolerance for a 0.5 dB loss increase is ±1.0 μm along both the vertical and horizontal directions for coupling from the polymer waveguide to SMF. The dry-film polymer waveguide demonstrates promising performance for silicon photonics chip packaging in next-generation optical multi-chip modules.

  12. Capacitive charge generation apparatus and method for testing circuits

    DOEpatents

    Cole, E.I. Jr.; Peterson, K.A.; Barton, D.L.

    1998-07-14

    An electron beam apparatus and method for testing a circuit are disclosed. The electron beam apparatus comprises an electron beam incident on an outer surface of an insulating layer overlying one or more electrical conductors of the circuit for generating a time varying or alternating current electrical potential on the surface; and a measurement unit connected to the circuit for measuring an electrical signal capacitively coupled to the electrical conductors to identify and map a conduction state of each of the electrical conductors, with or without an electrical bias signal being applied to the circuit. The electron beam apparatus can further include a secondary electron detector for forming a secondary electron image for registration with a map of the conduction state of the electrical conductors. The apparatus and method are useful for failure analysis or qualification testing to determine the presence of any open-circuits or short-circuits, and to verify the continuity or integrity of electrical conductors buried below an insulating layer thickness of 1-100 μm or more without damaging or breaking down the insulating layer. The types of electrical circuits that can be tested include integrated circuits, multi-chip modules, printed circuit boards and flexible printed circuits. 7 figs.

  13. Capacitive charge generation apparatus and method for testing circuits

    DOEpatents

    Cole, Jr., Edward I.; Peterson, Kenneth A.; Barton, Daniel L.

    1998-01-01

    An electron beam apparatus and method for testing a circuit. The electron beam apparatus comprises an electron beam incident on an outer surface of an insulating layer overlying one or more electrical conductors of the circuit for generating a time varying or alternating current electrical potential on the surface; and a measurement unit connected to the circuit for measuring an electrical signal capacitively coupled to the electrical conductors to identify and map a conduction state of each of the electrical conductors, with or without an electrical bias signal being applied to the circuit. The electron beam apparatus can further include a secondary electron detector for forming a secondary electron image for registration with a map of the conduction state of the electrical conductors. The apparatus and method are useful for failure analysis or qualification testing to determine the presence of any open-circuits or short-circuits, and to verify the continuity or integrity of electrical conductors buried below an insulating layer thickness of 1-100 μm or more without damaging or breaking down the insulating layer. The types of electrical circuits that can be tested include integrated circuits, multi-chip modules, printed circuit boards and flexible printed circuits.

  14. Multichip imager with improved optical performance near the butt region

    NASA Technical Reports Server (NTRS)

    Kinnard, Kenneth P. (Inventor); Strong, Jr., Richard T. (Inventor); Goldfarb, Samuel (Inventor); Tower, John R. (Inventor)

    1991-01-01

    A compound imager consists of two or more individual chips, each with at least one line array of sensors thereupon. Each chip has a glass support plate attached to the side from which light reaches the line arrays. The chips are butted together end-to-end to make large line arrays of sensors. Because of imperfections in cutting, the butted surfaces define a gap. Light entering in the region of the gap is either lost or falls on an individual imager other than the one for which it is intended. This results in vignetting and/or crosstalk near the butted region. The gap is filled with an epoxy resin or other similar material which, when hardened, has an index of refraction near that of the glass support plate.
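
    A quick Fresnel estimate (assuming typical indices of about 1.5 for the glass and a matched epoxy; these values are not from the source) shows why filling the gap helps. At normal incidence the reflectance at an interface between indices n_1 and n_2 is

      R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2},

    so an unfilled glass-air gap reflects about (0.5/2.5)^2 = 4% at each of its two surfaces and deflects stray rays toward the neighboring chip, while an index-matched filler drives R toward zero and keeps rays on their intended paths.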

  15. Dry soldering with hot filament produced atomic hydrogen

    DOEpatents

    Panitz, J.K.G.; Jellison, J.L.; Staley, D.J.

    1995-04-25

    A system is disclosed for chemically transforming metal surface oxides to metal that is especially, but not exclusively, suitable for preparing metal surfaces for dry soldering and solder reflow processes. The system employs one or more hot, refractory metal filaments, grids or surfaces to thermally dissociate molecular species in a low pressure of working gas such as a hydrogen-containing gas to produce reactive species in a reactive plasma that can chemically reduce metal oxides and form volatile compounds that are removed in the working gas flow. Dry soldering and solder reflow processes are especially applicable to the manufacture of printed circuit boards, semiconductor chip lead attachment and packaging multichip modules. The system can be retrofitted onto existing metal treatment ovens, furnaces, welding systems and wave soldering system designs. 1 fig.

  16. Powerful, Efficient Electric Vehicle Chargers: Low-Cost, Highly-Integrated Silicon Carbide (SiC) Multichip Power Modules (MCPMs) for Plug-In Hybrid Electric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-09-14

    ADEPT Project: Currently, charging the battery of an electric vehicle (EV) is a time-consuming process because chargers can only draw about as much power from the grid as a hair dryer. APEI is developing an EV charger that can draw as much power as a clothes dryer, which would drastically speed up charging time. APEI's charger uses silicon carbide (SiC)-based power transistors. These transistors control the electrical energy flowing through the charger's circuits more effectively and efficiently than traditional transistors made of straight silicon. The SiC-based transistors also require less cooling, enabling APEI to create EV chargers that are 10 times smaller than existing chargers.

  17. Custom chipset and compact module design for a 75-110 GHz laboratory signal source

    NASA Astrophysics Data System (ADS)

    Morgan, Matthew A.; Boyd, Tod A.; Castro, Jason J.

    2016-12-01

    We report on the development and characterization of a compact, full-waveguide bandwidth (WR-10) signal source for general-purpose testing of mm-wave components. The monolithic microwave integrated circuit (MMIC) based multichip module is designed for compactness and ease-of-use, especially in size-constrained test sets such as a wafer probe station. It takes as input a cm-wave continuous-wave (CW) reference and provides a factor of three frequency multiplication as well as amplification, output power adjustment, and in situ output power monitoring. It utilizes a number of custom MMIC chips such as a Schottky-diode limiter and a broadband mm-wave detector, both designed explicitly for this module, as well as custom millimeter-wave multipliers and amplifiers reported in previous papers.

  18. The Design and Implementation of NASA's Advanced Flight Computing Module

    NASA Technical Reports Server (NTRS)

    Alkakaj, Leon; Straedy, Richard; Jarvis, Bruce

    1995-01-01

    This paper describes a working flight computer Multichip Module developed jointly by JPL and TRW under their respective research programs in a collaborative fashion. The MCM is fabricated by nCHIP and is packaged within a 2 by 4 inch Al package from Coors. This flight computer module is one of three modules under development by NASA's Advanced Flight Computer (AFC) program. Further development of the Mass Memory and the programmable I/O MCM modules will follow. The three building block modules will then be stacked into a 3D MCM configuration. The flight computer MCM's mass and volume, 89 grams and 1.5 cubic inches respectively, represent a major enabling technology for future deep space as well as commercial remote sensing applications.

  19. Boeing's STAR-FODB test results

    NASA Astrophysics Data System (ADS)

    Fritz, Martin E.; de la Chapelle, Michael; Van Ausdal, Arthur W.

    1995-05-01

    Boeing has successfully concluded a 2 1/2-year, two-phase developmental contract for the STAR-Fiber Optic Data Bus (FODB) that is intended for future space-based applications. The first phase included system analysis, trade studies, behavior modeling, and architecture and protocol selection. During this phase we selected the AS4074 Linear Token Passing Bus (LTPB) protocol operating at 200 Mbps, along with the passive, star-coupled fiber media. The second phase involved design, build, integration, and performance and environmental test of brassboard hardware. The resulting brassboard hardware successfully passed performance testing, providing 200 Mbps operation with a 32 × 32 star-coupled medium. This hardware is suitable for a spaceflight experiment to validate ground testing and analysis and to demonstrate performance in the intended environment. The fiber bus interface unit (FBIU) is a multichip module containing transceiver, protocol, and data formatting chips, buffer memory, and a station management controller. The FBIU has been designed for low power, high reliability, and radiation tolerance. Nine FBIUs were built and integrated with the fiber optic physical layer consisting of the fiber cable plant (FCP) and star coupler assembly (SCA). Performance and environmental testing, including radiation exposure, was performed on selected FBIUs and the physical layer. The integrated system was demonstrated with a full motion color video image transfer across the bus while simultaneously performing utility functions with a fiber bus control module (FBCM) over a telemetry and control (T&C) bus, in this case AS1773.

  20. Vertically integrated photonic multichip module architecture for vision applications

    NASA Astrophysics Data System (ADS)

    Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong

    2000-05-01

    The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.

  1. SAR processing using SHARC signal processing systems

    NASA Astrophysics Data System (ADS)

    Huxtable, Barton D.; Jackson, Christopher R.; Skaron, Steve A.

    1998-09-01

    Synthetic aperture radar (SAR) is uniquely suited to help solve the Search and Rescue problem since it can be utilized day or night and through dense fog or thick cloud cover. Other papers in this session, and in this session in 1997, describe the various SAR image processing algorithms that are being developed and evaluated within the Search and Rescue Program. All of these approaches to using SAR data require substantial amounts of digital signal processing: for the SAR image formation, and possibly for the subsequent image processing. In recognition of the demanding processing that will be required for an operational Search and Rescue Data Processing System (SARDPS), NASA/Goddard Space Flight Center and NASA/Stennis Space Center are conducting a technology demonstration utilizing SHARC multi-chip modules from Boeing to perform SAR image formation processing.

  2. An abuttable CCD imager for visible and X-ray focal plane arrays

    NASA Technical Reports Server (NTRS)

    Burke, Barry E.; Mountain, Robert W.; Harrison, David C.; Bautz, Marshall W.; Doty, John P.

    1991-01-01

    A frame-transfer silicon charge-coupled-device (CCD) imager has been developed that can be closely abutted to other imagers on three sides of the imaging array. It is intended for use in multichip arrays. The device has 420 x 420 pixels in the imaging and frame-store regions and is constructed using a three-phase triple-polysilicon process. Particular emphasis has been placed on achieving low-noise charge detection for low-light-level imaging in the visible and maximum energy resolution for X-ray spectroscopic applications. Noise levels of 6 electrons at 1-MHz and less than 3 electrons at 100-kHz data rates have been achieved. Imagers have been fabricated on 1000-Ohm-cm material to maximize quantum efficiency and minimize split events in the soft X-ray regime.

  3. Consortia for Known Good Die (KGD), phase 1

    NASA Astrophysics Data System (ADS)

    Andrews, Marshall; Carey, David; Fellows, Mary M.; Gilg, Larry; Murphy, Cindy; Noddings, Chad; Pitts, Greg; Rathmell, Claude; Spooner, Charles

    1994-02-01

    This report describes the results of Phase 1 of the Infrastructure for KGD program at MCC. The objective of the work is to resolve the issues for supplying and procuring Known Good Die (KGD) in a way that fosters industry acceptance and confidence in Application Specific Electronic Modules (ASEMs, for military systems) and MultiChip Modules (MCMs, for commercial systems). This report is divided into four sections. Section 1 describes the technical assessment of proposed industry approaches to KGD implementation. Section 2 contains an outline of the plan for industry and government cooperation for the demonstration, validation, and implementation of the KGD methodologies identified in this Phase 1 study. Section 3 contains the industry-generated requirements for KGD implementation. Section 4 contains the KGD specifications for TAB and flip-chip ICs.

  4. Trade-offs between lens complexity and real estate utilization in a free-space multichip global interconnection module.

    PubMed

    Milojkovic, Predrag; Christensen, Marc P; Haney, Michael W

    2006-07-01

    The FAST-Net (Free-space Accelerator for Switching Terabit Networks) concept uses an array of wide-field-of-view imaging lenses to realize a high-density shuffle interconnect pattern across an array of smart-pixel integrated circuits. To simplify the optics we evaluated the efficiency gained in replacing spherical surfaces with aspherical surfaces by exploiting the large disparity between narrow vertical cavity surface emitting laser (VCSEL) beams and the wide field of view of the imaging optics. We then analyzed trade-offs between lens complexity and chip real estate utilization and determined that there exists an optimal numerical aperture for VCSELs that maximizes their area density. The results provide a general framework for the design of wide-field-of-view free-space interconnection systems that incorporate high-density VCSEL arrays.
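    The reported optimum can be illustrated with a toy Gaussian-beam model: shrinking the NA enlarges the emitter waist, while growing it spreads the beam downstream, so a maximum-density NA appears in between. This is a sketch under simplifying assumptions; the wavelength, propagation distance, and pitch rule are illustrative, not the paper's merit function:

    ```python
    import numpy as np

    lam = 850e-9   # assumed VCSEL wavelength (m)
    z = 0.5e-3     # assumed propagation distance before the optics (m)

    for na in [0.01, 0.02, 0.05, 0.1, 0.2]:
        w0 = lam / (np.pi * na)          # waist radius giving divergence ~ NA
        wz = w0 * np.sqrt(1 + (lam * z / (np.pi * w0**2))**2)  # radius at z
        pitch = 2 * max(w0, wz)          # crude minimum emitter pitch
        density = 1 / pitch**2           # emitters per unit area
        print(f"NA={na:.2f}: w0={w0*1e6:5.1f} um, w(z)={wz*1e6:6.1f} um, "
              f"density={density*1e-6:7.1f} /mm^2")
    ```

    With these assumed numbers the density peaks near NA of about 0.02, echoing the paper's finding that an optimal numerical aperture exists.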

  5. CVD diamond substrate for microelectronics. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burden, J.; Gat, R.

    1996-11-01

    Chemical Vapor Deposition (CVD) of diamond films has evolved dramatically in recent years, and commercial opportunities for diamond substrates in thermal management applications are promising. The objective of this technology transfer initiative (TTI) is for Applied Science and Technology, Inc. (ASTEX) and AlliedSignal Federal Manufacturing and Technologies (FM&T) to jointly develop and document the manufacturing processes and procedures required for the fabrication of multichip module circuits using CVD diamond substrates, with the major emphasis of the project concentrating on lapping/polishing prior to metallization. ASTEX would provide diamond films for the study, and FM&T would use its experience in lapping, polishing, and substrate metallization to perform secondary processing on the parts. The primary goal of the project was to establish manufacturing processes that lower the manufacturing cost sufficiently to enable broad commercialization of the technology.

  6. Programmable synaptic chip for electronic neural networks

    NASA Technical Reports Server (NTRS)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32×32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512×512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of the synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  7. High temperature charge amplifier for geothermal applications

    DOEpatents

    Lindblom, Scott C.; Maldonado, Frank J.; Henfling, Joseph A.

    2015-12-08

    An amplifier circuit in a multi-chip module includes a charge-to-voltage converter circuit, a voltage amplifier, a low-pass filter and a voltage-to-current converter. The charge-to-voltage converter receives a signal representing an electrical charge and generates a voltage signal proportional to the input signal. The voltage amplifier receives the voltage signal from the charge-to-voltage converter, then amplifies the voltage signal by the gain factor to output an amplified voltage signal. The low-pass filter passes low-frequency components of the amplified voltage signal and attenuates frequency components greater than a cutoff frequency. The voltage-to-current converter receives the output signal of the low-pass filter and converts the output signal to a current output signal; wherein an amplifier circuit output is selectable between the output signal of the low-pass filter and the current output signal.
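    A behavioral sketch of that signal chain; all component values are assumptions for illustration, not the patent's circuit:

    ```python
    import numpy as np

    fs = 100e3                          # sample rate (Hz), assumed
    t = np.arange(0, 10e-3, 1/fs)
    # input charge: 1 kHz signal plus an out-of-band 40 kHz component
    q_in = 1e-12*np.sin(2*np.pi*1e3*t) + 0.2e-12*np.sin(2*np.pi*40e3*t)

    C_F = 10e-12     # charge-to-voltage conversion: V = Q / C_F
    GAIN = 10.0      # voltage-amplifier gain
    F_C = 5e3        # low-pass cutoff (Hz)
    GM = 1e-3        # transconductance of the V-to-I stage (A/V)

    v = GAIN * (q_in / C_F)               # converter followed by amplifier
    alpha = 1 / (1 + fs / (2*np.pi*F_C))  # one-pole IIR low-pass coefficient
    v_lp = np.zeros_like(v)
    for i in range(1, len(v)):
        v_lp[i] = v_lp[i-1] + alpha * (v[i] - v_lp[i-1])
    i_out = GM * v_lp                   # current output stage; the selectable
    print(v_lp[-1], i_out[-1])          # outputs are v_lp and i_out
    ```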

  8. Compact illumination optic with three freeform surfaces for improved beam control.

    PubMed

    Sorgato, Simone; Mohedano, Rubén; Chaves, Julio; Hernández, Maikel; Blen, José; Grabovičkić, Dejan; Benítez, Pablo; Miñano, Juan Carlos; Thienpont, Hugo; Duerr, Fabian

    2017-11-27

    Multi-chip and large size LEDs dominate the lighting market in developed countries these days. Nevertheless, a general optical design method to create prescribed intensity patterns for this type of extended source does not exist. We present a design strategy in which the source and the target pattern are described by means of "edge wavefronts" of the system. The goal is then to find an optic coupling these wavefronts; in the current work this is a monolithic part comprising up to three freeform surfaces calculated with the simultaneous multiple surface (SMS) method. The resulting optic fully controls, for the first time, three freeform wavefronts, one more than previous SMS designs. Simulations with extended LEDs demonstrate improved intensity-tailoring capabilities, confirming the effectiveness of our method and suggesting that enhanced performance features can be achieved by controlling additional wavefronts.

  9. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
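    A minimal sketch of the Huber-type idea on a single-level moderation (interaction) model, using simulated heavy-tailed data; the paper's actual estimator is two-level and also covers Student's-t maximum likelihood, neither of which is shown here:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 300
    x, m = rng.normal(size=n), rng.normal(size=n)
    # moderation: the effect of x on y depends on m through the x*m term
    y = 1 + 0.5*x + 0.3*m + 0.4*x*m + rng.standard_t(df=3, size=n)

    X = sm.add_constant(np.column_stack([x, m, x*m]))
    fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # Huber M-estimation
    print(fit.params)  # robust estimates of intercept, x, m, and x*m effects
    ```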

  10. A comparative study of multivariable robustness analysis methods as applied to integrated flight and propulsion control

    NASA Technical Reports Server (NTRS)

    Schierman, John D.; Lovell, T. A.; Schmidt, David K.

    1993-01-01

    Three multivariable robustness analysis methods are compared and contrasted. The focus of the analysis is on system stability and performance robustness to uncertainty in the coupling dynamics between two interacting subsystems. Of particular interest is interacting airframe and engine subsystems, and an example airframe/engine vehicle configuration is utilized in the demonstration of these approaches. The singular value (SV) and structured singular value (SSV) analysis methods are compared to a method especially well suited for analysis of robustness to uncertainties in subsystem interactions. This approach is referred to here as the interacting subsystem (IS) analysis method. This method has been used previously to analyze airframe/engine systems, emphasizing the study of stability robustness. However, performance robustness is also investigated here, and a new measure of allowable uncertainty for acceptable performance robustness is introduced. The IS methodology does not require plant uncertainty models to measure the robustness of the system, and is shown to yield valuable information regarding the effects of subsystem interactions. In contrast, the SV and SSV methods allow for the evaluation of the robustness of the system to particular models of uncertainty, and do not directly indicate how the airframe (engine) subsystem interacts with the engine (airframe) subsystem.

  11. Adjustment of multi-CCD-chip-color-camera heads

    NASA Astrophysics Data System (ADS)

    Guyenot, Volker; Tittelbach, Guenther; Palme, Martin

    1999-09-01

    The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity than conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF, STRACON GmbH and, in the future, COBRA electronic GmbH are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices based on gluing with UV-curable adhesives have been developed.

  12. Opportunities of CMOS-MEMS integration through LSI foundry and open facility

    NASA Astrophysics Data System (ADS)

    Mita, Yoshio; Lebrasseur, Eric; Okamoto, Yuki; Marty, Frédéric; Setoguchi, Ryota; Yamada, Kentaro; Mori, Isao; Morishita, Satoshi; Imai, Yoshiaki; Hosaka, Kota; Hirakawa, Atsushi; Inoue, Shu; Kubota, Masanori; Denoual, Matthieu

    2017-06-01

    Since the 2000s, several countries have established micro- and nanofabrication platforms for the research and education community as national projects. By combining such platforms with VLSI multichip foundry services, various integrated devices, referred to as “CMOS-MEMS”, can be realized without constructing an entire cleanroom. In this paper, we summarize MEMS-last postprocess schemes for CMOS devices on a bulk silicon wafer as well as on a silicon-on-insulator (SOI) wafer using an open-access cleanroom of the Nanotechnology Platform of MEXT Japan. The integration devices presented in this article are free-standing structures and postprocess isolated LSI devices. Postprocess issues are identified with their solutions, such as the reactive ion etching (RIE) lag for dry release and the impact of the deep RIE (DRIE) postprocess on transistor characteristics. Integration with nonsilicon materials is proposed as one of the future directions.

  13. Argus: a 16-pixel millimeter-wave spectrometer for the Green Bank Telescope

    NASA Astrophysics Data System (ADS)

    Sieth, Matthew; Devaraj, Kiruthika; Voll, Patricia; Church, Sarah; Gawande, Rohit; Cleary, Kieran; Readhead, Anthony C. S.; Kangaslahti, Pekka; Samoska, Lorene; Gaier, Todd; Goldsmith, Paul F.; Harris, Andrew I.; Gundersen, Joshua O.; Frayer, David; White, Steve; Egan, Dennis; Reeves, Rodrigo

    2014-07-01

    We report on the development of Argus, a 16-pixel spectrometer, which will enable fast astronomical imaging over the 85-116 GHz band. Each pixel includes a compact heterodyne receiver module, which integrates two InP MMIC low-noise amplifiers, a coupled-line bandpass filter and a sub-harmonic Schottky diode mixer. The receiver signals are routed to and from the multi-chip MMIC modules with multilayer high-frequency printed circuit boards, which include LO splitters and IF amplifiers. Microstrip lines on flexible circuitry are used to transport signals between temperature stages. The spectrometer frontend is designed to be scalable, so that the array design can be reconfigured for future instruments with hundreds of pixels. Argus is scheduled to be commissioned at the Robert C. Byrd Green Bank Telescope in late 2014. Preliminary data for the first Argus pixels are presented.

  14. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.

    PubMed

    Park, Jongkil; Yu, Theodore; Joshi, Siddharth; Maier, Christoph; Cauwenberghs, Gert

    2017-10-01

    We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself toward biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with the number of routing nodes in the network, at 3.6×10^7 synaptic events per second per 16k-neuron node in the hierarchy.
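    A toy sketch of the core routing idea: a flat neuron address is split into per-level fields so each router only inspects the field for its own level, and net throughput scales with the number of routing nodes as quoted above. The field widths are assumptions, not the HiAER ones:

    ```python
    LEVEL_BITS = [4, 4, 14]   # e.g. board, chip, neuron-within-chip (assumed)

    def split_address(addr):
        """Decompose a flat address into hierarchical fields, top level first."""
        fields = []
        for bits in reversed(LEVEL_BITS):
            fields.append(addr & ((1 << bits) - 1))
            addr >>= bits
        return list(reversed(fields))

    print(split_address(0x2A5F3))   # -> [board, chip, local neuron] fields

    # linear scaling of net throughput quoted in the abstract
    for nodes in (1, 4, 16):
        print(nodes, "nodes ->", f"{nodes*3.6e7:.1e}", "synaptic events/s")
    ```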

  15. Modeling selective attention using a neuromorphic analog VLSI device.

    PubMed

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  16. Fourier plane image amplifier

    DOEpatents

    Hackel, Lloyd A.; Hermann, Mark R.; Dane, C. Brent; Tiszauer, Detlev H.

    1995-01-01

    A solid state laser is frequency tripled to 0.3 µm. A small portion of the laser is split off and generates a Stokes seed in a low power oscillator. The low power output passes through a mask with the appropriate hole pattern. Meanwhile, the bulk of the laser output is focused into a larger stimulated Brillouin scattering (SBS) amplifier. The low power beam is directed through the same cell in the opposite direction. The majority of the amplification takes place at the focus, which is the Fourier transform plane of the mask image. The small holes occupy a large area at the focus and thus are preferentially amplified. The amplified output is then imaged onto the multichip module where the holes are drilled. Because of the Fourier plane amplifier, only about 1/10th the power of a competitive system is needed. This concept allows less expensive masks to be used in the process and requires much less laser power.

  17. Fourier plane image amplifier

    DOEpatents

    Hackel, L.A.; Hermann, M.R.; Dane, C.B.; Tiszauer, D.H.

    1995-12-12

    A solid state laser is frequency tripled to 0.3 µm. A small portion of the laser is split off and generates a Stokes seed in a low power oscillator. The low power output passes through a mask with the appropriate hole pattern. Meanwhile, the bulk of the laser output is focused into a larger stimulated Brillouin scattering (SBS) amplifier. The low power beam is directed through the same cell in the opposite direction. The majority of the amplification takes place at the focus, which is the Fourier transform plane of the mask image. The small holes occupy a large area at the focus and thus are preferentially amplified. The amplified output is then imaged onto the multichip module where the holes are drilled. Because of the Fourier plane amplifier, only about 1/10th the power of a competitive system is needed. This concept allows less expensive masks to be used in the process and requires much less laser power. 1 fig.

  18. Experimental study on the application of paraffin slurry to high density electronic package cooling

    NASA Astrophysics Data System (ADS)

    Cho, K.; Choi, M.

    Experiments were performed using water and paraffin slurry to investigate the thermal characteristics of a test multichip module. The parameters were the mass fraction of paraffin slurry (0, 2.5, 5, 7.5%), the heat flux (10, 20, 30, 40 W/cm²) and the channel Reynolds number. The size of the paraffin slurry particles was within 10-40 μm. The local heat transfer coefficients for the paraffin slurry were larger than those for water. Thermally fully developed conditions were observed after the third or fourth row. The paraffin slurry with a mass fraction of 5% showed the most efficient cooling performance when the heat transfer and the pressure drop in the test section were considered simultaneously. A new correlation for water and for the paraffin slurry with a mass fraction of 5% was obtained for channel Reynolds numbers over 5300.
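    The quantities in such experiments reduce through standard definitions; a sketch with made-up numbers (the paper's new correlation itself is not reproduced, since the abstract does not give it):

    ```python
    def heat_transfer_coeff(q_flux_w_cm2, t_wall_c, t_bulk_c):
        """Local h = q'' / (T_wall - T_bulk), returned in W/(m^2 K)."""
        return q_flux_w_cm2 * 1e4 / (t_wall_c - t_bulk_c)

    def reynolds(rho, velocity, d_hydraulic, mu):
        """Channel Reynolds number Re = rho * u * D_h / mu."""
        return rho * velocity * d_hydraulic / mu

    print(heat_transfer_coeff(20.0, 60.0, 30.0))  # ~6667 W/(m^2 K)
    print(reynolds(998.0, 1.2, 0.005, 1.0e-3))    # ~5988, i.e. above 5300
    ```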

  19. Implementation of the Timepix ASIC in the Scalable Readout System

    NASA Astrophysics Data System (ADS)

    Lupberger, M.; Desch, K.; Kaminski, J.

    2016-09-01

    We report on the development of electronics hardware, FPGA firmware and software to provide a flexible multi-chip readout of the Timepix ASIC within the framework of the Scalable Readout System (SRS). The system features FPGA-based zero-suppression and the possibility to read out up to 4×8 chips with a single Front End Concentrator (FEC). By operating several FECs in parallel, in principle an arbitrary number of chips can be read out, exploiting the scaling features of SRS. Specifically, we tested the system with a setup consisting of 160 Timepix ASICs, operated as GridPix devices in a large TPC field cage in a 1 T magnetic field at a DESY test beam facility providing an electron beam of up to 6 GeV. We discuss the design choices, the dedicated hardware components, the FPGA firmware as well as the performance of the system in the test beam.

  20. Flexible manufacturing for photonics device assembly

    NASA Technical Reports Server (NTRS)

    Lu, Shin-Yee; Pocha, Michael D.; Strand, Oliver T.; Young, K. David

    1994-01-01

    The assembly of photonics devices such as laser diodes, optical modulators, and opto-electronics multi-chip modules (OEMCM), usually requires the placement of micron size devices such as laser diodes, and sub-micron precision attachment between optical fibers and diodes or waveguide modulators (usually referred to as pigtailing). This is a very labor intensive process. Studies done by the opto-electronics (OE) industry have shown that 95 percent of the cost of a pigtailed photonic device is due to the use of manual alignment and bonding techniques, which is the current practice in industry. At Lawrence Livermore National Laboratory, we are working to reduce the cost of packaging OE devices through the use of automation. Our efforts are concentrated on several areas that are directly related to an automated process. This paper will focus on our progress in two of those areas, in particular, an automated fiber pigtailing machine and silicon micro-technology compatible with an automated process.

  1. Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares

    NASA Technical Reports Server (NTRS)

    Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.

    2012-01-01

    A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
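    The certificate pattern behind this kind of analysis can be sketched as follows; this is a generic Positivstellensatz-style form for a ball-shaped uncertainty set, not necessarily the authors' exact formulation:

    ```latex
    % A requirement polynomial $p(x,\delta) \ge 0$ is certified over the
    % uncertainty ball $\{\delta : r^2 - \delta^\top \delta \ge 0\}$ if
    \begin{equation}
      \exists\, s(x,\delta)\ \text{SOS} \quad \text{such that} \quad
      p(x,\delta) - s(x,\delta)\,\bigl(r^2 - \delta^\top \delta\bigr)
      \ \text{is SOS},
    \end{equation}
    % a semidefinite feasibility problem; the robustness metric is the largest
    % $r$ for which it stays feasible, found by sweeping increasing values of
    % $r$ as described in the abstract.
    ```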

  2. Evaluation of Ares-I Control System Robustness to Uncertain Aerodynamics and Flex Dynamics

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; VanTassel, Chris; Bedrossian, Nazareth; Hall, Charles; Spanos, Pol

    2008-01-01

    This paper discusses the application of robust control theory to evaluate robustness of the Ares-I control systems. Three techniques for estimating upper and lower bounds of uncertain parameters which yield stable closed-loop response are used here: (1) Monte Carlo analysis, (2) mu analysis, and (3) characteristic frequency response analysis. All three methods are used to evaluate stability envelopes of the Ares-I control systems with uncertain aerodynamics and flex dynamics. The results show that characteristic frequency response analysis is the most effective of these methods for assessing robustness.
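    A minimal sketch of technique (1), Monte Carlo stability screening, on a toy two-state model; the dynamics and uncertainty ranges are invented, not the Ares-I models:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 10_000
    stable = 0
    for _ in range(N):
        a = rng.uniform(-1.5, 0.5)   # uncertain aerodynamic coefficient
        w = rng.uniform(3.0, 7.0)    # uncertain flex-mode frequency (rad/s)
        A = np.array([[a, 1.0],
                      [-w**2, -0.1*w]])  # toy closed-loop state matrix
        if np.all(np.linalg.eigvals(A).real < 0):  # Hurwitz check
            stable += 1
    print(f"stable fraction over the sampled envelope: {stable/N:.3f}")
    ```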

  3. Macro-meso-microsystems integration in LTCC : LDRD report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Smet, Dennis J.; Nordquist, Christopher Daniel; Turner, Timothy Shawn

    2007-03-01

    Low Temperature Cofired Ceramic (LTCC) has proven to be an enabling medium for microsystem technologies, because of its desirable electrical, physical, and chemical properties coupled with its capability for rapid prototyping and scalable manufacturing of components. LTCC is viewed as an extension of hybrid microcircuits, and in that function it enables development, testing, and deployment of silicon microsystems. However, its versatility has allowed it to succeed as a microsystem medium in its own right, with applications in non-microelectronic meso-scale devices and in a range of sensor devices. Applications include silicon microfluidic "chip-and-wire" systems and fluid grid array (FGA)/microfluidic multichip modules using embedded channels in LTCC, and cofired electro-mechanical systems with moving parts. Both the microfluidic and mechanical system applications are enabled by sacrificial volume materials (SVM), which serve to create and maintain cavities and separation gaps during the lamination and cofiring process. SVMs consisting of thermally fugitive or partially inert materials are easily incorporated. Recognizing the premium on devices that are cofired rather than assembled, we report on functional-as-released and functional-as-fired moving parts. Additional applications for cofired transparent windows, some as small as an optical fiber, are also described. The applications described help pave the way for widespread application of LTCC to biomedical, control, analysis, characterization, and radio frequency (RF) functions for macro-meso-microsystems.

  4. Robustness Analysis of Integrated LPV-FDI Filters and LTI-FTC System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Khong, Thuan H.; Shin, Jong-Yeob

    2007-01-01

    This paper proposes an analysis framework for robustness analysis of a nonlinear dynamic system that can be represented by a polynomial linear parameter varying (PLPV) system with constant bounded uncertainty. The proposed analysis framework contains three key tools: 1) a function substitution method which can convert a nonlinear system in polynomial form into a PLPV system, 2) a matrix-based linear fractional transformation (LFT) modeling approach, which can convert a PLPV system into an LFT system with a delta block that includes the key uncertainty and scheduling parameters, and 3) mu-analysis, a well-known robustness analysis tool for linear systems. The proposed analysis framework is applied to evaluating the performance of the LPV fault detection and isolation (FDI) filters of the closed-loop system of a transport aircraft in the presence of unmodeled actuator dynamics and sensor gain uncertainty. The robustness analysis results are compared with nonlinear time simulations.

  5. Address-event-based platform for bioinspired spiking systems

    NASA Astrophysics Data System (ADS)

    Jiménez-Fernández, A.; Luján, C. D.; Linares-Barranco, A.; Gómez-Rodríguez, F.; Rivas, M.; Jiménez, G.; Civit, A.

    2007-05-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate "events" according to their activity levels. More active neurons generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems, it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on the screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for test and debugging of complex AER systems. On the other hand, the use of a commercial personal computer implies depending on software tools and operating systems that can make the system slower and less robust. This paper addresses the problem of communicating several AER-based chips to compose a powerful processing system. The problem was discussed in the Neuromorphic Engineering Workshop of 2006. The platform is based on an embedded computer, a powerful FPGA and serial links, to make the system faster and stand-alone (independent of a PC). A new platform is presented that allows connecting up to eight AER-based chips to a Spartan 3 4000 FPGA. The FPGA is responsible for the network communication based on Address-Event and, at the same time, maps and transforms the address space of the traffic to implement pre-processing. An MMU microprocessor (Intel XScale 400 MHz Gumstix Connex computer) is also connected to the FPGA to allow the platform to implement event-based algorithms that interact with the AER system, such as control algorithms, network connectivity, USB support, etc. The LVDS transceiver allows a bandwidth of up to 1.32 Gbps, around 66 Mega events per second (Mevps).
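    The bandwidth argument above is easy to see in a toy event-stream model: each neuron appears on the shared bus only as (timestamp, address) pairs at its own rate, so active neurons dominate the traffic. A sketch with arbitrary rates:

    ```python
    import heapq

    rates_hz = {0: 200, 1: 5, 2: 50}   # events/s per neuron address

    def generate_events(rates, t_end):
        """Merge per-neuron periodic event trains into one bus stream."""
        heap = [(1.0/r, addr) for addr, r in rates.items() if r > 0]
        heapq.heapify(heap)
        stream = []
        while heap and heap[0][0] <= t_end:
            t, addr = heapq.heappop(heap)
            stream.append((t, addr))
            heapq.heappush(heap, (t + 1.0/rates[addr], addr))
        return stream

    bus = generate_events(rates_hz, t_end=0.1)
    print(len(bus), "events;", bus[:3])   # neuron 0 dominates the bandwidth
    ```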

  6. Modern CACSD using the Robust-Control Toolbox

    NASA Technical Reports Server (NTRS)

    Chiang, Richard Y.; Safonov, Michael G.

    1989-01-01

    The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools like singular values and structured singular values, robust synthesis tools like continuous/discrete H2/H-infinity synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods, and a variety of robust model reduction tools such as Hankel approximation, balanced truncation and balanced stochastic truncation, etc. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H-infinity loop-shaping and large space structure model reduction.

  7. Computer models and the evidence of anthropogenic climate change: An epistemology of variety-of-evidence inferences and robustness analysis.

    PubMed

    Vezér, Martin A

    2016-04-01

    To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley's (2004) distinction between evidential strength and security, and Lloyd's (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models, and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of evidence supporting climatological inferences, including the finding that global warming is occurring and its primary causes are anthropogenic. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Robust Mediation Analysis Based on Median Regression

    PubMed Central

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
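    A minimal sketch of the central idea, replacing each mediation path with a median (0.5-quantile) regression on simulated heavy-tailed data; the authors' full procedure (standard errors, multilevel extension) is not reproduced:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(size=n)
    m = 0.5*x + rng.standard_t(df=3, size=n)         # mediator, heavy tails
    y = 0.4*m + 0.2*x + rng.standard_t(df=3, size=n)

    a = QuantReg(m, sm.add_constant(x)).fit(q=0.5).params[1]
    b = QuantReg(y, sm.add_constant(np.column_stack([m, x]))).fit(q=0.5).params[1]
    print("indirect (mediated) effect a*b ~", a*b)   # ~0.5*0.4 = 0.2
    ```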

  9. Bayesian Inference and Application of Robust Growth Curve Models Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Lai, Keke; Lu, Zhenqiu; Tong, Xin

    2013-01-01

    Despite the widespread popularity of growth curve analysis, few studies have investigated robust growth curve models. In this article, the "t" distribution is applied to model heavy-tailed data and contaminated normal data with outliers for growth curve analysis. The derived robust growth curve models are estimated through Bayesian…

  10. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and spss.

    PubMed

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and spss (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and spss macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
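    For readers outside Stata/SPSS, the estimator those macros implement can also be sketched language-agnostically. A simplified Python version with correlated-effects weights (no tau^2 term and no small-sample corrections, both of which the macros handle):

    ```python
    import numpy as np

    def rve(effects, variances, X, study_ids):
        """Meta-regression with a cluster-robust (sandwich) variance."""
        W = np.zeros(len(effects))
        for s in np.unique(study_ids):
            idx = study_ids == s
            W[idx] = 1.0 / (idx.sum() * variances[idx].mean())
        B = X.T @ (W[:, None] * X)
        beta = np.linalg.solve(B, X.T @ (W * effects))
        resid = effects - X @ beta
        meat = np.zeros_like(B)
        for s in np.unique(study_ids):
            idx = study_ids == s
            u = X[idx].T @ (W[idx] * resid[idx])
            meat += np.outer(u, u)
        Binv = np.linalg.inv(B)
        return beta, Binv @ meat @ Binv   # coefficients, robust covariance

    rng = np.random.default_rng(3)
    ids = np.repeat(np.arange(10), 3)     # 10 studies, 3 dependent effects each
    X = np.column_stack([np.ones(30), rng.normal(size=30)])
    es = X @ np.array([0.3, 0.1]) + rng.normal(scale=0.2, size=30)
    print(rve(es, np.full(30, 0.04), X, ids))
    ```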

  11. The Neptune/Triton Explorer Mission: A Concept Feasibility Study

    NASA Technical Reports Server (NTRS)

    Esper, Jaime

    2003-01-01

    Technological advances over the next 10 to 15 years promise to enable a number of smaller, more capable science missions to the outer planets. With the inception of miniaturized spacecraft for a wide range of applications, both in large clusters around Earth and for deep space missions, NASA is currently in the process of redefining the way science is being gathered. Technologies such as 3-Dimensional Multi-Chip Modules, Micro-machined Electromechanical Devices, Multi-Functional Structures, miniaturized transponders, miniaturized propulsion systems, variable emissivity thermal coatings, and artificial intelligence systems are currently in research and development, and are scheduled to fly (or have flown) in a number of missions. This study leverages these and other technologies in the design of a lightweight Neptune orbiter unlike any other that has been proposed to date. The Neptune/Triton Explorer (NExTEP) spacecraft uses solar electric Earth gravity assist and aerocapture maneuvers to achieve its intended target orbit. Either a Taurus or Delta-class launch vehicle may be used to accomplish the mission.

  12. Ion implantation enhanced metal-Si-metal photodetectors

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Scott, K. A. M.; Brueck, S. R. J.; Zolper, J. C.; Myers, D. R.

    1994-05-01

    The quantum efficiency and frequency response of simple Ni-Si-Ni metal-semiconductor-metal (MSM) photodetectors at long wavelengths are significantly enhanced with a simple, ion-implantation step to create a highly absorbing region approx. 1 micron below the Si surface. The internal quantum efficiency is improved by a factor of approx. 3 at 860 nm (to 64%) and a full factor of ten at 1.06 microns (to 23%) as compared with otherwise identical unimplanted devices. Dark currents are only slightly affected by the implantation process and are as low as 630 pA for a 4.5-micron gap device at 10-V bias. Dramatic improvement in the impulse response is observed, 100 ps vs. 600 ps, also at 10-V bias and 4.5-micron gap, due to the elimination of carrier diffusion tails in the implanted devices. Due to its planar structure, this device is fully VLSI compatible. Potential applications include optical interconnections for local area networks and multi-chip modules.

  13. Microchannel cooling of face down bonded chips

    DOEpatents

    Bernhardt, Anthony F.

    1993-01-01

    Microchannel cooling is applied to flip-chip bonded integrated circuits in a manner which maintains the advantages of flip-chip bonds while overcoming the difficulties encountered in cooling the chips. The technique is suited either to multichip integrated circuit boards in a plane or to stacks of circuit boards in a three-dimensional interconnect structure. Integrated circuit chips are mounted on a circuit board using flip-chip or controlled collapse bonds. A microchannel structure is essentially permanently coupled with the back of the chip. A coolant delivery manifold delivers coolant to the microchannel structure, and a seal consisting of a compressible elastomer is provided between the coolant delivery manifold and the microchannel structure. The integrated circuit chip and microchannel structure are connected together to form a replaceable integrated circuit module which can be easily decoupled from the coolant delivery manifold and the circuit board. The coolant supply manifolds may be disposed between the circuit boards in a stack and coupled to supplies of coolant through a side of the stack.

  14. Autonomous micro and nano sensors for upstream oil and gas

    NASA Astrophysics Data System (ADS)

    Chapman, David; Trybula, Walt

    2015-06-01

    This paper describes the development of autonomous electronic micro and nanoscale sensor systems for very harsh downhole oilfield conditions and provides an overview of the operational requirements necessary to survive and make direct measurements of subsurface conditions. One of several significant developmental challenges is selecting appropriate technologies that are simultaneously miniaturize-able, integrate-able, harsh environment capable, and economically viable. The Advanced Energy Consortium (AEC) is employing a platform approach to developing and testing multi-chip, millimeter and micron-scale systems in a package at elevated temperature and pressure in API brine and oil analogs, with the future goal of miniaturized systems that enable the collection of previously unattainable data. The ultimate goal is to develop subsurface nanosensor systems that can be injected into oil and gas well bores, to gather and record data, providing an unparalleled level of direct reservoir characterization. This paper provides a status update on the research efforts and developmental successes at the AEC.

  15. Possibilities for mixed mode chip manufacturing in EUROPRACTICE

    NASA Astrophysics Data System (ADS)

    Das, C.

    1997-02-01

    EUROPRACTICE is an EC initiative under the ESPRIT programme which aims to stimulate the wider exploitation of state-of-the-art microelectronics technologies by European industry and to enhance European industrial competitiveness in the global market-place. Through EUROPRACTICE, the EC has created a range of Basic Services that offer users a cost-effective and flexible means of accessing three main microelectronics-based technologies: Application Specific Integrated Circuit (ASICs), Multi-Chip Modules (MCMs) and Microsystems. EUROPRACTICE Basic Services reduce the cost and risk for companies wishing to begin using these technologies. EUROPRACTICE offers a fully supported, low cost route for companies to design and fabricate ASICs for their individual applications. Low cost is achieved by consolidating designs from many users onto a single semiconductor wafer (MPW: Multi Project Wafer). The EUROPRACTICE IC Manufacturing Service (ICMS) offers a broad range of fabrication technologies including CMOS, BiCMOS and GaAs. The Service extends from enabling users to produce prototype ASICs for testing and evaluation, through to low-volume production runs.

  16. Flip-chip replacement within the constraints imposed by multilayer ceramic (MLC) modules

    NASA Astrophysics Data System (ADS)

    Puttlitz, Karl J.

    1984-01-01

    Economics often dictates that suitable module rework procedures be established to replace solder bump devices (flip chips) reflowed to multichip carriers. These operations are complicated, owing to various constraints such as the substrate's physical and mechanical properties, close proximity of surface features, etc. This paper describes the constraints and the methods to circumvent them. An order of preference based upon the degree of constraint is recommended to achieve device removal and subsequent site dress of the residual solder left on the substrate. It has been determined that rework (device replacement) can be successfully achieved in even highly constricted situations. This is illustrated by the example of utilizing a localized heating technique, hot gas, to remove solder from microsockets from which chips were previously removed. Microsockets are areas to which chips are reflowed to the top surface of IBM's densely populated multilayer ceramic (MLC) modules, thus forming the so-called controlled collapse chip connection or C-4. The microsocket patterns are thus identical to the chip footprint.

  17. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    ERIC Educational Resources Information Center

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  18. Robustness analysis of non-ordinary Petri nets for flexible assembly systems

    NASA Astrophysics Data System (ADS)

    Hsieh, Fu-Shiung

    2010-05-01

    Non-ordinary controlled Petri nets (NCPNs) have the advantages to model flexible assembly systems in which multiple identical resources may be required to perform an operation. However, existing studies on NCPNs are still limited. For example, the robustness properties of NCPNs have not been studied. This motivates us to develop an analysis method for NCPNs. Robustness analysis concerns the ability for a system to maintain operation in the presence of uncertainties. It provides an alternative way to analyse a perturbed system without reanalysis. In our previous research, we have analysed the robustness properties of several subclasses of ordinary controlled Petri nets. To study the robustness properties of NCPNs, we augment NCPNs with an uncertainty model, which specifies an upper bound on the uncertainties for each reachable marking. The resulting PN models are called non-ordinary controlled Petri nets with uncertainties (NCPNU). Based on NCPNU, the problem is to characterise the maximal tolerable uncertainties for each reachable marking. The computational complexities to characterise maximal tolerable uncertainties for each reachable marking grow exponentially with the size of the nets. Instead of considering general NCPNU, we limit our scope to a subclass of PN models called non-ordinary controlled flexible assembly Petri net with uncertainties (NCFAPNU) for assembly systems and study its robustness. We will extend the robustness analysis to NCFAPNU. We identify two types of uncertainties under which the liveness of NCFAPNU can be maintained.
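    The "multiple identical resources" point corresponds to weighted arcs in the firing rule; a minimal sketch of non-ordinary Petri net semantics on a tiny invented net (not the NCFAPNU class itself):

    ```python
    import numpy as np

    Pre = np.array([[2, 0],    # transition t0 consumes 2 tokens from place 0
                    [0, 1]])   # t1 consumes 1 token from place 1
    Post = np.array([[0, 2],   # t1 returns 2 tokens to place 0
                     [1, 0]])  # t0 produces 1 token in place 1

    def enabled(m, t):
        return bool(np.all(m >= Pre[:, t]))

    def fire(m, t):
        assert enabled(m, t), "transition not enabled"
        return m - Pre[:, t] + Post[:, t]

    m = np.array([2, 0])       # two identical resources available
    m = fire(m, 0)             # claim both at once (arc weight 2)
    print(m, enabled(m, 1))    # -> [0 1] True
    ```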

  19. GPS baseline configuration design based on robustness analysis

    NASA Astrophysics Data System (ADS)

    Yetkin, M.; Berber, M.

    2012-11-01

    The robustness analysis results obtained from a Global Positioning System (GPS) network are dramatically influenced by the configuration of the observed baselines. The selection of optimal GPS baselines may allow for a cost-effective survey campaign and a sufficiently robust network. Furthermore, using the approach described in this paper, the required number of sessions, the baselines to be observed, and the significance levels for statistical testing and robustness analysis can be determined even before the GPS campaign starts. In this study, we propose a robustness criterion for the optimal design of geodetic networks, and present a very simple and efficient algorithm based on this criterion for the selection of optimal GPS baselines. We also show the relationship between the number of sessions and the non-centrality parameter. Finally, a numerical example is given to verify the efficacy of the proposed approach.

  20. Gap-metric-based robustness analysis of nonlinear systems with full and partial feedback linearisation

    NASA Astrophysics Data System (ADS)

    Al-Gburi, A.; Freeman, C. T.; French, M. C.

    2018-06-01

    This paper uses gap metric analysis to derive robustness and performance margins for feedback linearising controllers. Distinct from previous robustness analysis, it incorporates the case of output unstructured uncertainties, and is shown to yield general stability conditions which can be applied to both stable and unstable plants. It then expands on existing feedback linearising control schemes by introducing a more general robust feedback linearising control design which classifies the system nonlinearity into stable and unstable components and cancels only the unstable plant nonlinearities. This is done in order to preserve the stabilising action of the inherently stabilising nonlinearities. Robustness and performance margins are derived for this control scheme, and are expressed in terms of bounds on the plant nonlinearities and the accuracy of the cancellation of the unstable plant nonlinearity by the controller. Case studies then confirm reduced conservatism compared with standard methods.

  1. GWAR: robust analysis and meta-analysis of genome-wide association studies.

    PubMed

    Dimou, Niki L; Tsirigos, Konstantinos D; Elofsson, Arne; Bagos, Pantelis G

    2017-05-15

    In the context of genome-wide association studies (GWAS), there is a variety of statistical techniques in order to conduct the analysis, but, in most cases, the underlying genetic model is usually unknown. Under these circumstances, the classical Cochran-Armitage trend test (CATT) is suboptimal. Robust procedures that maximize the power and preserve the nominal type I error rate are preferable. Moreover, performing a meta-analysis using robust procedures is of great interest and has never been addressed in the past. The primary goal of this work is to implement several robust methods for analysis and meta-analysis in the statistical package Stata and subsequently to make the software available to the scientific community. The CATT under a recessive, additive and dominant model of inheritance as well as robust methods based on the Maximum Efficiency Robust Test statistic, the MAX statistic and the MIN2 were implemented in Stata. Concerning MAX and MIN2, we calculated their asymptotic null distributions relying on numerical integration resulting in a great gain in computational time without losing accuracy. All the aforementioned approaches were employed in a fixed or a random effects meta-analysis setting using summary data with weights equal to the reciprocal of the combined cases and controls. Overall, this is the first complete effort to implement procedures for analysis and meta-analysis in GWAS using Stata. A Stata program and a web-server are freely available for academic users at http://www.compgen.org/tools/GWAR. pbagos@compgen.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
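    A minimal sketch of the baseline CATT itself, using the standard identity that the Armitage trend statistic equals N times the squared Pearson correlation between genotype score and case status; the robust MERT/MAX/MIN2 machinery of GWAR is not reproduced:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def catt(cases, controls, scores=(0, 1, 2)):
        """Cochran-Armitage trend test from per-genotype case/control counts."""
        s, d = [], []
        for score, r, c in zip(scores, cases, controls):
            s += [score] * (r + c)          # genotype score per individual
            d += [1] * r + [0] * c          # case/control status per individual
        s, d = np.asarray(s, float), np.asarray(d, float)
        stat = len(s) * np.corrcoef(s, d)[0, 1]**2   # N * r^2, 1 df
        return stat, chi2.sf(stat, df=1)

    # additive scores shown; recessive (0,0,1) or dominant (0,1,1) models
    # just change the score vector
    print(catt(cases=[10, 30, 25], controls=[40, 35, 15]))
    ```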

  2. Robust Control Systems.

    DTIC Science & Technology

    1981-12-01

    time control system algorithms that will perform adequately (i.e., at least maintain closed-loop system stability) when uncertain parameters in the...system design models vary significantly. Such a control algorithm is said to have stability robustness, or more simply is said to be "robust". In the...cases above, the performance is analyzed using a covariance analysis. The development of all the controllers and the performance analysis algorithms is

  3. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
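    The margin computation can be sketched in the standard mu framework (generic notation, not the paper's exact symbols):

    ```latex
    % With the uncertain aeroelastic loop written as an LFT
    % $F_u(M(s), \Delta)$ over a structured set
    % $\mathbf{\Delta} = \{\Delta : \bar\sigma(\Delta) \le 1\}$,
    % robust stability holds iff
    \begin{equation}
      \sup_{\omega}\, \mu_{\mathbf{\Delta}}\bigl(M(j\omega)\bigr) < 1 ,
    \end{equation}
    % and the worst-case margin is the smallest destabilizing perturbation
    % size, $k_{\min} = 1 / \sup_{\omega} \mu_{\mathbf{\Delta}}(M(j\omega))$.
    ```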

  4. Model reference tracking control of an aircraft: a robust adaptive approach

    NASA Astrophysics Data System (ADS)

    Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan

    2017-05-01

    This work presents the design and the corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional-integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.

  5. Robust-mode analysis of hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

    The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
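    Koopman-based decompositions of snapshot data are most often computed via dynamic mode decomposition (DMD); the sketch below shows the standard exact-DMD step as a generic starting point (it is not the authors' robust-mode selection, which additionally retains only modes that recur across data segments and realizations):

        import numpy as np

        def dmd(X, rank):
            """Exact DMD of a snapshot matrix X (columns are uniformly
            sampled states). Returns Koopman/DMD eigenvalues and modes."""
            X1, X2 = X[:, :-1], X[:, 1:]
            U, s, Vh = np.linalg.svd(X1, full_matrices=False)
            Ur, Sr, Vr = U[:, :rank], s[:rank], Vh[:rank].conj().T
            A_tilde = Ur.conj().T @ X2 @ Vr / Sr   # projected linear operator
            eigvals, W = np.linalg.eig(A_tilde)
            modes = X2 @ Vr / Sr @ W / eigvals     # exact DMD modes
            return eigvals, modes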

  6. Transportation Infrastructure Robustness : Joint Engineering and Economic Analysis

    DOT National Transportation Integrated Search

    2017-11-01

    The objectives of this study are to develop a methodology for assessing the robustness of transportation infrastructure facilities and to assess the effect of damage to such facilities on travel demand and the facilities' users' welfare. The robustness...

  7. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  8. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

    While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the proposed formalism achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.

  9. Robustness analysis of multirate and periodically time varying systems

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1991-01-01

    A new method for analyzing the stability and robustness of multirate and periodically time varying systems is presented. It is shown that a multirate or periodically time varying system can be transformed into an equivalent time invariant system. For a SISO system, traditional gain and phase margins can be found by direct application of the Nyquist criterion to this equivalent time invariant system. For a MIMO system, structured and unstructured singular values can be used to determine the system's robustness. The limitations and implications of utilizing this equivalent time invariant system for calculating gain and phase margins, and for estimating robustness via singular value analysis are discussed.
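    The core transformation is the standard lifting of a periodic discrete-time system to a time-invariant one over a full period; a minimal sketch of the state equation (hypothetical matrix sequences A_seq, B_seq of period p) is:

        import numpy as np

        def lift_periodic(A_seq, B_seq):
            """Lift x[k+1] = A_k x[k] + B_k u[k] with period p to the LTI
            system z[m+1] = A_hat z[m] + B_hat v[m], where v stacks the p
            inputs of one period. Stability of A_hat (the monodromy matrix)
            is equivalent to stability of the periodic system."""
            p, n = len(A_seq), A_seq[0].shape[0]
            A_hat, cols = np.eye(n), []
            for k in range(p):
                prop = np.eye(n)             # propagate u[k] by A_{p-1}...A_{k+1}
                for j in range(k + 1, p):
                    prop = A_seq[j] @ prop
                cols.append(prop @ B_seq[k])
                A_hat = A_seq[k] @ A_hat     # builds A_{p-1}...A_0
            return A_hat, np.hstack(cols)

    Gain and phase margins can then be read from Nyquist analysis of the lifted system, subject to the interpretation caveats the paper discusses.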

  10. Robust Optimization and Sensitivity Analysis with Multi-Objective Genetic Algorithms: Single- and Multi-Disciplinary Applications

    DTIC Science & Technology

    2007-01-01

    multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis are usually used when an optimization model has...formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in...2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner

  11. RSRE: RNA structural robustness evaluator

    PubMed Central

    Shu, Wenjie; Zheng, Zhiqiang; Wang, Shengqi

    2007-01-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/. PMID:17567615

  12. Development of highly durable deep-ultraviolet AlGaN-based LED multichip array with hemispherical encapsulated structures using a selected resin through a detailed feasibility study

    NASA Astrophysics Data System (ADS)

    Nagai, Shoko; Yamada, Kiho; Hirano, Akira; Ippommatsu, Masamichi; Ito, Masahiro; Morishima, Naoki; Aosaki, Ko; Honda, Yoshio; Amano, Hiroshi; Akasaki, Isamu

    2016-08-01

    To replace mercury lamps with AlGaN-based deep-ultraviolet (DUV) LEDs, a simple and low-cost package with increased light extraction efficiency (LEE) is indispensable. Therefore, resin encapsulation is considered to be a key technology. However, the photochemical reactions induced by DUV light cause serious problems, and conventional resins cannot be used. In the former part of this study, a comparison of a silicone resin and fluorine polymers was carried out in terms of their suitability for encapsulation, and we concluded that only one of the fluorine polymers can be used for encapsulation. In the latter part, the endurance of encapsulation using the selected fluorine polymer was investigated, and we confirmed that the selected fluorine polymer can guarantee a lifetime of over 6,000 h at a wavelength of 265 nm. Furthermore, a 3 × 4 array module of encapsulated dies on a simple AlN submount was fabricated, demonstrating the possibility of W/cm2-class lighting.

  13. Integrated on-chip solid state capacitor based on vertically aligned carbon nanofibers, grown using a CMOS temperature compatible process

    NASA Astrophysics Data System (ADS)

    Saleem, Amin M.; Andersson, Rickard; Desmaris, Vincent; Enoksson, Peter

    2018-01-01

    Complete miniaturized on-chip integrated solid-state capacitors have been fabricated based on conformal coating of vertically aligned carbon nanofibers (VACNFs), using a CMOS-temperature-compatible microfabrication process. The 5 μm long VACNFs, operating as electrodes, are grown on a silicon substrate and conformally coated with an aluminum oxide dielectric using the atomic layer deposition (ALD) technique. An areal (footprint) capacitance density of 11-15 nF/mm2 is realized with high reproducibility. The CMOS-temperature-compatible microfabrication, ultra-low profile (less than 7 μm thickness) and high capacitance density would enable direct integration of micro energy storage devices on the active CMOS chip, in multi-chip packages, and as passives on silicon or glass interposers. A model is developed to calculate the surface area of the VACNFs and the effective capacitance of the devices. It thereby shows that 71% of the surface area of the VACNFs contributes to the measured capacitance, and that by using the entire area the capacitance could potentially be increased.

  14. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    PubMed

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited processing capacity systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process sensory data in real time. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals and for shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators, for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.

  15. A CMOS ASIC Design for SiPM Arrays

    PubMed Central

    Dey, Samrat; Banks, Lushon; Chen, Shaw-Pin; Xu, Wenbin; Lewellen, Thomas K.; Miyaoka, Robert S.; Rudell, Jacques C.

    2012-01-01

    Our lab has previously reported on novel board-level readout electronics for an 8×8 silicon photomultiplier (SiPM) array featuring a row/column summation technique to reduce the hardware requirements for signal processing. We are taking the next step by implementing a monolithic CMOS chip which is based on the row-column architecture. In addition, this paper explores the option of using diagonal summation as well as calibration to compensate for temperature and process variations. A timing pickoff signal which aligns all of the positioning (spatial channel) pulses in the array is also described. The ASIC design is targeted to be scalable with the detector size and flexible to accommodate detectors from different vendors. This paper focuses on circuit implementation issues associated with the design of the ASIC to interface our Phase II MiCES FPGA board with a SiPM array. Moreover, a discussion is provided of strategies to eventually integrate all the analog and mixed-signal electronics with the SiPM, on either a single silicon substrate or a multi-chip module (MCM).

  16. An application protocol for CAD to CAD transfer of electronic information

    NASA Technical Reports Server (NTRS)

    Azu, Charles C., Jr.

    1993-01-01

    The exchange of Computer Aided Design (CAD) information between dissimilar CAD systems is a problem. This is especially true for transferring electronics CAD information such as multi-chip module (MCM), hybrid microcircuit assembly (HMA), and printed circuit board (PCB) designs. Currently, there exist several neutral data formats for transferring electronics CAD information. These include the IGES, EDIF, and DXF formats. All these formats have limitations for use in exchanging electronic data. In an attempt to overcome these limitations, the Navy's MicroCIM program implemented a project to transfer hybrid microcircuit design information between dissimilar CAD systems. The IGES (Initial Graphics Exchange Specification) format is used since it is well established within the CAD industry. The goal of the project is to have a complete transfer of microelectronic CAD information, using IGES, without any data loss. An Application Protocol (AP) is being developed to specify how hybrid microcircuit CAD information will be represented by IGES entity constructs. The AP defines which IGES data items are appropriate for describing HMA geometry, connectivity, and processing as well as HMA material characteristics.

  17. Development of a Comprehensive Digital Avionics Curriculum for the Aeronautical Engineer

    DTIC Science & Technology

    2006-03-01

    able to analyze and design aircraft and missile guidance and control systems, including feedback stabilization schemes and stochastic processes, using ...Uncertainty modeling for robust control; Robust closed-loop stability and performance; Robust H-infinity control; Robustness check using mu-analysis...Controlled feedback (reduces noise) 3. Statistical group response (reduces pressure toward conformity) When used as a tool to study a complex problem

  18. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
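    As a flavour of the nonparametric filtering step, the sketch below applies standard soft-threshold wavelet denoising with the universal threshold; this is a generic scheme, not necessarily the processing applied to the F-18 data.

        import numpy as np
        import pywt

        def wavelet_denoise(y, wavelet="db4", level=4):
            """Suppress broadband disturbances before modal estimation."""
            coeffs = pywt.wavedec(y, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise scale from finest details
            thr = sigma * np.sqrt(2.0 * np.log(len(y)))     # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(y)]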

  19. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)

  20. What Is Robustness?: Problem Framing Challenges for Water Systems Planning Under Change

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Reed, P. M.; Zeff, H. B.; Characklis, G. W.

    2014-12-01

    Water systems planners have long recognized the need for robust solutions capable of withstanding deviations from the conditions for which they were designed. Faced with a set of alternatives to choose from—for example, resulting from a multi-objective optimization—existing analysis frameworks offer competing definitions of robustness under change. Robustness analyses have moved from expected utility to exploratory "bottom-up" approaches in which vulnerable scenarios are identified prior to assigning likelihoods; examples include Robust Decision Making (RDM), Decision Scaling, Info-Gap, and Many-Objective Robust Decision Making (MORDM). We propose a taxonomy of robustness frameworks to compare and contrast these approaches, based on their methods of (1) alternative selection, (2) sampling of states of the world, (3) quantification of robustness measures, and (4) identification of key uncertainties using sensitivity analysis. Using model simulations from recent work in multi-objective urban water supply portfolio planning, we illustrate the decision-relevant consequences that emerge from each of these choices. Results indicate that the methodological choices in the taxonomy lead to substantially different planning alternatives, underscoring the importance of an informed definition of robustness. We conclude with a set of recommendations for problem framing: that alternatives should be searched rather than prespecified; dominant uncertainties should be discovered rather than assumed; and that a multivariate satisficing measure of robustness allows stakeholders to achieve their problem-specific performance requirements. This work highlights the importance of careful problem formulation, and provides a common vocabulary to link the robustness frameworks widely used in the field of water systems planning.

  1. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" by either domain knowledge or intuition, we explicitly optimize the filter on training time series to maximize the robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
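    The filter-then-threshold pipeline the authors build on can be sketched as follows, with a plain moving-average filter standing in for their optimized filter (which they obtain by solving an eigenvalue problem) and a simple prominence test standing in for their robustness criterion:

        import numpy as np
        from scipy.signal import argrelextrema

        def extrema_features(x, filt, thresh):
            """Smooth a series, then keep extrema whose prominence over
            their immediate neighbours exceeds a threshold."""
            y = np.convolve(x, filt, mode="same")
            idx = np.sort(np.concatenate([argrelextrema(y, np.greater)[0],
                                          argrelextrema(y, np.less)[0]]))
            keep = [i for i in idx
                    if abs(y[i] - 0.5 * (y[i - 1] + y[i + 1])) > thresh]
            return [(i, y[i]) for i in keep]

        x = np.cumsum(np.random.randn(500))               # synthetic series
        feats = extrema_features(x, np.ones(9) / 9, 0.05)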

  2. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    PubMed

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors while taking into account robust differential expression, cutoffs, p-values and the non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by the identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that accounts for any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.

  3. Robustness properties of discrete time regulators, LOG regulators and hybrid systems

    NASA Technical Reports Server (NTRS)

    Stein, G.; Athans, M.

    1979-01-01

    Robustness properties of sampled-data LQ regulators are derived which show that these regulators have fundamentally inferior uncertainty tolerances when compared to their continuous-time counterparts. Results are also presented in stability theory, multivariable frequency domain analysis, LQG robustness, and mathematical representations of hybrid systems.

  4. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  5. Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation

    PubMed Central

    Roberts, Seán G.

    2018-01-01

    This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data to link it with from other domains. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach which is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487

  6. Robustness analysis of a green chemistry-based model for the classification of silver nanoparticles synthesis processes

    EPA Science Inventory

    This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier develo...

  7. Robustness of Type I Error and Power in Set Correlation Analysis of Contingency Tables.

    ERIC Educational Resources Information Center

    Cohen, Jacob; Nee, John C. M.

    1990-01-01

    The analysis of contingency tables via set correlation allows the assessment of subhypotheses involving contrast functions of the categories of the nominal scales. The robustness of such methods with regard to Type I error and statistical power was studied via a Monte Carlo experiment. (TJH)

  8. Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis

    ERIC Educational Resources Information Center

    Young, Cristobal; Holsteen, Katherine

    2017-01-01

    Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…

  9. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    PubMed

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m≥50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through two meta-analyses.

  10. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies, that is, plans or policies that perform well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood-proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret-based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the performance of a candidate plan with the performance of other candidate plans across a large ensemble of plausible futures. Initial results suggest that the simplest satisficing metric, inspired by the signal-to-noise ratio, results in very risk-averse solutions. Other satisficing metrics, which handle the average performance and the dispersion around the average separately, provide substantial additional insights into the trade-off between the average performance and the dispersion around this average. In contrast, the regret-based metrics enhance insight into the relative merits of candidate plans, while being less clear on the average performance or the dispersion around this performance. These results suggest that it is beneficial to use multiple robustness metrics when doing a robust decision analysis study. Haasnoot, M., J. H. Kwakkel, W. E. Walker and J. Ter Maat (2013). "Dynamic Adaptive Policy Pathways: A New Method for Crafting Robust Decisions for a Deeply Uncertain World." Global Environmental Change 23(2): 485-498. Kwakkel, J. H., M. Haasnoot and W. E. Walker (2014). "Developing Dynamic Adaptive Policy Pathways: A computer-assisted approach for developing adaptive strategies for a deeply uncertain world." Climatic Change.

  11. Robust tracking control of a magnetically suspended rigid body

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.; Cox, David E.

    1994-01-01

    This study is an application of H-infinity and mu-synthesis for designing robust tracking controllers for the Large Angle Magnetic Suspension Test Facility. The modeling, design, analysis, simulation, and testing of a control law that guarantees tracking performance under external disturbances and model uncertainties is investigated. The type of uncertainties considered and the tracking performance metric used are discussed. This study demonstrates the tradeoff between tracking performance at low frequencies and robustness at high frequencies. Two sets of controllers were designed and tested. The first set emphasized performance over robustness, while the second set traded off performance for robustness. Comparisons of simulation and test results are also included. Current simulation and experimental results indicate that reasonably good robust tracking performance can be attained for this system using a multivariable robust control approach.

  12. Using Public Data for Comparative Proteome Analysis in Precision Medicine Programs.

    PubMed

    Hughes, Christopher S; Morin, Gregg B

    2018-03-01

    Maximizing the clinical utility of information obtained in longitudinal precision medicine programs would benefit from robust comparative analyses against known information, assessing biological features of patient material to identify the underlying features driving the disease phenotype. Herein, the potential for utilizing publicly deposited mass-spectrometry-based proteomics data to perform inter-study comparisons of cell-line or tumor-tissue materials is investigated. To investigate the robustness of comparison between MS-based proteomics studies carried out with different methodologies, deposited data representative of label-free (MS1) and isobaric tagging (MS2 and MS3 quantification) are utilized. In-depth quantitative proteomics data acquired from analysis of ovarian cancer cell lines revealed the robust recapitulation of observable gene expression dynamics between individual studies carried out using significantly different methodologies. The observed signatures enable robust inter-study clustering of cell line samples. In addition, the ability to classify and cluster tumor samples based on observed gene expression trends when using a single patient sample is established. With this analysis, relevant gene expression dynamics are obtained from a single patient tumor, in the context of a precision medicine analysis, by leveraging a large cohort of repository data as a comparator. Together, these data establish the potential for state-of-the-art MS-based proteomics data to serve as resources for robust comparative analyses in precision medicine applications.

  13. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
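    Our reading of the subspace-averaging construction admits a compact fixed-point sketch for the first component; the robust variants replace the plain average below with a trimmed average of the sign-aligned observations:

        import numpy as np

        def grassmann_average(X, iters=50, seed=0):
            """First Grassmann Average of the rows of a zero-mean matrix X:
            each row spans a 1-D subspace, and q converges to the average
            subspace (the leading principal component for Gaussian data)."""
            rng = np.random.default_rng(seed)
            q = rng.standard_normal(X.shape[1])
            q /= np.linalg.norm(q)
            for _ in range(iters):
                signs = np.sign(X @ q)   # align each observation with q
                q = X.T @ signs          # (norm-weighted) average direction
                q /= np.linalg.norm(q)
            return q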

  14. Robust 2DPCA with non-greedy l1 -norm maximization for image analysis.

    PubMed

    Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli

    2015-05-01

    2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied due to the difficulty of directly solving the l1-norm maximization problem; this strategy, however, easily gets stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.

  15. Analysis and Design of Launch Vehicle Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Du, Wei; Whorton, Mark

    2008-01-01

    This paper describes the fundamental principles of launch vehicle flight control analysis and design. In particular, the classical concept of "drift-minimum" and "load-minimum" control principles is re-examined and its performance and stability robustness with respect to modeling uncertainties and a gimbal angle constraint is discussed. It is shown that an additional feedback of angle-of-attack or lateral acceleration can significantly improve the overall performance and robustness, especially in the presence of unexpected large wind disturbance. Non-minimum-phase structural filtering of "unstably interacting" bending modes of large flexible launch vehicles is also shown to be effective and robust.

  16. Analysis of Infrared Signature Variation and Robust Filter-Based Supersonic Target Detection

    PubMed Central

    Sun, Sun-Gu; Kim, Kyung-Tae

    2014-01-01

    The difficulty of small infrared target detection originates from the variations of infrared signatures. This paper presents the fundamental physics of infrared target variations and reports the results of variation analysis of infrared images acquired using a long-wave infrared camera over a 24-hour period for different types of backgrounds. The detection parameters, such as the signal-to-clutter ratio, were compared according to the recording time, temperature and humidity. Through variation analysis, robust target detection methodologies are derived by controlling thresholds and designing a temporal contrast filter to achieve a high detection rate and a low false alarm rate. Experimental results validate the robustness of the proposed scheme by applying it to synthetic and real infrared sequences.

  17. Respiratory motion correction in dynamic MRI using robust data decomposition registration - application to DCE-MRI.

    PubMed

    Hamy, Valentin; Dikaios, Nikolaos; Punwani, Shonit; Melbourne, Andrew; Latifoltojar, Arash; Makanyanga, Jesica; Chouhan, Manil; Helbren, Emma; Menys, Alex; Taylor, Stuart; Atkinson, David

    2014-02-01

    Motion correction in Dynamic Contrast Enhanced (DCE-) MRI is challenging because rapid intensity changes can compromise common (intensity-based) registration algorithms. In this study we introduce a novel registration technique based on robust principal component analysis (RPCA) to decompose a given time-series into a low rank and a sparse component. This allows robust separation of motion components that can be registered, from intensity variations that are left unchanged. This Robust Data Decomposition Registration (RDDR) is demonstrated on both simulated data and a wide range of clinical data. Robustness to different types of motion and breathing choices during acquisition is demonstrated for a variety of imaged organs including liver, small bowel and prostate. The analysis of clinically relevant regions of interest showed both a decrease of error (15-62% reduction following registration) in tissue time-intensity curves and improved areas under the curve (AUC60) at early enhancement.
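    The low-rank/sparse split at the heart of RDDR is usually posed as principal component pursuit; below is a generic inexact-ALM sketch (not the authors' implementation), where the low-rank part L captures slowly varying structure to be registered and the sparse part S absorbs rapid contrast-enhancement changes:

        import numpy as np

        def svt(A, tau):
            """Singular value thresholding."""
            U, s, Vh = np.linalg.svd(A, full_matrices=False)
            return (U * np.maximum(s - tau, 0.0)) @ Vh

        def rpca_pcp(M, iters=200, tol=1e-7):
            """Split M into low-rank L and sparse S with M = L + S."""
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n))       # standard PCP weight
            mu = 0.25 * m * n / np.abs(M).sum()  # penalty parameter
            Y, S = np.zeros_like(M), np.zeros_like(M)
            for _ in range(iters):
                L = svt(M - S + Y / mu, 1.0 / mu)
                T = M - L + Y / mu
                S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
                resid = M - L - S
                Y += mu * resid
                if np.linalg.norm(resid) < tol * np.linalg.norm(M):
                    break
            return L, S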

  18. Investigation on changes of modularity and robustness by edge-removal mutations in signaling networks.

    PubMed

    Truong, Cong-Doan; Kwon, Yung-Keun

    2017-12-21

    Biological networks consisting of molecular components and interactions are represented by a graph model. There have been some studies based on that model to analyze the relationship between structural characteristics and dynamical behaviors in signaling networks. However, little attention has been paid to changes of modularity and robustness in mutant networks. In this paper, we investigated the changes of modularity and robustness caused by edge-removal mutations in three signaling networks. We first observed that both the modularity and robustness increased on average in the mutant networks under edge-removal mutations. However, the modularity change was negatively correlated with the robustness change. This implies that it is unlikely that both the modularity and the robustness values simultaneously increase under edge-removal mutations. Another interesting finding is that the modularity change was positively correlated with the degree, the number of feedback loops, and the edge betweenness of the removed edges, whereas the robustness change was negatively correlated with them. We note that these results were consistently observed in randomly structured networks. Additionally, we identified two groups of genes which are incident to the highly-modularity-increasing and the highly-robustness-decreasing edges with respect to the edge-removal mutations, respectively, and observed that they are likely to be central, forming a connected component of considerably large size. The gene-ontology enrichment of each of these gene groups was significantly different from the rest of the genes. Finally, we showed that the highly-robustness-decreasing edges can be promising edgetic drug targets, which validates the usefulness of our analysis. Taken together, the analysis of changes of robustness and modularity against edge-removal mutations can be useful for unraveling novel dynamical characteristics underlying signaling networks.

  19. A Merged IQC/SOS Theory for Analysis and Synthesis of Nonlinear Control Systems

    DTIC Science & Technology

    2015-06-23

    constraints. As mentioned previously, this enables new applications of IQCs to analyze the robustness of time-varying and nonlinear systems. ...This section considers the analysis of nonlinear systems...AFRL-AFOSR-VA-TR-2016-0008 A Merged IQC/SOS Theory for Analysis and Synthesis of Nonlinear Control Systems Gary Balas REGENTS OF THE UNIVERSITY OF

  20. Miniature X-Ray Bone Densitometer

    NASA Technical Reports Server (NTRS)

    Charles, Harry K., Jr.

    1999-01-01

    The purpose of the Dual Energy X-ray Absorptiometry (DEXA) project is to design, build, and test an advanced X-ray absorptiometry scanner capable of being used to monitor the deleterious effects of weightlessness on the human musculoskeletal system during prolonged spaceflight. The instrument is based on the principles of dual energy x-ray absorptiometry and is designed not only to measure bone, muscle, and fat masses but also to generate structural information about these tissues so that the effects on mechanical integrity may be assessed using biomechanical principles. A skeletal strength assessment could be particularly important for an astronaut embarking on a remote planet where the consequences of a fragility fracture may be catastrophic. The scanner will employ multiple projection images about the long axis of the scanned subject to provide geometric properties in three dimensions, suitable for a three-dimensional structural analysis of the scanned region. The instrument will employ advanced fabrication techniques to minimize volume and mass (100 kg current target with a long-term goal of 60 kg) of the scanner as appropriate for the space environment, while maintaining the required mechanical stability for high precision measurement. The unit will have the precision required to detect changes in bone mass and geometry as small as 1% and changes in muscle mass as small as 5%. As the system evolves, advanced electronic fabrication technologies such as chip-on-board and multichip modules will be combined with commercial (off-the-shelf) parts to produce a reliable, integrated system which not only minimizes size and weight, but, because of its simplicity, is also cost effective to build and maintain. Additionally, the system is being designed to minimize power consumption. Methods of heat dissipation and mechanical stowage (for the unit when not in use) are being optimized for the space environment.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.

  2. Robustness surfaces of complex networks

    NASA Astrophysics Data System (ADS)

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.

  3. Robustness surfaces of complex networks.

    PubMed

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-02

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
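    A minimal sketch of the PCA projection behind the R*-value, under our reading of the construction (metrics pre-scaled so that the intact network scores 1; one failure realization per call; stacking realizations row-wise gives the surface Ω):

        import numpy as np

        def r_star(R):
            """R[f, m]: robustness metric m measured after removing a
            fraction f of elements. Project the metric vectors onto their
            first principal component to get one R*-value per failure level."""
            Rc = R - R.mean(axis=0)                    # centre the metrics
            _, _, Vh = np.linalg.svd(Rc, full_matrices=False)
            return R @ Vh[0]                           # scores along the first PC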

  4. Developing Uncertainty Models for Robust Flutter Analysis Using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Potter, Starr; Lind, Rick; Kehoe, Michael W. (Technical Monitor)

    2001-01-01

    A ground vibration test can be used to obtain information about structural dynamics that is important for flutter analysis. Traditionally, this information, such as natural frequencies of modes, is used to update analytical models used to predict flutter speeds. The ground vibration test can also be used to obtain uncertainty models, such as natural frequencies and their associated variations, that can update analytical models for the purpose of predicting robust flutter speeds. Analyzing test data using the ∞-norm, rather than the traditional 2-norm, is shown to lead to a minimum-size uncertainty description and, consequently, a least-conservative robust flutter speed. This approach is demonstrated using ground vibration test data for the Aerostructures Test Wing. Different norms are used to formulate uncertainty models and their associated robust flutter speeds to evaluate which norm is least conservative.

  5. Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

    ERIC Educational Resources Information Center

    Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

    2011-01-01

    Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…

  6. Topological robustness analysis of protein interaction networks reveals key targets for overcoming chemotherapy resistance in glioma

    NASA Astrophysics Data System (ADS)

    Azevedo, Hátylas; Moreira-Filho, Carlos Alberto

    2015-11-01

    Biological networks display high robustness against random failures but are vulnerable to targeted attacks on central nodes. Thus, network topology analysis represents a powerful tool for investigating network susceptibility against targeted node removal. Here, we built protein interaction networks associated with chemoresistance to temozolomide, an alkylating agent used in glioma therapy, and analyzed their modular structure and robustness against intentional attack. These networks showed functional modules related to DNA repair, immunity, apoptosis, cell stress, proliferation and migration. Subsequently, network vulnerability was assessed by means of centrality-based attacks based on the removal of node fractions in descending orders of degree, betweenness, or the product of degree and betweenness. This analysis revealed that removing nodes with high degree and high betweenness was more effective in altering networks’ robustness parameters, suggesting that their corresponding proteins may be particularly relevant to target temozolomide resistance. In silico data was used for validation and confirmed that central nodes are more relevant for altering proliferation rates in temozolomide-resistant glioma cell lines and for predicting survival in glioma patients. Altogether, these results demonstrate how the analysis of network vulnerability to topological attack facilitates target prioritization for overcoming cancer chemoresistance.
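    Centrality-based attack simulations of this kind are straightforward to reproduce; the sketch below (networkx, invented parameters, undirected graph assumed) removes the top-ranked fraction of nodes and reports the surviving giant-component fraction:

        import networkx as nx

        def attack(G, frac=0.1, key="degree"):
            """Remove the top `frac` of nodes by a centrality score and
            return the relative size of the remaining largest component."""
            if key == "degree":
                score = dict(G.degree())
            elif key == "betweenness":
                score = nx.betweenness_centrality(G)
            else:  # product of degree and betweenness, as in the study
                bc = nx.betweenness_centrality(G)
                score = {v: G.degree(v) * bc[v] for v in G}
            H = G.copy()
            top = sorted(score, key=score.get, reverse=True)
            H.remove_nodes_from(top[:int(frac * G.number_of_nodes())])
            giant = max(nx.connected_components(H), key=len)
            return len(giant) / G.number_of_nodes()

        G = nx.barabasi_albert_graph(500, 3)   # stand-in for a PPI network
        print(attack(G, 0.1, "betweenness"))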

  7. Gradient descent for robust kernel-based regression

    NASA Astrophysics Data System (ADS)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the mini-max sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
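    As an illustration of the algorithm class analyzed here, the sketch below runs functional gradient descent in a Gaussian RKHS with a Welsch-type windowed loss, the fixed iteration budget playing the role of early stopping (all parameter values invented):

        import numpy as np

        def robust_kernel_gd(X, y, sigma=1.0, gamma=1.0, eta=0.5, steps=200):
            """Gradient descent for kernel regression with the robust loss
            V(r) = (sigma^2/2) * (1 - exp(-r^2/sigma^2)); its re-descending
            score psi(r) = r * exp(-r^2/sigma^2) downweights large residuals."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-gamma * d2)              # Gaussian kernel matrix
            alpha = np.zeros(len(y))             # f = K @ alpha
            for _ in range(steps):
                r = y - K @ alpha                # residuals
                psi = r * np.exp(-r**2 / sigma**2)
                alpha += eta * psi / len(y)      # functional gradient step
            return alpha, K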

  8. Robustness of meta-analyses in finding gene × environment interactions

    PubMed Central

    Shi, Gang; Nehorai, Arye

    2017-01-01

    Meta-analyses that synthesize statistical evidence across studies have become important analytical tools for genetic studies. Inspired by the success of genome-wide association studies of the genetic main effect, researchers are searching for gene × environment interactions. Confounders are routinely included in the genome-wide gene × environment interaction analysis as covariates; however, this does not control for any confounding effects on the results if covariate × environment interactions are present. We carried out simulation studies to evaluate the robustness to the covariate × environment confounder for meta-regression and joint meta-analysis, which are two commonly used meta-analysis methods for testing the gene × environment interaction or the genetic main effect and interaction jointly. Here we show that meta-regression is robust to the covariate × environment confounder while joint meta-analysis is subject to the confounding effect with inflated type I error rates. Given vast sample sizes employed in genome-wide gene × environment interaction studies, non-significant covariate × environment interactions at the study level could substantially elevate the type I error rate at the consortium level. When covariate × environment confounders are present, type I errors can be controlled in joint meta-analysis by including the covariate × environment terms in the analysis at the study level. Alternatively, meta-regression can be applied, which is robust to potential covariate × environment confounders. PMID:28362796

  9. Investigation of progressive failure robustness and alternate load paths for damage tolerant structures

    NASA Astrophysics Data System (ADS)

    Marhadi, Kun Saptohartyadi

    Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant. The alternate load paths must have a required minimum load capability. Robustness analysis of damage tolerant optimum designs indicates that designs are tailored to specified damage. A design optimized under one damage specification can be sensitive to other damage scenarios not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.

  10. Robust L1-norm two-dimensional linear discriminant analysis.

    PubMed

    Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang

    2015-05-01

    In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transferred to a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. NASA Tech Briefs, June 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Device for Measuring Low Flow Speed in a Duct, Measuring Thermal Conductivity of a Small Insulation Sample, Alignment Jig for the Precise Measurement of THz Radiation, Autoignition Chamber for Remote Testing of Pyrotechnic Devices, Microwave Power Combiners for Signals of Arbitrary Amplitude, Synthetic Foveal Imaging Technology, Airborne Antenna System for Minimum-Cycle-Slip GPS Reception, Improved Starting Materials for Back-Illuminated Imagers, Multi-Modulator for Bandwidth-Efficient Communication, Some Improvements in Utilization of Flash Memory Devices, GPS/MEMS IMU/Microprocessor Board for Navigation, T/R Multi-Chip MMIC Modules for 150 GHz, Pneumatic Haptic Interfaces, Device Acquires and Retains Rock or Ice Samples, Cryogenic Feedthrough Test Rig, Improved Assembly for Gas Shielding During Welding or Brazing, Two-Step Plasma Process for Cleaning Indium Bonding Bumps, Tool for Crimping Flexible Circuit Leads, Yb14MnSb11 as a High-Efficiency Thermoelectric Material, Polyimide-Foam/Aerogel Composites for Thermal Insulation, Converting CSV Files to RKSML Files, Service Management Database for DSN Equipment, Chemochromic Hydrogen Leak Detectors, Compatibility of Segments of Thermoelectric Generators, Complementary Barrier Infrared Detector, JPL Greenland Moulin Exploration Probe, Ultra-Lightweight Self-Deployable Nanocomposite Structure for Habitat Applications, and Room-Temperature Ionic Liquids for Electrochemical Capacitors.

  12. High-performance packaging for monolithic microwave and millimeter-wave integrated circuits

    NASA Technical Reports Server (NTRS)

    Shalkhauser, K. A.; Li, K.; Shih, Y. C.

    1992-01-01

    Packaging schemes were developed that provide low-loss, hermetic enclosure for advanced monolithic microwave and millimeter-wave integrated circuits (MMICs). The package designs are based on a fused quartz substrate material that offers improved radio frequency (RF) performance through 44 gigahertz (GHz). The small size and weight of the packages make them appropriate for a variety of applications, including phased array antenna systems. Packages were designed in two forms: one for housing a single MMIC chip, the second in the form of a multi-chip phased array module. The single-chip package was developed in three separate sizes, for chips of different geometry and frequency requirements. The phased array module was developed to address packaging directly for antenna applications, and includes transmission line and interconnect structures to support multi-element operation. All packages are fabricated using fused quartz substrate materials. As part of the packaging effort, a test fixture was developed to interface the single chip packages to conventional laboratory instrumentation for characterization of the packaged devices. The package and test fixture designs were both developed in a generic sense, optimizing performance for a wide range of possible applications and devices.

  13. Towards co-packaging of photonics and microelectronics in existing manufacturing facilities

    NASA Astrophysics Data System (ADS)

    Janta-Polczynski, Alexander; Cyr, Elaine; Bougie, Jerome; Drouin, Alain; Langlois, Richard; Childers, Darrell; Takenobu, Shotaro; Taira, Yoichi; Lichoulas, Ted W.; Kamlapurkar, Swetha; Engelmann, Sebastian; Fortier, Paul; Boyer, Nicolas; Barwicz, Tymon

    2018-02-01

    The impact of integrated photonics on optical interconnects is currently muted by challenges in photonic packaging and in the dense integration of photonic modules with microelectronic components on printed circuit boards. Single mode optics requires tight alignment tolerance for optical coupling and maintaining this alignment in a cost-efficient package can be challenging during thermal excursions arising from downstream microelectronic assembly processes. In addition, the form factor of typical fiber connectors is incompatible with the dense module integration expected on printed circuit boards. We have implemented novel approaches to interfacing photonic chips to standard optical fibers. These leverage standard high throughput microelectronic assembly tooling and self-alignment techniques resulting in photonic packaging that is scalable in manufacturing volume and in the number of optical IOs per chip. In addition, using dense optical fiber connectors with space-efficient latching of fiber patch cables results in compact module size and efficient board integration, bringing the optics closer to the logic chip to alleviate bandwidth bottlenecks. This packaging direction is also well suited for embedding optics in multi-chip modules, including both photonic and microelectronic chips. We discuss the challenges and rewards in this type of configuration such as thermal management and signal integrity.

  14. Highly Efficient Small Form Factor LED Retrofit Lamp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steven Allen; Fred Palmer; Ming Li

    2011-09-11

    This report summarizes work to develop a high efficiency LED-based MR16 lamp downlight at OSRAM SYLVANIA under US Department of Energy contract DE-EE0000611. A new multichip LED package, electronic driver, and reflector optic were developed for these lamps. At steady state, the lamp produced a luminous flux of 409 lumens (lm), a luminous efficacy of 87 lumens per watt (LPW), a CRI (Ra) of 87, and an R9 of 85 at a correlated color temperature (CCT) of 3285 K. The LED alone achieved 120 lumens per watt efficacy and 600 lumen flux output at 25 °C. The driver had 90% electrical conversion efficiency while maintaining excellent power quality, with power factor >0.90 at a power of only 5 watts. Compared to similar existing MR16 lamps using LED sources, these lamps had much higher efficacy and color quality. The objective of this work was to demonstrate a LED-based MR16 retrofit lamp for replacement of 35W halogen MR16 lamps having (1) luminous flux of 500 lumens, (2) luminous efficacy of 100 lumens per watt, (3) beam angle less than 40° and center beam candlepower of at least 1000 candelas, and (4) excellent color quality.

  15. Micromechanical die attachment surcharge

    DOEpatents

    Filter, William F.; Hohimer, John P.

    2002-01-01

    An attachment structure is disclosed for attaching a die to a supporting substrate without the use of adhesives or solder. The attachment structure, which can be formed by micromachining, functions purely mechanically in utilizing a plurality of shaped pillars (e.g. round, square or polygonal and solid, hollow or slotted) that are formed on one of the die or supporting substrate and which can be urged into contact with various types of mating structures including other pillars, a deformable layer or a plurality of receptacles that are formed on the other of the die or supporting substrate, thereby forming a friction bond that holds the die to the supporting substrate. The attachment structure can further include an alignment structure for precise positioning of the die and supporting substrate to facilitate mounting the die to the supporting substrate. The attachment structure has applications for mounting semiconductor die containing a microelectromechanical (MEM) device, a microsensor or an integrated circuit (IC), and can be used to form a multichip module. The attachment structure is particularly useful for mounting die containing released MEM devices since these devices are fragile and can otherwise be damaged or degraded by adhesive or solder mounting.

  16. Optomechanical Design and Characterization of a Printed-Circuit-Board-Based Free-Space Optical Interconnect Package

    NASA Astrophysics Data System (ADS)

    Zheng, Xuezhe; Marchand, Philippe J.; Huang, Dawei; Kibar, Osman; Ozkan, Nur S. E.; Esener, Sadik C.

    1999-09-01

    We present a proof of concept and a feasibility demonstration of a practical packaging approach in which free-space optical interconnects (FSOIs) can be integrated simply on electronic multichip modules (MCMs) for intra-MCM board interconnects. Our system-level packaging architecture is based on a modified folded 4f imaging system that has been implemented with only off-the-shelf optics, conventional electronic packaging, and passive-assembly techniques to yield a potentially low-cost and manufacturable packaging solution. The prototypical system as built supports 48 independent FSOI channels with 8 separate laser and detector chips, each of which consists of a one-dimensional array of 12 devices. All the chips are assembled on a single substrate that consists of a printed circuit board or a ceramic MCM. Optical link channel efficiencies of greater than 90% and interchannel cross talk of less than -20 dB at low frequency have been measured. The system is compact at only 10 in.³ (25.4 cm³) and is scalable, as it can easily accommodate additional chips as well as two-dimensional optoelectronic device arrays for increased interconnection density.

  17. When Can Categorical Variables Be Treated as Continuous? A Comparison of Robust Continuous and Categorical SEM Estimation Methods under Suboptimal Conditions

    ERIC Educational Resources Information Center

    Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria

    2012-01-01

    A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…

  18. Robust dynamic inversion controller design and analysis (using the X-38 vehicle as a case study)

    NASA Astrophysics Data System (ADS)

    Ito, Daigoro

    A new way to approach robust Dynamic Inversion controller synthesis is addressed in this paper. A Linear Quadratic Gaussian outer-loop controller improves the robustness of a Dynamic Inversion inner-loop controller in the presence of uncertainties. Desired dynamics are given by the dynamic compensator, which shapes the loop. The selected dynamics are based on both performance and stability robustness requirements. These requirements are straightforwardly formulated as frequency-dependent singular value bounds during synthesis of the controller. Performance and robustness of the designed controller are tested using a worst-case time-domain quadratic index, which is a simple but effective way to measure robustness due to parameter variation. Using this approach, a lateral-directional controller for the X-38 vehicle is designed and its robustness to parameter variations and disturbances is analyzed. It is found that if full state measurements are available, the performance of the designed lateral-directional control system, measured by the chosen cost function, improves by approximately a factor of four. Also, it is found that the designed system is stable up to a parametric variation of 1.65 standard deviations with the set of uncertainty considered. The system robustness is determined to be highly sensitive to the dihedral derivative and the roll damping coefficients. The controller analysis is extended to the nonlinear system where both control input displacements and rates are bounded. In this case, the considered nonlinear system is stable up to 48.1° in bank angle and 1.59° in sideslip angle variations, indicating it is more sensitive to variations in sideslip angle than in bank angle. This nonlinear approach is further extended for the actuator failure mode analysis. The results suggest that the designed system maintains a high level of stability in the event of aileron failure. However, only 35% or less of the original stability range is maintained for the rudder failure case. Overall, this combination of controller synthesis and robustness criteria compares well with the mu-synthesis technique. It is also readily accessible to the practicing engineer, in terms of understanding and use.

  19. Robustness analysis of elastoplastic structure subjected to double impulse

    NASA Astrophysics Data System (ADS)

    Kanno, Yoshihiro; Takewaki, Izuru

    2016-11-01

    The double impulse has extensively been used to evaluate the critical response of an elastoplastic structure against a pulse-type input, including near-fault earthquake ground motions. In this paper, we propose a robustness assessment method for elastoplastic single-degree-of-freedom structures subjected to the double impulse input. Uncertainties in the initial velocity of the input, as well as the natural frequency and the strength of the structure, are considered. As fundamental properties of the structural robustness, we show monotonicity of the robustness measure with respect to the natural frequency. In contrast, we show that robustness is not necessarily improved even if the structural strength is increased. Moreover, the robustness preference between two structures with different values of structural strength can possibly reverse when the performance requirement is changed.

  20. Practical robustness measures in multivariable control system analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.

    1981-01-01

    The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single input, single output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those robustness tests that do not. The robustness of linear quadratic Gaussian control systems is analyzed.
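
    A brief numpy illustration of the key quantity: the minimum singular value of the return difference matrix I + L(jω), swept over frequency as a multivariable stability margin; the 2x2 loop transfer matrix here is invented for demonstration and is not from the thesis:

        import numpy as np

        # Illustrative 2x2 loop transfer matrix L(s); the plant is made up.
        def L(s):
            return np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                             [0.1 / (s + 1.0), 2.0 / (s * (s + 3.0))]])

        freqs = np.logspace(-2, 2, 400)
        margins = [np.linalg.svd(np.eye(2) + L(1j * w), compute_uv=False).min()
                   for w in freqs]
        print("multivariable stability margin ~ min sigma_min(I+L):",
              round(min(margins), 3))

    The smaller this minimum over frequency, the closer the loop is to the multivariable analogue of the critical point.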

  1. The Robustness Analysis of Wireless Sensor Networks under Uncertain Interference

    PubMed Central

    Deng, Changjian

    2013-01-01

    Based on complex network theory, a robustness analysis of a condition-monitoring wireless sensor network under uncertain interference is presented. In the evolution of the topology of sensor networks, the density-weighted algebraic connectivity is taken into account, and the phenomenon of removing and repairing links and nodes in the network is discussed. Numerical simulation is conducted to explore algebraic connectivity characteristics and network robustness performance. It is found that node density affects the algebraic connectivity distribution in the random graph model; high-density nodes carry more connections, consume more throughput, and may be less reliable. Moreover, the results show that, when the network should be made more error tolerant or robust by repairing nodes or adding new nodes, medium- and large-scale wireless sensor networks should be well clustered, while small-scale networks should adopt a mesh topology. PMID:24363613
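
    A small sketch of the kind of experiment described, assuming a random geometric graph as the sensor field and tracking the algebraic connectivity (Fiedler value) as a link fails and a repair link is added; the graph model and parameters are illustrative assumptions:

        import numpy as np
        import networkx as nx

        def fiedler_value(G):
            """Second-smallest Laplacian eigenvalue (algebraic connectivity);
            close to 0 when the network is (nearly) disconnected."""
            lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray())
            return lam[1]

        rng = np.random.default_rng(2)
        G = nx.random_geometric_graph(60, 0.25, seed=2)   # stand-in sensor field
        print("intact:", round(fiedler_value(G), 4))

        # Interference removes a random link; a repair adds a fresh one.
        u, v = list(G.edges())[rng.integers(G.number_of_edges())]
        G.remove_edge(u, v)
        print("after link failure:", round(fiedler_value(G), 4))
        a, b = rng.choice(G.number_of_nodes(), size=2, replace=False)
        G.add_edge(int(a), int(b))
        print("after repair:", round(fiedler_value(G), 4))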

  2. A comparative robustness evaluation of feedforward neurofilters

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Merrill, Walter

    1993-01-01

    A comparative performance and robustness analysis is provided for feedforward neurofilters trained with back propagation to filter additive white noise. The signals used in this analysis are simulated pitch rate responses to typical pilot command inputs for a modern fighter aircraft model. Various configurations of nonlinear and linear neurofilters are trained to estimate exact signal values from input sequences of noisy sampled signal values. In this application, nonlinear neurofiltering is found to be more efficient than linear neurofiltering in removing the noise from responses of the nominal vehicle model, whereas linear neurofiltering is found to be more robust in the presence of changes in the vehicle dynamics. The possibility of enhancing neurofiltering through hybrid architectures based on linear and nonlinear neuroprocessing is therefore suggested as a way of taking advantage of the robustness of linear neurofiltering, while maintaining the nominal performance advantage of nonlinear neurofiltering.

  3. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.

  4. Uncertainty analysis and robust trajectory linearization control of a flexible air-breathing hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang

    2014-08-01

    Flexible air-breathing hypersonic vehicles feature significant uncertainties, which pose huge challenges to robust controller designs. In this paper, four major categories of uncertainties are analyzed, namely uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three uncertainties which lumps all uncertainties together and is consequently beneficial for controller synthesis. The fourth uncertainty is additionally considered in stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. Particularly, the stability of the nonlinear ESO is also discussed from a Liénard system perspective. Finally, simulations demonstrate the control performance and the uncertainty rejection ability of the robust scheme.
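
    A minimal sketch of the ESO idea used for uncertainty compensation, assuming a linear extended state observer on a double-integrator toy plant; the plant, disturbance, and bandwidth tuning are illustrative assumptions, not the paper's vehicle model:

        import numpy as np

        # Plant: x1' = x2, x2' = u + d(t), with d the lumped uncertainty.
        # The ESO augments the state with z3 ~ d and estimates it from y = x1.
        dt, T = 0.001, 5.0
        wo = 20.0                                    # observer bandwidth
        l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3        # standard bandwidth tuning

        x = np.zeros(2)
        z = np.zeros(3)                              # [x1_hat, x2_hat, d_hat]
        for k in range(int(T / dt)):
            t = k * dt
            d = 1.0 + 0.5 * np.sin(2 * t)            # unknown disturbance
            u = 0.0                                  # open loop, observe only
            # plant step (Euler)
            x += dt * np.array([x[1], u + d])
            # observer step
            e = x[0] - z[0]
            z += dt * np.array([z[1] + l1 * e,
                                z[2] + u + l2 * e,
                                l3 * e])
        print("true d(T) = %.3f, estimated d(T) = %.3f" % (d, z[2]))

    In a TLC-style loop, the estimate z[2] would be subtracted from the control input to cancel the lumped uncertainty.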

  5. Thermotaxis is a Robust Mechanism for Thermoregulation in C. elegans Nematodes

    PubMed Central

    Ramot, Daniel; MacInnis, Bronwyn L.; Lee, Hau-Chen; Goodman, Miriam B.

    2013-01-01

    Many biochemical networks are robust to variations in network or stimulus parameters. Although robustness is considered an important design principle of such networks, it is not known whether this principle also applies to higher-level biological processes such as animal behavior. In thermal gradients, C. elegans uses thermotaxis to bias its movement along the direction of the gradient. Here we develop a detailed, quantitative map of C. elegans thermotaxis and use these data to derive a computational model of thermotaxis in the soil, a natural environment of C. elegans. This computational analysis indicates that thermotaxis enables animals to avoid temperatures at which they cannot reproduce, to limit excursions from their adapted temperature, and to remain relatively close to the surface of the soil, where oxygen is abundant. Furthermore, our analysis reveals that this mechanism is robust to large variations in the parameters governing both worm locomotion and temperature fluctuations in the soil. We suggest that, similar to biochemical networks, animals evolve behavioral strategies that are robust, rather than strategies that rely on fine-tuning of specific behavioral parameters. PMID:19020047

  6. Robustness surfaces of complex networks

    PubMed Central

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-01-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first normalize the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failure and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared. PMID:25178402
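
    A compact sketch of the workflow, assuming two example robustness metrics (relative giant component size and global efficiency) evaluated over failure fractions and realizations, with PCA identifying the most informative metric; the graph and the metric set are assumptions, and the R*-value construction itself is not reproduced:

        import numpy as np
        import networkx as nx

        def metrics(G, n0):
            if G.number_of_nodes() == 0:
                return [0.0, 0.0]
            lcc = max(nx.connected_components(G), key=len)
            return [len(lcc) / n0, nx.global_efficiency(G)]

        rng = np.random.default_rng(3)
        G0 = nx.erdos_renyi_graph(150, 0.04, seed=3)
        rows = []
        for frac in np.linspace(0.05, 0.5, 10):          # failure percentages
            for _ in range(20):                          # failure realizations
                H = G0.copy()
                kill = rng.choice(150, size=int(frac * 150), replace=False)
                H.remove_nodes_from(int(i) for i in kill)
                rows.append(metrics(H, 150))

        X = np.array(rows)
        Xc = (X - X.mean(0)) / X.std(0)                  # standardize metrics
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        print("PCA loadings of first component:", np.round(Vt[0], 3))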

  7. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
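
    A short statsmodels sketch of the proposal, comparing ordinary least squares to a robust linear model with a Huber norm on simulated heavy-tailed expression data; the effect sizes, the age covariate, and the t-distributed noise are invented for illustration:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 300
        dosage = rng.integers(0, 3, n).astype(float)     # allelic dosage 0/1/2
        age = rng.normal(50, 10, n)                      # example covariate
        expr = 0.4 * dosage + 0.02 * age + rng.standard_t(df=3, size=n)

        X = sm.add_constant(np.column_stack([dosage, age]))
        ols = sm.OLS(expr, X).fit()
        rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()
        print("OLS dosage beta: %.3f (se %.3f)" % (ols.params[1], ols.bse[1]))
        print("RLM dosage beta: %.3f (se %.3f)" % (rlm.params[1], rlm.bse[1]))

    Under t-distributed noise the robust fit typically yields a smaller standard error for the dosage effect, which is the power gain the abstract refers to.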

  8. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to construct protocols aiding plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  9. Identification and robust control of an experimental servo motor.

    PubMed

    Adam, E J; Guestrin, E D

    2002-04-01

    In this work, the design of a robust controller for an experimental laboratory-scale position control system based on a dc motor drive as well as the corresponding identification and robust stability analysis are presented. In order to carry out the robust design procedure, first, a classic closed-loop identification technique is applied and then, the parametrization by internal model control is used. The model uncertainty is evaluated under both parametric and global representation. For the latter case, an interesting discussion about the conservativeness of this description is presented by means of a comparison between the uncertainty disk and the critical perturbation radius approaches. Finally, conclusions about the performance of the experimental system with the robust controller are discussed using comparative graphics of the controlled variable and the Nyquist stability margin as a robustness measurement.

  10. The Influence Function of Principal Component Analysis by Self-Organizing Rule.

    PubMed

    Higuchi; Eguchi

    1998-07-28

    This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule has been proposed and its robustness observed through the simulation study by Xu and Yuille (1995). In this article, the robustness of the algorithm against outliers is investigated by using the theory of influence function. The influence function of the principal component vector is given in an explicit form. Through this expression, the method is shown to be robust against any directions orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.

  11. Sample manipulation and data assembly for robust microcrystal synchrotron crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Gongrui; Fuchs, Martin R.; Shi, Wuxian

    With the recent developments in microcrystal handling, synchrotron microdiffraction beamline instrumentation and data analysis, microcrystal crystallography with crystal sizes of less than 10 µm is appealing at synchrotrons. However, challenges remain in sample manipulation and data assembly for robust microcrystal synchrotron crystallography. Here, the development of micro-sized polyimide well-mounts for the manipulation of microcrystals of a few micrometres in size and the implementation of a robust data-analysis method for the assembly of rotational microdiffraction data sets from many microcrystals are described. The method demonstrates that microcrystals may be routinely utilized for the acquisition and assembly of complete data sets from synchrotron microdiffraction beamlines.

  12. Sample manipulation and data assembly for robust microcrystal synchrotron crystallography

    DOE PAGES

    Guo, Gongrui; Fuchs, Martin R.; Shi, Wuxian; ...

    2018-04-19

    With the recent developments in microcrystal handling, synchrotron microdiffraction beamline instrumentation and data analysis, microcrystal crystallography with crystal sizes of less than 10 µm is appealing at synchrotrons. However, challenges remain in sample manipulation and data assembly for robust microcrystal synchrotron crystallography. Here, the development of micro-sized polyimide well-mounts for the manipulation of microcrystals of a few micrometres in size and the implementation of a robust data-analysis method for the assembly of rotational microdiffraction data sets from many microcrystals are described. The method demonstrates that microcrystals may be routinely utilized for the acquisition and assembly of complete data sets from synchrotron microdiffraction beamlines.

  13. Efficient and robust analysis of complex scattering data under noise in microwave resonators.

    PubMed

    Probst, S; Song, F B; Bushev, P A; Ustinov, A V; Weides, M

    2015-02-01

    Superconducting microwave resonators are reliable circuits widely used for detection and as test devices for material research. A reliable determination of their external and internal quality factors is crucial for many modern applications, which either require fast measurements or operate in the single-photon regime with small signal-to-noise ratios. Here, we use the circle fit technique with diameter correction and provide a step-by-step guide for implementing an algorithm for robust fitting and calibration of complex resonator scattering data in the presence of noise. The speedup and robustness of the analysis are achieved by employing an algebraic rather than an iterative fit technique for the resonance circle.
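
    For illustration, a generic algebraic (Kåsa-style) least-squares circle fit to complex scattering samples; this shows only the non-iterative fitting idea and omits the diameter correction and calibration steps the paper describes, with all data synthetic:

        import numpy as np

        def kasa_circle_fit(z):
            """Algebraic least-squares circle fit to complex points z."""
            x, y = z.real, z.imag
            A = np.column_stack([x, y, np.ones_like(x)])
            a, b, c = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
            xc, yc = -a / 2, -b / 2
            r = np.sqrt(xc**2 + yc**2 - c)
            return complex(xc, yc), r

        # Synthetic noisy resonance circle in the complex S21 plane.
        rng = np.random.default_rng(5)
        theta = rng.uniform(0, 2 * np.pi, 500)
        true_c, true_r = 0.5 + 0.1j, 0.3
        z = true_c + true_r * np.exp(1j * theta)
        z += 0.01 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))

        center, radius = kasa_circle_fit(z)
        print("center %.3f%+.3fj  radius %.3f" % (center.real, center.imag, radius))

    Because the fit reduces to one linear least-squares solve, it is fast and noise-tolerant, which is the speedup the abstract attributes to the algebraic approach.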

  14. Mechanical Unloading of Mouse Bone in Microgravity Significantly Alters Cell Cycle Gene Set Expression

    NASA Astrophysics Data System (ADS)

    Blaber, Elizabeth; Dvorochkin, Natalya; Almeida, Eduardo; Kaplan, Warren; Burns, Brendan

    2012-07-01

    Spaceflight factors, including microgravity and space radiation, have many detrimental short-term effects on human physiology, including muscle and bone degradation and immune system dysfunction. The long-term progression of these physiological effects is still poorly understood and is a serious concern for long-duration spaceflight missions. We hypothesized that some of the degenerative effects of spaceflight may be caused in part by an inability of stem cells to proliferate and differentiate normally, resulting in an impairment of tissue regenerative processes. Furthermore, we hypothesized that long-term bone tissue degeneration in space may be mediated by activation of the p53 signaling network, resulting in cell cycle arrest and/or apoptosis in osteoprogenitors. In our analyses we found that spaceflight caused significant bone loss in the weight-bearing bones of mice, with a 6.3% reduction in bone volume and an 11.9% decrease in bone thickness associated with increased osteoclastic activity. Along with this rapid bone loss we also observed alterations in the cell cycle characterized by an increase in the Cdkn1a/p21 cell cycle arrest molecule independent of Trp53. Overexpression of Cdkn1a/p21 was localized to osteoblasts lining the periosteal surface of the femur and chondrocytes in the head of the femur, suggesting an inhibition of proliferation in two key regenerative cell types of the femur in response to spaceflight. Additionally, we found overexpression of several matrix degradation molecules including MMP-1a, 3 and 10, of which MMP-10 was localized to osteocytes within the shaft of the femur. This, in conjunction with 40 nm resolution synchrotron nano-Computed Tomography (nano-CT) observations of an increase in osteocyte lacunae cross-sectional area and perimeter and a decrease in circularity, indicates a potential role for osteocytic osteolysis in the observed bone degeneration in spaceflight. To further investigate the genetic response of bone to mechanical unloading in spaceflight, we conducted genome-wide microarray analysis of total RNA isolated from the mouse pelvis. Specifically, 16-week-old mice were subjected to 15 days of spaceflight onboard NASA's STS-131 space shuttle mission. The pelvis of the mice was dissected, the bone marrow was flushed, and the bones were briefly stored in RNAlater. The pelves were then homogenized, and RNA was isolated using TRIzol. RNA concentration and quality were measured using a Nanodrop spectrometer and 0.8% agarose gel electrophoresis. Samples of cDNA were analyzed using an Affymetrix GeneChip Gene 1.0 ST (Sense Target) Array System for Mouse and GenePattern software. We normalized the ST gene arrays using Robust Multichip Average (RMA) normalization, which summarizes perfectly matched spots on the array through the median polish algorithm, rather than normalizing according to mismatched spots. We also used Limma for statistical analysis, using the Bioconductor Limma library by Gordon Smyth, and differential expression analysis to identify genes with significant changes in expression between the two experimental conditions. Finally, we used GSEAPreranked for Gene Set Enrichment Analysis (GSEA), with Kolmogorov-Smirnov-style statistics to identify groups of genes that are regulated together using the t-statistics derived from Limma. Preliminary results show that 6,603 genes expressed in pelvic bone had statistically significant alterations in spaceflight compared to ground controls.
These prominently included cell cycle arrest molecules p21, and p18, cell survival molecule Crbp1, and cell cycle molecules cyclin D1, and Cdk1. Additionally, GSEA results indicated alterations in molecular targets of cyclin D1 and Cdk4, senescence pathways resulting from abnormal laminin maturation, cell-cell contacts via E-cadherin, and several pathways relating to protein translation and metabolism. In total 111 gene sets out of 2,488, about 4%, showed statistically significant set alterations. These alterations indicate significant impairment of normal cellular function in the mechanically unloaded environment of space and could provide important genetic insight into the observed uncoupling of bone formation and resorption in space.
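
    A simplified numpy sketch of Tukey's median polish, the summarization step inside the Robust Multichip Average (RMA) procedure mentioned above; real RMA also performs background correction and quantile normalization, which are omitted here, and the probe-by-array data are invented:

        import numpy as np

        def median_polish(M, n_iter=10):
            """Tukey's median polish on a probes x arrays matrix (log scale).
            Returns overall effect, probe effects, array effects, residuals."""
            R = M.astype(float).copy()
            probe = np.zeros(M.shape[0])
            array = np.zeros(M.shape[1])
            for _ in range(n_iter):
                row_med = np.median(R, axis=1)
                probe += row_med
                R -= row_med[:, None]
                col_med = np.median(R, axis=0)
                array += col_med
                R -= col_med[None, :]
            overall = np.median(probe) + np.median(array)
            probe -= np.median(probe)
            array -= np.median(array)
            return overall, probe, array, R

        rng = np.random.default_rng(6)
        # 11 probes x 6 arrays, one outlying probe measurement on one array
        M = 8.0 + rng.normal(0, 0.1, (11, 6))
        M[3, 2] += 4.0
        overall, probe, array, _ = median_polish(M)
        print("expression estimates per array:", np.round(overall + array, 2))

    Because medians rather than means are polished out, the single outlying probe barely perturbs the per-array expression estimates, which is the robustness that gives RMA its name.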

  15. Rocket Design for the Future

    NASA Technical Reports Server (NTRS)

    Follett, William W.; Rajagopal, Raj

    2001-01-01

    The focus of the AA MDO team is to reduce product development cost through the capture and automation of best design and analysis practices and through increasing the availability of low-cost, high-fidelity analysis. Implementation of robust designs reduces costs associated with the Test-Fail-Fix cycle. RD is currently focusing on several technologies to improve the design process, including optimization and robust design, expert and rule-based systems, and collaborative technologies.

  16. Robust control charts in industrial production of olive oil

    NASA Astrophysics Data System (ADS)

    Grilo, Luís M.; Mateus, Dina M. R.; Alves, Ana C.; Grilo, Helena L.

    2014-10-01

    Acidity is one of the most important variables in the quality analysis and characterization of olive oil. During industrial production we use individuals and moving range charts to monitor this variable, which is not always normally distributed. After a brief exploratory data analysis, where we use the bootstrap method, we construct control charts, before and after a Box-Cox transformation, and compare their robustness and performance.
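
    A brief scipy sketch of individuals and moving range chart limits computed after a Box-Cox transformation; the simulated acidity values, sample size, and distribution are illustrative assumptions, not the industrial data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        acidity = rng.lognormal(mean=-1.0, sigma=0.3, size=60)   # skewed, like acidity

        z, lam = stats.boxcox(acidity)            # transform toward normality
        mr = np.abs(np.diff(z))                   # moving ranges of successive points
        mr_bar = mr.mean()

        # Standard I-MR constants for subgroups of size 2: 3/d2 ~ 2.66, D4 = 3.267.
        ucl_i = z.mean() + 2.66 * mr_bar
        lcl_i = z.mean() - 2.66 * mr_bar
        ucl_mr = 3.267 * mr_bar

        print("Box-Cox lambda: %.2f" % lam)
        print("I chart limits: [%.3f, %.3f]" % (lcl_i, ucl_i))
        print("MR chart UCL: %.3f, points out: %d" % (ucl_mr, np.sum(mr > ucl_mr)))

    Computing the same limits on the untransformed data and comparing false-alarm rates reproduces the before/after comparison the abstract describes.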

  17. Closed-Loop Evaluation of an Integrated Failure Identification and Fault Tolerant Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan

    2006-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.

  18. Contour plot assessment of existing meta-analyses confirms robust association of statin use and acute kidney injury risk.

    PubMed

    Chevance, Aurélie; Schuster, Tibor; Steele, Russell; Ternès, Nils; Platt, Robert W

    2015-10-01

    Robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. In the considered example data, the pooled effect estimates and heterogeneity indices proved considerably robust to the addition of a future study. Moreover, for some previously inconclusive meta-analyses, a study update might yield a statistically significant kidney injury risk increase associated with higher statin exposure. The illustrated contour approach should become a standard tool for the assessment of the robustness of meta-analyses. It can guide decisions on whether to conduct additional studies addressing a relevant research question. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Review of LMIs, Interior Point Methods, Complexity Theory, and Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Mesbahi, M.

    1996-01-01

    From end of intro: ...We would like to show that for certain problems in systems and control theory, there exist algorithms for which the corresponding ξ can be viewed as a certain measure of robustness, e.g., stability margin.

  20. A robust control scheme for flexible arms with friction in the joints

    NASA Technical Reports Server (NTRS)

    Rattan, Kuldip S.; Feliu, Vicente; Brown, H. Benjamin, Jr.

    1988-01-01

    A general scheme to control flexible arms with friction in the joints is proposed in this paper. This scheme presents the advantage of being robust in the sense that it minimizes the effects of the Coulomb friction existing in the motor and the effects of changes in the dynamic friction coefficient. A justification of the robustness properties of the scheme is given in terms of sensitivity analysis.

  1. Empirical analysis of RNA robustness and evolution using high-throughput sequencing of ribozyme reactions.

    PubMed

    Hayden, Eric J

    2016-08-15

    RNA molecules provide a realistic but tractable model of a genotype to phenotype relationship. This relationship has been extensively investigated computationally using secondary structure prediction algorithms. Enzymatic RNA molecules, or ribozymes, offer access to genotypic and phenotypic information in the laboratory. Advancements in high-throughput sequencing technologies have enabled the analysis of sequences in the lab that now rivals what can be accomplished computationally. This has motivated a resurgence of in vitro selection experiments and opened new doors for the analysis of the distribution of RNA functions in genotype space. A body of computational experiments has investigated the persistence of specific RNA structures despite changes in the primary sequence, and how this mutational robustness can promote adaptations. This article summarizes recent approaches that were designed to investigate the role of mutational robustness during the evolution of RNA molecules in the laboratory, and presents theoretical motivations, experimental methods and approaches to data analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are rather the rule than the exception, in particular in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Former studies on robust geostatistics focused on robust estimation of the sample variogram and ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of estimating equations for the Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R.M. 2007. Robust estimation of the variogram by residual maximum likelihood. Geoderma 140: 62-72. Richardson, A.M. and Welsh, A.H. 1995. Robust restricted maximum likelihood in mixed linear models. Biometrics 51: 1429-1439. Welsh, A.H. and Richardson, A.M. 1997. Approaches to the robust estimation of mixed models. In: Handbook of Statistics Vol. 15, Elsevier, pp. 343-384.

  3. Multiplex social ecological network analysis reveals how social changes affect community robustness more than resource depletion.

    PubMed

    Baggio, Jacopo A; BurnSilver, Shauna B; Arenas, Alex; Magdanz, James S; Kofinas, Gary P; De Domenico, Manlio

    2016-11-29

    Network analysis provides a powerful tool to analyze complex influences of social and ecological structures on community and household dynamics. Most network studies of social-ecological systems use simple, undirected, unweighted networks. We analyze multiplex, directed, and weighted networks of subsistence food flows collected in three small indigenous communities in Arctic Alaska potentially facing substantial economic and ecological changes. Our analysis of plausible future scenarios suggests that changes to social relations and key households have greater effects on community robustness than changes to specific wild food resources.

  4. How Robust is Your System Resilience?

    NASA Astrophysics Data System (ADS)

    Homayounfar, M.; Muneepeerakul, R.

    2017-12-01

    Robustness and resilience are concepts in systems thinking that have grown in importance and popularity. For many complex social-ecological systems, however, robustness and resilience are difficult to quantify and the connections and trade-offs between them difficult to study. Most studies have either focused on qualitative approaches to discuss their connections or considered only one of them under particular classes of disturbances. In this study, we present an analytical framework to address the linkage between robustness and resilience more systematically. Our analysis is based on a stylized dynamical model that operationalizes a widely used conceptual framework for social-ecological systems. The model enables us to rigorously define robustness and resilience and consequently investigate their connections. The results reveal the tradeoffs among performance, robustness, and resilience. They also show how the nature of such tradeoffs varies with the choices of certain policies (e.g., taxation and investment in public infrastructure), internal stresses and external disturbances.

  5. Design and analysis issues in quantitative proteomics studies.

    PubMed

    Karp, Natasha A; Lilley, Kathryn S

    2007-09-01

    Quantitative proteomics is the comparison of distinct proteomes, which enables the identification of protein species that exhibit changes in expression or post-translational state in response to a given stimulus. Many different quantitative techniques are being utilized and generate large datasets. Independent of the technique used, these large datasets need robust data analysis to ensure valid conclusions are drawn from such studies. Approaches to address the problems that arise with large datasets are discussed to give insight into the types of statistical analyses of data appropriate for the various experimental strategies that can be employed by quantitative proteomic studies. This review also highlights the importance of employing a robust experimental design and highlights various issues surrounding the design of experiments. The concepts and examples discussed herein show how robust design and analysis lead to confident results, ensuring that quantitative proteomics delivers valid conclusions.

  6. Revisiting an old concept: the coupled oscillator model for VCD. Part 2: implications of the generalised coupled oscillator mechanism for the VCD robustness concept.

    PubMed

    Nicu, Valentin Paul

    2016-08-03

    Using two illustrative examples it is shown that the generalised coupled oscillator (GCO) mechanism implies that the stability of the VCD sign computed for a given normal mode is not reflected by the magnitude of the ratio ζ between the rotational strength and dipole strength of the respective mode, i.e., the VCD robustness criterion proposed by Góbi and Magyarfalvi. The performed VCD GCO analysis brings further insight into the GCO mechanism and also into the VCD robustness concept. First, it shows that the GCO mechanism can be interpreted as a VCD resonance enhancement mechanism, i.e. very large VCD signals can be observed when the interacting molecular fragments are in a favourable orientation. Second, it shows that the uncertainties observed in the computed VCD signs are associated with uncertainties in the relative orientation of the coupled oscillator fragments and/or with uncertainties in the predicted nuclear displacement vectors, i.e. not uncertainties in the computed magnetic dipole transition moments as was originally assumed. Since it is able to identify such situations easily, the VCD GCO analysis can be used as a VCD robustness analysis.

  7. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.

  8. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  9. Robust Global Image Registration Based on a Hybrid Algorithm Combining Fourier and Spatial Domain Techniques

    DTIC Science & Technology

    2012-09-01

    Robust global image registration based on a hybrid algorithm combining Fourier and spatial domain techniques. Peter N. Crabtree, Collin Seanor, … Experiments demonstrate performance of a hybrid algorithm; these results are from analysis of a set of images of an ISO 12233 [12] resolution chart captured in the …

  10. Efficient Computation of Info-Gap Robustness for Finite Element Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology, are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
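
    As an illustration of the kind of computation involved (a brute-force sketch, not the report's adjoint method), the following evaluates an info-gap robustness function for a toy Ax = b model: the robustness is the largest uncertainty horizon alpha for which the worst-case solution error over the uncertainty set still meets a tolerance. The fractional-error uncertainty model, the sampling approximation of the inner worst case, and all names are illustrative assumptions.

        import numpy as np

        def worst_case_error(A, b, x_nom, alpha, n_samples=2000):
            """Approximate the worst-case solution error when each entry of A
            may deviate by a relative fraction up to alpha (sampled, not optimized)."""
            rng = np.random.default_rng(0)
            worst = 0.0
            for _ in range(n_samples):
                dA = alpha * (2 * rng.random(A.shape) - 1) * np.abs(A)
                try:
                    x = np.linalg.solve(A + dA, b)
                except np.linalg.LinAlgError:
                    return np.inf  # a singular perturbation certainly violates the requirement
                worst = max(worst, np.linalg.norm(x - x_nom))
            return worst

        def robustness(A, b, tol, alpha_max=1.0, iters=30):
            """Largest horizon alpha whose worst-case error stays within tol,
            found by bisection: the value of the info-gap robustness function."""
            x_nom = np.linalg.solve(A, b)
            lo, hi = 0.0, alpha_max
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                if worst_case_error(A, b, x_nom, mid) <= tol:
                    lo = mid   # requirement still met: can tolerate more uncertainty
                else:
                    hi = mid   # requirement violated: shrink the horizon
            return lo

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print("info-gap robustness:", robustness(A, b, tol=0.05))

    The report's adjoint approach replaces the expensive inner loop with sensitivity information, which is what makes the analysis affordable for finite element models.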

  11. Trading Robustness Requirements in Mars Entry Trajectory Design

    NASA Technical Reports Server (NTRS)

    Lafleur, Jarret M.

    2009-01-01

    One of the most important metrics characterizing an atmospheric entry trajectory in preliminary design is the size of its predicted landing ellipse. Often, requirements for this ellipse are set early in design and significantly influence both the expected scientific return from a particular mission and the cost of development. Requirements typically specify a certain probability level (σ-level) for the prescribed ellipse, and frequently this latter requirement is taken at 3σ. However, searches for the justification of 3σ as a robustness requirement suggest it is an empirical rule of thumb borrowed from non-aerospace fields. This paper presents an investigation into the sensitivity of trajectory performance to varying robustness (σ-level) requirements. The treatment of robustness as a distinct objective is discussed, and an analysis framework is presented involving the manipulation of design variables to effect trades between performance and robustness objectives. The scenario for which this method is illustrated is the ballistic entry of an MSL-class Mars entry vehicle. Here, the design variable is entry flight path angle, and objectives are parachute deploy altitude performance and error ellipse robustness. Resulting plots show the sensitivities between these objectives and trends in the entry flight path angles required to design to these objectives. Relevance to the trajectory designer is discussed, as are potential steps for further development and use of this type of analysis.
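
    To make the σ-level trade concrete, a small illustration (not taken from the paper): for a bivariate Gaussian landing dispersion, the probability mass and area of the σ-level ellipse both grow with the prescribed σ-level, so tightening the robustness requirement directly inflates the ellipse the mission must accommodate. The covariance values below are made up.

        import numpy as np
        from scipy import stats

        # Hypothetical downrange/crossrange landing dispersion covariance (km^2).
        cov = np.array([[9.0, 2.0], [2.0, 1.0]])

        eigvals = np.linalg.eigvalsh(cov)
        for sigma in (1.0, 2.0, 3.0):
            # Probability mass enclosed by the sigma-level ellipse of a 2-D Gaussian.
            p = stats.chi2.cdf(sigma**2, df=2)
            # Semi-axes scale as sigma * sqrt(eigenvalue); ellipse area = pi * a * b.
            a, b = sigma * np.sqrt(eigvals)
            print(f"{sigma:.0f}-sigma ellipse: P = {p:.4f}, area = {np.pi * a * b:.1f} km^2")

    Note that in two dimensions the 3σ ellipse contains about 98.9% of cases, not the familiar one-dimensional 99.7%, which is one reason a borrowed rule of thumb deserves the scrutiny the paper gives it.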

  12. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
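
    A minimal sketch of the robust principal component analysis step (principal component pursuit solved by a fixed-penalty inexact augmented Lagrangian iteration); the lifting wavelet machinery and fusion rules of the paper are omitted, and the parameter choices follow common defaults rather than anything from the paper.

        import numpy as np

        def rpca(M, max_iter=500, tol=1e-7):
            """Split M into a low-rank part L and a sparse part S (M ~ L + S)
            by principal component pursuit via an inexact ALM iteration."""
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n))                 # standard sparsity weight
            mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # common penalty heuristic
            norm_M = np.linalg.norm(M)
            Y = np.zeros_like(M)                           # Lagrange multiplier
            S = np.zeros_like(M)
            for _ in range(max_iter):
                # Singular-value thresholding update for the low-rank part.
                U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
                # Soft-thresholding (shrinkage) update for the sparse part.
                R = M - L + Y / mu
                S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
                # Dual ascent step and convergence check on the residual.
                Z = M - L - S
                Y += mu * Z
                if np.linalg.norm(Z) / (norm_M + 1e-12) < tol:
                    break
            return L, S

        # Toy usage: a rank-1 "image" corrupted by sparse outliers.
        rng = np.random.default_rng(1)
        L0 = np.outer(rng.random(60), rng.random(80))
        S0 = np.zeros((60, 80))
        S0[rng.random((60, 80)) < 0.05] = 5.0
        L, S = rpca(L0 + S0)
        print("relative low-rank error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))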

  13. Analysis, calculation and utilization of the k-balance attribute in interdependent networks

    NASA Astrophysics Data System (ADS)

    Liu, Zheng; Li, Qing; Wang, Dan; Xu, Mingwei

    2018-05-01

    Interdependent networks, where two networks depend on each other, are becoming more and more significant in modern systems. From previous work, it can be concluded that interdependent networks are more vulnerable than a single network. The robustness of interdependent networks therefore deserves special attention. In this paper, we propose a metric of robustness from a new perspective: the balance. First, we define the balance-coefficient of the interdependent system. Based on precise analysis and derivation, we prove some significant theorems and provide an efficient algorithm to compute the balance-coefficient. Finally, we propose an optimal solution to reduce the balance-coefficient and thereby enhance the robustness of the given system. Comprehensive experiments confirm the efficiency of our algorithms.

  14. Robustness of statistical tests for multiplicative terms in the additive main effects and multiplicative interaction model for cultivar trials.

    PubMed

    Piepho, H P

    1995-03-01

    The additive main effects and multiplicative interaction (AMMI) model is frequently used in the analysis of multilocation trials. In the analysis of such data it is of interest to decide how many of the multiplicative interaction terms are significant. Several tests for this task are available, all of which assume that errors are normally distributed with a common variance. This paper investigates the robustness of several tests (Gollob, F_GH1, F_GH2, F_R) to departures from these assumptions. It is concluded that, because of its better robustness, the F_R test is preferable. If the other tests are to be used, preliminary tests for the validity of assumptions should be performed.
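
    For readers unfamiliar with the model, the AMMI decomposition of the mean of genotype g in environment e has the standard form (notation assumed here, not taken from the paper):

        % Additive main effects plus K multiplicative interaction terms.
        y_{ge} = \mu + \alpha_g + \beta_e + \sum_{k=1}^{K} \lambda_k \gamma_{gk} \delta_{ek} + \varepsilon_{ge}

    The tests compared in the abstract are competing rules for deciding how many multiplicative terms K are significant and should be retained.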

  15. Robustness Regions for Dichotomous Decisions.

    ERIC Educational Resources Information Center

    Vijn, Pieter; Molenaar, Ivo W.

    1981-01-01

    In the case of dichotomous decisions, the total set of all assumptions/specifications for which the decision would have been the same is the robustness region. Inspection of this (data-dependent) region is a form of sensitivity analysis which may lead to improved decision making. (Author/BW)

  16. Multiobjective robust design of the double wishbone suspension system based on particle swarm optimization.

    PubMed

    Cheng, Xianfu; Lin, Yuqun

    2014-01-01

    The performance of the suspension system is one of the most important factors in vehicle design. For the double wishbone suspension system, conventional deterministic optimization does not consider any deviations of the design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established in the ADAMS software. Sensitivity analysis is utilized to determine the main design variables. Then, the simulation experiment is arranged and a Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied, and a tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system; a sketch of this workflow follows.
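
    A compact sketch of that workflow under stated assumptions: Latin hypercube sampling of design points, a Gaussian-process (Kriging) surrogate for the response, and a bare-bones PSO minimizing a mean-plus-deviation robust objective. The test function stands in for the ADAMS suspension model, and all settings are placeholders.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(2)

        def simulate(x):
            """Stand-in for the suspension simulation response at design point x."""
            return np.sin(3.0 * x[0]) + 0.5 * (x[1] - 0.3) ** 2

        # 1) Latin hypercube design over a normalized 2-D design space [0, 1]^2.
        X = qmc.LatinHypercube(d=2, seed=2).random(n=40)
        y = np.array([simulate(x) for x in X])

        # 2) Kriging surrogate fitted to the sampled responses.
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

        def robust_objective(x, n_mc=200, scatter=0.02):
            """Mean plus penalized deviation of the surrogate response under
            random manufacturing scatter around the design point x."""
            pts = np.clip(x + scatter * rng.standard_normal((n_mc, 2)), 0.0, 1.0)
            pred = gp.predict(pts)
            return pred.mean() + 3.0 * pred.std()

        # 3) Minimal particle swarm optimization of the robust objective.
        n, w, c1, c2 = 20, 0.7, 1.5, 1.5
        pos = rng.random((n, 2))
        vel = np.zeros((n, 2))
        pbest = pos.copy()
        pval = np.array([robust_objective(p) for p in pos])
        gbest = pbest[pval.argmin()].copy()
        for _ in range(60):
            r1, r2 = rng.random((n, 2)), rng.random((n, 2))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            val = np.array([robust_objective(p) for p in pos])
            improved = val < pval
            pbest[improved], pval[improved] = pos[improved], val[improved]
            gbest = pbest[pval.argmin()].copy()
        print("robust optimum (on surrogate):", gbest, robust_objective(gbest))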

  17. Linear, multivariable robust control with a mu perspective

    NASA Technical Reports Server (NTRS)

    Packard, Andy; Doyle, John; Balas, Gary

    1993-01-01

    The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. Structured singular value analysis, coupled with approximate synthesis methods, makes it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains makes it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.
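
    The abstract's central object has a compact definition. For a complex matrix M and a block structure of allowed perturbations, the structured singular value is (standard definition, with the bar denoting the largest singular value):

        \mu_{\boldsymbol{\Delta}}(M) =
        \left( \min_{\Delta \in \boldsymbol{\Delta}} \left\{ \bar{\sigma}(\Delta) \,:\, \det(I - M\Delta) = 0 \right\} \right)^{-1},

    with μ defined as zero when no admissible Δ makes I - MΔ singular. Robust stability then holds for all structured perturbations of size less than 1/μ, which is what makes the computable bounds on μ useful in practice.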

  18. Quality by Design: Multidimensional exploration of the design space in high performance liquid chromatography method development for better robustness before validation.

    PubMed

    Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E

    2012-04-06

    Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation.

  19. Robust model predictive control for constrained continuous-time nonlinear systems

    NASA Astrophysics Data System (ADS)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory is contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and by applications to a cart-damper-spring system and a one-link robot manipulator.
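
    The tube construction referred to above can be summarized in one line (a generic form, not the paper's exact equations): the applied input is the nominal MPC input plus an ancillary feedback that keeps the true state inside a tube around the nominal state,

        u(t) = \bar{u}(t) + \kappa\big( x(t) - \bar{x}(t) \big), \qquad
        x(t) \in \bar{x}(t) \oplus \mathcal{Z} \quad \text{for all } t,

    where \bar{x}, \bar{u} are the nominal trajectory and input, \kappa is the ancillary feedback law, and \mathcal{Z} is a robust invariant set bounding the effect of the disturbance.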

  20. Robust Magnetotelluric Impedance Estimation

    NASA Astrophysics Data System (ADS)

    Sutarno, D.

    2010-12-01

    Robust magnetotelluric (MT) response function estimators are now in standard use in the induction community. Properly devised and applied, these have the ability to reduce the influence of unusual data (outliers). The estimators always yield impedance estimates which are better than conventional least squares (LS) estimation because "real" MT data almost never satisfy the statistical assumptions of Gaussianity and stationarity upon which normal spectral analysis is based. This paper discusses the development and application to MT data of robust estimation procedures which can be classified as M-estimators. Starting with a description of the estimators, special attention is given to the recent development of bounded-influence robust estimation, including utilization of the Hilbert Transform (HT) operation on causal MT impedance functions. The resulting robust performance is illustrated using synthetic as well as real MT data.
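
    The M-estimation idea can be sketched in a few lines: the impedance Z in E = H Z is re-estimated by iteratively reweighted least squares, with Huber weights down-weighting outlying residuals. Complex-valued toy data stand in for real MT spectra, and the constants (Huber threshold, MAD scale) are conventional choices, not taken from the paper.

        import numpy as np

        def huber_irls(H, E, k=1.5, iters=20):
            """M-estimate of Z in E = H @ Z + noise (complex data), via IRLS
            with Huber weights on the scaled residual magnitudes."""
            Z = np.linalg.lstsq(H, E, rcond=None)[0]           # LS starting point
            for _ in range(iters):
                r = E - H @ Z
                scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
                u = np.abs(r) / scale
                w = np.where(u <= k, 1.0, k / u)               # Huber weight function
                sw = np.sqrt(w)
                Z = np.linalg.lstsq(H * sw[:, None], sw * E, rcond=None)[0]
            return Z

        # Toy data: two magnetic channels H, electric field E, 5% wild outliers.
        rng = np.random.default_rng(3)
        n = 500
        H = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
        Z_true = np.array([2.0 - 1.0j, 0.5 + 0.3j])
        E = H @ Z_true + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        bad = rng.random(n) < 0.05
        E[bad] += 10.0 * (rng.standard_normal(bad.sum()) + 1j * rng.standard_normal(bad.sum()))
        print("LS estimate:   ", np.linalg.lstsq(H, E, rcond=None)[0])
        print("Huber estimate:", huber_irls(H, E))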

  1. Developing robust recurrence plot analysis techniques for investigating infant respiratory patterns.

    PubMed

    Terrill, Philip I; Wilson, Stephen; Suresh, Sadasivam; Cooper, David M

    2007-01-01

    Recurrence plot analysis is a useful non-linear analysis tool. However, there are still no well formalised procedures for carrying out this analysis on measured physiological data, and systemising the analysis is often difficult. In this paper, recurrence-based embedding is compared to radius-based embedding by studying a logistic attractor and measured breathing data collected from sleeping human infants. Recurrence-based embedding appears to be a more robust way of carrying out a recurrence analysis when attractor size is likely to differ between datasets. In the infant breathing data, the radius measure calculated at a fixed recurrence, scaled by the average respiratory period, allows accurate discrimination of active sleep from quiet sleep states (AUC=0.975, Sn=0.98, Sp=0.94).
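
    The fixed-recurrence embedding favoured above can be sketched directly: instead of fixing a radius, the radius is chosen as the quantile of the pairwise embedded distances that yields a target recurrence rate, which makes the resulting measures comparable across attractors of different sizes. The delay-embedding parameters below are arbitrary placeholders.

        import numpy as np

        def delay_embed(x, dim=3, tau=5):
            """Time-delay embedding of a scalar series into R^dim."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def recurrence_plot(x, rate=0.05, dim=3, tau=5):
            """Radius achieving a target recurrence rate, plus the recurrence plot."""
            Y = delay_embed(np.asarray(x, dtype=float), dim, tau)
            D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
            r = np.quantile(D, rate)     # radius = quantile of pairwise distances
            return r, (D <= r)           # recurrence plot as a boolean matrix

        # Toy series: the logistic map in its chaotic regime.
        x = np.empty(600)
        x[0] = 0.4
        for i in range(599):
            x[i + 1] = 3.9 * x[i] * (1.0 - x[i])
        r, RP = recurrence_plot(x, rate=0.05)
        print(f"radius at 5% recurrence: {r:.4f}, achieved rate: {RP.mean():.3f}")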

  2. A robust optimization model for distribution and evacuation in the disaster response phase

    NASA Astrophysics Data System (ADS)

    Fereiduni, Meysam; Shahanaghi, Kamran

    2017-03-01

    Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss. An efficient response immediately following a natural disaster is therefore vital to minimize these negative effects. This paper presents a network design model for humanitarian logistics which will assist in location and allocation decisions for multiple disaster periods. First, a single-objective optimization model is presented that addresses the response phase of disaster management. This model will help decision makers to make the most optimal choices regarding location, allocation, and evacuation simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters and their consequences, our multi-period robust model considers the values of critical input data over a set of various scenarios. Second, because of probable disruption in the distribution infrastructure (such as bridges), Monte Carlo simulation is used to generate related random numbers and different scenarios, and the p-robust approach is utilized to formulate the new network. The p-robust approach can predict possible damages along pathways and among relief bases. We present a case study of our robust optimization approach for Tehran's plausible earthquake in region 1. Sensitivity analysis experiments are proposed to explore the effects of various problem parameters. These experiments give managerial insights and can guide decision makers under a variety of conditions. Then, the performances of the "robust optimization" approach and the "p-robust optimization" approach are evaluated. Intriguing results and practical insights are demonstrated by our analysis of this comparison.

  3. Guaranteeing robustness of structural condition monitoring to environmental variability

    NASA Astrophysics Data System (ADS)

    Van Buren, Kendra; Reilly, Jack; Neal, Kyle; Edwards, Harry; Hemez, François

    2017-01-01

    Advances in sensor deployment and computational modeling have allowed significant strides to be recently made in the field of Structural Health Monitoring (SHM). One widely used SHM strategy is to perform a vibration analysis where a model of the structure's pristine (undamaged) condition is compared with vibration response data collected from the physical structure. Discrepancies between model predictions and monitoring data can be interpreted as structural damage. Unfortunately, multiple sources of uncertainty must also be considered in the analysis, including environmental variability, unknown model functional forms, and unknown values of model parameters. Not accounting for these sources of uncertainty can lead to false-positives or false-negatives in the structural condition assessment. To manage the uncertainty, we propose a robust SHM methodology that combines three technologies. A time series algorithm is trained using "baseline" data to predict the vibration response, compare predictions to actual measurements collected on a potentially damaged structure, and calculate a user-defined damage indicator. The second technology handles the uncertainty present in the problem. An analysis of robustness is performed to propagate this uncertainty through the time series algorithm and obtain the corresponding bounds of variation of the damage indicator. The uncertainty description and robustness analysis are both inspired by the theory of info-gap decision-making. Lastly, an appropriate "size" of the uncertainty space is determined through physical experiments performed in laboratory conditions. Our hypothesis is that examining how the uncertainty space changes throughout time might lead to superior diagnostics of structural damage as compared to only monitoring the damage indicator. This methodology is applied to a portal frame structure to assess if the strategy holds promise for robust SHM. (Publication approved for unlimited, public release on October-28-2015, LA-UR-15-28442, unclassified.)

  4. Robustness analysis of non-ordinary Petri nets for flexible assembly/disassembly processes based on structural decomposition

    NASA Astrophysics Data System (ADS)

    Hsieh, Fu-Shiung

    2011-03-01

    Design of robust supervisory controllers for manufacturing systems with unreliable resources has received significant attention recently. Robustness analysis provides an alternative way to analyse a perturbed system so as to quickly respond to resource failures. Although we have analysed the robustness properties of several subclasses of ordinary Petri nets (PNs), analysis for non-ordinary PNs has not been done. Non-ordinary PNs have weighted arcs and have the advantage of compactly modelling operations requiring multiple parts or resources. In this article, we consider a class of flexible assembly/disassembly manufacturing systems and propose a non-ordinary flexible assembly/disassembly Petri net (NFADPN) model for this class of systems. As the class of flexible assembly/disassembly manufacturing systems can be regarded as the integration and interactions of a set of assembly/disassembly subprocesses, a bottom-up approach is adopted in this article to construct the NFADPN models. Due to the routing flexibility in NFADPN, there may exist different ways to accomplish the tasks. To characterise these, we propose the concept of completely connected subprocesses. As long as there exists a set of completely connected subprocesses for a certain type of product, production of that type can still be maintained without requiring the whole NFADPN to be live. To take advantage of the alternative routes without enforcing liveness for the whole system, we generalise the concept of persistent production to NFADPN. We propose a condition for persistent production based on the concept of completely connected subprocesses. We extend robustness analysis to NFADPN by exploiting its structure. We identify several patterns of resource failures and characterise the conditions to maintain operation in the presence of resource failures.

  5. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  6. Robust MOE Detector for DS-CDMA Systems with Signature Waveform Mismatch

    NASA Astrophysics Data System (ADS)

    Lin, Tsui-Tsai

    In this letter, a decision-directed MOE detector with excellent robustness against signature waveform mismatch is proposed for DS-CDMA systems. Both theoretical analysis and computer simulation results demonstrate that the proposed detector can provide better SINR performance than conventional detectors.

  7. Designing Phononic Crystals with Wide and Robust Band Gaps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, Zian; Chen, Yanyu; Yang, Haoxiang

    Here, phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneous wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter one is dominating), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps are then discussed from two aspects: robustness to geometric randomness (manufacture defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each designing parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  8. Designing Phononic Crystals with Wide and Robust Band Gaps

    DOE PAGES

    Jia, Zian; Chen, Yanyu; Yang, Haoxiang; ...

    2018-04-16

    Here, phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneous wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter one is dominating), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps are then discussed from two aspects: robustness to geometric randomness (manufacture defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each designing parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  9. Designing Phononic Crystals with Wide and Robust Band Gaps

    NASA Astrophysics Data System (ADS)

    Jia, Zian; Chen, Yanyu; Yang, Haoxiang; Wang, Lifeng

    2018-04-01

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneous wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter one is dominating), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps are then discussed from two aspects: robustness to geometric randomness (manufacture defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each designing parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  10. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and an increase in the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. SIMPLS is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, hence making a comparison of the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  11. Characterizing uncertain sea-level rise projections to support investment decisions.

    PubMed

    Sriver, Ryan L; Lempert, Robert J; Wikman-Svahn, Per; Keller, Klaus

    2018-01-01

    Many institutions worldwide are considering how to include uncertainty about future changes in sea-levels and storm surges into their investment decisions regarding large capital infrastructures. Here we examine how to characterize deeply uncertain climate change projections to support such decisions using Robust Decision Making analysis. We address questions regarding how to confront the potential for future changes in low probability but large impact flooding events due to changes in sea-levels and storm surges. Such extreme events can affect investments in infrastructure but have proved difficult to consider in such decisions because of the deep uncertainty surrounding them. This study utilizes Robust Decision Making methods to address two questions applied to investment decisions at the Port of Los Angeles: (1) Under what future conditions would a Port of Los Angeles decision to harden its facilities against extreme flood scenarios at the next upgrade pass a cost-benefit test, and (2) Do sea-level rise projections and other information suggest such conditions are sufficiently likely to justify such an investment? We also compare and contrast the Robust Decision Making methods with a full probabilistic analysis. These two analysis frameworks result in similar investment recommendations for different idealized future sea-level projections, but provide different information to decision makers and envision different types of engagement with stakeholders. In particular, the full probabilistic analysis begins by aggregating the best scientific information into a single set of joint probability distributions, while the Robust Decision Making analysis identifies scenarios where a decision to invest in near-term response to extreme sea-level rise passes a cost-benefit test, and then assembles scientific information of differing levels of confidence to help decision makers judge whether or not these scenarios are sufficiently likely to justify making such investments. Results highlight the highly localized and context-dependent nature of applying Robust Decision Making methods to inform investment decisions.

  12. Characterizing uncertain sea-level rise projections to support investment decisions

    PubMed Central

    Lempert, Robert J.; Wikman-Svahn, Per; Keller, Klaus

    2018-01-01

    Many institutions worldwide are considering how to include uncertainty about future changes in sea-levels and storm surges into their investment decisions regarding large capital infrastructures. Here we examine how to characterize deeply uncertain climate change projections to support such decisions using Robust Decision Making analysis. We address questions regarding how to confront the potential for future changes in low probability but large impact flooding events due to changes in sea-levels and storm surges. Such extreme events can affect investments in infrastructure but have proved difficult to consider in such decisions because of the deep uncertainty surrounding them. This study utilizes Robust Decision Making methods to address two questions applied to investment decisions at the Port of Los Angeles: (1) Under what future conditions would a Port of Los Angeles decision to harden its facilities against extreme flood scenarios at the next upgrade pass a cost-benefit test, and (2) Do sea-level rise projections and other information suggest such conditions are sufficiently likely to justify such an investment? We also compare and contrast the Robust Decision Making methods with a full probabilistic analysis. These two analysis frameworks result in similar investment recommendations for different idealized future sea-level projections, but provide different information to decision makers and envision different types of engagement with stakeholders. In particular, the full probabilistic analysis begins by aggregating the best scientific information into a single set of joint probability distributions, while the Robust Decision Making analysis identifies scenarios where a decision to invest in near-term response to extreme sea-level rise passes a cost-benefit test, and then assembles scientific information of differing levels of confidence to help decision makers judge whether or not these scenarios are sufficiently likely to justify making such investments. Results highlight the highly localized and context-dependent nature of applying Robust Decision Making methods to inform investment decisions. PMID:29414978

  13. Risk, Robustness and Water Resources Planning Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Borgomeo, Edoardo; Mortazavi-Naeini, Mohammad; Hall, Jim W.; Guillod, Benoit P.

    2018-03-01

    Risk-based water resources planning is based on the premise that water managers should invest up to the point where the marginal benefit of risk reduction equals the marginal cost of achieving that benefit. However, this cost-benefit approach may not guarantee robustness under uncertain future conditions, for instance under climatic changes. In this paper, we expand risk-based decision analysis to explore possible ways of enhancing robustness in engineered water resources systems under different risk attitudes. Risk is measured as the expected annual cost of water use restrictions, while robustness is interpreted in the decision-theoretic sense as the ability of a water resource system to maintain performance—expressed as a tolerable risk of water use restrictions—under a wide range of possible future conditions. Linking risk attitudes with robustness allows stakeholders to explicitly trade off incremental increases in robustness against investment costs for a given level of risk. We illustrate the framework through a case study of London's water supply system using state-of-the-art regional climate simulations to inform the estimation of risk and robustness.

  14. Relationship of cranial robusticity to cranial form, geography and climate in Homo sapiens.

    PubMed

    Baab, Karen L; Freidline, Sarah E; Wang, Steven L; Hanson, Timothy

    2010-01-01

    Variation in cranial robusticity among modern human populations is widely acknowledged but not well-understood. While the use of "robust" cranial traits in hominin systematics and phylogeny suggests that these characters are strongly heritable, this hypothesis has not been tested. Alternatively, cranial robusticity may be a response to differences in diet/mastication or it may be an adaptation to cold, harsh environments. This study quantifies the distribution of cranial robusticity in 14 geographically widespread human populations, and correlates this variation with climatic variables, neutral genetic distances, cranial size, and cranial shape. With the exception of the occipital torus region, all traits were positively correlated with each other, suggesting that they should not be treated as individual characters. While males are more robust than females within each of the populations, among the independent variables (cranial shape, size, climate, and neutral genetic distances), only shape is significantly correlated with inter-population differences in robusticity. Two-block partial least-squares analysis was used to explore the relationship between cranial shape (captured by three-dimensional landmark data) and robusticity across individuals. Weak support was found for the hypothesis that robusticity was related to mastication as the shape associated with greater robusticity was similar to that described for groups that ate harder-to-process diets. Specifically, crania with more prognathic faces, expanded glabellar and occipital regions, and (slightly) longer skulls were more robust than those with rounder vaults and more orthognathic faces. However, groups with more mechanically demanding diets (hunter-gatherers) were not always more robust than groups practicing some form of agriculture.

  15. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  16. The Robustness of LISREL Estimates in Structural Equation Models with Categorical Variables.

    ERIC Educational Resources Information Center

    Ethington, Corinna A.

    1987-01-01

    This study examined the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical variables. The analysis of mixed matrices produced estimates that closely approximated the model parameters except where dichotomous variables were…

  17. IMPLICATIONS OF USING ROBUST BAYESIAN ANALYSIS TO REPRESENT DIVERSE SOURCES OF UNCERTAINTY IN INTEGRATED ASSESSMENT

    EPA Science Inventory

    In our previous research, we showed that robust Bayesian methods can be used in environmental modeling to define a set of probability distributions for key parameters that captures the effects of expert disagreement, ambiguity, or ignorance. This entire set can then be update...

  18. Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, μ, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The μ margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  19. On the robustness of a Bayes estimate. [in reliability theory]

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.

  20. Position Accuracy Analysis of a Robust Vision-Based Navigation

    NASA Astrophysics Data System (ADS)

    Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.

    2018-05-01

    Using images to determine camera position and attitude is a consolidated method, widespread in applications such as UAV navigation. In harsh environments, where GNSS may be degraded or denied, image-based positioning could be a candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the test area. A position accuracy analysis is performed, and the effect of the proposed robust method is validated.

  1. Reciprocity Between Robustness of Period and Plasticity of Phase in Biological Clocks

    NASA Astrophysics Data System (ADS)

    Hatakeyama, Tetsuhiro S.; Kaneko, Kunihiko

    2015-11-01

    Circadian clocks exhibit the robustness of period and plasticity of phase against environmental changes such as temperature and nutrient conditions. Thus far, however, it is unclear how both are simultaneously achieved. By investigating distinct models of circadian clocks, we demonstrate reciprocity between robustness and plasticity: higher robustness in the period implies higher plasticity in the phase, where changes in period and in phase follow a linear relationship with a negative coefficient. The robustness of period is achieved by the adaptation on the limit cycle via a concentration change of a buffer molecule, whose temporal change leads to a phase shift following a shift of the limit-cycle orbit in phase space. Generality of reciprocity in clocks with the adaptation mechanism is confirmed with theoretical analysis of simple models, while biological significance is discussed.

  2. New robust statistical procedures for the polytomous logistic regression models.

    PubMed

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through classical influence function analysis. Appropriate real-life examples are presented to justify the requirement of suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications.
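
    For context, the minimum density power divergence estimator of Basu et al. minimizes an empirical objective of the following form for a model density f_theta (notation assumed here, not taken from the article):

        H_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left[ \int f_\theta(y)^{1+\alpha} \, dy
        - \left( 1 + \tfrac{1}{\alpha} \right) f_\theta(Y_i)^{\alpha} \right], \qquad \alpha > 0,

    where alpha is the robustness tuning parameter whose data-driven selection the article addresses; in the limit alpha -> 0 the estimator reduces to maximum likelihood, while larger alpha buys robustness at some loss of efficiency.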

  3. Robustness of reduced-order multivariable state-space self-tuning controller

    NASA Technical Reports Server (NTRS)

    Yuan, Zhuzhi; Chen, Zengqiang

    1994-01-01

    In this paper, we present a quantitative analysis of the robustness of a reduced-order pole-assignment state-space self-tuning controller for a multivariable adaptive control system in which the order of the real process is higher than that of the model used in the controller design. The result of the stability analysis shows that, under a specific bounded modelling error, the adaptively controlled closed-loop real system via the reduced-order state-space self-tuner is BIBO stable in the presence of unmodelled dynamics.

  4. LL-37 Recruits Immunosuppressive Regulatory T Cells to Ovarian Tumors

    DTIC Science & Technology

    2009-11-01

    … receptor. Western blot analysis of MSC lysates showed that ERK-1 and -2 are robustly phosphorylated beginning 10 minutes after LL-37 treatment and … Western blot analysis of LL-37-treated SK-OV-3 cell lysates showed the robust …

  5. Robust analysis of an underwater navigational strategy in electrically heterogeneous corridors.

    PubMed

    Dimble, Kedar D; Ranganathan, Badri N; Keshavan, Jishnu; Humbert, J Sean

    2016-08-01

    Obstacles and other global stimuli provide relevant navigational cues to a weakly electric fish. In this work, robust analysis of a control strategy based on electrolocation for performing obstacle avoidance in electrically heterogeneous corridors is presented and validated. Static output feedback control is shown to achieve the desired goal of reflexive obstacle avoidance in such environments in simulation and experimentation. The proposed approach is computationally inexpensive and readily implementable on a small scale underwater vehicle, making underwater autonomous navigation feasible in real-time.

  6. Feasible logic Bell-state analysis with linear optics

    PubMed Central

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    We describe a feasible logic Bell-state analysis protocol in which the logic entanglement is chosen to be the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol uses only polarization beam splitters and half-wave plates, which are available with current experimental technology. We can conveniently identify two of the logic Bell states. The protocol can be easily generalized to arbitrary C-GHZ state analysis; we can also distinguish two N-logic-qubit C-GHZ states. As previous theory and experiment have both shown that the C-GHZ state is robust, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled states. PMID:26877208

  7. Feasible logic Bell-state analysis with linear optics.

    PubMed

    Zhou, Lan; Sheng, Yu-Bo

    2016-02-15

    We describe a feasible logic Bell-state analysis protocol in which the logic entanglement is chosen to be the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol uses only polarization beam splitters and half-wave plates, which are available with current experimental technology. We can conveniently identify two of the logic Bell states. The protocol can be easily generalized to arbitrary C-GHZ state analysis; we can also distinguish two N-logic-qubit C-GHZ states. As previous theory and experiment have both shown that the C-GHZ state is robust, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled states.

  8. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  9. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

    This study was designed to demonstrate robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. This similarity generates weak edges, which present a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image was varied by almost two orders of magnitude.
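
    A rough sketch of the filtering-based decomposition idea under stated assumptions: the three RGB channels are treated as linear mixtures of source maps, a high-pass filter is applied to the channel data so that the unmixing matrix is learned on the less-dependent, innovation-like part, and that matrix is then applied to the unfiltered data. sklearn's FastICA is used here as a stand-in for the paper's DCA algorithm, and the Laplacian filter choice is illustrative.

        import numpy as np
        from scipy.ndimage import laplace
        from sklearn.decomposition import FastICA

        def filtering_based_decomposition(rgb):
            """Unmix an (h, w, 3) image into 3 source maps: learn the unmixing
            matrix on high-pass filtered channels, then demix the raw channels."""
            h, w, _ = rgb.shape
            X = rgb.reshape(-1, 3).astype(float)                  # pixels x channels
            Xf = np.stack([laplace(rgb[..., c]).ravel() for c in range(3)], axis=1)
            ica = FastICA(n_components=3, whiten="unit-variance", random_state=0)
            ica.fit(Xf)                                           # unmixing matrix from filtered data
            S = (X - X.mean(axis=0)) @ ica.components_.T          # demix the raw data
            return S.reshape(h, w, 3)

        # Toy usage: a synthetic 3-channel image made from mixed source maps.
        rng = np.random.default_rng(4)
        sources = rng.random((64, 64, 3))
        A = np.array([[1.0, 0.6, 0.2], [0.4, 1.0, 0.5], [0.3, 0.7, 1.0]])
        mixed = sources @ A.T
        maps = filtering_based_decomposition(mixed)
        print("recovered source maps:", maps.shape)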

  10. CMOS-compatible InP/InGaAs digital photoreceiver

    DOEpatents

    Lovejoy, Michael L.; Rose, Benny H.; Craft, David C.; Enquist, Paul M.; Slater, Jr., David B.

    1997-01-01

    A digital photoreceiver is formed monolithically on an InP semiconductor substrate and comprises a p-i-n photodetector formed from a plurality of InP/InGaAs layers deposited by an epitaxial growth process and an adjacent heterojunction bipolar transistor (HBT) amplifier formed from the same InP/InGaAs layers. The photoreceiver amplifier operates in a large-signal mode to convert a detected photocurrent signal into an amplified output capable of directly driving integrated circuits such as CMOS. In combination with an optical transmitter, the photoreceiver may be used to establish a short-range channel of digital optical communications between integrated circuits with applications to multi-chip modules (MCMs). The photoreceiver may also be used with fiber optic coupling for establishing longer-range digital communications (i.e. optical interconnects) between distributed computers or the like. Arrays of digital photoreceivers may be formed on a common substrate for establishing a plurality of channels of digital optical communication, with each photoreceiver being spaced by less than about 1 mm and consuming less than about 20 mW of power, and preferably less than about 10 mW. Such photoreceiver arrays are useful for transferring huge amounts of digital data between integrated circuits at bit rates of up to about 1000 Mb/s or more.

  11. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see the design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  12. SpaceWire Driver Software for Special DSPs

    NASA Technical Reports Server (NTRS)

    Clark, Douglas; Lux, James; Nishimoto, Kouji; Lang, Minh

    2003-01-01

    A computer program provides a high-level C-language interface to electronics circuitry that controls a SpaceWire interface in a system based on a space-qualified version of the ADSP-21020 digital signal processor (DSP). SpaceWire is a spacecraft-oriented standard for packet-switching data-communication networks that comprise nodes connected through bidirectional digital serial links that utilize low-voltage differential signaling (LVDS). The software is tailored to the SMCS-332 application-specific integrated circuit (ASIC) (also available as the TSS901E), which provides three high-speed (150 Mbps) serial point-to-point links compliant with the proposed Institute of Electrical and Electronics Engineers (IEEE) Standard 1355.2 and the equivalent European Space Agency (ESA) Standard ECSS-E-50-12. In the specific application of this software, the SpaceWire ASIC was combined with the DSP processor, memory, and control logic in a Multi-Chip Module DSP (MCM-DSP). The software is a collection of low-level driver routines that provide a simple message-passing application programming interface (API) for software running on the DSP. Routines are provided for interrupt-driven access to the two styles of interface provided by the SMCS: (1) the "word at a time" conventional host interface (HOCI); and (2) a higher performance "dual port memory" style interface (COMI).

  13. CMOS-compatible InP/InGaAs digital photoreceiver

    DOEpatents

    Lovejoy, M.L.; Rose, B.H.; Craft, D.C.; Enquist, P.M.; Slater, D.B. Jr.

    1997-11-04

    A digital photoreceiver is formed monolithically on an InP semiconductor substrate and comprises a p-i-n photodetector formed from a plurality of InP/InGaAs layers deposited by an epitaxial growth process and an adjacent heterojunction bipolar transistor (HBT) amplifier formed from the same InP/InGaAs layers. The photoreceiver amplifier operates in a large-signal mode to convert a detected photocurrent signal into an amplified output capable of directly driving integrated circuits such as CMOS. In combination with an optical transmitter, the photoreceiver may be used to establish a short-range channel of digital optical communications between integrated circuits with applications to multi-chip modules (MCMs). The photoreceiver may also be used with fiber optic coupling for establishing longer-range digital communications (i.e. optical interconnects) between distributed computers or the like. Arrays of digital photoreceivers may be formed on a common substrate for establishing a plurality of channels of digital optical communication, with each photoreceiver being spaced by less than about 1 mm and consuming less than about 20 mW of power, and preferably less than about 10 mW. Such photoreceiver arrays are useful for transferring huge amounts of digital data between integrated circuits at bit rates of up to about 1,000 Mb/s or more. 4 figs.

  14. MEMS packaging: state of the art and future trends

    NASA Astrophysics Data System (ADS)

    Bossche, Andre; Cotofana, Carmen V. B.; Mollinger, Jeff R.

    1998-07-01

    Now that the technology for integrated sensor and MEMS devices has become sufficiently mature to allow mass production, it is expected that the prices of bare chips will drop dramatically. This means that package prices will become a limiting factor in market penetration unless low-cost packaging solutions become available. This paper discusses developments in packaging technology; both single-chip and multi-chip packaging solutions are addressed. It starts with a discussion of the different requirements that have to be met, both from a device point of view (open access paths to the environment, vacuum cavities, etc.) and from the application point of view (e.g. environmental hostility). Subsequently, current technologies are judged on their applicability to MEMS and sensor packaging, and a forecast is given of future trends. It is expected that the large majority of sensing devices will be applied in relatively friendly environments for which plastic packages suffice. Therefore, in the short term an important role is foreseen for recently developed plastic packaging techniques such as precision molding and precision dispensing. Just as in standard electronic packaging, complete wafer-level packaging methods for sensing devices still have a long way to go before they can compete with highly optimized and automated plastic packaging processes.

  15. A High-Density, High-Efficiency, Isolated On-Board Vehicle Battery Charger Utilizing Silicon Carbide Power Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, B; Barkley, A; Cole, Z

    2014-05-01

    This paper presents an isolated on-board vehicular battery charger that utilizes silicon carbide (SiC) power devices to achieve high density and high efficiency for application in electric vehicles (EVs) and plug-in hybrid EVs (PHEVs). The proposed Level 2 charger has a two-stage architecture where the first stage is a bridgeless boost ac-dc converter and the second stage is a phase-shifted full-bridge isolated dc-dc converter. The operation of both topologies is presented and the specific advantages gained through the use of SiC power devices are discussed. The design of the power stage components, the packaging of the multichip power module, and the system-level packaging are presented with a primary focus on system density and a secondary focus on system efficiency. In this work, a hardware prototype is developed and a peak system efficiency of 95% is measured while operating both power stages with a switching frequency of 200 kHz. A maximum output power of 6.1 kW results in a volumetric power density of 5.0 kW/L and a gravimetric power density of 3.8 kW/kg when considering the volume and mass of the system including a case.

  16. Prototype Parts of a Digital Beam-Forming Wide-Band Receiver

    NASA Technical Reports Server (NTRS)

    Kaplan, Steven B.; Pylov, Sergey V.; Pambianchi, Michael

    2003-01-01

    Some prototype parts of a digital beam-forming (DBF) receiver that would operate at multigigahertz carrier frequencies have been developed. The beam-forming algorithm in a DBF receiver processes signals from multiple antenna elements with appropriate time delays and weighting factors chosen to enhance the reception of signals from a specific direction while suppressing signals from other directions. Such a receiver would be used in the directional reception of weak wideband signals, for example, spread-spectrum signals from a low-power transmitter on an Earth-orbiting spacecraft or other distant source. The prototype parts include superconducting components on integrated-circuit chips and a multichip module (MCM), within which the chips are to be packaged and connected via special inter-chip-communication circuits. The design and the underlying principle of operation are based on the use of the rapid single-flux quantum (RSFQ) family of logic circuits to obtain the required processing speed and signal-to-noise ratio. RSFQ circuits are superconducting circuits that exploit the Josephson effect. They are well suited for this application, having been proven to perform well in some circuits at frequencies above 100 GHz. In order to maintain the superconductivity needed for proper functioning of the RSFQ circuits, the MCM must be kept in a cryogenic environment during operation.

  17. Low energy CMOS for space applications

    NASA Technical Reports Server (NTRS)

    Panwar, Ramesh; Alkalaj, Leon

    1992-01-01

    The current focus of NASA's space flight programs reflects a new thrust towards smaller, less costly, and more frequent space missions, when compared to missions such as Galileo, Magellan, or Cassini. Recently, the concept of a microspacecraft was proposed. In this concept, a small, compact spacecraft that weighs tens of kilograms performs focused scientific objectives such as imaging. Similarly, a Mars Lander microrover project is under study that will allow miniature robots weighing less than seven kilograms to explore the Martian surface. To bring the microspacecraft and microrover ideas to fruition, one will have to leverage compact 3D multi-chip module (MCM) based multiprocessor technologies. Low-energy CMOS will become increasingly important because of the thermodynamic considerations in cooling compact 3D MCM implementations and also from considerations of the power budget for space applications. In this paper, we show how the operating voltage is related to the threshold voltage of the CMOS transistors for accomplishing a task in VLSI with minimal energy. We also derive expressions for the noise margins at the optimal operating point. We then look at a low-voltage CMOS (LVCMOS) technology developed at Stanford University, which improves the power consumption over conventional CMOS by a couple of orders of magnitude, and consider the suitability of the technology for space applications by characterizing its SEU immunity.

  18. Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties.

    PubMed

    Song, Qiankun; Yu, Qinqin; Zhao, Zhenjiang; Liu, Yurong; Alsaadi, Fuad E

    2018-07-01

    In this paper, the boundedness and robust stability of a class of delayed complex-valued neural networks with interval parameter uncertainties are investigated. By using the homeomorphic mapping theorem, the Lyapunov method, and inequality techniques, a sufficient condition is derived that guarantees the boundedness of the networks and the existence, uniqueness, and global robust stability of the equilibrium point for the considered uncertain neural networks. The obtained robust stability criterion is expressed as a complex-valued linear matrix inequality (LMI), which can be calculated numerically using YALMIP with the SDPT3 solver in MATLAB. An example with simulations is supplied to show the applicability and advantages of the acquired result. Copyright © 2018 Elsevier Ltd. All rights reserved.
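    The record above certifies stability through a complex-valued LMI solved with YALMIP/SDPT3 in MATLAB. As a much simpler Python stand-in (not the paper's actual criterion), the sketch below checks the classical complex Lyapunov certificate A^H P + P A = -Q with P Hermitian positive definite, for an illustrative system matrix:

      # Complex Lyapunov certificate as a minimal stand-in for a complex LMI:
      # solve A^H P + P A = -Q and verify P is Hermitian positive definite.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      A = np.array([[-1.0 + 0.5j, 0.2 + 0.0j],
                    [0.1j,       -2.0 + 0.0j]])   # illustrative complex matrix
      Q = np.eye(2)

      # SciPy solves A'X + XA'^H = Q', so pass A' = A^H and Q' = -Q.
      P = solve_continuous_lyapunov(A.conj().T, -Q)
      stable = np.allclose(P, P.conj().T) and np.linalg.eigvalsh(P).min() > 0
      print("Lyapunov certificate found (stable):", stable)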

  19. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by some stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase-margin requirements. The analysis of metrics-driven adaptive control is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  20. VCD Robustness of the Amide-I and Amide-II Vibrational Modes of Small Peptide Models.

    PubMed

    Góbi, Sándor; Magyarfalvi, Gábor; Tarczay, György

    2015-09-01

    The rotational strengths and the robustness values of amide-I and amide-II vibrational modes of For(AA)n NHMe (where AA is Val, Asn, Asp, or Cys, n = 1-5 for Val and Asn; n = 1 for Asp and Cys) model peptides with α-helix and β-sheet backbone conformations were computed by density functional methods. The robustness results verify empirical rules drawn from experiments and from computed rotational strengths linking amide-I and amide-II patterns in the vibrational circular dichroism (VCD) spectra of peptides with their backbone structures. For peptides with at least three residues (n ≥ 3) these characteristic patterns from coupled amide vibrational modes have robust signatures. For shorter peptide models many vibrational modes are nonrobust, and the robust modes can be dependent on the residues or on their side chain conformations in addition to backbone conformations. These robust VCD bands, however, provide information for the detailed structural analysis of these smaller systems. © 2015 Wiley Periodicals, Inc.

  1. Research on robust optimization of emergency logistics network considering the time dependence characteristic

    NASA Astrophysics Data System (ADS)

    WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun

    2017-06-01

    Considering the time dependence of emergency logistics networks and the complexity of the environments in which they operate, this paper combines time-dependent network optimization theory with robust discrete optimization theory and builds a robust, time-dependent network optimization model for emergency logistics that maximizes timeliness. On this basis, considering the complexity of the dynamic network and the time dependence of the edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the network's time dependence and robustness. Finally, a case study is carried out to verify the validity of the robust optimization model and its algorithm, and the effect of different values of the regulation factor is analyzed, given the importance of this control factor in solving for the optimal path. The results show that the model and algorithm have good timeliness and strong robustness.
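    The core computational sub-problem in such a model is finding shortest paths when edge weights vary with departure time. As a simple stand-in for the paper's improved ant colony algorithm, the sketch below runs a label-setting Dijkstra variant under FIFO (first-in, first-out) travel times; the network and travel-time functions are illustrative:

      # Time-dependent shortest path under FIFO travel times.
      import heapq

      def td_dijkstra(graph, source, target, t0):
          """graph[u] = list of (v, tau) where tau(t) is the travel time of
          edge (u, v) when u is departed at time t."""
          best = {source: t0}
          heap = [(t0, source)]
          while heap:
              t, u = heapq.heappop(heap)
              if u == target:
                  return t
              if t > best.get(u, float("inf")):
                  continue
              for v, tau in graph[u]:
                  arrival = t + tau(t)          # time-dependent edge weight
                  if arrival < best.get(v, float("inf")):
                      best[v] = arrival
                      heapq.heappush(heap, (arrival, v))
          return float("inf")

      # Toy network: congestion makes edge (a, b) slower for later departures.
      graph = {"a": [("b", lambda t: 1 + 0.1 * t), ("c", lambda t: 5)],
               "b": [("d", lambda t: 1)],
               "c": [("d", lambda t: 1)],
               "d": []}
      print(td_dijkstra(graph, "a", "d", t0=0.0))   # earliest arrival time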

  2. Principal coordinate analysis assisted chromatographic analysis of bacterial cell wall collection: A robust classification approach.

    PubMed

    Kumar, Keshav; Cava, Felipe

    2018-04-10

    In the present work, principal coordinate analysis (PCoA) is introduced to develop a robust model to classify chromatographic data sets of peptidoglycan samples. PCoA captures the heterogeneity present in the data sets by using the dissimilarity matrix as input. Thus, in principle, it can capture even the subtle differences in bacterial peptidoglycan composition and can provide a more robust and fast approach for classifying bacterial collections and identifying novel cell wall targets for further biological and clinical studies. The utility of the proposed approach is successfully demonstrated by analysing two different kinds of bacterial collections. The first set comprised peptidoglycan samples belonging to different subclasses of Alphaproteobacteria, whereas the second set, relatively more intricate for chemometric analysis, consisted of different wild-type Vibrio cholerae strains and mutants having subtle differences in their peptidoglycan composition. The present work proposes a useful approach that can classify chromatographic data sets of peptidoglycan samples having subtle differences. Furthermore, it suggests that PCoA can be a method of choice in any data-analysis workflow. Copyright © 2018 Elsevier Inc. All rights reserved.
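    PCoA itself is a short computation: square the dissimilarities, double-center, and eigendecompose. A minimal numpy sketch on toy data, assuming Euclidean dissimilarities purely for illustration:

      # Classical principal coordinate analysis (PCoA) from a dissimilarity matrix.
      import numpy as np

      def pcoa(D, n_coords=2):
          """D: (n, n) symmetric dissimilarity matrix -> sample coordinates."""
          n = D.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
          B = -0.5 * J @ (D ** 2) @ J                # double-centered Gower matrix
          evals, evecs = np.linalg.eigh(B)
          order = np.argsort(evals)[::-1]            # largest eigenvalues first
          evals, evecs = evals[order], evecs[:, order]
          keep = evals[:n_coords] > 0                # drop non-positive axes
          return evecs[:, :n_coords][:, keep] * np.sqrt(evals[:n_coords][keep])

      # Toy "muropeptide profiles": three similar samples and one outlier.
      X = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1], [5.0, 5.0]])
      D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
      print(pcoa(D))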

  3. Breakdown of interdependent directed networks.

    PubMed

    Liu, Xueming; Stanley, H Eugene; Gao, Jianxi

    2016-02-02

    Increasing evidence shows that real-world systems interact with one another via dependency connectivities. Failing connectivities are the mechanism behind the breakdown of interacting complex systems, e.g., blackouts caused by the interdependence of power grids and communication networks. Previous research analyzing the robustness of interdependent networks has been limited to undirected networks. However, most real-world networks are directed, their in-degrees and out-degrees may be correlated, and they are often coupled to one another as interdependent directed networks. To understand the breakdown and robustness of interdependent directed networks, we develop a theoretical framework based on generating functions and percolation theory. We find that for interdependent Erdős-Rényi networks the directionality within each network increases their vulnerability and exhibits hybrid phase transitions. We also find that the percolation behavior of interdependent directed scale-free networks with and without degree correlations is so complex that two criteria are needed to quantify and compare their robustness: the percolation threshold and the integrated size of the giant component during an entire attack process. Interestingly, we find that the in-degree and out-degree correlations in each network layer increase the robustness of degree-heterogeneous interdependent networks, which most real networks are, but decrease the robustness of interdependent networks with homogeneous degree distributions and strong coupling strengths. Moreover, by applying our theoretical analysis to real interdependent international trade networks, we find that the robustness of these real-world systems increases with the in-degree and out-degree correlations, confirming our theoretical analysis.
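    A numerical counterpart to the generating-function analysis is a direct cascade simulation: a node survives only while it belongs to the giant strongly connected component (GSCC) of its own layer and its dependency partner in the other layer also survives. A minimal sketch with networkx (toy parameters, one-to-one dependencies):

      # Cascading failure in two interdependent directed ER networks.
      import random
      import networkx as nx

      def gscc(G):
          return max(nx.strongly_connected_components(G), key=len) if len(G) else set()

      def cascade(n=2000, k=4.0, p_attack=0.3, seed=0):
          random.seed(seed)
          A = nx.gnp_random_graph(n, k / n, directed=True, seed=seed)
          B = nx.gnp_random_graph(n, k / n, directed=True, seed=seed + 1)
          alive = set(random.sample(range(n), int((1 - p_attack) * n)))
          while True:
              survivors = gscc(A.subgraph(alive)) & gscc(B.subgraph(alive))
              if survivors == alive:               # cascade has converged
                  return len(survivors) / n
              alive = survivors

      print("final mutual GSCC fraction:", cascade())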

  4. Reliability of simulated robustness testing in fast liquid chromatography, using state-of-the-art column technology, instrumentation and modelling software.

    PubMed

    Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs

    2014-02-01

    The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow-bore columns (50 × 2.1 mm, packed with sub-2 μm particles) providing short analysis times. The performance of commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE-based predictions. We have demonstrated that the reliability of the predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the relative errors in retention time were <1.0%, while the errors in predicted critical resolution ranged between 6.9% and 17.2%. Because simulated robustness testing requires significantly less experimental work than DoE-based predictions, we think that robustness could now be investigated in the early stages of method development. Moreover, column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2 μm particles. Again, thanks to the modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min) by proper adjustment of variables. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Using dynamic mode decomposition to extract cyclic behavior in the stock market

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Roy, Sukesh; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2016-04-01

    The presence of cyclic expansions and contractions in the economy has been known for over a century. The work reported here searches for similar cyclic behavior in stock valuations. The variations are subtle and can only be extracted through analysis of the price variations of a large number of stocks. Koopman mode analysis is a natural approach to establishing such collective oscillatory behavior. The difficulty is that even non-cyclic and stochastic constituents of a finite data set may be interpreted as a sum of periodic motions. However, deconvolution of these irregular dynamical facets may be expected to be non-robust, i.e., to depend on the specific data set. We propose an approach to differentiate robust and non-robust features in a time series; it is based on identifying robust features with reproducible Koopman modes, i.e., those that persist between distinct sub-groupings of the data. Our analysis of stock data discovered four reproducible modes, one of which has a period close to the number of trading days per year. To the best of our knowledge these cycles were not reported previously. It is particularly interesting that the cyclic behaviors persisted through the great recession even though phase relationships between stocks within the modes evolved in the intervening period.
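    Koopman modes of this kind are commonly estimated with dynamic mode decomposition (DMD): fit a low-rank linear operator to consecutive snapshot pairs and read oscillation periods off the eigenvalue phases. A minimal sketch on synthetic data with one planted annual cycle (the reproducibility test across sub-groupings is omitted):

      # Exact DMD: recover mode periods from a snapshot matrix.
      import numpy as np

      def dmd_eigs(X, rank):
          """X: (n_series, n_times) snapshot matrix -> DMD eigenvalues."""
          X1, X2 = X[:, :-1], X[:, 1:]
          U, s, Vh = np.linalg.svd(X1, full_matrices=False)
          U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
          Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected linear operator
          return np.linalg.eigvals(Atilde)

      rng = np.random.default_rng(0)
      t = np.arange(2000)
      cycle = np.sin(2 * np.pi * t / 252)              # ~one trading year
      X = np.outer(rng.normal(size=30), cycle) + 0.2 * rng.normal(size=(30, 2000))

      ang = np.abs(np.angle(dmd_eigs(X, rank=4)))
      periods = 2 * np.pi / ang[ang > 1e-6]            # periods in trading days
      print("mode periods:", np.round(np.sort(periods), 1))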

  6. Data depth based clustering analysis

    DOE PAGES

    Jeong, Myeong-Hun; Cai, Yaping; Sullivan, Clair J.; ...

    2016-01-01

    Here, this paper proposes a new algorithm for identifying patterns within data, based on data depth. Such a clustering analysis has an enormous potential to discover previously unknown insights from existing data sets. Many clustering algorithms already exist for this purpose. However, most algorithms are not affine invariant; therefore, they must operate with different parameters after the data sets are rotated, scaled, or translated. Further, most clustering algorithms, being based on Euclidean distance, can be sensitive to noise because they have no global perspective. Parameter selection also significantly affects the clustering results of each algorithm. Unlike many existing clustering algorithms, the proposed algorithm, called data depth based clustering analysis (DBCA), is able to detect coherent clusters after the data sets are affine transformed, without changing a parameter. It is also robust to noise because data depth can measure the centrality and outlyingness of the underlying data. Further, it can generate relatively stable clusters by varying the parameter. The experimental comparison with the leading state-of-the-art alternatives demonstrates that the proposed algorithm outperforms DBSCAN and HDBSCAN in terms of affine invariance, and exceeds or matches their robustness to noise. The robustness to parameter selection is also demonstrated through a case study of clustering Twitter data.
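    The centrality/outlyingness scores that depth-based clustering builds on are easy to illustrate. A minimal sketch of Mahalanobis depth, one common affine-invariant depth function (the full DBCA algorithm is not reproduced here):

      # Mahalanobis data depth: affine-invariant centrality in (0, 1].
      import numpy as np

      def mahalanobis_depth(X):
          mu = X.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
          d2 = np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu)
          return 1.0 / (1.0 + d2)      # deep points -> near 1, outliers -> near 0

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (100, 2)), [[8.0, 8.0]]])  # one gross outlier
      depth = mahalanobis_depth(X)
      print("outlier depth: %.4f   median depth: %.4f"
            % (depth[-1], np.median(depth)))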

  7. Robust Stability Analysis of the Space Launch System Control Design: A Singular Value Approach

    NASA Technical Reports Server (NTRS)

    Pei, Jing; Newsome, Jerry R.

    2015-01-01

    Classical stability analysis consists of breaking the feedback loops one at a time and determining separately how much gain or phase variation would destabilize the stable nominal feedback system. For typical launch vehicle control design, classical control techniques are generally employed. In addition to stability margins, frequency-domain Monte Carlo methods are used to evaluate the robustness of the design. However, such techniques were developed for Single-Input-Single-Output (SISO) systems and do not take into consideration the off-diagonal terms in the transfer function matrix of Multi-Input-Multi-Output (MIMO) systems. Robust stability analysis techniques such as H∞ and μ analysis are applicable to MIMO systems but have not been adopted as standard practice within the launch vehicle controls community. This paper took advantage of a simple singular-value-based MIMO stability margin evaluation method, based on work done by Mukhopadhyay and Newsom, and applied it to the SLS high-fidelity dynamics model. The method computes a simultaneous multi-loop gain and phase margin that can be related back to classical margins. The results presented in this paper suggest that for the SLS system, traditional SISO stability margins are similar to the MIMO margins. This additional level of verification provides confidence in the robustness of the control design.
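    The singular-value margin in question comes from α = min over frequency of the smallest singular value of I + L(jω): all loops then tolerate simultaneous gain changes within [1/(1+α), 1/(1−α)] and phase changes within ±2·arcsin(α/2). A numpy sketch on an illustrative 2×2 loop transfer matrix (not the SLS model):

      # Singular-value-based simultaneous MIMO gain/phase margins.
      import numpy as np

      def L(s):
          # Illustrative 2x2 loop transfer matrix with cross-coupling.
          return np.array([[2.0 / (s * (s + 1.0)), 0.5 / (s + 2.0)],
                           [0.2 / (s + 1.0),       1.0 / (s * (s + 0.5))]])

      omegas = np.logspace(-2, 2, 2000)
      alpha = min(np.linalg.svd(np.eye(2) + L(1j * w), compute_uv=False)[-1]
                  for w in omegas)
      gm_lo, gm_hi = 1 / (1 + alpha), 1 / (1 - alpha)   # valid for alpha < 1
      pm = np.degrees(2 * np.arcsin(alpha / 2))
      print(f"alpha = {alpha:.3f}, GM in [{gm_lo:.2f}, {gm_hi:.2f}], "
            f"PM = +/-{pm:.1f} deg")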

  8. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices.

    PubMed

    Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina

    2015-04-01

    An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., [Formula: see text]) and with a larger offset length (i.e., [Formula: see text]), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
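    The Hodges-Lehmann statistic used in the equivalence tests is simply the median of all pairwise differences between two feature samples; robustness can then be judged by whether the inter-system shift stays within the intra-system variation. A minimal sketch on simulated feature values (all numbers hypothetical):

      # Hodges-Lehmann shift estimate for one texture feature.
      import numpy as np

      def hodges_lehmann(x, y):
          return np.median(np.subtract.outer(x, y))

      rng = np.random.default_rng(2)
      sys_a_run1 = rng.normal(10.0, 0.5, 40)   # system A, repeat acquisition 1
      sys_a_run2 = rng.normal(10.0, 0.5, 40)   # system A, repeat acquisition 2
      sys_b      = rng.normal(10.3, 0.5, 40)   # system B

      print("intra-system shift: %+.3f" % hodges_lehmann(sys_a_run1, sys_a_run2))
      print("inter-system shift: %+.3f" % hodges_lehmann(sys_a_run1, sys_b))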

  9. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  10. Microscopy image segmentation tool: Robust image data analysis

    NASA Astrophysics Data System (ADS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for the analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for the analysis of porous anodic alumina scanning electron images, MIST's capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analyses. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  11. Robust model-based analysis of single-particle tracking experiments with Spot-On

    PubMed Central

    Grimm, Jonathan B; Lavis, Luke D

    2018-01-01

    Single-particle tracking (SPT) has become an important method to bridge biochemistry and cell biology since it allows direct observation of protein binding and diffusion dynamics in live cells. However, accurately inferring information from SPT studies is challenging due to biases in both data analysis and experimental design. To address analysis bias, we introduce ‘Spot-On’, an intuitive web-interface. Spot-On implements a kinetic modeling framework that accounts for known biases, including molecules moving out-of-focus, and robustly infers diffusion constants and subpopulations from pooled single-molecule trajectories. To minimize inherent experimental biases, we implement and validate stroboscopic photo-activation SPT (spaSPT), which minimizes motion-blur bias and tracking errors. We validate Spot-On using experimentally realistic simulations and show that Spot-On outperforms other methods. We then apply Spot-On to spaSPT data from live mammalian cells spanning a wide range of nuclear dynamics and demonstrate that Spot-On consistently and robustly infers subpopulation fractions and diffusion constants. PMID:29300163

  12. Robust model-based analysis of single-particle tracking experiments with Spot-On.

    PubMed

    Hansen, Anders S; Woringer, Maxime; Grimm, Jonathan B; Lavis, Luke D; Tjian, Robert; Darzacq, Xavier

    2018-01-04

    Single-particle tracking (SPT) has become an important method to bridge biochemistry and cell biology since it allows direct observation of protein binding and diffusion dynamics in live cells. However, accurately inferring information from SPT studies is challenging due to biases in both data analysis and experimental design. To address analysis bias, we introduce 'Spot-On', an intuitive web-interface. Spot-On implements a kinetic modeling framework that accounts for known biases, including molecules moving out-of-focus, and robustly infers diffusion constants and subpopulations from pooled single-molecule trajectories. To minimize inherent experimental biases, we implement and validate stroboscopic photo-activation SPT (spaSPT), which minimizes motion-blur bias and tracking errors. We validate Spot-On using experimentally realistic simulations and show that Spot-On outperforms other methods. We then apply Spot-On to spaSPT data from live mammalian cells spanning a wide range of nuclear dynamics and demonstrate that Spot-On consistently and robustly infers subpopulation fractions and diffusion constants. © 2018, Hansen et al.
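    The core of the kinetic model is a jump-length distribution written as a mixture of bound and free Brownian populations. A stripped-down sketch of that fit with scipy (the real tool also corrects for molecules moving out of focus and fits several time lags jointly; all parameters here are synthetic):

      # Two-state (bound/free) Brownian mixture fit to single-molecule jump lengths.
      import numpy as np
      from scipy.optimize import curve_fit

      DT = 0.01  # frame interval, seconds

      def jump_pdf(r, f_bound, d_bound, d_free):
          def rayleigh(r, d):
              return r / (2 * d * DT) * np.exp(-r**2 / (4 * d * DT))
          return f_bound * rayleigh(r, d_bound) + (1 - f_bound) * rayleigh(r, d_free)

      # Simulate jumps: 30% bound (D = 0.05 um^2/s), 70% free (D = 2.0 um^2/s).
      rng = np.random.default_rng(3)
      bound = rng.random(20000) < 0.3
      D = np.where(bound, 0.05, 2.0)
      jumps = rng.rayleigh(np.sqrt(2 * D * DT))

      hist, edges = np.histogram(jumps, bins=100, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      popt, _ = curve_fit(jump_pdf, centers, hist, p0=[0.5, 0.01, 1.0],
                          bounds=([0, 1e-4, 1e-4], [1, 10, 10]))
      print("f_bound=%.2f  D_bound=%.3f  D_free=%.2f" % tuple(popt))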

  13. Project risk management in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya

    2018-03-01

    This paper presents project risk management methods that allow risks in the construction of high-rise buildings to be better identified and managed throughout the life cycle of the project. One of the project risk management processes is the quantitative analysis of risks, which usually includes assessing the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, the robust approach of P. Huber is applied and extended to the regression analysis of project data. The suggested algorithms used to assess the parameters in the statistical models yield reliable estimates. Theoretical problems in the development of robust models built on the methodology of minimax estimates are reviewed, and an algorithm for the situation of asymmetric "contamination" is developed.
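    Huber's M-estimator is readily available in Python; the sketch below contrasts it with ordinary least squares on project-style data with asymmetric contamination (the data are synthetic):

      # Huber M-estimation: outliers are down-weighted instead of dominating the fit.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      x = np.linspace(0, 10, 50)
      y = 3.0 + 2.0 * x + rng.normal(0, 1, 50)
      y[::10] += 25.0                            # asymmetric "contamination"

      X = sm.add_constant(x)
      ols = sm.OLS(y, X).fit()
      rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
      print("OLS slope: %.2f   Huber slope: %.2f (true 2.00)"
            % (ols.params[1], rlm.params[1]))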

  14. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  15. Multilayer Perceptron for Robust Nonlinear Interval Regression Analysis Using Genetic Algorithms

    PubMed Central

    2014-01-01

    On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers in interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches depend on whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct a robust nonlinear interval regression model using a genetic algorithm. Outliers beyond or beneath the data interval impose only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets. PMID:25110755

  16. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    PubMed

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers in interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches depend on whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct a robust nonlinear interval regression model using a genetic algorithm. Outliers beyond or beneath the data interval impose only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.

  17. What Climate Information Do Water Managers Need to Make Robust, Long-Term Plans?

    NASA Astrophysics Data System (ADS)

    Duran, R.; Lempert, R.; Groves, D.

    2008-12-01

    What climate information do water managers need to respond to the threat of climate change? Southern California's Inland Empire Utilities Agency (IEUA) completed a long-range water resource management plan in 2005 that addressed expected economic and population growth in its service region, but did not consider the potential impacts of climate change. Using a robust decision making (RDM) approach for policy under deep uncertainty, we recently worked with IEUA to conduct a climate-change vulnerability and response-options analysis of the agency's long-range plans. This analysis suggests that IEUA is vulnerable to future climate change, but can significantly reduce this vulnerability by increasing its near-term conservation programs and by carefully monitoring and updating its plan in the years ahead. In addition to helping IEUA, this analysis provides important guidance on the types of climate and other information that can be most useful to water managers as they attempt to take robust, near-term actions to increase their resilience to climate change.

  18. High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing

    PubMed Central

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-01-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing such cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., a half-million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among a half-million cells. PMID:26599156

  19. A probabilistic approach to aircraft design emphasizing stability and control uncertainties

    NASA Astrophysics Data System (ADS)

    Delaurentis, Daniel Andrew

    In order to address identified deficiencies in current approaches to aerospace systems design, a new method has been developed. This new method for design is based on the premise that design is a decision making activity, and that deterministic analysis and synthesis can lead to poor, or misguided decision making. This is due to a lack of disciplinary knowledge of sufficient fidelity about the product, to the presence of uncertainty at multiple levels of the aircraft design hierarchy, and to a failure to focus on overall affordability metrics as measures of goodness. Design solutions are desired which are robust to uncertainty and are based on the maximum knowledge possible. The new method represents advances in the two following general areas. 1. Design models and uncertainty. The research performed completes a transition from a deterministic design representation to a probabilistic one through a modeling of design uncertainty at multiple levels of the aircraft design hierarchy, including: (1) Consistent, traceable uncertainty classification and representation; (2) Concise mathematical statement of the Probabilistic Robust Design problem; (3) Variants of the Cumulative Distribution Functions (CDFs) as decision functions for Robust Design; (4) Probabilistic Sensitivities which identify the most influential sources of variability. 2. Multidisciplinary analysis and design. Embedded in the probabilistic methodology is a new approach for multidisciplinary design analysis and optimization (MDA/O), employing disciplinary analysis approximations formed through statistical experimentation and regression. These approximation models are a function of design variables common to the system level as well as other disciplines. For aircraft, it is proposed that synthesis/sizing is the proper avenue for integrating multiple disciplines. Research hypotheses are translated into a structured method, which is subsequently tested for validity. Specifically, the implementation involves the study of the relaxed static stability technology for a supersonic commercial transport aircraft. The probabilistic robust design method is exercised resulting in a series of robust design solutions based on different interpretations of "robustness". Insightful results are obtained, and the ability of the method to expose trends in the design space is noted as a key advantage.
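    Two of the building blocks named above, CDFs as decision functions and probabilistic sensitivities, are easy to sketch with plain Monte Carlo. The response function and distributions below are toy stand-ins, not the supersonic-transport study:

      # Monte Carlo CDF of a design metric plus rank-correlation sensitivities.
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(5)
      n = 10000
      static_margin  = rng.normal(-0.02, 0.01, n)   # relaxed static stability
      cg_location    = rng.normal(0.45, 0.02, n)
      thrust_scatter = rng.normal(1.00, 0.05, n)

      # Toy affordability metric combining the uncertain inputs.
      metric = (100 + 400 * static_margin + 50 * (cg_location - 0.45)
                + 10 / thrust_scatter)

      xs, cdf = np.sort(metric), np.arange(1, n + 1) / n
      print("P(metric <= 105):", round(float(np.interp(105.0, xs, cdf)), 3))
      for name, x in [("static_margin", static_margin),
                      ("cg_location", cg_location),
                      ("thrust_scatter", thrust_scatter)]:
          print(name, "|Spearman rho| = %.2f" % abs(spearmanr(x, metric)[0]))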

  20. GPU-accelerated automatic identification of robust beam setups for proton and carbon-ion radiotherapy

    NASA Astrophysics Data System (ADS)

    Ammazzalorso, F.; Bednarz, T.; Jelen, U.

    2014-03-01

    We demonstrate acceleration on graphics processing units (GPUs) of the automatic identification of robust particle therapy beam setups, minimizing the negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on the GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation, and the numerical results were in agreement. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust when recomputed in the presence of various simulated positioning errors. From the point of view of performance, the average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in a clinical treatment planning interactive session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g., constraining of some setup) and re-execution. Finally, through OpenCL's portable parallelism, the new implementation is also suitable for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.

  1. Robustness Analysis and Reliable Flight Regime Estimation of an Integrated Resilient Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2008-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of the transport aircraft longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and the real parameter uncertainties (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.

  2. Validation of a robust proteomic analysis carried out on formalin-fixed paraffin-embedded tissues of the pancreas obtained from mouse and human.

    PubMed

    Kojima, Kyoko; Bowersock, Gregory J; Kojima, Chinatsu; Klug, Christopher A; Grizzle, William E; Mobley, James A

    2012-11-01

    A number of reports have recently emerged with a focus on the extraction of proteins from formalin-fixed paraffin-embedded (FFPE) tissues for MS analysis; however, reproducibility and robustness as compared to flash-frozen controls are generally overlooked. The goal of this study was to identify and validate a practical and highly robust approach for the proteomic analysis of FFPE tissues. FFPE and matched frozen pancreatic tissues obtained from mice (n = 8) were analyzed using 1D-nanoLC-MS(MS)(2) following workup with commercially available kits. The chosen approach for FFPE tissues was found to be highly comparable to that for frozen tissues. In addition, the total numbers of unique peptides identified in the two groups were highly similar, with 958 identified for FFPE and 1070 identified for frozen, and protein identifications that corresponded by approximately 80%. This approach was then applied to archived human FFPE pancreatic cancer specimens (n = 11) as compared to uninvolved tissues (n = 8), where 47 potential pancreatic ductal adenocarcinoma markers were identified as significantly increased, of which 28 were previously reported. Further, these proteins share strongly overlapping pathway associations with pancreatic cancer that include estrogen receptor α. Together, these data support the validation of an approach for the proteomic analysis of FFPE tissues that is straightforward and highly robust, and that can also be effectively applied toward translational studies of disease. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Robustness analysis of bogie suspension components Pareto optimised values

    NASA Astrophysics Data System (ADS)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
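    Sampling a design parameter lognormally around its Pareto-optimised value with a prescribed COV is a two-line transformation: for mean m and COV c, the underlying normal has sigma^2 = ln(1 + c^2) and mu = ln(m) - sigma^2/2. A short sketch (the stiffness value is illustrative):

      # Lognormal perturbation of a suspension parameter with a prescribed COV.
      import numpy as np

      def lognormal_around(mean, cov, size, rng):
          sigma2 = np.log(1.0 + cov**2)
          mu = np.log(mean) - sigma2 / 2.0
          return rng.lognormal(mu, np.sqrt(sigma2), size)

      rng = np.random.default_rng(6)
      k = lognormal_around(3.0e7, 0.1, 100000, rng)   # N/m, illustrative value
      print("sample mean %.3e, sample COV %.3f" % (k.mean(), k.std() / k.mean()))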

  4. Aural Classification and Temporal Robustness

    DTIC Science & Technology

    2010-11-01

    Canada – Atlantic; November 2010. Context (translated from French): the project aims at the development of a robust classifier that uses … [abstract truncated]. Surviving table-of-contents fragments cover discriminant scores and principal component analysis; surviving figure captions refer to a feature that allows class separation and to hypothetical clutter and target pdfs and posterior probabilities shown as surfaces.

  5. Fuzzy robust credibility-constrained programming for environmental management and planning.

    PubMed

    Zhang, Yimei; Hang, Guohe

    2010-06-01

    In this study, a fuzzy robust credibility-constrained programming (FRCCP) method is developed and applied to the planning of waste management systems. It incorporates the concepts of credibility-based chance-constrained programming and robust programming within an optimization framework. The developed method can reflect uncertainties presented as possibility densities by fuzzy membership functions. Fuzzy credibility constraints are transformed into their crisp equivalents at different credibility levels, and ordinary fuzzy inclusion constraints are converted to their robust deterministic counterparts by setting α-cut levels. The FRCCP method can provide different system costs under different credibility levels (λ). The sensitivity analyses show that the operating cost of the landfill is a critical parameter; for management, any factor that could induce cost fluctuations during landfill operation deserves serious observation and analysis. With FRCCP, useful solutions can be obtained to provide decision-making support for the long-term planning of solid waste management systems. The method could be further enhanced by incorporating methods of inexact analysis into its framework, and it can also be applied to other environmental management problems.

  6. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate this conflict, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators are substituted for the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). A simulation and a real-data study were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR performs better than the classical LDR and is comparable with existing robust LDRs.
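    The construction is: estimate each group's location and scatter robustly, pool the scatters, and form the usual linear rule. The MVV estimator is not in standard libraries, so the sketch below substitutes sklearn's minimum covariance determinant (MinCovDet) estimator, plainly a swapped-in robust estimator:

      # Robust linear discriminant rule with robust location/scatter estimates.
      import numpy as np
      from sklearn.covariance import MinCovDet

      def robust_ldr(X1, X2):
          m1, m2 = MinCovDet().fit(X1), MinCovDet().fit(X2)
          mu1, mu2 = m1.location_, m2.location_
          n1, n2 = len(X1), len(X2)
          pooled = ((n1 - 1) * m1.covariance_ + (n2 - 1) * m2.covariance_) \
                   / (n1 + n2 - 2)
          w = np.linalg.solve(pooled, mu1 - mu2)    # discriminant direction
          c = w @ (mu1 + mu2) / 2.0                 # cut-off point
          return lambda x: np.where(x @ w >= c, 1, 2)

      rng = np.random.default_rng(7)
      X1 = rng.normal([0, 0], 1.0, (100, 2))
      X2 = rng.normal([3, 3], 1.0, (100, 2))
      X1[:5] = [20.0, -20.0]                        # gross outliers in group 1
      classify = robust_ldr(X1, X2)
      test = rng.normal([0, 0], 1.0, (200, 2))      # clean test set from group 1
      print("misclassified:", int((classify(test) != 1).sum()), "of 200")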

  7. Control of a small working robot on a large flexible manipulator for suppressing vibrations: Development of a robust control law for a flexible robot and its stability analysis

    NASA Technical Reports Server (NTRS)

    Soo, Han Lee

    1991-01-01

    Researchers developed a robust control law for slow motions for the accurate trajectory control of a flexible robot. The control law does not need larger velocity gains than position gains, which some researchers require to ensure the stability of a rigid robot. Initial experimentation with the Small Articulated Manipulator (SAM) shows that control laws using smaller velocity gains are more robust to signal noise than control laws using larger velocity gains. The researchers analyzed the stability of the composite control law: the robust control for the slow motion and the strain-rate feedback for the fast control. The stability analysis was done by using a quadratic Liapunov function. The researchers found that the flexible motion of the links could be controlled by relating the input force to the flexible signals sensed near the tip of each link. The signals are contaminated by the time-delayed input force; however, the effect of the time-delayed input force can be reduced by giving a certain configuration to the SAM.

  8. Robust stability of fractional order polynomials with complicated uncertainty structure

    PubMed Central

    Şenol, Bilal; Pekař, Libor

    2017-01-01

    The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173
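    The numerical recipe is direct: at each frequency, sample the value set p(jω; q) over the uncertainty box and check that it stays away from the origin. A minimal sketch for an illustrative fractional-order family (not one of the paper's examples):

      # Value-set sampling and a zero-exclusion check for a fractional-order
      # polynomial p(s; q) = s**1.5 + q1*s**0.8 + q0 with box-bounded q.
      import numpy as np

      def value_set(omega, q0_range, q1_range, grid=60):
          q0 = np.linspace(*q0_range, grid)
          q1 = np.linspace(*q1_range, grid)
          Q0, Q1 = np.meshgrid(q0, q1)
          s = 1j * omega
          return s**1.5 + Q1 * s**0.8 + Q0   # principal-branch fractional powers

      # Zero exclusion: given one stable member of the family, robust stability
      # holds if the value set never contains the origin as omega sweeps.
      excluded = all(np.abs(value_set(w, (1.0, 2.0), (0.5, 1.5))).min() > 1e-3
                     for w in np.logspace(-2, 2, 400))
      print("zero excluded over the frequency grid:", excluded)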

  9. Inherent robustness of discrete-time adaptive control systems

    NASA Technical Reports Server (NTRS)

    Ma, C. C. H.

    1986-01-01

    Global stability robustness with respect to unmodeled dynamics, arbitrary bounded internal noise, as well as external disturbance is shown to exist for a class of discrete-time adaptive control systems when the regressor vectors of these systems are persistently exciting. Although fast adaptation is definitely undesirable, so far as attaining the greatest amount of global stability robustness is concerned, slow adaptation is shown to be not necessarily beneficial. The entire analysis in this paper holds for systems with slowly varying return difference matrices; the plants in these systems need not be slowly varying.

  10. Finite-time robust stabilization of uncertain delayed neural networks with discontinuous activations via delayed feedback control.

    PubMed

    Wang, Leimin; Shen, Yi; Sheng, Yin

    2016-04-01

    This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Learning and robustness to catch-and-release fishing in a shark social network

    PubMed Central

    Brown, Culum; Planes, Serge

    2017-01-01

    Individuals can play different roles in maintaining connectivity and social cohesion in animal populations and thereby influence population robustness to perturbations. We performed a social network analysis in a reef shark population to assess the vulnerability of the global network to node removal under different scenarios. We found that the network was generally robust to the removal of nodes with high centrality. The network appeared also highly robust to experimental fishing. Individual shark catchability decreased as a function of experience, as revealed by comparing capture frequency and site presence. Altogether, these features suggest that individuals learnt to avoid capture, which ultimately increased network robustness to experimental catch-and-release. Our results also suggest that some caution must be taken when using capture–recapture models often used to assess population size as assumptions (such as equal probabilities of capture and recapture) may be violated by individual learning to escape recapture. PMID:28298593
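    Node-removal robustness of this kind is typically summarized as a curve: the relative size of the largest connected component as nodes are deleted in order of centrality. A minimal networkx sketch on a synthetic graph (a toy stand-in for the shark association network):

      # Largest-component robustness curve under targeted node removal.
      import networkx as nx

      def robustness_curve(G, removal_order):
          H = G.copy()
          sizes = []
          for node in removal_order:
              H.remove_node(node)
              giant = max(nx.connected_components(H), key=len) if len(H) else set()
              sizes.append(len(giant) / G.number_of_nodes())
          return sizes

      G = nx.watts_strogatz_graph(60, 6, 0.2, seed=8)
      cent = nx.degree_centrality(G)
      targeted = sorted(G, key=cent.get, reverse=True)   # high centrality first
      print([round(s, 2) for s in robustness_curve(G, targeted[:10])])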

  12. Enhanced robust fractional order proportional-plus-integral controller based on neural network for velocity control of permanent magnet synchronous motor.

    PubMed

    Zhang, Bitao; Pi, YouGuo

    2013-07-01

    The traditional integer-order proportional-integral-differential (IO-PID) controller is sensitive to parameter variation and/or external load disturbance of a permanent magnet synchronous motor (PMSM). A fractional-order proportional-integral-differential (FO-PID) control scheme based on a robustness tuning method has been proposed to enhance robustness, but that robustness addresses only open-loop gain variation of the controlled plant. In this paper, an enhanced robust fractional-order proportional-plus-integral (ERFOPI) controller based on a neural network is proposed. The control law of the ERFOPI controller acts on a fractional-order implement function (FOIF) of the tracking error rather than on the tracking error directly, which, according to theoretical analysis, can enhance the robust performance of the system. Tuning rules and approaches, based on phase margin, crossover frequency specification, and robustness to gain variation, are introduced to obtain the parameters of the ERFOPI controller, and a neural network algorithm is used to adjust the parameter of the FOIF. Simulation and experimental results show that the method proposed in this paper not only achieves favorable tracking performance but is also robust with regard to external load disturbance and parameter variation. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  13. SiC Multi-Chip Power Modules as Power-System Building Blocks

    NASA Technical Reports Server (NTRS)

    Lostetter, Alexander; Franks, Steven

    2007-01-01

    The term "SiC MCPMs" (wherein "MCPM" signifies "multi-chip power module") denotes electronic power-supply modules containing multiple silicon carbide power devices and silicon-on-insulator (SOI) control integrated-circuit chips. SiC MCPMs are being developed as building blocks of advanced expandable, reconfigurable, fault-tolerant power-supply systems. Exploiting the ability of SiC semiconductor devices to operate at temperatures, breakdown voltages, and current densities significantly greater than those of conventional Si devices, the designs of SiC MCPMs and of systems comprising multiple SiC MCPMs are expected to afford a greater degree of miniaturization through stacking of modules with reduced requirements for heat sinking. Moreover, the higher-temperature capabilities of SiC MCPMs could enable operation in environments hotter than Si-based power systems can withstand. The stacked SiC MCPMs in a given system can be electrically connected in series, parallel, or a series/parallel combination to increase the overall power-handling capability of the system. In addition to power connections, the modules have communication connections. The SOI controllers in the modules communicate with each other as nodes of a decentralized control network, in which no single controller exerts overall command of the system. Control functions effected via the network include synchronization of switching of power devices and rapid reconfiguration of power connections to enable the power system to continue to supply power to a load in the event of failure of one of the modules. In addition to serving as building blocks of reliable power-supply systems, SiC MCPMs could be augmented with external control circuitry to make them perform additional power-handling functions as needed for specific applications: typical functions could include regulating voltages, storing energy, and driving motors. Because identical SiC MCPM building blocks could be utilized in a variety of ways, the cost and difficulty of designing new, highly reliable power systems would be reduced considerably. Several prototype DC-to-DC power-converter modules containing SiC power-switching devices were designed and built to demonstrate the feasibility of the SiC MCPM concept. In anticipation of a future need for operation at high temperature, the circuitry in the modules includes high-temperature inductors and capacitors. These modules were designed to be stacked to construct a system of four modules electrically connected in series and/or parallel. The packaging of the modules is designed to satisfy requirements for series and parallel interconnection among modules, high power density, high thermal efficiency, small size, and light weight. Each module includes four output power connectors two for serial and two for parallel output power connections among the modules. Each module also includes two signal connectors, electrically isolated from the power connectors, that afford four zones for signal interconnections among the SOI controllers. Finally, each module includes two input power connectors, through which it receives power from an in-line power bus. This design feature is included in anticipation of a custom-designed power bus incorporating sockets compatible with snap-on type connectors to enable rapid replacement of failed modules.

  14. Robust vortex lines, vortex rings, and hopfions in three-dimensional Bose-Einstein condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisset, R. N.; Wang, Wenlong; Ticknor, Christopher

    Performing a systematic Bogoliubov–de Gennes spectral analysis, we illustrate that stationary vortex lines, vortex rings, and more exotic states, such as hopfions, are robust in three-dimensional atomic Bose-Einstein condensates, for large parameter intervals. Importantly, we find that the hopfion can be stabilized in a simple parabolic trap, without the need for trap rotation or inhomogeneous interactions. We supplement our spectral analysis by studying the dynamics of such stationary states; we find them to be robust against significant perturbations of the initial state. In the unstable regimes, we not only identify the unstable mode, such as a quadrupolar or hexapolar mode, but we also observe the corresponding instability dynamics. Moreover, deep in the Thomas-Fermi regime, we investigate the particlelike behavior of vortex rings and hopfions.

  15. Robust vortex lines, vortex rings, and hopfions in three-dimensional Bose-Einstein condensates

    DOE PAGES

    Bisset, R. N.; Wang, Wenlong; Ticknor, Christopher; ...

    2015-12-07

    Performing a systematic Bogoliubov–de Gennes spectral analysis, we illustrate that stationary vortex lines, vortex rings, and more exotic states, such as hopfions, are robust in three-dimensional atomic Bose-Einstein condensates, for large parameter intervals. Importantly, we find that the hopfion can be stabilized in a simple parabolic trap, without the need for trap rotation or inhomogeneous interactions. We supplement our spectral analysis by studying the dynamics of such stationary states; we find them to be robust against significant perturbations of the initial state. In the unstable regimes, we not only identify the unstable mode, such as a quadrupolar or hexapolar mode, but we also observe the corresponding instability dynamics. Moreover, deep in the Thomas-Fermi regime, we investigate the particlelike behavior of vortex rings and hopfions.
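
    For reference, a Bogoliubov–de Gennes analysis of this kind linearizes the condensate about a stationary state ψ₀ with chemical potential μ; in standard notation (generic, not specific to this paper) the ansatz and eigenproblem read:

    ```latex
    % Linearization about a stationary state and the resulting
    % Bogoliubov-de Gennes eigenproblem.
    \[
      \psi(\mathbf{r},t) = e^{-i\mu t/\hbar}\!\left[\psi_0(\mathbf{r})
        + u(\mathbf{r})\,e^{-i\omega t} + v^*(\mathbf{r})\,e^{i\omega t}\right]
    \]
    \[
      \begin{pmatrix}
        \hat{H}_0 - \mu + 2g|\psi_0|^2 & g\psi_0^2 \\
        -g(\psi_0^*)^2 & -\bigl(\hat{H}_0 - \mu + 2g|\psi_0|^2\bigr)
      \end{pmatrix}
      \begin{pmatrix} u \\ v \end{pmatrix}
      = \hbar\omega \begin{pmatrix} u \\ v \end{pmatrix},
      \qquad
      \hat{H}_0 = -\frac{\hbar^2\nabla^2}{2m} + V(\mathbf{r}).
    \]
    ```

    Eigenfrequencies ω with nonzero imaginary part signal the dynamical instabilities whose mode structure (quadrupolar, hexapolar, etc.) the authors identify.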

  16. Control design and robustness analysis of a ball and plate system by using polynomial chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colón, Diego; Balthazar, José M.; Reis, Célia A. dos

    2014-12-10

    In this paper, we present a mathematical model of a ball and plate system and a control law, and analyze its robustness properties by using the polynomial chaos method. The ball rolls without slipping. An auxiliary robot vision system determines the bodies' positions and velocities and is used for control purposes. The actuators are two orthogonal DC motors that change the plate's angles with respect to the ground. The model is an extension of the ball and beam system and is highly nonlinear. The system is decoupled into two independent equations for the coordinates x and y. Finally, the resulting nonlinear closed-loop systems are analyzed by the polynomial chaos methodology, which considers some system parameters to be random variables and generates statistical data that can be used in the robustness analysis.

  17. Control design and robustness analysis of a ball and plate system by using polynomial chaos

    NASA Astrophysics Data System (ADS)

    Colón, Diego; Balthazar, José M.; dos Reis, Célia A.; Bueno, Átila M.; Diniz, Ivando S.; de S. R. F. Rosa, Suelia

    2014-12-01

    In this paper, we present a mathematical model of a ball and plate system and a control law, and analyze its robustness properties by using the polynomial chaos method. The ball rolls without slipping. An auxiliary robot vision system determines the bodies' positions and velocities and is used for control purposes. The actuators are two orthogonal DC motors that change the plate's angles with respect to the ground. The model is an extension of the ball and beam system and is highly nonlinear. The system is decoupled into two independent equations for the coordinates x and y. Finally, the resulting nonlinear closed-loop systems are analyzed by the polynomial chaos methodology, which considers some system parameters to be random variables and generates statistical data that can be used in the robustness analysis.
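
    To give the flavor of the methodology, here is a minimal non-intrusive polynomial chaos sketch in Python. The ball-and-plate closed loop is replaced by a hypothetical scalar response f of one Gaussian parameter; all names and numbers are illustrative, not taken from the paper.

    ```python
    # Minimal non-intrusive polynomial chaos expansion (PCE) sketch.
    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    def f(k):
        """Hypothetical closed-loop performance metric vs. parameter k."""
        return 1.0 / (1.0 + 2.0 * k**2)

    mu_k, sigma_k = 1.0, 0.1        # assumed uncertain parameter k ~ N(mu, sigma^2)
    order = 4                       # PCE truncation order
    xi, w = hermegauss(order + 1)   # Gauss-Hermite nodes/weights, weight e^{-x^2/2}

    # Project f onto probabilists' Hermite polynomials He_n(xi), xi ~ N(0,1):
    # c_n = E[f He_n] / E[He_n^2], with E[He_n^2] = n!
    c = np.empty(order + 1)
    fk = f(mu_k + sigma_k * xi)
    for n in range(order + 1):
        He_n = hermeval(xi, [0.0] * n + [1.0])
        c[n] = np.sum(w * fk * He_n) / (np.sqrt(2.0 * np.pi) * factorial(n))

    mean = c[0]                                                      # PCE mean
    var = sum(c[n]**2 * factorial(n) for n in range(1, order + 1))   # PCE variance
    print(f"mean ~ {mean:.4f}, std ~ {np.sqrt(var):.4f}")
    ```

    The statistics of the response recovered from the coefficients are exactly the kind of data the robustness analysis draws on.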

  18. Robust passivity analysis for discrete-time recurrent neural networks with mixed delays

    NASA Astrophysics Data System (ADS)

    Huang, Chuan-Kuei; Shu, Yu-Jeng; Chang, Koan-Yuh; Shou, Ho-Nien; Lu, Chien-Yu

    2015-02-01

    This article considers the robust passivity analysis for a class of discrete-time recurrent neural networks (DRNNs) with mixed time-delays and uncertain parameters. The mixed time-delays consist of both discrete time-varying and distributed time-delays in a given range, and the uncertain parameters are norm-bounded. The activation functions are assumed to be globally Lipschitz continuous. Based on a new bounding technique and an appropriate Lyapunov functional, a sufficient condition is derived that guarantees the desired robust passivity of the DRNNs, expressed in terms of a family of linear matrix inequalities (LMIs). Some free-weighting matrices are introduced to reduce the conservatism of the criterion obtained with the bounding technique. A numerical example is given to illustrate the effectiveness and applicability of the result.
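
    The LMI flavor of such passivity conditions can be illustrated on a much simpler object. The sketch below checks the classical discrete-time positive-real (passivity) LMI for an LTI system with CVXPY; it is not the paper's delayed-DRNN criterion, only an analogous feasibility problem with illustrative matrices.

    ```python
    # Passivity check for x+ = Ax + Bu, y = Cx + Du via a KYP-type LMI.
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.5, 0.1], [0.0, 0.4]])
    B = np.array([[1.0], [0.5]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[1.0]])

    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)

    # LMI: [[A'PA - P, A'PB - C'], [B'PA - C, B'PB - D - D']] negative definite
    lmi = cp.bmat([
        [A.T @ P @ A - P,  A.T @ P @ B - C.T],
        [B.T @ P @ A - C,  B.T @ P @ B - (D + D.T)],
    ])
    prob = cp.Problem(cp.Minimize(0),
                      [P >> 1e-6 * np.eye(n), lmi << -1e-9 * np.eye(n + 1)])
    prob.solve(solver=cp.SCS)
    print("passive (LMI feasible):", prob.status == "optimal")
    ```

    The paper's condition has the same structure, with delay terms and free-weighting matrices entering as additional blocks of the LMI.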

  19. Analysis of gene network robustness based on saturated fixed point attractors

    PubMed Central

    2014-01-01

    The analysis of gene network robustness to noise and mutation is important for fundamental and practical reasons. Robustness refers to the stability of the equilibrium expression state of a gene network to variations of the initial expression state and network topology. Numerical simulation of these variations is commonly used for the assessment of robustness. Since there exists a great number of possible gene network topologies and initial states, even millions of simulations may be still too small to give reliable results. When the initial and equilibrium expression states are restricted to being saturated (i.e., their elements can only take values 1 or −1 corresponding to maximum activation and maximum repression of genes), an analytical gene network robustness assessment is possible. We present this analytical treatment based on determination of the saturated fixed point attractors for sigmoidal function models. The analysis can determine (a) for a given network, which and how many saturated equilibrium states exist and which and how many saturated initial states converge to each of these saturated equilibrium states and (b) for a given saturated equilibrium state or a given pair of saturated equilibrium and initial states, which and how many gene networks, referred to as viable, share this saturated equilibrium state or the pair of saturated equilibrium and initial states. We also show that the viable networks sharing a given saturated equilibrium state must follow certain patterns. These capabilities of the analytical treatment make it possible to properly define and accurately determine robustness to noise and mutation for gene networks. Previous network research conclusions drawn from performing millions of simulations follow directly from the results of our analytical treatment. Furthermore, the analytical results provide criteria for the identification of model validity and suggest modified models of gene network dynamics. The yeast cell-cycle network is used as an illustration of the practical application of this analytical treatment. PMID:24650364
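
    A minimal sketch of the saturated-fixed-point test described above, assuming the common sign-threshold idealization of the sigmoidal model: a state s in {-1,+1}^n is a saturated equilibrium iff sign(W s) = s componentwise. The interaction matrix W below is hypothetical.

    ```python
    # Enumerate saturated equilibria of a small gene network.
    import itertools
    import numpy as np

    def saturated_equilibria(W):
        n = W.shape[0]
        eq = []
        for s in itertools.product([-1, 1], repeat=n):
            s = np.array(s)
            h = W @ s
            # Equilibrium test in the saturation limit: sign(W s) == s
            if np.all(h != 0) and np.all(np.sign(h) == s):
                eq.append(s)
        return eq

    # Hypothetical 3-gene interaction matrix (rows: targets, cols: regulators).
    W = np.array([[ 1.0, -0.5,  0.0],
                  [-0.5,  1.0,  0.3],
                  [ 0.2,  0.0,  1.0]])
    for s in saturated_equilibria(W):
        print(s)
    ```

    Counting, over all W in some family, which saturated states persist as equilibria is the combinatorial core of the robustness statistics the authors derive analytically.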

  20. Network robustness assessed within a dual connectivity framework: joint dynamics of the Active and Idle Networks.

    PubMed

    Tejedor, Alejandro; Longjas, Anthony; Zaliapin, Ilya; Ambroj, Samuel; Foufoula-Georgiou, Efi

    2017-08-17

    Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. But the current definition of robustness accounts for only half of the story: the connectivity of the nodes unaffected by the attack. Here we propose a new framework to assess network robustness, wherein the connectivity of the affected nodes is also taken into consideration, acknowledging that it plays a crucial role in properly evaluating the overall network robustness in terms of its future recovery from the attack. Specifically, we propose a dual perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN) composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and that of building up the IN. We show, via analysis of well-known prototype networks and real world data, that trade-offs between the efficiency of Active and Idle Network dynamics give rise to surprising robustness crossovers and re-rankings, which can have significant implications for decision making.
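
    A toy sketch of the dual Active/Idle bookkeeping, on an illustrative scale-free graph under a highest-degree-first attack. The paper's actual metric combines the efficiencies of both processes; here we only track the largest connected component of each network.

    ```python
    # Track AN (unaffected nodes) and IN (affected nodes) during an attack.
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 2, seed=1)
    order = sorted(G, key=G.degree, reverse=True)   # highest-degree first

    def lcc(H):
        """Size of the largest connected component (0 for an empty graph)."""
        return max(map(len, nx.connected_components(H))) if len(H) else 0

    removed = set()
    for k, node in enumerate(order[:50], start=1):
        removed.add(node)
        active = G.subgraph(n for n in G if n not in removed)   # AN
        idle = G.subgraph(removed)                              # IN
        if k % 10 == 0:
            print(f"removed {k:3d}: |LCC(AN)| = {lcc(active):3d}, "
                  f"|LCC(IN)| = {lcc(idle):3d}")
    ```

    The crossovers the authors report arise when an attack that fragments the AN quickly also happens to assemble a well-connected IN, and vice versa.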

  1. The robustness of ecosystems to the species loss of community

    NASA Astrophysics Data System (ADS)

    Cai, Qing; Liu, Jiming

    2016-10-01

    To study the robustness of ecosystems is crucial to promote the sustainable development of human society. This paper analyzes the robustness of ecosystems from the viewpoint of community species loss. Unlike the existing definitions, we first introduce the notion of a community as a population of species belonging to the same trophic level. We then put forward a novel multiobjective optimization model which can be utilized to discover community structures from arbitrary unipartite networks. Because an ecosystem is commonly represented as a multipartite network, we further introduce a mechanism of competition among species whereby a multipartite network is transformed into a unipartite signed network without loss of species interaction information. Finally, we examine three strategies to test the robustness of an ecosystem. Our experiments indicate that ecosystems are robust to random species loss within a community but fragile to targeted losses. We also investigate the relationship between the robustness to species loss of an ecosystem and that of its community-composed network. Our experiments indicate that the robustness analysis of a large-scale ecosystem to species loss may be akin to that of its community-composed network, which is usually small in size.

  2. Multi-criteria robustness analysis of metro networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiangrong; Koç, Yakup; Derrible, Sybil; Ahmad, Sk Nasir; Pino, Willem J. A.; Kooij, Robert E.

    2017-05-01

    Metros (heavy rail transit systems) are integral parts of urban transportation systems. Failures in their operations can have serious impacts on urban mobility, and measuring their robustness is therefore critical. Moreover, as physical networks, metros can be viewed as topological entities, and as such they possess measurable network properties. In this article, by using network science and graph theory, we investigate ten theoretical and four numerical robustness metrics and their performance in quantifying the robustness of 33 metro networks under random failures or targeted attacks. We find that the ten theoretical metrics capture two distinct aspects of the robustness of metro networks. First, several metrics place an emphasis on alternative paths. Second, other metrics place an emphasis on the length of the paths. To account for all aspects, we standardize all ten indicators and plot them on radar diagrams to assess the overall robustness of the metro networks. Overall, we find that Tokyo and Rome are the most robust networks. Rome benefits from short transfers, and Tokyo has a significant number of transfer stations, both in the city center and in the peripheral area of the city, promoting both a higher number of alternative paths and overall relatively short path-lengths.
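
    A sketch of the standardize-and-compare step on stand-in graphs. The three indicators below are illustrative choices mirroring the two aspects the paper identifies (alternative paths vs. path length); the paper itself uses ten indicators and radar plots.

    ```python
    # Compute a few topological robustness indicators and z-score them.
    import networkx as nx
    import numpy as np

    nets = {
        "ring":   nx.cycle_graph(30),
        "tree":   nx.balanced_tree(2, 4),
        "small-world": nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=3),
    }
    rows = []
    for name, G in nets.items():
        rows.append([
            nx.algebraic_connectivity(G),   # spectral robustness (Fiedler value)
            nx.global_efficiency(G),        # emphasis on short paths
            nx.average_clustering(G),       # emphasis on alternative paths
        ])
    X = np.array(rows)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardized indicators
    for (name, _), z in zip(nets.items(), Z):
        print(name, np.round(z, 2))
    ```

    Standardizing puts indicators with very different scales onto one radar plot, which is what allows an overall ranking across the 33 systems.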

  3. Dynamic robustness of knowledge collaboration network of open source product development community

    NASA Astrophysics Data System (ADS)

    Zhou, Hong-Li; Zhang, Xiao-Dong

    2018-01-01

    As an emergent innovative design style, open source product development communities are characterized by a self-organizing, mass-collaborative, networked structure. The robustness of the community is critical to its performance. Using the complex network modeling method, the knowledge collaboration network of the community is formulated, and the robustness of the network is systematically and dynamically studied. The characteristics of the network along the development period determine that its robustness should be studied over three time stages: the start-up, development and mature stages of the network. Five kinds of user-loss pattern are designed to assess the network's robustness under different situations in each of these three time stages. Two indexes - the largest connected component and the network efficiency - are used to evaluate the robustness of the community. The proposed approach is applied in an existing open source car design community. The results indicate that the knowledge collaboration networks show different levels of robustness in different stages and under different user-loss patterns. Such analysis can be applied to provide protection strategies for the key users involved in knowledge dissemination and knowledge contribution at different stages of the network, thereby promoting the sustainable and stable development of the open source community.

  4. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation to meet certain phase margin requirements. Metrics-driven adaptive control is evaluated for a second-order system that represents pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the BLSA analysis time-window is also evaluated with respect to meeting the stability margin criteria.

  5. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  6. Sensitivity Analysis of Empirical Results on Civil War Onset

    ERIC Educational Resources Information Center

    Hegre, Havard; Sambanis, Nicholas

    2006-01-01

    In the literature on civil war onset, several empirical results are not robust or replicable across studies. Studies use different definitions of civil war and analyze different time periods, so readers cannot easily determine if differences in empirical results are due to those factors or if most empirical results are just not robust. The authors…

  7. Robust synchronization control scheme of a population of nonlinear stochastic synthetic genetic oscillators under intrinsic and extrinsic molecular noise via quorum sensing.

    PubMed

    Chen, Bor-Sen; Hsu, Chih-Yuan

    2012-10-26

    Collective rhythms of gene regulatory networks have been a subject of considerable interest for biologists and theoreticians, in particular the synchronization of dynamic cells mediated by intercellular communication. Synchronization of a population of synthetic genetic oscillators is an important design in practical applications, because such a population distributed over different host cells needs to exploit molecular phenomena simultaneously in order for a biological phenomenon to emerge. However, this synchronization may be corrupted by intrinsic kinetic parameter fluctuations and extrinsic environmental molecular noise. Therefore, robust synchronization is an important design topic in nonlinear stochastic coupled synthetic genetic oscillators with intrinsic kinetic parameter fluctuations and extrinsic molecular noise. Initially, the condition for robust synchronization of synthetic genetic oscillators was derived based on the Hamilton-Jacobi inequality (HJI). We found that if the synchronization robustness can confer enough intrinsic robustness to tolerate intrinsic parameter fluctuation and enough extrinsic robustness to filter the environmental noise, then robust synchronization of coupled synthetic genetic oscillators is guaranteed. If the synchronization robustness of a population of nonlinear stochastic coupled synthetic genetic oscillators distributed over different host cells cannot be maintained, then robust synchronization can be enhanced by external control input through quorum sensing molecules. In order to simplify the analysis and design of robust synchronization of nonlinear stochastic synthetic genetic oscillators, the fuzzy interpolation method was employed to interpolate several local linear stochastic coupled systems to approximate the nonlinear stochastic coupled system, so that the HJI-based synchronization design problem could be replaced by a simple linear matrix inequality (LMI)-based design problem, which can be solved easily with the help of the LMI toolbox in MATLAB. If the synchronization robustness criterion, i.e., synchronization robustness ≥ intrinsic robustness + extrinsic robustness, is satisfied, then the stochastic coupled synthetic oscillators can be robustly synchronized in spite of intrinsic parameter fluctuation and extrinsic noise. If the synchronization robustness criterion is violated, an external control scheme that adds an inducer can be designed to improve the synchronization robustness of the coupled synthetic genetic oscillators. The investigated robust synchronization criteria and proposed external control method are useful for a population of coupled synthetic networks with emergent synchronization behavior, especially for multi-cellular, engineered networks.

  8. Robust synchronization control scheme of a population of nonlinear stochastic synthetic genetic oscillators under intrinsic and extrinsic molecular noise via quorum sensing

    PubMed Central

    2012-01-01

    Background Collective rhythms of gene regulatory networks have been a subject of considerable interest for biologists and theoreticians, in particular the synchronization of dynamic cells mediated by intercellular communication. Synchronization of a population of synthetic genetic oscillators is an important design in practical applications, because such a population distributed over different host cells needs to exploit molecular phenomena simultaneously in order for a biological phenomenon to emerge. However, this synchronization may be corrupted by intrinsic kinetic parameter fluctuations and extrinsic environmental molecular noise. Therefore, robust synchronization is an important design topic in nonlinear stochastic coupled synthetic genetic oscillators with intrinsic kinetic parameter fluctuations and extrinsic molecular noise. Results Initially, the condition for robust synchronization of synthetic genetic oscillators was derived based on the Hamilton-Jacobi inequality (HJI). We found that if the synchronization robustness can confer enough intrinsic robustness to tolerate intrinsic parameter fluctuation and enough extrinsic robustness to filter the environmental noise, then robust synchronization of coupled synthetic genetic oscillators is guaranteed. If the synchronization robustness of a population of nonlinear stochastic coupled synthetic genetic oscillators distributed over different host cells cannot be maintained, then robust synchronization can be enhanced by external control input through quorum sensing molecules. In order to simplify the analysis and design of robust synchronization of nonlinear stochastic synthetic genetic oscillators, the fuzzy interpolation method was employed to interpolate several local linear stochastic coupled systems to approximate the nonlinear stochastic coupled system, so that the HJI-based synchronization design problem could be replaced by a simple linear matrix inequality (LMI)-based design problem, which can be solved easily with the help of the LMI toolbox in MATLAB. Conclusion If the synchronization robustness criterion, i.e., synchronization robustness ≥ intrinsic robustness + extrinsic robustness, is satisfied, then the stochastic coupled synthetic oscillators can be robustly synchronized in spite of intrinsic parameter fluctuation and extrinsic noise. If the synchronization robustness criterion is violated, an external control scheme that adds an inducer can be designed to improve the synchronization robustness of the coupled synthetic genetic oscillators. The investigated robust synchronization criteria and proposed external control method are useful for a population of coupled synthetic networks with emergent synchronization behavior, especially for multi-cellular, engineered networks. PMID:23101662

  9. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
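
    The innovations-based scheme can be illustrated with a scalar toy filter: flag a failure whenever the normalized innovation exceeds a threshold. This is a generic sketch, not the paper's turbojet design; the 3-sigma threshold and the injected sensor bias are arbitrary.

    ```python
    # Scalar Kalman filter with innovation-based sensor failure detection.
    import numpy as np

    rng = np.random.default_rng(0)
    a, q, r = 0.95, 0.01, 0.04           # plant pole, process/measurement noise
    x, xhat, p = 0.0, 0.0, 1.0
    threshold = 3.0                      # ~3-sigma test on the innovation

    for t in range(100):
        x = a * x + rng.normal(0, np.sqrt(q))
        y = x + rng.normal(0, np.sqrt(r))
        if t >= 60:                      # inject a sensor bias failure
            y += 1.0
        xhat, p = a * xhat, a * a * p + q   # prediction
        s = p + r                           # innovation variance
        nu = y - xhat                       # innovation
        if abs(nu) / np.sqrt(s) > threshold:
            print(f"t={t}: failure flagged (|nu|/sigma = {abs(nu)/np.sqrt(s):.1f})")
        k = p / s
        xhat, p = xhat + k * nu, (1 - k) * p   # update
    ```

    The paper's threshold selector replaces the fixed 3-sigma rule with thresholds derived from modeling-error bounds, noise properties, and the failure class.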

  10. Health-related fitness profiles in adolescents with complex congenital heart disease.

    PubMed

    Klausen, Susanne Hwiid; Wetterslev, Jørn; Søndergaard, Lars; Andersen, Lars L; Mikkelsen, Ulla Ramer; Dideriksen, Kasper; Zoffmann, Vibeke; Moons, Philip

    2015-04-01

    This study investigates whether subgroups of different health-related fitness (HrF) profiles exist among girls and boys with complex congenital heart disease (ConHD) and how these are associated with lifestyle behaviors. We measured the cardiorespiratory fitness, muscle strength, and body composition of 158 adolescents aged 13-16 years with previous surgery for a complex ConHD. Data on lifestyle behaviors were collected concomitantly between October 2010 and April 2013. A cluster analysis was conducted to identify profiles with similar HrF. For comparisons between clusters, multivariate analyses of covariance were used to test the differences in lifestyle behaviors. Three distinct profiles were formed: (1) Robust (43, 27%; 20 girls and 23 boys); (2) Moderately Robust (85, 54%; 37 girls and 48 boys); and (3) Less robust (30, 19%; 9 girls and 21 boys). The participants in the Robust clusters reported leading a physically active lifestyle and participants in the Less robust cluster reported leading a sedentary lifestyle. Diagnoses were evenly distributed between clusters. The cluster analysis attributed some of the variability in cardiorespiratory fitness among adolescents with complex ConHD to lifestyle behaviors and physical activity. Profiling of HrF offers a valuable new option in the management of person-centered health promotion. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  11. Applying robust variant of Principal Component Analysis as a damage detector in the presence of outliers

    NASA Astrophysics Data System (ADS)

    Gharibnezhad, Fahit; Mujica, Luis E.; Rodellar, José

    2015-01-01

    Using Principal Component Analysis (PCA) for Structural Health Monitoring (SHM) has received considerable attention over the past few years. PCA has been used not only as a direct method to identify, classify and localize damage but also as a significant primary step for other methods. Despite its many strengths, PCA is very sensitive to outliers. Outliers are anomalous observations that can affect the variance and the covariance, which are vital parts of the PCA method; consequently, results based on PCA in the presence of outliers are not fully satisfactory. As its main contribution, this work suggests the use of a robust variant of PCA, insensitive to outliers, as an effective way to deal with this problem in the SHM field. In addition, the robust PCA is compared with classical PCA in terms of detecting probable damage. The comparison shows that robust PCA can distinguish damage much better than the classical version, and in many cases allows detection where classical PCA is unable to discern between damaged and undamaged structures. Moreover, different types of robust PCA are compared with each other, as well as with their classical counterpart, in terms of damage detection. All results are obtained through experiments with an aircraft turbine blade using piezoelectric transducers as sensors and actuators, with simulated damage added.
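
    One common robust-PCA variant, PCA on a Minimum Covariance Determinant (MCD) estimate of the covariance, illustrates the idea. A sketch on synthetic data; the paper compares several robust PCA flavors on SHM measurements, not this one specifically.

    ```python
    # Classical PCA vs. PCA on a robust (MCD) covariance estimate.
    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # baseline data
    X[:10] += 15.0                                            # a few outliers

    classical = np.linalg.eigh(np.cov(X.T))[1][:, ::-1]       # classical PCs
    robust_cov = MinCovDet(random_state=0).fit(X).covariance_
    robust = np.linalg.eigh(robust_cov)[1][:, ::-1]           # robust PCs

    # A large angle between leading components indicates the outliers
    # dragged the classical subspace away from the bulk of the data.
    cos = abs(classical[:, 0] @ robust[:, 0])
    print(f"angle between leading PCs: {np.degrees(np.arccos(min(cos, 1.0))):.1f} deg")
    ```

    In damage detection, the point is that outlier-contaminated baselines distort the classical subspace, so damage indices computed from it lose discriminating power.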

  12. Robust allocation of a defensive budget considering an attacker's private information.

    PubMed

    Nikoofal, Mohammad E; Zhuang, Jun

    2012-05-01

    Attackers' private information is one of the main issues in defensive resource allocation games in homeland security. The outcome of a defense resource allocation decision critically depends on the accuracy of estimations about the attacker's attributes. However, terrorists' goals may be unknown to the defender, necessitating robust decisions by the defender. This article develops a robust-optimization game-theoretical model for identifying optimal defense resource allocation strategies for a rational defender facing a strategic attacker, while the attacker's valuation of targets, the most critical attribute of the attacker, is unknown but belongs to bounded distribution-free intervals. To the best of our knowledge, no previous research has applied robust optimization in homeland security resource allocation when uncertainty is defined in bounded distribution-free intervals. The key features of our model include (1) modeling uncertainty in attackers' attributes, where uncertainty is characterized by bounded intervals; (2) finding the robust-optimization equilibrium for the defender using concepts dealing with the budget of uncertainty and the price of robustness; and (3) applying the proposed model to real data. © 2011 Society for Risk Analysis.

  13. Stochastic Noise and Synchronisation during Dictyostelium Aggregation Make cAMP Oscillations Robust

    PubMed Central

    Kim, Jongrae; Heslop-Harrison, Pat; Postlethwaite, Ian; Bates, Declan G

    2007-01-01

    Stable and robust oscillations in the concentration of adenosine 3′, 5′-cyclic monophosphate (cAMP) are observed during the aggregation phase of starvation-induced development in Dictyostelium discoideum. In this paper we use mathematical modelling together with ideas from robust control theory to identify two factors which appear to make crucial contributions to ensuring the robustness of these oscillations. Firstly, we show that stochastic fluctuations in the molecular interactions play an important role in preserving stable oscillations in the face of variations in the kinetics of the intracellular network. Secondly, we show that synchronisation of the aggregating cells through the diffusion of extracellular cAMP is a key factor in ensuring robustness of the oscillatory waves of cAMP observed in Dictyostelium cell cultures to cell-to-cell variations. A striking and quite general implication of the results is that the robustness analysis of models of oscillating biomolecular networks (circadian clocks, Ca2+ oscillations, etc.) can only be done reliably by using stochastic simulations, even in the case where molecular concentrations are very high. PMID:17997595

  14. Robust control of combustion instabilities

    NASA Astrophysics Data System (ADS)

    Hong, Boe-Shong

    Several interactive dynamical subsystems, each of which has its own time-scale and physical significance, are decomposed to build feedback-controlled, robust combustion-fluid dynamics. On the fast time-scale, the phenomenon of combustion instability corresponds to the internal feedback of two subsystems: acoustic dynamics and flame dynamics, which are parametrically dependent on the slow-time-scale mean-flow dynamics controlled for global performance by a mean-flow controller. This dissertation constructs such a control system, through modeling, analysis and synthesis, to deal with model uncertainties, environmental noise and time-varying mean-flow operation. The conservation law is decomposed into fast-time acoustic dynamics and slow-time mean-flow dynamics, used to synthesize an LPV (linear parameter varying) L2-gain robust control law, in which a robust observer is embedded for estimating and controlling the internal status, while achieving trade-offs among robustness, performance and operation. The robust controller is formulated as two LPV-type Linear Matrix Inequalities (LMIs), whose numerical solver is developed by the finite-element method. Some important issues related to physical understanding and engineering application are discussed in simulated results of the control system.

  15. Robust and transferable quantification of NMR spectral quality using IROC analysis

    NASA Astrophysics Data System (ADS)

    Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.

    2017-12-01

    Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.

  16. Missing Value Monitoring Enhances the Robustness in Proteomics Quantitation.

    PubMed

    Matafora, Vittoria; Corno, Andrea; Ciliberto, Andrea; Bachi, Angela

    2017-04-07

    In global proteomic analysis, it is estimated that proteins span from millions to fewer than 100 copies per cell. The challenge of protein quantitation by classic shotgun proteomic techniques stems from missing values in peptides belonging to low-abundance proteins, which lowers intra-run reproducibility and compromises downstream statistical analysis. Here, we present a new analytical workflow, MvM (missing value monitoring), able to recover quantitation of missing values generated by shotgun analysis. In particular, we used confident data-dependent acquisition (DDA) quantitation only for proteins measured in all the runs, while we filled the missing values with data-independent acquisition analysis using the library previously generated in DDA. We analyzed cell cycle regulated proteins, as they are low-abundance proteins with highly dynamic expression levels. Indeed, we found that cell cycle related proteins are the major components of the missing-value-rich proteome. Using the MvM workflow, we doubled the number of robustly quantified cell cycle related proteins, and we reduced the number of missing values, achieving robust quantitation for proteins over ∼50 molecules per cell. MvM allows lower quantification variance among replicates for low-abundance proteins with respect to DDA analysis, which demonstrates the potential of this novel workflow to measure low-abundance, dynamically regulated proteins.

  17. RoBuST: an integrated genomics resource for the root and bulb crop families Apiaceae and Alliaceae

    PubMed Central

    2010-01-01

    Background Root and bulb vegetables (RBV) include carrots, celeriac (root celery), parsnips (Apiaceae), onions, garlic, and leek (Alliaceae)—food crops grown globally and consumed worldwide. Few data analysis platforms are currently available where data collection, annotation and integration initiatives are focused on RBV plant groups. Scientists working on RBV include breeders, geneticists, taxonomists, plant pathologists, and plant physiologists who use genomic data for a wide range of activities including the development of molecular genetic maps, delineation of taxonomic relationships, and investigation of molecular aspects of gene expression in biochemical pathways and disease responses. With genomic data coming from such diverse areas of plant science, availability of a community resource focused on these RBV data types would be of great interest to this scientific community. Description The RoBuST database has been developed to initiate a platform for collecting and organizing genomic information useful for RBV researchers. The current release of RoBuST contains genomics data for 294 Alliaceae and 816 Apiaceae plant species and has the following features: (1) comprehensive sequence annotations of 3663 genes, 5959 RNAs, 22,723 ESTs and 11,438 regulatory sequence elements from Apiaceae and Alliaceae plant families; (2) graphical tools for visualization and analysis of sequence data; (3) access to traits, biosynthetic pathways, genetic linkage maps and molecular taxonomy data associated with Alliaceae and Apiaceae plants; and (4) a comprehensive plant splice signal repository of 659,369 splice signals collected from 6015 plant species for comparative analysis of plant splicing patterns. Conclusions RoBuST, available at http://robust.genome.com, provides an integrated platform for researchers to effortlessly explore and analyze genomic data associated with root and bulb vegetables. PMID:20691054

  18. Identification, Comparison, and Validation of Robust Rumen Microbial Biomarkers for Methane Emissions Using Diverse Bos Taurus Breeds and Basal Diets

    PubMed Central

    Auffret, Marc D.; Stewart, Robert; Dewhurst, Richard J.; Duthie, Carol-Anne; Rooke, John A.; Wallace, Robert J.; Freeman, Tom C.; Snelling, Timothy J.; Watson, Mick; Roehe, Rainer

    2018-01-01

    Previous shotgun metagenomic analyses of ruminal digesta identified some microbial information that might be useful as biomarkers to select cattle that emit less methane (CH4), which is a potent greenhouse gas. It is known that methane production (g/kgDMI) and, to an extent, the microbial community are heritable, and therefore biomarkers can offer a method of selecting cattle for low methane emitting phenotypes. In this study a wider range of Bos Taurus cattle, varying in breed and diet, was investigated to determine microbial communities and genetic markers associated with high/low CH4 emissions. Digesta samples were taken from 50 beef cattle, comprising four cattle breeds, receiving two basal diets containing different proportions of concentrate and also including feed additives (nitrate or lipid) that may influence methane emissions. A combination of partial least squares analysis and network analysis enabled the identification of the most significant and robust biomarkers of CH4 emissions (VIP > 0.8) across diets and breeds when comparing all potential biomarkers together. Genes associated with the hydrogenotrophic methanogenesis pathway converting carbon dioxide to methane provided the dominant biomarkers of CH4 emissions, and methanogens were the microbial populations most closely correlated with CH4 emissions and identified by metagenomics. Moreover, these genes grouped together, as confirmed by network analysis for each independent experiment and when combined. Finally, the genes involved in the methane synthesis pathway explained a higher proportion of variation in CH4 emissions by PLS analysis compared to phylogenetic parameters or functional genes. These results confirmed the reproducibility of the analysis and the advantage of using these genes as robust biomarkers of CH4 emissions. Volatile fatty acid concentrations and ratios were significantly correlated with CH4, but these factors were not identified as robust enough for predictive purposes. Moreover, the methanotrophic Methylomonas genus was found to be negatively correlated with CH4. Finally, this study confirmed the importance of using robust and applicable biomarkers from the microbiome as a proxy of CH4 emissions across diverse production systems and environments. PMID:29375511
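
    A generic sketch of the PLS + VIP screening step on synthetic data. The VIP computation below is the standard formula; the feature counts and the 0.8 cutoff mirror the abstract, while everything else (data, names) is illustrative.

    ```python
    # PLS regression of an emission phenotype on gene abundances, with
    # Variable Importance in Projection (VIP) feature screening.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 30))                 # 50 animals x 30 gene features
    y = X[:, :3] @ [0.8, 0.6, 0.4] + rng.normal(scale=0.5, size=50)

    pls = PLSRegression(n_components=2).fit(X, y)
    W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_

    # Standard VIP formula: weight each component by the y-variance it explains.
    p, a = W.shape
    ssy = np.array([(Q[0, k] ** 2) * (T[:, k] @ T[:, k]) for k in range(a)])
    wnorm = (W / np.linalg.norm(W, axis=0)) ** 2
    vip = np.sqrt(p * (wnorm @ ssy) / ssy.sum())
    print("features with VIP > 0.8:", np.where(vip > 0.8)[0])
    ```

    Features passing the VIP cutoff in such a screen are then cross-checked (here, by network analysis across experiments) before being accepted as robust biomarkers.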

  19. Identification, Comparison, and Validation of Robust Rumen Microbial Biomarkers for Methane Emissions Using Diverse Bos Taurus Breeds and Basal Diets.

    PubMed

    Auffret, Marc D; Stewart, Robert; Dewhurst, Richard J; Duthie, Carol-Anne; Rooke, John A; Wallace, Robert J; Freeman, Tom C; Snelling, Timothy J; Watson, Mick; Roehe, Rainer

    2017-01-01

    Previous shotgun metagenomic analyses of ruminal digesta identified some microbial information that might be useful as biomarkers to select cattle that emit less methane (CH4), which is a potent greenhouse gas. It is known that methane production (g/kgDMI) and, to an extent, the microbial community are heritable, and therefore biomarkers can offer a method of selecting cattle for low methane emitting phenotypes. In this study a wider range of Bos Taurus cattle, varying in breed and diet, was investigated to determine microbial communities and genetic markers associated with high/low CH4 emissions. Digesta samples were taken from 50 beef cattle, comprising four cattle breeds, receiving two basal diets containing different proportions of concentrate and also including feed additives (nitrate or lipid) that may influence methane emissions. A combination of partial least squares analysis and network analysis enabled the identification of the most significant and robust biomarkers of CH4 emissions (VIP > 0.8) across diets and breeds when comparing all potential biomarkers together. Genes associated with the hydrogenotrophic methanogenesis pathway converting carbon dioxide to methane provided the dominant biomarkers of CH4 emissions, and methanogens were the microbial populations most closely correlated with CH4 emissions and identified by metagenomics. Moreover, these genes grouped together, as confirmed by network analysis for each independent experiment and when combined. Finally, the genes involved in the methane synthesis pathway explained a higher proportion of variation in CH4 emissions by PLS analysis compared to phylogenetic parameters or functional genes. These results confirmed the reproducibility of the analysis and the advantage of using these genes as robust biomarkers of CH4 emissions. Volatile fatty acid concentrations and ratios were significantly correlated with CH4, but these factors were not identified as robust enough for predictive purposes. Moreover, the methanotrophic Methylomonas genus was found to be negatively correlated with CH4. Finally, this study confirmed the importance of using robust and applicable biomarkers from the microbiome as a proxy of CH4 emissions across diverse production systems and environments.

  20. A μ analysis-based, controller-synthesis framework for robust bioinspired visual navigation in less-structured environments.

    PubMed

    Keshavan, J; Gremillion, G; Escobar-Alvarez, H; Humbert, J S

    2014-06-01

    Safe, autonomous navigation by aerial microsystems in less-structured environments is a difficult challenge to overcome with current technology. This paper presents a novel visual-navigation approach that combines bioinspired wide-field processing of optic flow information with control-theoretic tools for synthesis of closed loop systems, resulting in robustness and performance guarantees. Structured singular value analysis is used to synthesize a dynamic controller that provides good tracking performance in uncertain environments without resorting to explicit pose estimation or extraction of a detailed environmental depth map. Experimental results with a quadrotor demonstrate the vehicle's robust obstacle-avoidance behaviour in a straight line corridor, an S-shaped corridor and a corridor with obstacles distributed in the vehicle's path. The computational efficiency and simplicity of the current approach offers a promising alternative to satisfying the payload, power and bandwidth constraints imposed by aerial microsystems.

  1. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image acquisition conditions, background noise and high variation in colony features demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm, taking segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It also allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
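
    A bare-bones stand-in for such a segmentation pipeline: a global Otsu threshold plus a distance-transform watershed to split touching colonies. AutoCellSeg itself adds multi-thresholding, feedback on segmentation plausibility, and user-selected feature ranges; this sketch only shows the skeleton.

    ```python
    # Rough CFU counting: threshold, distance transform, watershed split.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.filters import threshold_otsu
    from skimage.measure import label
    from skimage.segmentation import watershed

    def count_colonies(gray):
        """Return (colony count, label image) for a grayscale plate image."""
        binary = gray > threshold_otsu(gray)
        distance = ndi.distance_transform_edt(binary)
        # One marker per local maximum of the distance map (colony center).
        coords = peak_local_max(distance, min_distance=5, labels=label(binary))
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        segmented = watershed(-distance, markers, mask=binary)
        return segmented.max(), segmented
    ```

    The feedback loop in tools like AutoCellSeg re-runs such a step with adjusted thresholds whenever the resulting object features fall outside plausible ranges.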

  2. Neutrality and Robustness in Evo-Devo: Emergence of Lateral Inhibition

    PubMed Central

    Munteanu, Andreea; Solé, Ricard V.

    2008-01-01

    Embryonic development is defined by the hierarchical dynamical process that translates genetic information (genotype) into a spatial gene expression pattern (phenotype) providing the positional information for the correct unfolding of the organism. The nature and evolutionary implications of genotype–phenotype mapping still remain key topics in evolutionary developmental biology (evo-devo). We have explored here issues of neutrality, robustness, and diversity in evo-devo by means of a simple model of gene regulatory networks. The small size of the system allowed an exhaustive analysis of the entire fitness landscape and the extent of its neutrality. This analysis shows that evolution leads to a class of robust genetic networks with an expression pattern characteristic of lateral inhibition. This class is a repertoire of distinct implementations of this key developmental process, the diversity of which provides valuable clues about its underlying causal principles. PMID:19023404

  3. Overview of computational control research at UT Austin

    NASA Technical Reports Server (NTRS)

    Bong, Wie

    1989-01-01

    An overview of current research activities at UT Austin is presented, discussing technical issues in the following areas: (1) Computer-Aided Nonlinear Control Design: In this project, the describing function method is employed for the nonlinear control analysis and design of a flexible spacecraft equipped with pulse-modulated reaction jets. The INCA program has been enhanced to allow the numerical calculation of describing functions as well as nonlinear limit cycle analysis in the frequency domain; (2) Robust Linear Quadratic Gaussian (LQG) Compensator Synthesis: Robust control design techniques and software tools are developed for flexible space structures with parameter uncertainty. In particular, an interactive, robust multivariable control design capability is being developed for the INCA program; and (3) LQR-Based Autonomous Control System for the Space Station: In this project, real-time implementation of an LQR-based autonomous control system is investigated for the space station with time-varying inertias and significant multibody dynamic interactions.

  4. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

    We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3 microns, with sizes up to 80 MB per channel). The system was developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward replacing traditional medical treatments with personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around time compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  5. Anthropometric geography applied to the analysis of socioeconomic disparities: cohort trends and spatial patterns of height and robustness in 20th-century Spain.

    PubMed

    Camara, Antonio D; Roman, Joan Garcia

    2015-11-01

    Anthropometrics have been widely used to study the influence of environmental factors on health and nutritional status. In contrast, anthropometric geography has not often been employed to approximate the dynamics of spatial disparities associated with socioeconomic and demographic changes. Spain exhibited intense disparity and change during the middle decades of the 20th century, with the result that the life courses of the corresponding cohorts were associated with diverse environmental conditions. This was also true of the Spanish territories. This paper presents insights concerning the relationship between socioeconomic changes and living conditions by combining the analysis of cohort trends and the anthropometric cartography of height and physical build. This analysis is conducted for Spanish male cohorts born 1934-1973 that were recorded in the Spanish military statistics. This information is interpreted in light of region-level data on GDP and infant mortality. Our results show an anthropometric convergence across regions that, nevertheless, did not substantially modify the spatial patterns of robustness, featuring primarily robust northeastern regions and weak Central-Southern regions. These patterns persisted until the 1990s (cohorts born during the 1970s). For the most part, anthropometric disparities were associated with socioeconomic disparities, although the former lessened over time to a greater extent than the latter. Interestingly, the various anthropometric indicators utilized here do not point to the same conclusions. Some discrepancies between height and robustness patterns have been found that moderate the statements from the analysis of cohort height alone regarding the level and evolution of living conditions across Spanish regions.

  6. Robust Kalman filter design for predictive wind shear detection

    NASA Technical Reports Server (NTRS)

    Stratton, Alexander D.; Stengel, Robert F.

    1991-01-01

    Severe, low-altitude wind shear is a threat to aviation safety. Airborne sensors under development measure the radial component of wind along a line directly in front of an aircraft. In this paper, optimal estimation theory is used to define a detection algorithm to warn of hazardous wind shear from these sensors. To achieve robustness, a wind shear detection algorithm must distinguish threatening wind shear from less hazardous gustiness, despite variations in wind shear structure. This paper presents statistical analysis methods to refine wind shear detection algorithm robustness. Computational methods predict the ability to warn of severe wind shear while avoiding false warnings. The comparative capability of the detection algorithm as a function of its design parameters is determined, identifying designs that provide robust detection of severe wind shear.

  7. Robust dynamics in minimal hybrid models of genetic networks

    PubMed Central

    Perkins, Theodore J.; Wilds, Roy; Glass, Leon

    2010-01-01

    Many gene-regulatory networks necessarily display robust dynamics that are insensitive to noise and stable under evolution. We propose that a class of hybrid systems can be used to relate the structure of these networks to their dynamics and provide insight into the origin of robustness. In these systems, the genes are represented by logical functions, and the controlling transcription factor protein molecules are real variables, which are produced and destroyed. As the transcription factor concentrations cross thresholds, they control the production of other transcription factors. We discuss mathematical analysis of these systems and show how the concepts of robustness and minimality can be used to generate putative logical organizations based on observed symbolic sequences. We apply the methods to control of the cell cycle in yeast. PMID:20921006

  8. Robust dynamics in minimal hybrid models of genetic networks.

    PubMed

    Perkins, Theodore J; Wilds, Roy; Glass, Leon

    2010-11-13

    Many gene-regulatory networks necessarily display robust dynamics that are insensitive to noise and stable under evolution. We propose that a class of hybrid systems can be used to relate the structure of these networks to their dynamics and provide insight into the origin of robustness. In these systems, the genes are represented by logical functions, and the controlling transcription factor protein molecules are real variables, which are produced and destroyed. As the transcription factor concentrations cross thresholds, they control the production of other transcription factors. We discuss mathematical analysis of these systems and show how the concepts of robustness and minimality can be used to generate putative logical organizations based on observed symbolic sequences. We apply the methods to control of the cell cycle in yeast.

  9. Neural network robust tracking control with adaptive critic framework for uncertain nonlinear systems.

    PubMed

    Wang, Ding; Liu, Derong; Zhang, Yun; Li, Hongyi

    2018-01-01

    In this paper, we aim to tackle the neural robust tracking control problem for a class of nonlinear systems using the adaptive critic technique. The main contribution is that a neural-network-based robust tracking control scheme is established for nonlinear systems involving matched uncertainties. The augmented system considering the tracking error and the reference trajectory is formulated and then addressed under adaptive critic optimal control formulation, where the initial stabilizing controller is not needed. The approximate control law is derived via solving the Hamilton-Jacobi-Bellman equation related to the nominal augmented system, followed by closed-loop stability analysis. The robust tracking control performance is guaranteed theoretically via Lyapunov approach and also verified through simulation illustration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. A Fast and Robust UHPLC-MRM-MS Method to Characterize and Quantify Grape Skin Tannins after Chemical Depolymerization.

    PubMed

    Pinasseau, Lucie; Verbaere, Arnaud; Roques, Maryline; Meudec, Emmanuelle; Vallverdú-Queralt, Anna; Terrier, Nancy; Boulet, Jean-Claude; Cheynier, Véronique; Sommerer, Nicolas

    2016-10-21

    A rapid, sensitive, and selective analysis method using ultra high performance liquid chromatography coupled with triple-quadrupole mass spectrometry (UHPLC-QqQ-MS) has been developed for the characterization and quantification of grape skin flavan-3-ols after acid-catalysed depolymerization in the presence of phloroglucinol (phloroglucinolysis). Because compound detection is based on specific MS transitions in Multiple Reaction Monitoring (MRM) mode, this fast-gradient, robust method allows analysis of the constitutive units of grape skin proanthocyanidins, including some present in trace amounts, in a single injection, with a throughput of six samples per hour. This method was applied to a set of 214 grape skin samples from 107 different red and white grape cultivars grown under two conditions in the vineyard, irrigated or non-irrigated. The results of triplicate analyses confirmed the robustness of the method, which was thus proven suitable for high-throughput and large-scale metabolomics studies. Moreover, these preliminary results suggest that analysis of tannin composition is relevant to investigating the genetic bases of grape response to drought.

  11. Using robust principal component analysis to alleviate day-to-day variability in EEG based emotion classification.

    PubMed

    Ping-Keng Jao; Yuan-Pin Lin; Yi-Hsuan Yang; Tzyy-Ping Jung

    2015-08-01

    An emerging challenge for emotion classification using electroencephalography (EEG) is how to effectively alleviate day-to-day variability in raw data. This study employed robust principal component analysis (RPCA) to address the problem, with the hypothesis that background or emotion-irrelevant EEG perturbations lead to certain variability across days and somehow submerge emotion-related EEG dynamics. The empirical results of this study validated our hypothesis and demonstrated the RPCA's feasibility through the analysis of a five-day dataset of 12 subjects. The RPCA allowed separating the sparse emotion-relevant EEG dynamics from the accompanying background perturbations across days. Subsequently, leveraging the RPCA-purified EEG trials from more days appeared to improve the emotion-classification performance steadily, which was not found in the case using the raw EEG features. Therefore, incorporating the RPCA with existing emotion-aware machine-learning frameworks on a longitudinal dataset of each individual may shed light on the development of a robust affective brain-computer interface (ABCI) that can alleviate ecological inter-day variability.
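
    The RPCA decomposition referred to here is commonly computed by principal component pursuit: split a data matrix M into a low-rank part L (shared structure) plus a sparse part S (day-specific perturbations). A textbook inexact-ALM sketch on synthetic data, not the authors' implementation:

    ```python
    # Robust PCA by principal component pursuit (inexact ALM iteration).
    import numpy as np

    def rpca(M, lam=None, mu=None, n_iter=200):
        m, n = M.shape
        lam = lam or 1.0 / np.sqrt(max(m, n))
        mu = mu or 0.25 * m * n / np.abs(M).sum()
        S = np.zeros_like(M); Y = np.zeros_like(M)
        shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
        for _ in range(n_iter):
            # Singular-value thresholding step for the low-rank term
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * shrink(sig, 1.0 / mu)) @ Vt
            S = shrink(M - L + Y / mu, lam / mu)     # sparse term
            Y += mu * (M - L - S)                    # dual update
        return L, S

    rng = np.random.default_rng(0)
    M = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 40))   # rank-4 signal
    M[rng.random(M.shape) < 0.05] += 10.0                     # sparse outliers
    L, S = rpca(M)
    print("recovered rank(L) =", np.linalg.matrix_rank(L, tol=1e-6))
    ```

    In the EEG setting, rows would be trials collected across days; classification is then run on the recovered structure rather than the raw features.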

  12. Using Velocity Anisotropy to Analyze Magnetohydrodynamic Turbulence in Giant Molecular Clouds

    NASA Astrophysics Data System (ADS)

    Madrid, Alecio; Hernandez, Audra

    2018-01-01

    Structure function (SF) analysis is a powerful tool for gauging the Alfvénic properties of magnetohydrodynamic (MHD) simulations, yet the literature lacks a rigorous investigation of its limitations in the context of radio spectroscopy. This study takes an in-depth look at the limitations of SF analysis for studying MHD turbulence in giant molecular cloud (GMC) spectroscopic data. MHD turbulence plays a critical role in the structure and evolution of GMCs, as well as in the formation of the sub-structures known to spawn stellar progenitors. Existing detection methods are neither economical nor robust (e.g., dust polarization), and nowhere is this more apparent than in the theoretical-observational divide in the current literature. A significant limitation of GMC spectroscopy stems from the large variation in methods used for extracting GMCs from survey data; a robust method for studying MHD turbulence must therefore gauge physical properties correctly regardless of the extraction method used. While SF analysis has demonstrated strong potential across a range of simulated conditions, this study raises significant concerns about its feasibility as a robust tool in GMC spectroscopy.
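
    For reference, the second-order structure function is SF2(l) = <(v(x+l) - v(x))^2>; comparing SF2 along and across the projected magnetic field is what gauges Alfvénic anisotropy. A minimal sketch for a synthetic 1-D velocity slice (not the study's simulations) follows.

    import numpy as np

    def structure_function(v, max_lag):
        """Second-order structure function SF2(l) for regularly sampled 1-D data."""
        lags = np.arange(1, max_lag + 1)
        sf2 = np.array([np.mean((v[l:] - v[:-l]) ** 2) for l in lags])
        return lags, sf2

    # Synthetic velocity field with an approximately Kolmogorov-like spectrum
    rng = np.random.default_rng(1)
    n = 4096
    k = np.fft.rfftfreq(n, d=1.0); k[0] = 1.0
    v = np.fft.irfft(k ** (-5.0 / 6.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size)), n)
    lags, sf2 = structure_function(v, 200)
    # In an inertial range one expects roughly SF2 ~ l**(2/3).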

  13. Analysis and Synthesis of Robust Data Structures

    DTIC Science & Technology

    1990-08-01

    Excerpt fragments: 1.3.2 Multiversion Software … 1.3.3 Robust Data Structure … 1.4 … The fault-tolerance techniques discussed in this context are multiversion software, an adaptation of the N-modular redundancy (NMR) technique, and recovery blocks, which are an adaptation of … implementations using these features for such a hybrid approach. Avizienis [AC77] was the first to adapt the NMR technique into …

  14. Efficient and Robust Signal Approximations

    DTIC Science & Technology

    2009-05-01

    Excerpt fragments: Remark: permutation matrices are both orthogonal and doubly stochastic [62]. We will now show how to further simplify the Robust Coding … Keywords: signal processing, image compression, independent component analysis, sparse …

  15. Robust nonlinear canonical correlation analysis: application to seasonal climate forecasting

    NASA Astrophysics Data System (ADS)

    Cannon, A. J.; Hsieh, W. W.

    2008-02-01

    Robust variants of nonlinear canonical correlation analysis (NLCCA) are introduced to improve performance on datasets with low signal-to-noise ratios, for example those encountered when making seasonal climate forecasts. The neural network model architecture of standard NLCCA is kept intact, but the cost functions used to set the model parameters are replaced with more robust variants. The Pearson product-moment correlation in the double-barreled network is replaced by the biweight midcorrelation, and the mean squared error (mse) in the inverse mapping networks can be replaced by the mean absolute error (mae). Robust variants of NLCCA are demonstrated on a synthetic dataset and are used to forecast sea surface temperatures in the tropical Pacific Ocean based on the sea level pressure field. Results suggest that adoption of the biweight midcorrelation can lead to improved performance, especially when a strong, common event exists in both predictor/predictand datasets. Replacing the mse by the mae leads to improved performance on the synthetic dataset, but not on the climate dataset except at the longest lead time, which suggests that the appropriate cost function for the inverse mapping networks is more problem dependent.
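
    As a sketch of the key robust ingredient, the biweight midcorrelation replaces means with medians and downweights points far from the median before correlating. A minimal numpy implementation with the usual 9-MAD tuning constant (illustrative only):

    import numpy as np

    def biweight_midcorrelation(x, y, c=9.0):
        """Robust replacement for the Pearson product-moment correlation."""
        def centered_weighted(a):
            med = np.median(a)
            mad = np.median(np.abs(a - med)) + 1e-12
            u = (a - med) / (c * mad)
            w = (1.0 - u ** 2) ** 2
            w[np.abs(u) >= 1.0] = 0.0          # extreme points get zero weight
            return (a - med) * w
        xt, yt = centered_weighted(x), centered_weighted(y)
        return np.sum(xt * yt) / np.sqrt(np.sum(xt ** 2) * np.sum(yt ** 2))

    rng = np.random.default_rng(2)
    x = rng.normal(size=200)
    y = x + 0.5 * rng.normal(size=200)
    y[:5] += 20.0                               # a few gross outliers
    print(np.corrcoef(x, y)[0, 1], biweight_midcorrelation(x, y))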

  16. Aeroservoelastic Uncertainty Model Identification from Flight Data

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    2001-01-01

    Uncertainty modeling is a critical element in the estimation of robust stability margins for stability-boundary prediction and robust flight control system development. To date, aeroservoelastic data analysis has paid too little attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models, with associated uncertainty structures, from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge is to update analytical models from flight data estimates while deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to obtain input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.

  17. Multiframe video coding for improved performance over wireless channels.

    PubMed

    Budagavi, M; Gibson, J D

    2001-01-01

    We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder exploits the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over the single-frame BMC (SF-BMC) approach used, for example, in the base-level H.263 codec. The MF-BMC approach is also inherently able to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme that randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The proposed MF-BMC coders are multi-frame extensions of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
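
    The Markov-chain intuition is easy to reproduce with a toy simulation (a sketch, not the paper's model): after a single transmission error, count how many subsequent frames stay corrupted when each frame predicts from a randomly chosen one of the previous k_ref frames. With k_ref = 1 the error propagates until refreshed; with several candidate reference frames it can die out.

    import random

    def propagation_length(k_ref, trials=2000, horizon=300, seed=0):
        """Average number of corrupted frames following one transmission error."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            window = [False] * (k_ref - 1) + [True]       # one freshly corrupted frame
            for _ in range(horizon):
                nxt = window[rng.randrange(len(window))]  # inherit reference state
                total += nxt
                window = (window + [nxt])[-k_ref:]
        return total / trials

    # With k_ref=1 the count saturates at the horizon; with k_ref=5 it is far lower.
    print(propagation_length(1), propagation_length(5))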

  18. Improving tablet coating robustness by selecting critical process parameters from retrospective data.

    PubMed

    Galí, A; García-Montoya, E; Ascaso, M; Pérez-Lozano, P; Ticó, J R; Miñarro, M; Suñé-Negre, J M

    2016-09-01

    Although tablet coating processes are widely used in the pharmaceutical industry, they often lack adequate robustness: up-scaling can be challenging, as minor changes in parameters can lead to varying quality results. The objectives were to select critical process parameters (CPP) using retrospective data for a commercial product and to establish a design of experiments (DoE) that would improve the robustness of the coating process. A retrospective analysis was performed on data from 36 commercial batches, selected on the basis of the quality results generated during batch release, some of which revealed quality deviations in the appearance of the coated tablets. The product is already marketed and belongs to the portfolio of a multinational pharmaceutical company. The Statgraphics 5.1 software was used to process the data and determine the critical process parameters in order to propose new working ranges. This study confirms that it is possible to determine the critical process parameters and create design spaces based on retrospective data from commercial batches, converting this type of analysis into a tool for optimizing the robustness of existing processes. Our results show that a design space can be established with minimal investment in experiments, since existing commercial batch data are processed statistically.
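
    A schematic of this kind of retrospective screening (hypothetical parameter names and synthetic batch records, not the study's dataset or the Statgraphics workflow): regress a quality response on the recorded process parameters and rank candidate CPPs by the size of their effect.

    import numpy as np

    rng = np.random.default_rng(3)
    names = ["spray_rate", "inlet_air_temp", "pan_speed", "atomizing_pressure"]
    X = rng.normal(size=(36, len(names)))        # 36 batches, standardized settings
    defects = 2.0 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(scale=0.8, size=36)

    # Ordinary least squares with intercept, plus a t-statistic per coefficient
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, defects, rcond=None)
    dof = len(defects) - A.shape[1]
    sigma2 = np.sum((defects - A @ beta) ** 2) / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
    for name, b, s in sorted(zip(names, beta[1:], se[1:]), key=lambda t: -abs(t[1] / t[2])):
        print(f"{name:20s} coef={b:+.2f}  t={b / s:+.1f}")
    # Parameters with large |t| are the CPP candidates around which to build the DoE.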

  19. Design of Launch Vehicle Flight Control Systems Using Ascent Vehicle Stability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Alaniz, Abran; Hall, Robert; Bedossian, Nazareth; Hall, Charles; Jackson, Mark

    2011-01-01

    A launch vehicle represents a complicated flex-body structural environment for flight control system design. The Ascent-vehicle Stability Analysis Tool (ASAT) was developed to address this complexity in the design and analysis of a launch vehicle. The design objective for the flight control system of a launch vehicle is to follow guidance commands as closely as possible while robustly maintaining system stability. A constrained optimization approach takes advantage of modern computational control techniques to simultaneously design multiple control systems in compliance with the required design specifications. "Tower clearance" and "load relief" designs have been achieved for the liftoff and maximum-dynamic-pressure flight regions, respectively, in the presence of large wind disturbances. The robustness of the flight control system designs has been verified in frequency-domain Monte Carlo analysis using ASAT.

  20. Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy

    PubMed Central

    Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.

    2017-01-01

    Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
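
    The virtual-expansion step is, at its core, a nonparametric bootstrap over the sample population. A minimal sketch with an invented toy growth model and made-up rates (not the VEPART code or data): resample subjects with replacement, pick the optimal protocol per virtual population, and see how often each protocol wins.

    import numpy as np

    rng = np.random.default_rng(4)
    sample_rates = np.array([0.21, 0.25, 0.19, 0.28, 0.23, 0.26, 0.22, 0.24])

    def best_dose(rates, doses=(0.5, 1.0, 2.0), kill=0.25):
        """Toy criterion: smallest dose making the mean net growth rate negative."""
        for d in doses:
            if np.mean(rates) - kill * d < 0:
                return d
        return doses[-1]

    choices = [best_dose(rng.choice(sample_rates, size=sample_rates.size, replace=True))
               for _ in range(10_000)]
    for d, c in zip(*np.unique(choices, return_counts=True)):
        print(f"dose {d}: optimal in {c / 100:.1f}% of virtual populations")
    # A protocol that wins across nearly all virtual populations is "robust".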

  1. Integration of Off-Track Sonic Boom Analysis in Conceptual Design of Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Li, Wu

    2011-01-01

    A highly desired capability for the conceptual design of aircraft is the ability to rapidly and accurately evaluate new concepts to avoid adverse trade decisions that may hinder the development process in the later stages of design. Evaluating the robustness of new low-boom concepts is important for the conceptual design of supersonic aircraft. Here, robustness means that the aircraft configuration has a low-boom ground signature at both under- and off-track locations. An integrated process for off-track boom analysis is developed to facilitate the design of robust low-boom supersonic aircraft. The integrated off-track analysis can also be used to study the sonic boom impact and to plan future flight trajectories where flight conditions and ground elevation might have a significant effect on ground signatures. The key enabler for off-track sonic boom analysis is accurate computational fluid dynamics (CFD) solutions for off-body pressure distributions. To ensure the numerical accuracy of the off-body pressure distributions, a mesh study is performed with Cart3D to determine the mesh requirements for off-body CFD analysis, and comparisons are made between the Cart3D and USM3D results. The variations in ground signatures that result from changes in the initial location of the near-field waveform are also examined. Finally, a complete under- and off-track sonic boom analysis is presented for two distinct supersonic concepts to demonstrate the capability of the integrated analysis process.

  2. SU-F-BRD-04: Robustness Analysis of Proton Breast Treatments Using An Alpha-Stable Distribution Parameterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Heuvel, F; Hackett, S; Fiorini, F

    Purpose: Currently, planning systems allow robustness calculations to be performed, but a generalized assessment methodology is not yet available. We introduce and evaluate a methodology to quantify the robustness of a plan on an individual-patient basis. Methods: We characterize a treatment instance (i.e., one single fraction delivery) by describing the dose distribution within an organ as an alpha-stable distribution. The parameters of the distribution (shape (α), scale (γ), position (δ), and symmetry (β)) vary continuously (in a mathematical sense) as the distributions change with the different positions. The rate of change of the parameters provides a measure of the robustness of the treatment. The methodology is tested in a planning study of 25 patients with known residual errors at each fraction. Each patient was planned using Eclipse with an IBA proton beam model. The residual error space for every patient was sampled 30 times, yielding 31 treatment plans for each patient and dose distributions in 5 organs. The rate of change of the parameters as a function of Euclidean distance from the original plan was analyzed. Results: More than 1,000 dose distributions were analyzed. For 4 of the 25 patients the rate of change of the scale parameter (γ) was considerably higher than the lowest change rate, indicating a lack of robustness. The sign of the rate of change of the shape parameter (α) also seemed indicative, but the experiment lacked the power to prove significance. Conclusion: There are indications that this robustness measure is a valuable tool for a more patient-individualized approach to the determination of margins. In a further study we will also evaluate this robustness measure for photon treatments, assess the impact of breath-hold techniques, and use a Monte Carlo-based dose deposition calculation. A principal component analysis is also planned.
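
    As a hedged sketch of the parameterization idea (not the authors' pipeline), SciPy's levy_stable distribution can fit the four parameters (alpha, beta, location, scale) to a sampled dose distribution, after which the change in the fitted scale between the nominal and a perturbed plan can be tracked. The doses below are synthetic stand-ins, and levy_stable.fit can be quite slow.

    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(5)
    dose_nominal = levy_stable.rvs(1.8, 0.0, loc=50.0, scale=2.0, size=500, random_state=rng)
    dose_shifted = levy_stable.rvs(1.8, 0.0, loc=50.0, scale=2.6, size=500, random_state=rng)

    # Fit (alpha, beta, loc, scale); generic maximum likelihood, so expect it to be slow
    a0, b0, loc0, sc0 = levy_stable.fit(dose_nominal)
    a1, b1, loc1, sc1 = levy_stable.fit(dose_shifted)
    print(f"scale change: {sc1 - sc0:+.2f} (rapid growth with setup error suggests a fragile plan)")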

  3. Capacity planning for waste management systems: an interval fuzzy robust dynamic programming approach.

    PubMed

    Nie, Xianghui; Huang, Guo H; Li, Yongping

    2009-11-01

    This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods: it allows highly uncertain information to be effectively communicated into the optimization process by introducing the concept of a fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, the robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for the capacity expansion of waste management facilities and the relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained and that the proposed IFRDP approach is applicable to practical problems associated with highly complex and uncertain information.

  4. Robust linear discriminant models to solve financial crisis in banking sectors

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni

    2014-12-01

    Linear discriminant analysis (LDA) is a widely used technique in pattern classification, based on an equation that minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends strongly on the assumptions of normality and homoscedasticity. Several robust estimators for LDA, such as the Minimum Covariance Determinant (MCD), S-estimators, and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models, and performance is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
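
    A minimal sketch of the robust-LDA idea using scikit-learn's Minimum Covariance Determinant estimator in place of the classical mean and covariance (synthetic two-class data, not the Malaysian banking dataset):

    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(6)
    X0 = rng.normal([0.0, 0.0], 1.0, size=(80, 2))   # "non-distress" banks
    X1 = rng.normal([3.0, 3.0], 1.0, size=(40, 2))   # "distress" banks
    X0[:4] += 15.0                                   # gross outliers in class 0

    def robust_lda_fit(groups):
        """LDA with MCD location/scatter estimates and a pooled robust scatter."""
        mcds = [MinCovDet(random_state=0).fit(X) for X in groups]
        n = sum(len(X) for X in groups)
        pooled = sum((len(X) - 1) * m.covariance_
                     for X, m in zip(groups, mcds)) / (n - len(groups))
        inv = np.linalg.inv(pooled)
        return [(inv @ m.location_,
                 -0.5 * m.location_ @ inv @ m.location_ + np.log(len(X) / n))
                for X, m in zip(groups, mcds)]

    params = robust_lda_fit([X0, X1])
    scores = np.stack([np.array([[0.5, 0.2], [2.8, 3.3]]) @ w + b for w, b in params], axis=1)
    print(scores.argmax(axis=1))    # -> [0 1] despite the contamination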

  5. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, A.; Künsch, H. R.; Schwierz, C.; Stahel, W. A.

    2012-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, particularly in environmental data sets. Outlying observations may result from errors (e.g., in data transcription) or from local perturbations in the processes responsible for a given pattern of spatial variation. As an example, the spatial distribution of some trace metal in the soils of a region may be distorted by the emissions of local anthropogenic sources. Outliers affect the modelling of the large-scale spatial variation (the so-called external drift or trend), the estimation of the spatial dependence of the residual variation, and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise, because one needs parameter estimates to decide which observations are potential outliers; moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Earlier studies on robust geostatistics focused on robust estimation of the sample variogram and on ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) [2] proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) [1] for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on a robustification of the estimating equations for Gaussian REML estimation. Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and unsampled locations, and kriging variances. The method has been implemented in an R package. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by an analysis of the Tarrawarra soil moisture data set [3].
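
    For a flavor of the robustness issue at the variogram-estimation stage (a sketch of the classical robust alternative of Cressie and Hawkins rather than the REML machinery proposed here), compare the method-of-moments variogram with a robust variant on contaminated 1-D data:

    import numpy as np

    def variograms(z, max_lag):
        """Matheron (classical) and Cressie-Hawkins (robust) empirical variograms
        for regularly spaced 1-D data."""
        lags = np.arange(1, max_lag + 1)
        classical, robust = [], []
        for h in lags:
            d = z[h:] - z[:-h]
            classical.append(0.5 * np.mean(d ** 2))
            # fourth power of the mean root-increment, with Cressie-Hawkins bias correction
            robust.append(0.5 * np.mean(np.sqrt(np.abs(d))) ** 4 / (0.457 + 0.494 / d.size))
        return lags, np.array(classical), np.array(robust)

    rng = np.random.default_rng(7)
    z = 0.1 * np.cumsum(rng.normal(size=500))     # smooth spatial signal
    z[rng.random(z.size) < 0.02] += 8.0           # a few gross outliers
    lags, gam_c, gam_r = variograms(z, 25)
    # The classical estimate is strongly inflated by the outliers; the robust one far less so.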

  6. Non-negative Tensor Factorization for Robust Exploratory Big-Data Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Boian; Vesselinov, Velimir Valentinov; Djidjev, Hristo Nikolov

    Currently, large multidimensional datasets are being accumulated in almost every field. Data are (1) collected by distributed sensor networks in real time all over the globe, (2) produced by large-scale experimental measurements or engineering activities, (3) generated by high-performance simulations, and (4) gathered by electronic communications and social-network activities. Simultaneous analysis of these ultra-large, heterogeneous, multidimensional datasets is often critical for scientific discovery, decision-making, emergency response, and national and global security. The importance of such analyses mandates the development of the next generation of robust machine learning (ML) methods and tools for big-data exploratory analysis.
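
    One workhorse for such analyses is non-negative CP (PARAFAC) tensor factorization. A minimal numpy sketch using multiplicative updates on a toy three-way tensor (illustrative only, not a production implementation):

    import numpy as np

    def unfold(X, mode):
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def khatri_rao(U, V):
        # Row index runs over (u_row, v_row) pairs, matching the C-order unfolding
        return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

    def ntf(X, rank, n_iter=300, eps=1e-9, seed=0):
        """Non-negative CP decomposition via multiplicative updates."""
        rng = np.random.default_rng(seed)
        A, B, C = (rng.random((s, rank)) for s in X.shape)
        for _ in range(n_iter):
            for mode, (P, Q) in enumerate([(B, C), (A, C), (A, B)]):
                M = khatri_rao(P, Q)
                F = [A, B, C][mode]
                F *= (unfold(X, mode) @ M) / (F @ (M.T @ M) + eps)  # in-place update
        return A, B, C

    rng = np.random.default_rng(8)
    tA, tB, tC = rng.random((10, 2)), rng.random((8, 2)), rng.random((6, 2))
    X = np.einsum('ir,jr,kr->ijk', tA, tB, tC)       # exact rank-2 non-negative tensor
    A, B, C = ntf(X, rank=2)
    print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))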

  7. Graphical Evaluation of the Ridge-Type Robust Regression Estimators in Mixture Experiments

    PubMed Central

    Erkoc, Ali; Emiroglu, Esra

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can produce very poor estimates. In this case, the effects of the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches; one of these is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the presence of multicollinearity and outliers in the analysis of mixture experiments. For the selection of the biasing parameter, we use fraction-of-design-space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set. PMID:25202738

  8. Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.

    PubMed

    Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can produce very poor estimates. In this case, the effects of the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches; one of these is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the presence of multicollinearity and outliers in the analysis of mixture experiments. For the selection of the biasing parameter, we use fraction-of-design-space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set.
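
    In scikit-learn terms, one readily available "biased-robust" combination along these lines is Huber loss with an L2 (ridge-type) penalty; the brief sketch below uses synthetic collinear data with outliers rather than the Hald cement data.

    import numpy as np
    from sklearn.linear_model import HuberRegressor, LinearRegression

    rng = np.random.default_rng(9)
    x1 = rng.normal(size=100)
    x2 = x1 + 0.05 * rng.normal(size=100)        # near-collinear regressors
    X = np.column_stack([x1, x2])
    y = 2.0 * x1 + 2.0 * x2 + 0.3 * rng.normal(size=100)
    y[:5] += 25.0                                # gross outliers

    ols = LinearRegression().fit(X, y)
    # alpha > 0 adds the ridge-type shrinkage; epsilon tunes the Huber robustness
    ridge_huber = HuberRegressor(alpha=1.0, epsilon=1.35).fit(X, y)
    print(ols.coef_, ridge_huber.coef_)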

  9. Robust biological parametric mapping: an improved technique for multimodal brain image analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.

    2011-03-01

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region-of-interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric mapping approach to enable application of the general linear model to multiple image modalities (both as regressors and regressands) along with scalar-valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and the associated software package provide a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.

  10. Teaching Robust Methods for Exploratory Data Analysis.

    DTIC Science & Technology

    1980-10-01

    Excerpt fragments: the influence function of an estimate θ at the value x is defined as the effect of adding a new point x to a sample x1, …, xn. For example, if θ is the mean (Σ xi)/n, plotting the sensitivity curves I+ and I- shows that the mean has an unbounded influence function and is therefore not robust.
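
    The fragment's point is easy to reproduce numerically: the sensitivity curve (an empirical influence function) of the mean grows without bound in the added point x, while that of the median stays bounded. A small sketch:

    import numpy as np

    def sensitivity_curve(estimator, sample, xs):
        """Scaled effect of appending one point x to the sample."""
        base = estimator(sample)
        n = sample.size
        return np.array([(n + 1) * (estimator(np.append(sample, x)) - base) for x in xs])

    rng = np.random.default_rng(10)
    sample = rng.normal(size=49)
    xs = np.linspace(-20, 20, 201)
    sc_mean = sensitivity_curve(np.mean, sample, xs)      # grows linearly: unbounded
    sc_median = sensitivity_curve(np.median, sample, xs)  # flattens out: bounded
    print(sc_mean[-1], sc_median[-1])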

  11. The Analysis of Design of Robust Nonlinear Estimators and Robust Signal Coding Schemes.

    DTIC Science & Technology

    1982-09-16

    Excerpt fragments: … between uniform and nonuniform quantizers; for the nonuniform quantizer we can expect the mean-square error to … We define f^(n)(s) as the n-times median-filtered signal; if the values in the window are greater than or equal to the value at point p + 1, then point p + 1 is the median …

  12. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted at real-time machine vision applications that require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure that combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons, each locally connected to its neighboring neurons, while the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is designed to perform programmable functions for machine vision processing at all levels with its embedded host processor, providing a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16x8x9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'x4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.

  13. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    PubMed

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.
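
    The plasticity rule at the heart of such systems can be stated compactly. Below is a generic pair-based STDP update with exponential windows (a textbook form, not the chip's exact circuit behavior); pre-before-post spike pairs potentiate, post-before-pre pairs depress.

    import numpy as np

    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight change for a single pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        return np.where(dt >= 0,
                        a_plus * np.exp(-dt / tau),      # potentiation
                        -a_minus * np.exp(dt / tau))     # depression

    for t_post in (-30.0, -5.0, 5.0, 30.0):
        print(t_post, stdp_dw(0.0, t_post))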

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiaojun; Lei, Guangtsai; Pan, Guangwen

    In this paper, the continuous operator is discretized into matrix form by Galerkin's procedure, using periodic Battle-Lemarie wavelets as basis/testing functions. The polynomial decomposition of wavelets is applied to the evaluation of matrix elements, which makes the computational effort of the matrix elements no more expensive than that of the method of moments (MoM) with conventional piecewise basis/testing functions. A new algorithm is developed employing the fast wavelet transform (FWT). Owing to the localization, cancellation, and orthogonality properties of wavelets, very sparse matrices are obtained, which are then solved by the LSQR iterative method. The algorithm is also adaptive, in that one can add finer wavelet bases at will in regions where the fields vary rapidly, without any damage to the orthogonality of the wavelet basis functions. To demonstrate the effectiveness of the new algorithm, we applied it to the evaluation of the frequency-dependent resistance and inductance matrices of multiple lossy transmission lines. Numerical results agree with previously published data and laboratory measurements. The valid frequency range of the boundary integral equation results has been extended by two to three decades in comparison with the traditional MoM approach. The new algorithm has been integrated into the computer-aided design tool MagiCAD, which is used for the design and simulation of high-speed digital systems and multichip modules. 29 refs., 7 figs., 6 tabs.
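
    The sparsification mechanism can be illustrated with PyWavelets, using a Daubechies basis as a stand-in for the paper's periodic Battle-Lemarie wavelets (which PyWavelets does not ship): transform a smooth, dense kernel matrix, hard-threshold the small coefficients, and observe that few nonzeros suffice for an accurate reconstruction.

    import numpy as np
    import pywt

    # Dense, smooth stand-in for a moment matrix (a discretized 1/|x - y| kernel)
    n = 256
    x = np.linspace(0.0, 1.0, n)
    A = 1.0 / (np.abs(x[:, None] - x[None, :]) + 1.0 / n)

    coeffs = pywt.wavedec2(A, 'db4', level=4)
    arr, slices = pywt.coeffs_to_array(coeffs)
    kept = np.abs(arr) > 1e-3 * np.abs(arr).max()        # hard threshold
    A_rec = pywt.waverec2(pywt.array_to_coeffs(np.where(kept, arr, 0.0), slices,
                                               output_format='wavedec2'), 'db4')
    print(f"nonzeros kept: {kept.mean():.1%}, relative error: "
          f"{np.linalg.norm(A_rec[:n, :n] - A) / np.linalg.norm(A):.2e}")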

  15. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System

    PubMed Central

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L.; Wennekers, Thomas; Chicca, Elisabetta

    2011-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems. PMID:22347163

  16. Challenges and Opportunities in Gen3 Embedded Cooling with High-Quality Microgap Flow

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Avram; Robinson, Franklin L.; Deisenroth, David C.

    2018-01-01

    Gen3, or embedded cooling, promises to revolutionize the thermal management of advanced microelectronic systems by eliminating the sequential conductive and interfacial thermal resistances that dominate the present 'remote cooling' paradigm. Single-phase interchip microfluidic flow with high-thermal-conductivity chips and substrates has been used successfully to cool single transistors dissipating more than 40 kW/sq cm, but efficient heat removal from transistor arrays, larger chips, and chip stacks operating at these prodigious heat fluxes would require the use of high-vapor-fraction (quality), two-phase cooling in intra- and inter-chip microgap channels. The motivation, as well as the challenges and opportunities associated with evaporative embedded cooling in realistic form factors, is the focus of this paper. The paper begins with a brief review of the history of thermal packaging, reflecting the 70-year 'inward migration' of cooling technology from the computer room, to the rack, and then to the single-chip and multichip module with 'remote' or attached air- and liquid-cooled coldplates. Discussion of the limitations of this approach and recent results from single-phase embedded cooling follows. This sets the stage for a discussion of the development challenges associated with applying this Gen3 thermal management paradigm to commercial semiconductor hardware, including the effects of channel length, orientation, and manifold-driven centrifugal acceleration on the governing behavior.

  17. A Digital Lock-In Amplifier for Use at Temperatures of up to 200 °C

    PubMed Central

    Cheng, Jingjing; Xu, Yingjun; Wu, Lei; Wang, Guangwei

    2016-01-01

    Weak voltage signals cannot be reliably measured using currently available logging tools when these tools are subjected to high-temperature (up to 200 °C) environments for prolonged periods. In this paper, we present a digital lock-in amplifier (DLIA) capable of operating at temperatures of up to 200 °C. The DLIA contains a low-noise instrument amplifier together with signal acquisition and the corresponding signal processing electronics. The high-temperature stability of the DLIA is achieved by designing system-in-package (SiP) and multi-chip module (MCM) components with low thermal resistances. An effective look-up-table (LUT) method was developed for the lock-in amplifier algorithm to decrease the complexity of the calculations and generate less heat than traditional implementations. The performance of the design was tested by determining the linearity, gain, Q value, and frequency characteristic of the DLIA between 25 and 200 °C. The maximum nonlinearity error of the DLIA working at 200 °C was about 1.736% when the equivalent input was a sine-wave signal with an amplitude of between 94.8 and 1896.0 nV and a frequency of 800 kHz. The tests showed that the proposed DLIA can work effectively in high-temperature environments up to 200 °C. PMID:27845710
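
    The lock-in computation itself is compact: multiply the input by reference sine/cosine samples (generated directly below; on the instrument they would come from the LUT) and low-pass filter to recover amplitude and phase. A hedged numpy sketch with invented sample rates and noise levels:

    import numpy as np

    fs, f_ref = 10_000_000.0, 800_000.0          # sample rate and reference, Hz
    t = np.arange(20_000) / fs
    rng = np.random.default_rng(11)
    signal = 1.2e-6 * np.sin(2 * np.pi * f_ref * t + 0.4) + 5e-6 * rng.normal(size=t.size)

    ref_i = np.sin(2 * np.pi * f_ref * t)        # in-phase reference
    ref_q = np.cos(2 * np.pi * f_ref * t)        # quadrature reference

    def lpf(x, n=2000):                          # simple moving-average low-pass
        return np.convolve(x, np.ones(n) / n, mode='valid')

    I, Q = lpf(signal * ref_i).mean(), lpf(signal * ref_q).mean()
    print(2.0 * np.hypot(I, Q), np.arctan2(Q, I))   # ~1.2e-6 V amplitude, ~0.4 rad phase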

  18. A comprehensive company database analysis of biological assay variability.

    PubMed

    Kramer, Christian; Dahl, Göran; Tyrchan, Christian; Ulander, Johan

    2016-08-01

    Analysis of data from various compounds measured in diverse biological assays is a central part of drug discovery research projects. However, no systematic overview of the variability in biological assays has been published, and judgments on assay quality and the robustness of data are often based on personal belief and experience within the drug discovery community. To address this, we performed a reproducibility analysis of all biological assays at AstraZeneca between 2005 and 2014. We found an average experimental uncertainty of less than a twofold difference, and no technology or assay type had higher variability than the others. This work suggests that robust data can be obtained from the most commonly applied biological assays.

  19. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool lets structural analysts interrogate a structural design based on a mathematical description of the design problem using finite element analysis methods, leveraging the analyst's prior investment in finite element model development for a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem chosen for the evaluation is a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly identifying the design input variables whose variability most influences the response output parameters.

  20. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-03-18

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, because complementary navigation information is obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced; the SINS/CNS system is therefore widely used in the marine navigation field. However, the CNS is easily disturbed by its surroundings, which leads to discontinuous output, and the resulting uncertainty caused by lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. Taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated in numerical simulations and actual experiments. The simulation and experimental results and analysis show that the attitude errors are effectively reduced by the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter.

  1. Robust surface roughness indices and morphological interpretation

    NASA Astrophysics Data System (ADS)

    Trevisani, Sebastiano; Rocca, Michele

    2016-04-01

    Geostatistical image/surface texture indices based on the variogram (Atkinson and Lewis, 2000; Herzfeld and Higginson, 1996; Trevisani et al., 2012) and on its robust variant MAD (median absolute differences; Trevisani and Rocca, 2015) offer powerful tools for the analysis and interpretation of surface morphology (potentially not limited to the solid earth). In particular, the proposed robust index (Trevisani and Rocca, 2015), with its implementation based on local kernels, permits the derivation of a wide set of robust and customizable geomorphometric indices capable of outlining specific aspects of surface texture. The stability of MAD in the presence of signal noise and abrupt changes in spatial variability is well suited to the analysis of high-resolution digital terrain models. Moreover, the implementation of MAD from a pixel-centered perspective based on local kernels, with some analogies to the local binary pattern approach (Lucieer and Stein, 2005; Ojala et al., 2002), makes it possible to create custom roughness indices that outline different aspects of surface roughness (Grohmann et al., 2011; Smith, 2014). In the proposed poster, some potentialities of the new indices in the context of geomorphometry and landscape analysis will be presented; at the same time, challenges and future developments related to the proposed indices will be outlined. References: Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Grohmann, C.H., Smith, M.J., Riccomini, C., 2011. Multiscale analysis of topographic surface roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing 49, 1200-1213. Herzfeld, U.C., Higginson, C.A., 1996. Automated geostatistical seafloor classification - principles, parameters, feature vectors, and discrimination criteria. Computers and Geosciences 22 (1), 35-52. Lucieer, A., Stein, A., 2005. Texture-based landform segmentation of LiDAR imagery. International Journal of Applied Earth Observation and Geoinformation 6, 261-270. Ojala, T., Pietikäinen, M., Mäenpää, T., 2002. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7), 971-987. Smith, M.W., 2014. Roughness in the Earth Sciences. Earth-Science Reviews 136, 202-225. Trevisani, S., Cavalli, M., Marchi, L., 2012. Surface texture analysis of a high-resolution DTM: interpreting an alpine basin. Geomorphology 161-162, 26-39. Trevisani, S., Rocca, M., 2015. MAD: robust image texture analysis for applications in high-resolution geomorphometry. Computers & Geosciences 81, 78-92. doi:10.1016/j.cageo.2015.04.003.
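
    A sketch of a MAD-style roughness index on a gridded DTM (an illustrative simplification of the indices in Trevisani and Rocca, 2015): take absolute elevation increments at unit lag and summarize them within a moving kernel by their median rather than their mean square, so isolated spikes barely inflate the index.

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    rng = np.random.default_rng(12)
    yy, xx = np.mgrid[0:200, 0:200]
    dtm = 5.0 * np.sin(xx / 25.0) * np.cos(yy / 30.0) + 0.2 * rng.normal(size=xx.shape)
    dtm[rng.random(dtm.shape) < 0.003] += 10.0        # sparse spikes (e.g., artifacts)

    inc = np.abs(np.diff(dtm, axis=1))                # increments at unit lag along x
    variogram_like = 0.5 * uniform_filter(inc ** 2, size=7)   # outlier-sensitive index
    mad_roughness = median_filter(inc, size=7)                # robust counterpart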

  2. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P.; Kumar, G. Manoj

    2012-01-01

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g. quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a non-linear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), due to its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types, performed using different LIBS systems, are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis. PMID:22292496

  3. Anthropometric geography applied to the analysis of socioeconomic disparities: cohort trends and spatial patterns of height and robustness in 20th-century Spain

    PubMed Central

    Camara, Antonio D.; Roman, Joan Garcia

    2014-01-01

    Anthropometric measures have been widely used to study the influence of environmental factors on health and nutritional status. In contrast, anthropometric geography has seldom been employed to approximate the dynamics of spatial disparities associated with socioeconomic and demographic changes. Spain exhibited intense disparity and change during the middle decades of the 20th century, so the life courses of the corresponding cohorts unfolded under diverse environmental conditions, which also differed across the Spanish territories. This paper presents insights into the relationship between socioeconomic changes and living conditions by combining the analysis of cohort trends with anthropometric cartography of height and physical build. The analysis is conducted for Spanish male cohorts born 1934-1973 that were recorded in the Spanish military statistics, and this information is interpreted in light of region-level data on GDP and infant mortality. Our results show an anthropometric convergence across regions that nevertheless did not substantially modify the spatial patterns of robustness, featuring primarily robust northeastern regions and weak central-southern regions; these patterns persisted until the 1990s (cohorts born during the 1970s). For the most part, anthropometric disparities were associated with socioeconomic disparities, although the former lessened over time to a greater extent than the latter. Interestingly, the various anthropometric indicators utilized here do not point to the same conclusions: some discrepancies between the height and robustness patterns moderate the conclusions that would be drawn from the analysis of cohort height alone regarding the level and evolution of living conditions across the Spanish regions. PMID:26640422

  4. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability.

    PubMed

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj

    2012-03-20

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types, performed using different LIBS systems, are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
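
    A schematic of the proposed nonlinear classification step with scikit-learn, on synthetic "spectra" rather than LIBS data, with logistic regression standing in for the linear chemometric baselines (scikit-learn does not ship SIMCA or PLS-DA):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(13)
    X = rng.normal(size=(400, 20))                           # 20 "spectral" features
    y = ((X[:, 3] ** 2 + X[:, 7] ** 2) > 2.0).astype(int)    # nonlinear class boundary

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    svm = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0)).fit(X_tr, y_tr)
    lin = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
    print("SVM:", svm.score(X_te, y_te), "linear:", lin.score(X_te, y_te))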

  5. Reducing the overlay metrology sensitivity to perturbations of the measurement stack

    NASA Astrophysics Data System (ADS)

    Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen

    2017-03-01

    Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization, in which the performance of all available metrology targets is evaluated. Without external reference data or multi-wafer measurements it is hard to predict the metrology accuracy and robustness against the process variations that naturally occur from wafer to wafer and lot to lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements are discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation if the recipe and target have not been selected properly. To this end, an overlay sensitivity check on perturbations of the measurement stack is presented to improve the overlay metrology setup flow. An extensive analysis of Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on µDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible for image-plane detection of µDBO rather than pupil-plane detection of DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied to a wide range of applications, independent of layers and devices.

  6. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  7. STATISTICAL SAMPLING AND DATA ANALYSIS

    EPA Science Inventory

    Research is being conducted to develop approaches to improve soil and sediment sampling techniques, measurement design and geostatistics, and data analysis via chemometric, environmetric, and robust statistical methods. Improvements in sampling contaminated soil and other hetero...

  8. Strain-Dependent Transcriptome Signatures for Robustness in Lactococcus lactis

    PubMed Central

    Dijkstra, Annereinou R.; Alkema, Wynand; Starrenburg, Marjo J. C.; van Hijum, Sacha A. F. T.; Bron, Peter A.

    2016-01-01

    Recently, we demonstrated that fermentation conditions have a strong impact on the subsequent survival of Lactococcus lactis strain MG1363 during heat and oxidative stress, two important parameters during spray drying. Moreover, a transcriptome-phenotype matching approach revealed groups of genes associated with robustness towards heat and/or oxidative stress. To investigate whether other strains have similar or distinct transcriptome signatures for robustness, we applied an identical transcriptome-robustness phenotype matching approach to the L. lactis strains IL1403, KF147 and SK11, which have previously been demonstrated to display highly diverse robustness phenotypes. These strains were subjected to the same fermentation regime used earlier for strain MG1363, consisting of twelve conditions varying in the level of salt and/or oxygen, as well as fermentation temperature and pH. In the exponential phase of growth, cells were harvested for transcriptome analysis and assessment of heat and oxidative stress survival phenotypes. The variation in fermentation conditions resulted in differences in heat and oxidative stress survival of up to five log10 units. The effects of the fermentation conditions on stress survival of the L. lactis strains were typically strain-dependent, although the fermentation conditions had broadly similar effects on the growth characteristics of the different strains. By associating the transcriptomes with the robustness phenotypes, highly strain-specific transcriptome signatures for robustness towards heat and oxidative stress were identified, indicating that multiple mechanisms exist to increase robustness and, as a consequence, that the robustness of each strain requires individual optimization. However, a relatively small overlap in the transcriptome responses of the strains was also identified; this generic transcriptome signature included genes previously associated with stress (ctsR and lplL) and novel genes, including nanE and genes encoding transport proteins. The transcript levels of these genes can serve as indicators of robustness and could aid in the selection of fermentation parameters, potentially resulting in more optimal robustness during spray drying. PMID:27973578

  9. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses, considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically the parameters identified in the factorial analysis as interacting with important parameters, and used the term analysis to better understand how the uncertain parameters influence the PCB concentration predictions. The importance analysis allowed the number of parameters to be modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
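
    The rank-correlation step of such an importance analysis is straightforward to reproduce (a generic sketch with a hypothetical stand-in model, not PCHEPM): sample the parameter space, run the model, and rank parameters by the magnitude of their Spearman correlation with the prediction.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(14)
    names = ["settling_velocity", "partition_coeff", "burial_rate", "resuspension"]
    samples = rng.uniform(0.5, 1.5, size=(2000, len(names)))

    def model(p):        # hypothetical stand-in for the PCB concentration prediction
        return p[:, 0] * p[:, 1] ** 2 / (0.1 + p[:, 2]) + 0.05 * p[:, 3]

    pred = model(samples)
    for i, name in enumerate(names):
        rho, _ = spearmanr(samples[:, i], pred)
        print(f"{name:20s} Spearman rho = {rho:+.2f}")
    # Parameters with large |rho| dominate prediction uncertainty and are the
    # natural candidates for probabilistic treatment.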

  10. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
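    The core idea, downweighting outlying observations via an M-estimator, can be sketched in a few lines. The example below uses the Huber M-estimator from statsmodels on synthetic data; it illustrates the general technique only, not the authors' neuroimaging pipeline, analytic test, or the RPBI method.

```python
# Sketch: robust (Huber M-estimator) regression vs. OLS on data with a few
# outlying "subjects". The robust slope stays close to the true value.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)
y[:10] += 8.0                      # a few gross outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

print("OLS slope:   %.3f" % ols.params[1])
print("Huber slope: %.3f" % rlm.params[1])   # much closer to the true 0.5
```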

  11. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites.

    PubMed

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-03-08

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
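    A minimal sketch of the robust coplanarity classification step described above: a robust covariance estimate (here scikit-learn's minimum covariance determinant, standing in for the paper's robust PCA procedure) yields a plane normal and a planarity measure that are insensitive to simulated outliers. All thresholds are illustrative, not the paper's.

```python
# Sketch: flagging near-coplanar points and outliers via robust PCA.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
plane = rng.uniform(-1, 1, size=(300, 2))
pts = np.column_stack([plane, 0.01 * rng.normal(size=300)])  # near-planar cloud
pts[:30, 2] += rng.uniform(0.5, 1.0, 30)                     # outliers (dust, clutter)

mcd = MinCovDet(random_state=0).fit(pts)
evals, evecs = np.linalg.eigh(mcd.covariance_)   # eigenvalues, ascending
normal = evecs[:, 0]                             # robust plane normal

# Planarity: smallest eigenvalue tiny relative to the in-plane variance.
print("planarity ratio:", evals[0] / evals[1])

# Flag points far from the robust plane as outliers.
d = np.abs((pts - mcd.location_) @ normal)
print("flagged outliers:", int((d > 3 * np.sqrt(evals[0])).sum()))
```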

  12. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    PubMed Central

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062

  13. (Im)Perfect robustness and adaptation of metabolic networks subject to metabolic and gene-expression regulation: marrying control engineering with metabolic control analysis.

    PubMed

    He, Fei; Fromion, Vincent; Westerhoff, Hans V

    2013-11-21

    Metabolic control analysis (MCA) and supply-demand theory have led to appreciable understanding of the systems properties of metabolic networks that are subject exclusively to metabolic regulation. Supply-demand theory has not yet considered gene-expression regulation explicitly, whilst a variant of MCA, i.e. Hierarchical Control Analysis (HCA), has done so. Existing analyses based on control engineering approaches have not been very explicit about whether metabolic or gene-expression regulation would be involved, but designed different ways in which regulation could be organized, with the potential of causing adaptation to be perfect. This study integrates control engineering and classical MCA augmented with supply-demand theory and HCA. Because gene-expression regulation involves time integration, it is identified as a natural instantiation of the 'integral control' (or near integral control) known in control engineering. This study then focuses on robustness against, and adaptation to, perturbations of process activities in the network, which could result from environmental perturbations, mutations or slow noise. It is shown, however, that this type of 'integral control' should rarely be expected to lead to 'perfect adaptation': although gene-expression regulation increases the robustness of important metabolite concentrations, it rarely makes them infinitely robust. For perfect adaptation to occur, the protein degradation reactions should be zero order in the concentration of the protein, which may be rare biologically for cells growing steadily. A proposed new framework integrating the methodologies of control engineering and metabolic and hierarchical control analysis improves the understanding of biological systems that are regulated both metabolically and by gene expression. In particular, the new approach enables one to address the issue of whether the intracellular biochemical networks that have been and are being identified by genomics and systems biology correspond to the 'perfect' regulatory structures designed by control engineering vis-à-vis optimal functions such as robustness. To the extent that they are not, the analyses suggest how they may become so, and this in turn should facilitate synthetic biology and metabolic engineering.
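    The central claim, that gene-expression regulation acts as near-integral control and yields perfect adaptation only when protein degradation is zero order, can be reproduced in a toy supply-metabolite-enzyme loop. The model below is an editorial construction under those stated assumptions, not one of the paper's networks.

```python
# Sketch: zero-order protein degradation gives perfect adaptation of the
# metabolite M to supply changes; first-order degradation does not.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, s, first_order):
    M, E = y
    dM = s - 2.0 * E * M                      # consumption catalysed by enzyme E
    synth = 0.5 * M                           # M-induced gene expression (integral-like)
    deg = 0.2 * E if first_order else 0.2     # first- vs zero-order degradation
    return [dM, synth - deg]

for first_order in (True, False):
    for s in (1.0, 2.0):
        sol = solve_ivp(rhs, (0, 2000), [1.0, 1.0], args=(s, first_order),
                        rtol=1e-8, atol=1e-10)
        label = "1st-order" if first_order else "zero-order"
        print(f"{label} degradation, supply {s}: steady-state M = {sol.y[0, -1]:.3f}")

# Zero-order: M settles at 0.4 for both supplies (perfect adaptation).
# First-order: steady-state M shifts with the supply (imperfect adaptation).
```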

  14. Achieving Robustness to Uncertainty for Financial Decision-making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnum, George M.; Van Buren, Kendra L.; Hemez, Francois M.

    2014-01-10

    This report investigates the concept of robustness analysis to support financial decision-making. Financial models that forecast future stock returns or market conditions depend on assumptions that might be unwarranted and variables that might exhibit large fluctuations from their last-known values. The analysis of robustness explores these sources of uncertainty and recommends model settings such that the forecasts used for decision-making are as insensitive as possible to the uncertainty. A proof-of-concept is presented with the Capital Asset Pricing Model. The robustness of model predictions is assessed using info-gap decision theory. Info-gaps are models of uncertainty that express the "distance," or gap of information, between what is known and what needs to be known in order to support the decision. The analysis yields a description of worst-case stock returns as a function of increasing gaps in our knowledge. The analyst can then decide on the best course of action by trading off worst-case performance with "risk," which is how much uncertainty they think needs to be accommodated in the future. The report also discusses the Graphical User Interface, developed using the MATLAB® programming environment, such that the user can control the analysis through an easy-to-navigate interface. Three directions of future work are identified to enhance the present software. First, the code should be re-written using the Python scientific programming software. This change will achieve greater cross-platform compatibility and better portability, allow for a more professional appearance, and render the software independent from a commercial license, which MATLAB® requires. Second, a capability should be developed to allow users to quickly implement and analyze their own models. This will facilitate application of the software to the evaluation of proprietary financial models. The third enhancement proposed is to add the ability to evaluate multiple models simultaneously. When two models reflect past data with similar accuracy, the more robust of the two is preferable for decision-making because its predictions are, by definition, less sensitive to the uncertainty.
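    The info-gap logic can be conveyed with a toy calculation: an uncertainty horizon alpha bounds how far the true return may stray from the nominal forecast, the worst case inside each horizon is recorded, and robustness is the largest horizon whose worst case still meets the requirement. All numbers below are hypothetical, not the report's CAPM implementation.

```python
# Sketch of the info-gap idea: worst-case return vs. uncertainty horizon.
import numpy as np

r_nominal = 0.08                    # nominal forecast return (hypothetical)
alphas = np.linspace(0.0, 0.06, 7)  # increasing gaps of information

# Absolute-error info-gap model: the true return r satisfies |r - r_nominal| <= alpha.
worst_case = r_nominal - alphas     # worst outcome inside each horizon

r_required = 0.05                   # decision-maker's critical return
# Robustness = largest alpha whose worst case still meets the requirement.
robustness = r_nominal - r_required
print("worst-case return per horizon:", np.round(worst_case, 3))
print("robustness (max tolerable gap): %.3f" % robustness)
```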

  15. (Im)Perfect robustness and adaptation of metabolic networks subject to metabolic and gene-expression regulation: marrying control engineering with metabolic control analysis

    PubMed Central

    2013-01-01

    Background Metabolic control analysis (MCA) and supply–demand theory have led to appreciable understanding of the systems properties of metabolic networks that are subject exclusively to metabolic regulation. Supply–demand theory has not yet considered gene-expression regulation explicitly, whilst a variant of MCA, i.e. Hierarchical Control Analysis (HCA), has done so. Existing analyses based on control engineering approaches have not been very explicit about whether metabolic or gene-expression regulation would be involved, but designed different ways in which regulation could be organized, with the potential of causing adaptation to be perfect. Results This study integrates control engineering and classical MCA augmented with supply–demand theory and HCA. Because gene-expression regulation involves time integration, it is identified as a natural instantiation of the ‘integral control’ (or near integral control) known in control engineering. This study then focuses on robustness against, and adaptation to, perturbations of process activities in the network, which could result from environmental perturbations, mutations or slow noise. It is shown, however, that this type of ‘integral control’ should rarely be expected to lead to ‘perfect adaptation’: although gene-expression regulation increases the robustness of important metabolite concentrations, it rarely makes them infinitely robust. For perfect adaptation to occur, the protein degradation reactions should be zero order in the concentration of the protein, which may be rare biologically for cells growing steadily. Conclusions A proposed new framework integrating the methodologies of control engineering and metabolic and hierarchical control analysis improves the understanding of biological systems that are regulated both metabolically and by gene expression. In particular, the new approach enables one to address the issue of whether the intracellular biochemical networks that have been and are being identified by genomics and systems biology correspond to the ‘perfect’ regulatory structures designed by control engineering vis-à-vis optimal functions such as robustness. To the extent that they are not, the analyses suggest how they may become so, and this in turn should facilitate synthetic biology and metabolic engineering. PMID:24261908

  16. Revisiting Robustness and Evolvability: Evolution in Weighted Genotype Spaces

    PubMed Central

    Partha, Raghavendran; Raman, Karthik

    2014-01-01

    Robustness and evolvability are highly intertwined properties of biological systems. The relationship between these properties determines how biological systems are able to withstand mutations and show variation in response to them. Computational studies have explored the relationship between these two properties using neutral networks of RNA sequences (genotype) and their secondary structures (phenotype) as a model system. However, these studies have assumed every mutation to a sequence to be equally likely; the differences in the likelihood of the occurrence of various mutations, and the consequences of the probabilistic nature of mutations in such a system, have previously been ignored. Associating probabilities with mutations essentially results in the weighting of genotype space. Here we perform a comparative analysis of weighted and unweighted neutral networks of RNA sequences, and subsequently explore the relationship between robustness and evolvability. We show that assuming an equal likelihood for all mutations (as in an unweighted network) underestimates robustness and overestimates evolvability of a system. In spite of discarding this assumption, we observe that a negative correlation between sequence (genotype) robustness and sequence evolvability persists, and also that structure (phenotype) robustness promotes structure evolvability, as observed in earlier studies using unweighted networks. We also study the effects of base composition bias on robustness and evolvability. In particular, we explore the association between robustness and evolvability in a sequence space that is AU-rich (sequences with an AU content of 80% or higher), compared to a normal (unbiased) sequence space. We find that the evolvability of both sequences and structures is lower in an AU-rich space than in the normal space, and robustness is higher. We also observe that AU-rich populations evolving on neutral networks of phenotypes can access less phenotypic variation than normal populations evolving on neutral networks. PMID:25390641

  17. Optimized pulses for the control of uncertain qubits

    DOE PAGES

    Grace, Matthew D.; Dominy, Jason M.; Witzel, Wayne M.; ...

    2012-05-18

    The construction of high-fidelity control fields that are robust to control, system, and/or surrounding environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2 and π pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, post-facto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.
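    A sketch of the kind of robustness scan described in the first step: a constant-amplitude π/2 pulse on a two-state system is evaluated against uncertainty in the drift (σz) magnitude via gate fidelity. This is a generic illustration, not the authors' optimized pulses or decoupling criteria.

```python
# Sketch: gate fidelity of a simple pi/2 pulse vs. uncertain drift magnitude.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega, T = 1.0, np.pi / 2           # Rabi rate and duration for a pi/2 rotation
U_target = expm(-1j * (omega * T / 2) * sx)

for delta in (0.0, 0.05, 0.1, 0.2):         # uncertain drift magnitude
    H = (delta / 2) * sz + (omega / 2) * sx
    U = expm(-1j * H * T)
    fid = abs(np.trace(U_target.conj().T @ U)) ** 2 / 4   # |Tr(U_t^dag U)|^2 / d^2
    print(f"drift {delta:.2f}: gate fidelity {fid:.5f}")
```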

  18. A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks.

    PubMed

    Faydasicok, Ozlem; Arik, Sabri

    2013-08-01

    The main problem in the analysis of robust stability of neural networks is to find an upper bound on the norm of the intervalized interconnection matrices of the network. In the previous literature, three major upper bounds for the intervalized interconnection matrices have been reported and successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper is the derivation of a new upper bound on the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound and using the stability theory of Lyapunov functionals and the theory of homeomorphic mapping, we obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper are shown to be new, and they can be considered alternatives to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
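    Two classical bounds from the earlier literature (the paper's new bound is not reproduced here) can be checked numerically: for any A in the interval family [A0 - D, A0 + D] with nonnegative radius D, both ||A0||2 + ||D||2 and || |A0| + D ||2 bound the spectral norm ||A||2 from above. A sampling sketch:

```python
# Sketch: numerically checking two standard spectral-norm upper bounds
# over an interval matrix family [A0 - D, A0 + D].
import numpy as np

rng = np.random.default_rng(3)
n = 5
A0 = rng.normal(size=(n, n))         # center matrix
D = 0.3 * rng.uniform(size=(n, n))   # nonnegative radius matrix

bound1 = np.linalg.norm(A0, 2) + np.linalg.norm(D, 2)   # ||A0|| + ||D||
bound2 = np.linalg.norm(np.abs(A0) + D, 2)              # || |A0| + D ||

# Sample random members of the interval family and track the largest norm.
worst = max(np.linalg.norm(A0 + D * rng.uniform(-1, 1, (n, n)), 2)
            for _ in range(10000))
print(f"sampled max ||A||_2  = {worst:.4f}")
print(f"bound ||A0|| + ||D|| = {bound1:.4f}")
print(f"bound || |A0| + D || = {bound2:.4f}")
```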

  19. Evaluation of prediction capability, robustness, and sensitivity in non-linear landslide susceptibility models, Guantánamo, Cuba

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.

    2011-04-01

    This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. The database was subdivided into three subsets: a training set used for updating the weights; a validation set used to stop the training procedure when the network started losing generalization capability; and a test set used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even though such networks are often considered black-box models.
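    The train/validation/test protocol with cross-validation described above is easy to mirror with off-the-shelf tools. The sketch below uses scikit-learn's MLPClassifier with early stopping on synthetic 12-feature data standing in for the twelve conditioning factors; it is a schematic of the protocol, not the authors' network.

```python
# Sketch: early-stopped neural network with 10-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           random_state=0)   # 12 "conditioning factors"

clf = MLPClassifier(hidden_layer_sizes=(16,),
                    early_stopping=True,       # hold out a validation split
                    validation_fraction=0.2,   # used to stop training
                    max_iter=2000, random_state=0)

scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print("fold accuracies:", np.round(scores, 3))
print("mean +/- sd: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```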

  20. A powerful and robust test in genetic association studies.

    PubMed

    Cheng, Kuang-Fu; Lee, Jen-Yu

    2014-01-01

    There are several well-known single-SNP tests presented in the literature for detecting gene-disease association signals. Having in place an efficient and robust testing process across all genetic models would allow a more comprehensive approach to analysis. Although some studies have shown that it is possible to construct such a test when the variants are common and the genetic model satisfies certain conditions, the model conditions are too restrictive and in general difficult to verify. In this paper, we propose a powerful and robust test without assuming any model restrictions. Our test is based on selected 2 × 2 tables derived from the usual 2 × 3 genotype table. Using signals from these tables, we show through simulations across a wide range of allele frequencies and genetic models that this approach may produce a test which is almost uniformly most powerful in the analysis of low- and high-frequency variants. Two cancer studies are used to demonstrate applications of the proposed test. © 2014 S. Karger AG, Basel.
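    For contrast with the proposed test, a generic MAX-type robust statistic built from the 2 × 3 genotype table can be sketched as follows: a Cochran-Armitage trend statistic is computed under recessive, additive and dominant scores and the largest absolute value is taken. The counts are invented, and this is not the authors' selected-2 × 2-table construction.

```python
# Sketch: MAX3-style robust test scanning three genetic-model scores.
import numpy as np
from scipy.stats import pearsonr

# 2x3 table: rows = cases/controls, columns = genotypes aa, aA, AA (toy counts).
cases    = np.array([30, 60, 110])
controls = np.array([60, 70,  70])

scores = {"recessive": [0, 0, 1], "additive": [0, 0.5, 1], "dominant": [0, 1, 1]}

def trend_z(x):
    """Cochran-Armitage trend Z via its equivalence with sqrt(N) * corr."""
    g = np.concatenate([np.repeat(x, cases), np.repeat(x, controls)])
    y = np.concatenate([np.ones(cases.sum()), np.zeros(controls.sum())])
    r, _ = pearsonr(g, y)
    return np.sqrt(len(y)) * r

zs = {name: trend_z(np.array(x, float)) for name, x in scores.items()}
print({k: round(v, 3) for k, v in zs.items()})
print("MAX3 statistic:", round(max(abs(z) for z in zs.values()), 3))
# Note: the MAX3 null distribution is non-normal; its p-value requires
# simulation or a correlated-trivariate-normal approximation.
```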

  1. An H(∞) control approach to robust learning of feedforward neural networks.

    PubMed

    Jing, Xingjian

    2011-09-01

    A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for cases in which the output data change rapidly with respect to the input or are corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Robust Maneuvering Envelope Estimation Based on Reachability Analysis in an Optimal Control Formulation

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John

    2013-01-01

    This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied on an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.

  3. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  4. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is presented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  5. Application of Statistics in Engineering Technology Programs

    ERIC Educational Resources Information Center

    Zhan, Wei; Fink, Rainer; Fang, Alex

    2010-01-01

    Statistics is a critical tool for robustness analysis, measurement system error analysis, test data analysis, probabilistic risk assessment, and many other fields in the engineering world. Traditionally, however, statistics is not extensively used in undergraduate engineering technology (ET) programs, resulting in a major disconnect from industry…

  6. Analysis of airframe/engine interactions in integrated flight and propulsion control

    NASA Technical Reports Server (NTRS)

    Schierman, John D.; Schmidt, David K.

    1991-01-01

    An analysis framework for the assessment of dynamic cross-coupling between airframe and engine systems from the perspective of integrated flight/propulsion control is presented. This analysis involves determining the significance of the interactions with respect to deterioration in stability robustness and performance, as well as critical frequency ranges where problems may occur due to these interactions. The analysis illustrated here investigates both the airframe's effects on the engine control loops and the engine's effects on the airframe control loops in two case studies. The second case study involves a multi-input/multi-output analysis of the airframe. Sensitivity studies are performed on critical interactions to examine the degradations in the system's stability robustness and performance. Magnitudes of the interactions required to cause instabilities, as well as the frequencies at which the instabilities occur, are recorded. Finally, the analysis framework is expanded to include control laws which contain cross-feeds between the airframe and engine systems.

  7. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger

    2008-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  8. The X-43A Six Degree of Freedom Monte Carlo Analysis

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael

    2007-01-01

    This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.

  9. Determining the transport mechanism of an enzyme-catalytic complex metabolic network based on biological robustness.

    PubMed

    Wang, Lei

    2013-04-01

    Understanding the transport mechanism of 1,3-propanediol (1,3-PD) is of critical importance for further research on gene regulation. Due to the lack of intracellular information, and on the basis of an enzyme-catalytic system, we use biological robustness as a performance index and present a system identification model to infer the most plausible transport mechanism of 1,3-PD, in which the performance index consists of the relative error of the extracellular substance concentrations and the biological robustness of the intracellular substance concentrations. We do not use a Boolean framework but prefer a model description based on ordinary differential equations. Among other advantages, this also facilitates the robustness analysis, which is the main goal of this paper. An algorithm is constructed to seek the solution of the identification model. Numerical results show that the most plausible transport mode is active transport coupled with passive diffusion.

  10. Control design for robust stability in linear regulators: Application to aerospace flight control

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1986-01-01

    Time domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of the research. After reviewing the recently developed upper bounds on the linear elemental (structured), time-varying perturbation of an asymptotically stable linear time-invariant regulator, it is shown that it is possible to further improve these bounds by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for a general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented along with a comparison with other existing methods.

  11. Robust, nonlinear, high angle-of-attack control design for a supermaneuverable vehicle

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    1993-01-01

    High angle-of-attack flight control laws are developed for a supermaneuverable fighter aircraft. The methods of dynamic inversion and structured singular value synthesis are combined into an approach which addresses both the nonlinearity and robustness problems of flight at extreme operating conditions. The primary purpose of the dynamic inversion control elements is to linearize the vehicle response across the flight envelope. Structured singular value synthesis is used to design a dynamic controller which provides robust tracking to pilot commands. The resulting control system achieves desired flying qualities and guarantees a large margin of robustness to uncertainties for high angle-of-attack flight conditions. The results of linear simulation and structured singular value stability analysis are presented to demonstrate satisfaction of the design criteria. High fidelity nonlinear simulation results show that the combined dynamics inversion/structured singular value synthesis control law achieves a high level of performance in a realistic environment.

  12. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  13. Smoothing effect for spatially distributed renewable resources and its impact on power grid robustness.

    PubMed

    Nagata, Motoki; Hirata, Yoshito; Fujiwara, Naoya; Tanaka, Gouhei; Suzuki, Hideyuki; Aihara, Kazuyuki

    2017-03-01

    In this paper, we show that spatial correlation of renewable energy outputs greatly influences the robustness of the power grids against large fluctuations of the effective power. First, we evaluate the spatial correlation among renewable energy outputs. We find that the spatial correlation of renewable energy outputs depends on the locations, while the influence of the spatial correlation of renewable energy outputs on power grids is not well known. Thus, second, by employing the topology of the power grid in eastern Japan, we analyze the robustness of the power grid with spatial correlation of renewable energy outputs. The analysis is performed by using a realistic differential-algebraic equations model. The results show that the spatial correlation of the energy resources strongly degrades the robustness of the power grid. Our results suggest that we should consider the spatial correlation of the renewable energy outputs when estimating the stability of power grids.
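    The degradation mechanism is essentially statistical: the standard deviation of the summed output of positively correlated sites grows almost linearly with the number of sites, rather than with its square root. A toy Gaussian sketch follows (the realistic differential-algebraic grid model is beyond an abstract-level example).

```python
# Sketch: aggregate fluctuation of correlated vs. independent site outputs.
import numpy as np

rng = np.random.default_rng(4)
n_sites, n_steps, rho = 20, 20000, 0.6

cov_indep = np.eye(n_sites)
cov_corr = rho * np.ones((n_sites, n_sites)) + (1 - rho) * np.eye(n_sites)

for name, cov in [("independent", cov_indep), ("correlated", cov_corr)]:
    out = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_steps)
    agg = out.sum(axis=1)          # total fluctuation injected into the grid
    print(f"{name}: std of aggregate output = {agg.std():.2f}")

# Independent: std ~ sqrt(20) ~ 4.5.  Correlated (rho = 0.6): std ~ 15.7.
```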

  14. Topological properties of robust biological and computational networks

    PubMed Central

    Navlakha, Saket; He, Xin; Faloutsos, Christos; Bar-Joseph, Ziv

    2014-01-01

    Network robustness is an important principle in biology and engineering. Previous studies of global networks have identified both redundancy and sparseness as topological properties used by robust networks. By focusing on molecular subnetworks, or modules, we show that module topology is tightly linked to the level of environmental variability (noise) the module expects to encounter. Modules internal to the cell that are less exposed to environmental noise are more connected and less robust than external modules. A similar design principle is used by several other biological networks. We propose a simple change to the evolutionary gene duplication model which gives rise to the rich range of module topologies observed within real networks. We apply these observations to evaluate and design communication networks that are specifically optimized for noisy or malicious environments. Combined, joint analysis of biological and computational networks leads to novel algorithms and insights benefiting both fields. PMID:24789562

  15. A Computational Framework to Control Verification and Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2010-01-01

    This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
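    Two of the metrics named above, the failure probability and a parametric safety margin, can be sketched with a toy requirement function over one uncertain parameter; the requirement and the distribution below are hypothetical, not the paper's closed-loop setting.

```python
# Sketch: Monte Carlo failure probability and a simple safety margin.
import numpy as np

rng = np.random.default_rng(5)

def g(p):
    """Requirement violation function: g <= 0 means requirements are met."""
    return p**2 - 1.0               # acceptable region is |p| <= 1

p_nominal = 0.2
samples = rng.normal(loc=p_nominal, scale=0.5, size=100000)

fail_prob = np.mean(g(samples) > 0)     # failure probability
margin = 1.0 - abs(p_nominal)           # distance from nominal parameter to
                                        # the failure boundary
print(f"failure probability     : {fail_prob:.4f}")
print(f"parametric safety margin: {margin:.2f}")
```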

  16. A robust ridge regression approach in the presence of both multicollinearity and outliers in the data

    NASA Astrophysics Data System (ADS)

    Shariff, Nurul Sima Mohamad; Ferdaos, Nur Aqilah

    2017-08-01

    Multicollinearity often leads to inconsistent and unreliable parameter estimates in regression analysis. The situation is more severe in the presence of outliers, which cause fatter tails in the error distributions than the normal distribution. The well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is expected to be affected by the presence of outliers due to assumptions imposed in the modeling procedure. Thus, a robust version of the existing ridge method, with modifications in the inverse matrix and the estimated response value, is introduced. The performance of the proposed method is discussed and comparisons are made with several existing estimators, namely Ordinary Least Squares (OLS), ridge regression and robust ridge regression based on GM-estimates. The proposed method is found to produce reliable parameter estimates in the presence of both multicollinearity and outliers in the data.
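    One generic way to combine the two ingredients, ridge shrinkage against multicollinearity and IRLS-style Huber weights against outliers, is sketched below; it is an editorial construction, not necessarily the exact modification proposed in the paper.

```python
# Sketch: robust ridge regression via IRLS with Huber weights.
import numpy as np

rng = np.random.default_rng(6)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)          # nearly collinear predictor
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(scale=0.5, size=n)
y[:5] += 15.0                                # gross outliers

lam, c = 1.0, 1.345                          # ridge penalty, Huber constant
beta = np.zeros(3)
for _ in range(50):                          # IRLS iterations
    r = y - X @ beta
    s = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust scale (MAD)
    w = np.minimum(1.0, c / np.maximum(np.abs(r / s), 1e-12))  # Huber weights
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(3), X.T @ W @ y)

print("robust ridge estimates:", np.round(beta, 3))    # near [1, 2, 2]
```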

  17. Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control

    NASA Astrophysics Data System (ADS)

    Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.

    2005-01-01

    This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are two-fold: first, to demonstrate the applicability of SMC for MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS devices with a wide range of characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of errors in discrete-time implementation of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.

  18. The comparison of predictive scheduling algorithms for different sizes of job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.

    2016-08-01

    In this paper, a survey of predictive and reactive scheduling methods is carried out in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time to Failure and Mean Time of Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules in the case of a bottleneck failure occurring before, at the beginning of, or after planned maintenance actions? Efficiency of predictive schedules is evaluated using the criteria of makespan, total tardiness, flow time and idle time. Efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.

  19. Robust fast controller design via nonlinear fractional differential equations.

    PubMed

    Zhou, Xi; Wei, Yiheng; Liang, Shu; Wang, Yong

    2017-07-01

    A new method for linear system controller design is proposed whereby the closed-loop system achieves both robustness and fast response. The robustness performance considered here means the damping ratio of closed-loop system can keep its desired value under system parameter perturbation, while the fast response, represented by rise time of system output, can be improved by tuning the controller parameter. We exploit techniques from both the nonlinear systems control and the fractional order systems control to derive a novel nonlinear fractional order controller. For theoretical analysis of the closed-loop system performance, two comparison theorems are developed for a class of fractional differential equations. Moreover, the rise time of the closed-loop system can be estimated, which facilitates our controller design to satisfy the fast response performance and maintain the robustness. Finally, numerical examples are given to illustrate the effectiveness of our methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Optimal design of loudspeaker arrays for robust cross-talk cancellation using the Taguchi method and the genetic algorithm.

    PubMed

    Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung

    2005-05-01

    An optimal design technique for loudspeaker arrays for cross-talk cancellation, with application in three-dimensional audio, is presented. An array focusing scheme is presented on the basis of the inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore the cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.
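    At each frequency, the Tikhonov-regularized inverse filter design mentioned above reduces to H = (G^H G + βI)^(-1) G^H for an acoustic plant matrix G. A single-frequency sketch with invented plant gains:

```python
# Sketch: Tikhonov-regularized cross-talk cancellation filter at one frequency.
import numpy as np

# Acoustic plant G: ears x loudspeakers, complex gains (made-up values).
G = np.array([[1.0 + 0.2j, 0.6 - 0.1j],
              [0.6 + 0.1j, 1.0 - 0.2j]])

beta = 0.05                                   # Tikhonov regularization weight
H = np.linalg.solve(G.conj().T @ G + beta * np.eye(2), G.conj().T)

print("G @ H (want ~identity):")
print(np.round(G @ H, 3))
# Larger beta trades cancellation depth for robustness to plant mismatch
# (e.g., head misalignment).
```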

  1. Architecture-Dependent Robustness and Bistability in a Class of Genetic Circuits

    PubMed Central

    Zhang, Jiajun; Yuan, Zhanjiang; Li, Han-Xiong; Zhou, Tianshou

    2010-01-01

    Understanding the relationship between genotype and phenotype is a challenge in systems biology. An interesting and related issue is why a particular circuit topology is present in a cell when the same function can supposedly be obtained from an alternative architecture. Here we analyzed two topologically equivalent genetic circuits of coupled positive and negative feedback loops, named the NAT and ALT circuits, respectively. A computational search of the oscillatory volume of the entire biologically reasonable parameter region, through large-scale random sampling, shows that the NAT circuit exhibits a distinctly larger fraction of oscillatory parameter space than the ALT circuit. This global robustness difference between the two circuits is supplemented by an analysis of local robustness, including robustness to parameter perturbations and to molecular noise. In addition, detailed dynamical analysis shows that molecular noise in both circuits can induce transient switching between a stable steady state and a stable limit cycle, via different mechanisms. Our investigation of robustness and dynamics through examples provides insights into the relationship between network architecture and function. PMID:20712986

  2. Neural robust stabilization via event-triggering mechanism and adaptive learning technique.

    PubMed

    Wang, Ding; Liu, Derong

    2018-06-01

    The robust control synthesis of continuous-time nonlinear systems with uncertain term is investigated via event-triggering mechanism and adaptive critic learning technique. We mainly focus on combining the event-triggering mechanism with adaptive critic designs, so as to solve the nonlinear robust control problem. This can not only make better use of computation and communication resources, but also conduct controller design from the view of intelligent optimization. Through theoretical analysis, the nonlinear robust stabilization can be achieved by obtaining an event-triggered optimal control law of the nominal system with a newly defined cost function and a certain triggering condition. The adaptive critic technique is employed to facilitate the event-triggered control design, where a neural network is introduced as an approximator of the learning phase. The performance of the event-triggered robust control scheme is validated via simulation studies and comparisons. The present method extends the application domain of both event-triggered control and adaptive critic control to nonlinear systems possessing dynamical uncertainties. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments, due to protons' finite range, which depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse square factor and range uncertainty, were explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and along the x, y and z directions were created and the corresponding dose distributions were calculated using this approximate method. DVHs and dosimetric indices of all 1000 perturbed cases were calculated and compared with the results of the worst-case scenario method. Results: The distributions of dosimetric indices of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of perturbed cases had lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for H&N patients. The extensive sampling approach using the fast approximate method could be used in the future to evaluate the effects of different factors on the robustness of IMPT plans.
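    The worst-case-scenario bookkeeping (evaluate a dosimetric index under each perturbed scenario and keep the worst value) can be sketched on a toy dose array; the dose values and perturbation surrogates below are invented, since a real evaluation recomputes dose under each setup or range shift.

```python
# Sketch: worst-case D95 across a handful of perturbed-dose scenarios.
import numpy as np

rng = np.random.default_rng(7)
nominal = rng.normal(60.0, 1.0, size=5000)    # toy per-voxel CTV doses (Gy)

def d95(dose):
    """D95: dose received by at least 95% of the volume."""
    return np.percentile(dose, 5)

# Scenario doses: setup shifts and +/-3.5% range errors are mimicked here
# by simple offsets and scalings of the nominal dose.
scenarios = [nominal + rng.normal(0, 0.5, nominal.size) - s for s in (0.0, 0.8, 1.5)]
scenarios += [nominal * f for f in (0.965, 1.035)]

worst_d95 = min(d95(d) for d in scenarios)
print(f"nominal D95 = {d95(nominal):.2f} Gy, worst-case D95 = {worst_d95:.2f} Gy")
```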

  4. A robust functional-data-analysis method for data recovery in multichannel sensor systems.

    PubMed

    Sun, Jian; Liao, Haitao; Upadhyaya, Belle R

    2014-08-01

    Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of the coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.

  5. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  6. Fire flame detection based on GICA and target tracking

    NASA Astrophysics Data System (ADS)

    Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian

    2013-04-01

    To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets is proposed, which achieves a satisfactory detection rate for different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed based on analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.

  7. Robustness Metrics: How Are They Calculated, When Should They Be Used and Why Do They Give Different Results?

    NASA Astrophysics Data System (ADS)

    McPhail, C.; Maier, H. R.; Kwakkel, J. H.; Giuliani, M.; Castelletti, A.; Westra, S.

    2018-02-01

    Robustness is being used increasingly for decision analysis in relation to deep uncertainty and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of what potential causes for this might be. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision-maker based on (1) the decision-context (i.e., the suitability of using absolute performance or regret), (2) the decision-maker's preferred level of risk aversion, and (3) the decision-maker's preference toward maximizing performance, minimizing variance, or some higher-order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies, including water supply augmentation in Adelaide, Australia, the operation of a multipurpose regulated lake in Italy, and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics.
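    The framework's key observation, that different robustness metrics can rank the same alternatives differently, is easy to reproduce. The sketch below compares maximin performance with minimax regret on an invented performance matrix.

```python
# Sketch: two robustness metrics ranking the same alternatives differently.
import numpy as np

# Rows: decision alternatives; columns: plausible future scenarios.
perf = np.array([[8.0, 6.0, 2.0],
                 [6.0, 5.0, 4.0],
                 [9.0, 3.0, 3.0]])

maximin = perf.min(axis=1)                   # worst-case performance
regret = perf.max(axis=0) - perf             # shortfall vs. best per scenario
minimax_regret = regret.max(axis=1)          # worst-case regret

print("maximin ranking       :", np.argsort(-maximin))        # higher is better
print("minimax-regret ranking:", np.argsort(minimax_regret))  # lower is better
# Here the two metrics pick different top alternatives from the same data.
```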

  8. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System

    PubMed Central

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and celestial navigation system (CNS) can be used in marine applications. Moreover, because complementary navigation information is obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be enhanced effectively. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by its surroundings, which leads to discontinuous output. The uncertainty caused by the lost measurements will therefore reduce the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experiment results and analysis show that the attitude errors can be reduced effectively by utilizing the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153

  9. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application to the instrumental analysis of nutraceuticals (specifically, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal and non-ideal linearity conditions. For each condition, data were treated using three regression fittings: ordinary least squares (OLS), LMS and IRLS. Linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully assessed for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and in all regression parameters for both methods and both conditions. Under the ideal linearity condition, the intercept and slope changed insignificantly, whereas a dramatic change in the intercept was observed under the non-ideal condition. Under both linearity conditions, LOD and LOQ values after robust regression line fitting were lower than those obtained before data treatment. These results indicate that the linearity ranges for drug determination could be extended to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Assay results for lipoic acid in capsules, obtained with both fluorimetric methods, were compared after parametric OLS treatment and after robust LMS and IRLS treatment for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
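
    A minimal sketch of the two robust fittings, assuming synthetic calibration data with one gross outlier (NumPy only; the elemental-subset LMS and the Huber-weighted IRLS below are textbook variants, not the authors' exact implementations):

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(0)
      x = np.linspace(1, 10, 10)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
      y[7] += 8.0                                   # one gross outlier
      X = np.column_stack([np.ones_like(x), x])

      b_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares

      # LMS (elemental-subset approximation): fit every 2-point subset and
      # keep the line whose squared residuals have the smallest median.
      best, b_lms = np.inf, None
      for i in combinations(range(x.size), 2):
          b = np.linalg.solve(X[list(i)], y[list(i)])
          med = np.median((y - X @ b) ** 2)
          if med < best:
              best, b_lms = med, b

      # IRLS with Huber weights, started from OLS.
      b_irls, c = b_ols.copy(), 1.345               # common Huber constant
      for _ in range(50):
          r = y - X @ b_irls
          s = np.median(np.abs(r)) / 0.6745 + 1e-12 # MAD-based robust scale
          w = np.minimum(1.0, c / np.abs(r / s))
          sw = np.sqrt(w)
          b_irls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

      print("OLS :", b_ols)    # intercept and slope pulled by the outlier
      print("LMS :", b_lms)
      print("IRLS:", b_irls)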

  10. Bayes factors based on robust TDT-type tests for family trio design.

    PubMed

    Yuan, Min; Pan, Xiaoqing; Yang, Yaning

    2015-06-01

    The adaptive transmission disequilibrium test (aTDT) and the MAX3 test are two robust and efficient association tests for case-parent family trio data. Both tests incorporate information from the common genetic models, including recessive, additive and dominant, and are efficient in power and robust to genetic model specification. The aTDT uses information on departure from Hardy-Weinberg disequilibrium to identify the potential genetic model underlying the data and then applies the corresponding TDT-type test, while the MAX3 test is defined as the maximum of the absolute values of the three TDT-type tests under the three common genetic models. In this article, we propose three robust Bayes procedures, namely the aTDT-based Bayes factor, the MAX3-based Bayes factor and Bayes model averaging (BMA), for association analysis with the case-parent trio design. The asymptotic distributions of the aTDT under the null and alternative hypotheses are derived in order to calculate its Bayes factor. Extensive simulations show that the Bayes factors and the p-values of the corresponding tests are generally consistent, and that the Bayes factors are robust to genetic model specification, especially so when the priors on the genetic models are equal. When equal priors are used for the underlying genetic models, the Bayes factor method based on aTDT is more powerful than those based on MAX3 and Bayes model averaging. When the prior places a small (large) probability on the true model, the Bayes factor based on aTDT (BMA) is more powerful. Analysis of simulated rheumatoid arthritis (RA) data from GAW15 is presented to illustrate applications of the proposed methods.

  11. Robust multi-model control of an autonomous wind power system

    NASA Astrophysics Data System (ADS)

    Cutululis, Nicolas Antonio; Ceanga, Emil; Hansen, Anca Daniela; Sørensen, Poul

    2006-09-01

    This article presents a robust multi-model control structure for a wind power system that uses a variable speed wind turbine (VSWT) driving a permanent magnet synchronous generator (PMSG) connected to a local grid. The control problem consists in maximizing the energy captured from the wind for varying wind speeds. Analysis of the linearized VSWT-PMSG model reveals the resonant nature of its dynamics at points on the optimal regimes characteristic (ORC). The natural frequency of the system and the damping factor are strongly dependent on the operating point on the ORC. Under these circumstances a robust multi-model control structure is designed. The simulation results prove the viability of the proposed control structure.

  12. Decentralized adaptive control of robot manipulators with robust stabilization design

    NASA Technical Reports Server (NTRS)

    Yuan, Bau-San; Book, Wayne J.

    1988-01-01

    Due to geometric nonlinearities and complex dynamics, a decentralized adaptive control technique for multilink robot arms is attractive. Lyapunov function theory for stability analysis provides an approach to robust stabilization. Each joint of the arm is treated as a component subsystem. The adaptive controller is made locally stable with servo signals that include proportional and integral gains, which bounds the dynamical interactions with the other subsystems. A nonlinear controller that stabilizes the system with uniform boundedness is used to improve the robustness properties of the overall system. As a result, the robot tracks the reference trajectories convergently. This strategy keeps computation simple and therefore facilitates real-time implementation.

  13. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented using stochastic approximation theory. We also apply the algorithm to the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.
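
    For orientation, the quantity such adaptive algorithms track is the solution of the symmetric-definite generalized eigenproblem A v = lambda B v. The sketch below computes it in batch form with SciPy (random positive-definite matrices as stand-ins); it is a reference solution, not the authors' adaptive modified Newton update.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(1)

      def spd(n):
          # Random symmetric positive-definite matrix.
          M = rng.normal(size=(n, n))
          return M @ M.T + n * np.eye(n)

      A, B = spd(5), spd(5)

      # eigh handles the pencil (A, B) directly; the eigenvectors come out
      # B-orthonormal, i.e. V.T @ B @ V is the identity.
      lam, V = eigh(A, B)
      print("generalized eigenvalues:", lam)
      print("B-orthonormal:", np.allclose(V.T @ B @ V, np.eye(5)))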

  14. Beyond singular values and loop shapes

    NASA Technical Reports Server (NTRS)

    Stein, G.

    1985-01-01

    The status of singular value loop-shaping as a design paradigm for multivariable feedback systems is reviewed. The review shows that this paradigm is an effective design tool whenever the problem specifications are spatially round. The tool can be arbitrarily conservative, however, when they are not. This happens because singular value conditions for robust performance are not tight (necessary and sufficient) and can severely overstate actual requirements. An alternative paradigm is discussed which overcomes these limitations. The alternative includes a more general problem formulation, a new matrix function mu, and tight conditions for both robust stability and robust performance. The state of the art currently permits analysis of feedback systems within this new paradigm; synthesis remains a subject of research.

  15. Robust video copy detection approach based on local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Nie, Xiushan; Qiao, Qianping

    2012-04-01

    We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The approach is motivated by the fact that video content is becoming richer and its dimensionality higher, and this high dimensionality does not lend itself naturally to video analysis and understanding. The proposed approach reduces the dimensionality of video content using LTSA and then generates video fingerprints in the low-dimensional space for copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
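
    A hedged sketch of the dimensionality-reduction step: scikit-learn ships LTSA as a variant of locally linear embedding. The swiss-roll data below stands in for high-dimensional per-frame video descriptors (an assumption for illustration); fingerprint generation and the sliding-window matching are not shown.

      import numpy as np
      from sklearn.datasets import make_swiss_roll
      from sklearn.manifold import LocallyLinearEmbedding

      # Stand-in for high-dimensional per-frame video content descriptors.
      X, _ = make_swiss_roll(n_samples=1000, random_state=0)

      # LTSA: align local tangent-space coordinates into one global
      # low-dimensional embedding, here 2-D.
      ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                    method="ltsa", random_state=0)
      Y = ltsa.fit_transform(X)
      print(Y.shape)   # (1000, 2): the low-dimensional fingerprint space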

  16. Robust Bayesian Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Yuan, Ke-Hai

    2003-01-01

    Bayesian factor analysis (BFA) assumes the normal distribution of the current sample conditional on the parameters. Practical data in social and behavioral sciences typically have significant skewness and kurtosis. If the normality assumption is not attainable, the posterior analysis will be inaccurate, although the BFA depends less on the current…

  17. Interlaboratory Comparison Test as an Evaluation of Applicability of an Alternative Edible Oil Analysis by 1H NMR Spectroscopy.

    PubMed

    Zailer, Elina; Holzgrabe, Ulrike; Diehl, Bernd W K

    2017-11-01

    A proton (1H) NMR spectroscopic method was established for the quality assessment of vegetable oils. To date, several research studies have been published demonstrating the high potential of the NMR technique in lipid analysis. An interlaboratory comparison was organized with the following main objectives: (1) to evaluate an alternative analysis of edible oils by using 1H NMR spectroscopy; and (2) to determine the robustness and reproducibility of the method. Five different edible oil samples were analyzed by evaluating 15 signals (free fatty acids, peroxides, aldehydes, double bonds, and linoleic and linolenic acids) in each spectrum. A total of 21 NMR data sets were obtained from 17 international participant laboratories. The performance of each laboratory was assessed by their z-scores. The test was successfully passed by 90.5% of the participants. Results showed that NMR spectroscopy is a robust alternative method for edible oil analysis.
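
    The z-scoring convention behind such interlaboratory assessments fits in a few lines. The sketch below uses made-up reported values with a robust assigned value and MAD-based dispersion (assumptions for illustration; the study's own scoring protocol is not reproduced), flagging |z| <= 2 as satisfactory and |z| >= 3 as unsatisfactory.

      import numpy as np

      # Values for one NMR signal reported by several laboratories
      # (illustrative numbers, not the study's data).
      reported = np.array([4.98, 5.02, 5.10, 4.85, 5.00, 5.65])

      assigned = np.median(reported)                 # robust assigned value
      sigma_p = 1.4826 * np.median(np.abs(reported - assigned))  # MAD scale

      z = (reported - assigned) / sigma_p
      for lab, score in enumerate(z, start=1):
          flag = ("satisfactory" if abs(score) <= 2
                  else "questionable" if abs(score) < 3
                  else "unsatisfactory")
          print(f"lab {lab}: z = {score:+.2f} ({flag})")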

  18. Data-Driven Robust RVFLNs Modeling of a Blast Furnace Iron-Making Process Using Cauchy Distribution Weighted M-Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Lv, Youbin; Wang, Hong

    Optimal operation of a practical blast furnace (BF) ironmaking process depends largely on a good measurement of molten iron quality (MIQ) indices. However, measuring the MIQ online is not feasible using the available techniques. In this paper, a novel data-driven robust modeling approach is proposed for online estimation of MIQ using improved random vector functional-link networks (RVFLNs). Since the output weights of traditional RVFLNs are obtained by the least squares approach, a robustness problem may occur when the training dataset is contaminated with outliers. This affects the modeling accuracy of RVFLNs. To solve this problem, a Cauchy distribution weighted M-estimation based robust RVFLNs is proposed. Since the weights of different outlier data are properly determined by the Cauchy distribution, their contributions to the modeling can be properly distinguished, and thus robust and better modeling results can be achieved. Moreover, given that the BF is a complex nonlinear system with numerous coupling variables, data-driven canonical correlation analysis is employed to identify the most influential components from the multitudinous factors that affect the MIQ indices, reducing the model dimension. Finally, experiments using industrial data and comparative studies have demonstrated that the obtained model produces better modeling and estimation accuracy and stronger robustness than other modeling methods.
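
    A minimal sketch of the idea, assuming a toy random-feature network and synthetic contaminated data rather than the blast furnace model: the output weights are fitted by IRLS with Cauchy weights w = 1/(1 + (r/c)^2), which down-weight outlier samples relative to plain least squares.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy regression data; the first targets are corrupted, standing in
      # for outlier-contaminated MIQ training samples.
      X = rng.uniform(-1, 1, size=(200, 3))
      y = np.sin(X @ np.array([1.5, -2.0, 0.7])) + rng.normal(0, 0.05, 200)
      y[:5] += 4.0

      # RVFLN-style hidden layer: fixed random weights, tanh enhancement
      # nodes, plus direct input links.
      W, b = rng.normal(size=(3, 40)), rng.normal(size=40)
      H = np.hstack([X, np.tanh(X @ W + b)])

      beta = np.linalg.lstsq(H, y, rcond=None)[0]   # plain LS start
      c = 2.385                                     # common Cauchy constant
      for _ in range(30):
          r = y - H @ beta
          s = np.median(np.abs(r)) / 0.6745 + 1e-12 # robust residual scale
          w = 1.0 / (1.0 + (r / (c * s)) ** 2)      # Cauchy weights
          sw = np.sqrt(w)
          beta = np.linalg.lstsq(H * sw[:, None], y * sw, rcond=None)[0]

      clean = slice(5, None)
      ls = np.linalg.lstsq(H, y, rcond=None)[0]
      print("mean |error| on clean samples, LS vs robust:",
            np.abs(y[clean] - H[clean] @ ls).mean(),
            np.abs(y[clean] - H[clean] @ beta).mean())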

  19. Robust climate policies under uncertainty: a comparison of robust decision making and info-gap methods.

    PubMed

    Hall, Jim W; Lempert, Robert J; Keller, Klaus; Hackbarth, Andrew; Mijere, Christophe; McInerney, David J

    2012-10-01

    This study compares two widely used approaches for robustness analysis of decision problems: the info-gap method originally developed by Ben-Haim and the robust decision making (RDM) approach originally developed by Lempert, Popper, and Bankes. The study uses each approach to evaluate alternative paths for climate-altering greenhouse gas emissions given the potential for nonlinear threshold responses in the climate system, significant uncertainty about such a threshold response and a variety of other key parameters, as well as the ability to learn about any threshold responses over time. Info-gap and RDM share many similarities. Both represent uncertainty as sets of multiple plausible futures, and both seek to identify robust strategies whose performance is insensitive to uncertainties. Yet they also exhibit important differences, as they arrange their analyses in different orders, treat losses and gains in different ways, and take different approaches to imprecise probabilistic information. The study finds that the two approaches reach similar but not identical policy recommendations and that their differing attributes raise important questions about their appropriate roles in decision support applications. The comparison not only improves understanding of these specific methods, it also suggests some broader insights into robustness approaches and a framework for comparing them. © 2012 RAND Corporation.

  20. Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.

    PubMed

    Rogers, Emily; Murrugarra, David; Heitsch, Christine

    2017-07-25

    Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.

  1. Improving power and robustness for detecting genetic association with extreme-value sampling design.

    PubMed

    Chen, Hua Yun; Li, Mingyao

    2011-12-01

    Extreme-value sampling designs, which sample subjects with extremely large or small quantitative trait values, are commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C) which includes study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative trait and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs. © 2011 Wiley Periodicals, Inc.

  2. Self-Critical, and Robust, Procedures for the Analysis of Multivariate Normal Data.

    DTIC Science & Technology

    1982-06-01

    Influence functions: The influence function is the most important tool of qualitative robustness, since many other robustness characteristics of an estimator... may be derived from it. The influence function characterizes the (asymptotic) response of an estimator to an additional observation as a function of... the influence function be bounded. It is also advantageous, in our opinion, if the influence functions re-descend to zero. The influence function for...

  3. Reducing regional vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    NASA Astrophysics Data System (ADS)

    Reed, Patrick; Trindade, Bernardo; Herman, Jonathan; Zeff, Harrison; Characklis, Gregory

    2016-04-01

    Emerging water scarcity concerns in southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.

  4. Reducing regional vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.

    2015-12-01

    Emerging water scarcity concerns in southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities as well as of the overall region to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management should be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.

  5. Supercritical fluid chromatography for GMP analysis in support of pharmaceutical development and manufacturing activities.

    PubMed

    Hicks, Michael B; Regalado, Erik L; Tan, Feng; Gong, Xiaoyi; Welch, Christopher J

    2016-01-05

    Supercritical fluid chromatography (SFC) has long been a preferred method for enantiopurity analysis in support of pharmaceutical discovery and development, but implementation of the technique in regulated GMP laboratories has been somewhat slow, owing to limitations in instrument sensitivity, reproducibility, accuracy and robustness. In recent years, commercialization of next generation analytical SFC instrumentation has addressed previous shortcomings, making the technique better suited for GMP analysis. In this study we investigate the use of modern SFC for enantiopurity analysis of several pharmaceutical intermediates and compare the results with the conventional HPLC approaches historically used for analysis in a GMP setting. The findings clearly illustrate that modern SFC now exhibits improved precision, reproducibility, accuracy and robustness; also providing superior resolution and peak capacity compared to HPLC. Based on these findings, the use of modern chiral SFC is recommended for GMP studies of stereochemistry in pharmaceutical development and manufacturing. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Orion Orbit Control Design and Analysis

    NASA Technical Reports Server (NTRS)

    Jackson, Mark; Gonzalez, Rodolfo; Sims, Christopher

    2007-01-01

    The analysis of candidate thruster configurations for the Crew Exploration Vehicle (CEV) is presented. Six candidate configurations were considered for the prime contractor baseline design. The analysis included analytical assessments of control authority, control precision, efficiency and robustness, as well as simulation assessments of control performance. The principles used in the analytic assessments of controllability, robustness and fuel performance are covered, and results are provided for the configurations assessed. Simulation analysis was conducted using a pulse-width-modulated, 6-DOF reaction control system law with a simplex-based thruster selection algorithm. Control laws were automatically derived from hardware configuration parameters, including thruster locations, directions, magnitudes and specific impulses, as well as vehicle mass properties. This parameterized controller allowed rapid assessment of multiple candidate layouts. Simulation results are presented for final-phase rendezvous and docking, as well as low lunar orbit attitude hold. Finally, ongoing analysis to consider alternate Service Module designs and to assess the pilot-ability of the baseline design is discussed to provide a status of orbit control design work to date.

  7. Increased robustness of single-molecule counting with microfluidics, digital isothermal amplification, and a mobile phone versus real-time kinetic measurements.

    PubMed

    Selck, David A; Karymov, Mikhail A; Sun, Bing; Ismagilov, Rustem F

    2013-11-19

    Quantitative bioanalytical measurements are commonly performed in a kinetic format and are known not to be robust to perturbations that affect the kinetics itself or the measurement of kinetics. We hypothesized that the same measurements performed in a "digital" (single-molecule) format would show increased robustness to such perturbations. Here, we investigated the robustness of an amplification reaction (reverse-transcription loop-mediated amplification, RT-LAMP) in the context of fluctuations in temperature and time when this reaction is used for quantitative measurements of HIV-1 RNA molecules under limited-resource settings (LRS). The digital format that counts molecules using dRT-LAMP chemistry detected a 2-fold change in concentration of HIV-1 RNA despite a 6 °C temperature variation (p-value = 6.7 × 10^-7), whereas the traditional kinetic (real-time) format did not (p-value = 0.25). Digital analysis was also robust to a 20 min change in reaction time, to poor imaging conditions obtained with a consumer cell-phone camera, and to automated cloud-based processing of these images (R^2 = 0.9997 vs. true counts over a 100-fold dynamic range). Fluorescent output of multiplexed PCR amplification could also be imaged with the cell-phone camera using its flash as the excitation source. Many nonlinear amplification schemes based on organic, inorganic, and biochemical reactions have been developed, but their robustness is not well understood. This work implies that these chemistries may be significantly more robust in the digital, rather than kinetic, format. It also calls for theoretical studies to predict the robustness of these chemistries and, more generally, to design robust reaction architectures. The SlipChip that we used here and other digital microfluidic technologies already exist to enable testing of these predictions. Such work may lead to the identification or creation of robust amplification chemistries that enable rapid and precise quantitative molecular measurements under LRS. Furthermore, it may provide more general principles describing the robustness of chemical and biological networks in digital formats.

  8. Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdoolaege, G., E-mail: geert.verdoolaege@ugent.be; Laboratory for Plasma Physics, Royal Military Academy, B-1000 Brussels; Shabbir, A.

    Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression, which is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of the application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.
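
    GLS itself is not packaged in the common scientific libraries, but the nearest off-the-shelf analogue that also handles errors in all variables is orthogonal distance regression. The sketch below (scipy.odr on synthetic data; the straight-line model and error levels are illustrative assumptions, not the confinement database) shows the errors-in-variables fit that plain least squares cannot provide.

      import numpy as np
      from scipy import odr

      rng = np.random.default_rng(0)

      # Synthetic straight-line data with noise on both axes, a stand-in
      # for noisy log-log confinement scaling data.
      x_true = np.linspace(0.0, 2.0, 30)
      x = x_true + rng.normal(0, 0.05, x_true.size)
      y = 0.5 + 0.8 * x_true + rng.normal(0, 0.10, x_true.size)

      model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
      data = odr.RealData(x, y, sx=0.05, sy=0.10)  # errors on both axes
      fit = odr.ODR(data, model, beta0=[0.0, 1.0]).run()
      print("intercept, slope:", fit.beta)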

  9. Interval Analysis Approach to Prototype the Robust Control of the Laboratory Overhead Crane

    NASA Astrophysics Data System (ADS)

    Smoczek, J.; Szpytko, J.; Hyla, P.

    2014-07-01

    The paper describes the software and hardware equipment and the control-measurement solutions elaborated to prototype a laboratory-scale overhead crane control system. A novel approach to crane dynamic system modelling and fuzzy robust control scheme design is presented. An iterative procedure for designing a fuzzy scheduling control scheme is developed, based on interval analysis of the discrete-time closed-loop system characteristic polynomial coefficients in the presence of rope length and payload mass variation, to select the minimum set of operating points, corresponding to the midpoints of the membership functions, at which the linear controllers are determined through desired pole assignment. Experimental results obtained on the laboratory stand are presented.

  10. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.

  11. MeDICi Software Superglue for Data Analysis Pipelines

    ScienceCinema

    Ian Gorton

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to solve data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust to multiple languages, protocols, and hardware platforms, and in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.

  12. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.

  13. Robust Variable Selection with Exponential Squared Loss

    PubMed Central

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-01-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996
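
    A minimal sketch of the estimator on synthetic data (the penalty term and the authors' data-driven tuning of gamma are not reproduced): minimizing the loss sum_i [1 - exp(-r_i^2/gamma)] by IRLS yields weights w_i = exp(-r_i^2/gamma), which vanish for gross outliers.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.2, n)
      y[:8] += 6.0                      # influential points in the response

      gamma = 1.0                       # illustrative tuning parameter
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      for _ in range(50):
          r = y - X @ beta
          w = np.exp(-r ** 2 / gamma)   # exponential squared loss weights
          sw = np.sqrt(w)
          beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

      print("plain LS  :", np.linalg.lstsq(X, y, rcond=None)[0])
      print("robust fit:", beta)        # close to the true (1.0, 2.0)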

  14. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some other state-of-the-art anomaly detection methods, and is easy to implement in practice.

  15. Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.

    2017-06-01

    Emerging water scarcity concerns in many urban regions are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative drought management strategies. Our results show that appropriately designing adaptive risk-of-failure action triggers required stressing them with a comprehensive sample of deeply uncertain factors in the computational search phase of MORDM. Search under the new ensemble of states-of-the-world is shown to fundamentally change perceived performance tradeoffs and substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Search under deep uncertainty enhanced the discovery of how cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be employed jointly to improve regional robustness and decrease robustness conflicts between the utilities. Insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.

  16. Robust Bounded Influence Tests in Linear Models

    DTIC Science & Technology

    1988-11-01

    Sensitivity analysis and bounded influence estimation. In: Evaluation of Econometric Models, J. Kmenta and J. B. Ramsey (eds.), Academic Press, New York... Robust Bounded Influence Tests in Linear Models, November 1988. Marianthi Markatou, The University of Iowa, and Thomas P. Hettmansperger, The Pennsylvania State University...

  17. Return Difference Feedback Design for Robust Uncertainty Tolerance in Stochastic Multivariable Control Systems.

    DTIC Science & Technology

    1984-07-01

    "Robustness" analysis for multiloop feedback systems. Reference [55] describes a simple method based on the Perron-Frobenius theory of non-negative... "Viewpoint," Operator Theory: Advances and Applications, 12, pp. 277-302, 1984. E. A. Jonckheere, "New Bound on the Sensitivity of the Solution of... Reidel, Dordrecht, Holland, 1984. M. G. Safonov, "Comments on Singular Value Theory in Uncertain Feedback Systems," to appear, IEEE Trans. on Automatic...

  18. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
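
    A numerical counterpart to the feasibility question (not the paper's analytically verifiable procedure, which avoids search over the parameter space): maximize the constraint function over the hyper-rectangle and check the sign of the worst case. The constraint, bounds and multi-start scheme below are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      # Hard constraint g(p) <= 0 must hold for every p in [lo, hi].
      def g(p):
          return p[0] ** 2 + np.sin(3 * p[1]) - 1.2

      lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
      bounds = list(zip(lo, hi))

      # Multi-start bounded maximization of g; the argmax approximates a
      # critical combination of the uncertain parameters.
      rng = np.random.default_rng(0)
      worst = -np.inf
      for _ in range(20):
          res = minimize(lambda p: -g(p), rng.uniform(lo, hi), bounds=bounds)
          worst = max(worst, -res.fun)

      print("worst-case g over the box:", worst)
      print("hard constraint feasible :", worst <= 0.0)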

  19. Genotype by environment interaction and breeding for robustness in livestock

    PubMed Central

    Rauw, Wendy M.; Gomez-Raya, Luis

    2015-01-01

    The increasing size of the human population is projected to result in an increase in meat consumption. However, at the same time, the dominant position of meat as the center of meals is on the decline. Modern objections to the consumption of meat include public concerns with animal welfare in livestock production systems. Animal breeding practices have become part of the debate since it became recognized that animals in a population that have been selected for high production efficiency are more at risk for behavioral, physiological and immunological problems. As a solution, animal breeding practices need to include selection for robustness traits, which can be implemented through the use of reaction norms analysis, or though the direct inclusion of robustness traits in the breeding objective and in the selection index. This review gives an overview of genotype × environment interactions (the influence of the environment, reaction norms, phenotypic plasticity, canalization, and genetic homeostasis), reaction norms analysis in livestock production, options for selection for increased levels of production and against environmental sensitivity, and direct inclusion of robustness traits in the selection index. Ethical considerations of breeding for improved animal welfare are discussed. The discussion on animal breeding practices has been initiated and is very alive today. This positive trend is part of the sustainable food production movement that aims at feeding 9.15 billion people not just in the near future but also beyond. PMID:26539207

  20. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated on a set of eighth-order random examples. A literature review of robust controller design methods follows, including a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
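
    For reference, the central object of the Riccati iteration is the stabilizing solution of the algebraic Riccati equation. The sketch below computes it with SciPy's direct solver on an illustrative 2-state system (an assumption; not the 16th-order turbofan model) and forms the corresponding regulator gain.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative plant
      B = np.array([[0.0], [1.0]])
      Q, R = np.eye(2), np.array([[1.0]])

      # Solve A'P + PA - P B R^{-1} B' P + Q = 0, then K = R^{-1} B' P.
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)

      print("gain K:", K)
      # All closed-loop poles should have negative real parts.
      print("closed-loop poles:", np.linalg.eigvals(A - B @ K))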

  1. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in chemical analysis was used to determine distributions of the error quantification scores (scores of likelihood and severity, and scores of the effectiveness of a laboratory quality system in preventing the errors). The simulation was based on modeling of expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human error which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for the three pmfs of expert behavior were compared. Variability of the scores, expressed as the standard deviation of the simulated score values about the distribution mean, was used to assess score robustness. A range of the score values, calculated directly from elicited data and simulated by the Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that the robustness of the scores obtained in the case study can be assessed as satisfactory for quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
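
    A minimal sketch of the simulation idea, with made-up pmfs on a discrete 1-5 score scale standing in for the elicited confident, reasonably doubting and irresolute expert behaviors; the score variability is then read off as the robustness measure.

      import numpy as np

      rng = np.random.default_rng(0)
      scores = np.arange(1, 6)          # discrete likelihood/severity scale

      # Illustrative pmfs centered on a "true" score of 3.
      pmfs = {
          "confident":  [0.00, 0.05, 0.90, 0.05, 0.00],
          "doubting":   [0.05, 0.20, 0.50, 0.20, 0.05],
          "irresolute": [0.15, 0.20, 0.30, 0.20, 0.15],
      }

      # Monte Carlo: draw many simulated judgments per behavior; the
      # standard deviation quantifies the robustness of the score.
      for name, pmf in pmfs.items():
          draws = rng.choice(scores, size=100_000, p=pmf)
          print(f"{name:10s} mean = {draws.mean():.2f}  sd = {draws.std():.2f}")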

  2. Robust Principal Component Analysis Regularized by Truncated Nuclear Norm for Identifying Differentially Expressed Genes.

    PubMed

    Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun

    2017-09-01

    Identifying differentially expressed genes from among thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method for the identification of differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies showed that the nuclear norm minimizes all singular values, so it may not be the best solution to approximate the rank function. The truncated nuclear norm is defined as the sum of the smaller singular values only, which may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm, named robust principal component analysis regularized by truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because the significant genes can be considered as sparse signals, the differentially expressed genes are viewed as the sparse perturbation signals and can be identified from the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
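
    A hedged sketch of the core operation, not the paper's exact optimizer: the r leading singular values pass through untouched (they sit outside the truncated nuclear norm), the remaining ones are soft-thresholded, and the sparse part is updated by entrywise shrinkage. All parameter values are illustrative.

      import numpy as np

      def shrink(M, t):
          # Entrywise soft threshold.
          return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

      def trpca(D, r=2, lam=0.1, tau=1.0, n_iter=100):
          # Simplified alternating scheme for D ~ L + S penalizing
          # sum_{i>r} sigma_i(L): partial singular value thresholding for
          # L, entrywise shrinkage for S (a heuristic sketch).
          L = np.zeros_like(D)
          S = np.zeros_like(D)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
              s[r:] = np.maximum(s[r:] - tau, 0.0)   # truncate, then shrink
              L = (U * s) @ Vt
              S = shrink(D - L, lam)                 # candidate sparse genes
          return L, S

      rng = np.random.default_rng(0)
      low_rank = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
      sparse = np.zeros((60, 40))
      idx = rng.choice(sparse.size, 50, replace=False)
      sparse.flat[idx] = rng.normal(0, 8, 50)  # "differentially expressed"

      L, S = trpca(low_rank + sparse)
      found = np.flatnonzero(np.abs(S) > 1.0)
      print("planted entries recovered:",
            np.intersect1d(found, idx).size, "of", idx.size)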

  3. The analysis of image feature robustness using cometcloud

    PubMed Central

    Qi, Xin; Kim, Hyunjoo; Xing, Fuyong; Parashar, Manish; Foran, David J.; Yang, Lin

    2012-01-01

    The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, which are assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, local binary patterns, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than local binary patterns and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval. PMID:23248759
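
    As a small worked example of this kind of robustness test, the sketch below re-extracts one of the five descriptors (local binary patterns, via scikit-image) under increasing Gaussian noise and scores the drift with a chi-square histogram distance; the test image, parameters and distance measure are illustrative assumptions, not the study's pipeline.

      import numpy as np
      from skimage import data
      from skimage.feature import local_binary_pattern
      from skimage.util import random_noise

      def lbp_hist(img, P=8, R=1):
          lbp = local_binary_pattern(img, P, R, method="uniform")
          h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
          return h

      img = data.camera() / 255.0        # stand-in for a tissue core image
      h_clean = lbp_hist(img)

      # Re-extract the descriptor under increasing noise; a smaller
      # chi-square distance means a more robust feature.
      for var in (0.001, 0.01, 0.05):
          noisy = random_noise(img, mode="gaussian", var=var)
          h = lbp_hist(noisy)
          chi2 = 0.5 * np.sum((h - h_clean) ** 2 / (h + h_clean + 1e-12))
          print(f"noise var {var}: chi-square distance = {chi2:.4f}")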

  4. Quantitative proteomics and network analysis of SSA1 and SSB1 deletion mutants reveals robustness of chaperone HSP70 network in Saccharomyces cerevisiae

    PubMed Central

    Jarnuczak, Andrew F.; Eyers, Claire E.; Schwartz, Jean‐Marc; Grant, Christopher M.

    2015-01-01

    Molecular chaperones play an important role in protein homeostasis and the cellular response to stress. In particular, the HSP70 chaperones in yeast mediate a large volume of protein folding through transient associations with their substrates. This chaperone interaction network can be disturbed by various perturbations, such as environmental stress or a gene deletion. Here, we consider deletions of two major chaperone proteins, SSA1 and SSB1, from the chaperone network in Saccharomyces cerevisiae. We employ a SILAC-based approach to examine changes in global and local protein abundance and rationalise our results via network analysis and graph theoretical approaches. Although the deletions result in an overall increase in intracellular protein content, correlated with an increase in cell size, this is not matched by substantial changes in individual protein concentrations. Despite the phenotypic robustness to deletion of these major hub proteins, it cannot be simply explained by the presence of paralogues. Instead, network analysis and a theoretical consideration of folding workload suggest that the robustness to perturbation is a product of the overall network structure. This highlights how quantitative proteomics and systems modelling can be used to rationalise emergent network properties, and how the HSP70 system can accommodate the loss of major hubs. PMID:25689132

  5. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  6. Conflicts in Coalitions: A Stability Analysis of Robust Multi-City Regional Water Supply Portfolios

    NASA Astrophysics Data System (ADS)

    Gold, D.; Trindade, B. C.; Reed, P. M.; Characklis, G. W.

    2017-12-01

    Regional cooperation among water utilities can improve the robustness of urban water supply portfolios to deeply uncertain future conditions such as those caused by climate change or population growth. Coordination mechanisms such as water transfers, coordinated demand management, and shared infrastructure, can improve the efficiency of resource allocation and delay the need for new infrastructure investments. Regionalization does however come at a cost. Regionally coordinated water supply plans may be vulnerable to any emerging instabilities in the regional coalition. If one or more regional actors does not cooperate or follow the required regional actions in a time of crisis, the overall system performance may degrade. Furthermore, when crafting regional water supply portfolios, decision makers must choose a framework for measuring the performance of regional policies based on the evaluation of the objective values for each individual actor. Regional evaluations may inherently favor one actor's interests over those of another. This work focuses on four interconnected water utilities in the Research Triangle region of North Carolina for which robust regional water supply portfolios have previously been designed using multi-objective optimization to maximize the robustness of the worst performing utility across several objectives. This study 1) examines the sensitivity of portfolio performance to deviations from prescribed actions by individual utilities, 2) quantifies the implications of the regional formulation used to evaluate robustness for the portfolio performance of each individual utility and 3) elucidates the inherent regional tensions and conflicts that exist between utilities under this regionalization scheme through visual diagnostics of the system under simulated drought scenarios. Results of this analysis will help inform the creation of future regional water supply portfolios and provide insight into the nature of multi-actor water supply systems.

  7. Robustness of power systems under a democratic-fiber-bundle-like model

    NASA Astrophysics Data System (ADS)

    Yaǧan, Osman

    2015-06-01

    We consider a power system with N transmission lines whose initial loads (i.e., power flows) L_1, ..., L_N are independent and identically distributed with P_L(x) = P[L ≤ x]. The capacity C_i defines the maximum flow allowed on line i and is assumed to be given by C_i = (1 + α) L_i, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p* = 1 − E[L] / max_x { P[L > x] (αx + E[L | L > x]) }, where E[·] is the expectation operator; (ii) we derive conditions on the distribution P_L(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions, uniform, Pareto, and Weibull, showing that with the minimum load L_min and mean load E[L] fixed, the Pareto distribution is the worst (in terms of robustness) among the three, whereas the Weibull distribution is the best when its shape parameter is selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p* is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
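
    The model is simple enough to simulate directly. The sketch below (uniform loads and illustrative parameters, not the paper's code) applies the equal-redistribution rule after a random attack and compares the empirical breakdown with the analytic critical attack size, which evaluates to p* = 0.2 for these parameter choices.

      import numpy as np

      rng = np.random.default_rng(0)
      N, alpha = 200_000, 0.5
      L = rng.uniform(10.0, 30.0, N)       # i.i.d. initial line loads
      C = (1.0 + alpha) * L                # capacities C_i = (1 + alpha) L_i

      def surviving_fraction(p):
          # Fail a random p fraction, then redistribute the shed load
          # equally among survivors until no line exceeds its capacity.
          alive = np.ones(N, bool)
          alive[rng.choice(N, int(p * N), replace=False)] = False
          while alive.any():
              extra = (L.sum() - L[alive].sum()) / alive.sum()
              overloaded = alive & (L + extra > C)
              if not overloaded.any():
                  return alive.mean()
              alive &= ~overloaded
          return 0.0

      # Analytic p* = 1 - E[L] / max_x { P[L > x] (alpha x + E[L | L > x]) }
      xs = np.linspace(10.0, 30.0, 2001)
      tail = (30.0 - xs) / 20.0            # P[L > x] for Uniform(10, 30)
      cond_mean = (xs + 30.0) / 2.0        # E[L | L > x]
      p_star = 1.0 - 20.0 / np.max(tail * (alpha * xs + cond_mean))
      print("analytic p* =", round(p_star, 3))

      for p in (0.10, 0.18, 0.22, 0.30):   # abrupt collapse just above p*
          print(f"attack p = {p:.2f}: surviving fraction = "
                f"{surviving_fraction(p):.3f}")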

  8. Robustness analysis of superpixel algorithms to image blur, additive Gaussian noise, and impulse noise

    NASA Astrophysics Data System (ADS)

    Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming

    2017-11-01

    Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise, which either weakened the object boundaries or added spurious information to them. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for the ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For additive Gaussian and impulse noise, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixels demonstrated optimal compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. Conclusively, to solve real-world problems effectively, more robust superpixel algorithms must be developed.
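    For readers who want to reproduce this style of robustness test, the sketch below generates the three corruption types and computes a simple boundary recall measure; the noise parameters and the tolerance radius are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, distance_transform_edt

rng = np.random.default_rng(42)

def blur(img, sigma=2.0):
    """Gaussian blur (img is a float image in [0, 1])."""
    return gaussian_filter(img, sigma=sigma)

def awgn(img, std=0.05):
    """Additive white Gaussian noise, clipped back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, std, img.shape), 0.0, 1.0)

def impulse(img, p=0.05):
    """Salt-and-pepper impulse noise on a fraction p of the pixels."""
    out = img.copy()
    mask = rng.random(img.shape) < p
    out[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return out

def boundary_recall(gt_boundary, sp_boundary, r=2):
    """Fraction of ground-truth boundary pixels lying within r pixels of a
    superpixel boundary; both inputs are boolean masks."""
    dist_to_sp = distance_transform_edt(~sp_boundary)   # distance to nearest boundary
    return float((dist_to_sp[gt_boundary] <= r).mean())
```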

  9. Visualization of the Invisible, Explanation of the Unknown, Ruggedization of the Unstable: Sensitivity Analysis, Virtual Tryout and Robust Design through Systematic Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Zwickl, Titus; Carleer, Bart; Kubli, Waldemar

    2005-08-01

    In the past decade, sheet metal forming simulation has become a well-established tool to predict the formability of parts. In the automotive industry, this has enabled significant reductions in the cost and time of vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to the robustness of the product and process being engineered. Ensuring robustness is the next big challenge for virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process, in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or, for example, springback can be compensated; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.

  10. Glomerular structural-functional relationship models of diabetic nephropathy are robust in type 1 diabetic patients.

    PubMed

    Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad

    2015-06-01

    Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
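    The piecewise linear regression referred to here can be illustrated with a minimal single-breakpoint fit found by grid search; this is a generic sketch with hypothetical variable names, not the authors' statistical code.

```python
import numpy as np

def piecewise_linear_fit(x, y, n_grid=100):
    """Fit y = a + b*x + c*max(x - k, 0) by grid search over the breakpoint k,
    returning the least-squares solution with the smallest residual sum."""
    best = None
    for k in np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), n_grid):
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(((y - X @ beta) ** 2).sum())
        if best is None or rss < best[0]:
            best = (rss, k, beta)
    return best   # (rss, breakpoint, [intercept, slope, slope change])

# Hypothetical usage: explain log AER by a glomerular lesion score
# rss, k, beta = piecewise_linear_fit(lesion_score, log_aer)
```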

  11. Nontargeted metabolomic analysis and "commercial-homophyletic" comparison-induced biomarkers verification for the systematic chemical differentiation of five different parts of Panax ginseng.

    PubMed

    Qiu, Shi; Yang, Wen-Zhi; Yao, Chang-Liang; Qiu, Zhi-Dong; Shi, Xiao-Jian; Zhang, Jing-Xian; Hou, Jin-Jun; Wang, Qiu-Rong; Wu, Wan-Ying; Guo, De-An

    2016-07-01

    A key segment in authentication of herbal medicines is the establishment of robust biomarkers that embody the intrinsic metabolites difference independent of the growing environment or processing technics. We present a strategy by nontargeted metabolomics and "Commercial-homophyletic" comparison-induced biomarkers verification with new bioinformatic vehicles, to improve the efficiency and reliability in authentication of herbal medicines. The chemical differentiation of five different parts (root, leaf, flower bud, berry, and seed) of Panax ginseng was illustrated as a case study. First, an optimized ultra-performance liquid chromatography/quadrupole time-of-flight-MS(E) (UPLC/QTOF-MS(E)) approach was established for global metabolites profiling. Second, UNIFI™ combined with search of an in-house library was employed to automatically characterize the metabolites. Third, pattern recognition multivariate statistical analysis of the MS(E) data of different parts of commercial and homophyletic samples were separately performed to explore potential biomarkers. Fourth, potential biomarkers deduced from commercial and homophyletic root and leaf samples were cross-compared to infer robust biomarkers. Fifth, discriminating models by artificial neutral network (ANN) were established to identify different parts of P. ginseng. Consequently, 164 compounds were characterized, and 11 robust biomarkers enabling the differentiation among root, leaf, flower bud, and berry, were discovered by removing those structurally unstable and possibly processing-related ones. The ANN models using the robust biomarkers managed to exactly discriminate four different parts and root adulterant with leaf as well. Conclusively, biomarkers verification using homophyletic samples conduces to the discovery of robust biomarkers. The integrated strategy facilitates authentication of herbal medicines in a more efficient and more intelligent manner. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.

    ERIC Educational Resources Information Center

    Olson, Jeffery E.

    Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
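    The abstract is truncated, but the named technique admits a standard reading: the linear relation among error-prone variables is estimated from the principal component with the least variance, much as in total least squares. The sketch below illustrates that reading under this assumption; it is not drawn from the paper itself.

```python
import numpy as np

def least_principal_component(data):
    """Coefficients w (unit norm) of the fitted relation w @ (x - mean) = 0,
    taken from the eigenvector with the smallest eigenvalue of the covariance."""
    mean = data.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(data - mean, rowvar=False))
    return eigvecs[:, 0], mean        # eigh returns eigenvalues in ascending order

# Noisy collinear demo: x2 ≈ 2*x1 + 1 with measurement error on both variables
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 500)
data = np.column_stack([x1, 2.0 * x1 + 1.0]) + rng.normal(0.0, 0.1, (500, 2))
w, mu = least_principal_component(data)
print(-w[0] / w[1], mu[1] + (w[0] / w[1]) * mu[0])   # ≈ slope 2 and intercept 1
```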

  13. Developments in Cylindrical Shell Stability Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Starnes, James H., Jr.

    1998-01-01

    Today high-performance computing systems and new analytical and numerical techniques enable engineers to explore the use of advanced materials for shell design. This paper reviews some of the historical developments of shell buckling analysis and design. The paper concludes by identifying key research directions for reliable and robust methods development in shell stability analysis and design.

  14. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
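    The working-covariance idea can be made concrete in a few lines: fit the meta-regression by weighted least squares with the estimated weights, then compare the model-based variance with a robust sandwich variance that stays consistent even when the weights are wrong. This is a generic sketch of the approach described, not the note's exact estimator (and not the Knapp and Hartung correction it recommends).

```python
import numpy as np

def wls_with_sandwich(X, y, w):
    """Weighted LS coefficients plus model-based and robust (sandwich)
    variances; w holds (possibly mis-estimated) inverse-variance weights."""
    XtW = X.T * w                                  # same as X.T @ diag(w)
    bread = np.linalg.inv(XtW @ X)
    beta = bread @ XtW @ y
    resid = y - X @ beta
    meat = (XtW * resid**2) @ (w[:, None] * X)     # X' W diag(r^2) W X
    V_model = bread                                # correct if weights are exact
    V_robust = bread @ meat @ bread                # working-covariance sandwich
    return beta, V_model, V_robust

# Hypothetical usage with study effect sizes and estimated variances:
# beta, V_model, V_robust = wls_with_sandwich(X, effect, 1.0 / var_hat)
```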

  15. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.

  16. Design and Analysis of Morpheus Lander Flight Control System

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Yang, Lee; Fritz, Mathew; Nguyen, Louis H.; Johnson, Wyatt R.; Hart, Jeremy J.

    2014-01-01

    The Morpheus Lander is a vertical takeoff and landing test bed vehicle developed to demonstrate the system performance of the Guidance, Navigation and Control (GN&C) system capability for the integrated autonomous landing and hazard avoidance system hardware and software. The Morpheus flight control system design must be robust to various mission profiles. This paper presents a design methodology for employing numerical optimization to develop the Morpheus flight control system. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics and propellant slosh. Under the assumption that the Morpheus time-varying dynamics and control system can be frozen over a short period of time, the flight controllers are designed to stabilize all selected frozen-time control systems in the presence of parametric uncertainty. Both control gains in the inner attitude control loop and guidance gains in the outer position control loop are designed to maximize the vehicle performance while ensuring robustness. The flight control system designs provided herein have been demonstrated to provide stable control systems in both Draper Ares Stability Analysis Tool (ASAT) and the NASA/JSC Trick-based Morpheus time domain simulation.

  17. Performance analysis of robust road sign identification

    NASA Astrophysics Data System (ADS)

    Ali, Nursabillilah M.; Mustafah, Y. M.; Rashid, N. K. A. M.

    2013-12-01

    This study describes a performance analysis of a robust system for road sign identification that incorporates two stages of different algorithms. The proposed algorithms consist of HSV color filtering and PCA techniques in the detection and recognition stages, respectively. The proposed algorithms are able to detect the three standard types of colored signs, namely Red, Yellow and Blue. The hypothesis of the study is that road sign images can be used to detect and identify signs even in the presence of occlusions and rotational changes. PCA is a feature extraction technique that reduces dimensionality. A sign image can be easily recognized and identified by the PCA method, as it has been used in many application areas. Based on the experimental results, the HSV stage is robust in road sign detection, with minimum success rates of 88% and 77% for non-partially and partially occluded images, respectively. Successful recognition rates using PCA were achieved in the range of 94-98%. All classes were recognized successfully at occlusion levels between 5% and 10%.
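    A compact sketch of the two stages as described: an HSV color mask for detection (red is shown; red hue wraps around 0, so two ranges are combined) and a PCA projection for recognition. The threshold values, patch handling, and function names are illustrative assumptions, not those of the study.

```python
import cv2
import numpy as np

def red_sign_mask(bgr):
    """Stage 1: HSV color filtering (threshold values are hypothetical)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    low = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    return cv2.bitwise_or(low, high)

def pca_basis(train_patches, k=20):
    """Stage 2: learn a k-dimensional PCA basis from flattened sign patches."""
    X = train_patches.reshape(len(train_patches), -1).astype(np.float64)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(patch, mean, basis):
    """Feature vector for, e.g., nearest-neighbor sign recognition."""
    return basis @ (patch.ravel().astype(np.float64) - mean)
```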

  18. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data in comparison with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of the other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. For further validation, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR, regardless of Gaussian or non-Gaussian skewed data. However, GPR is expected to be limited in applications to data severely corrupted by outliers owing to its non-robustness. From these implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data such as groundwater level and contaminant concentration.
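    A minimal linear-trend version of the subagging idea is easy to state: fit the trend on many random subsamples drawn without replacement, then aggregate, which yields both a robust estimate and an uncertainty band. This simplified sketch is not the SBR method's full form.

```python
import numpy as np

def subagging_trend(t, y, n_sub=500, frac=0.5, seed=0):
    """Median slope and a 95% band from subsample-aggregated linear fits."""
    rng = np.random.default_rng(seed)
    n = len(t)
    m = max(2, int(frac * n))
    slopes = np.empty(n_sub)
    for b in range(n_sub):
        idx = rng.choice(n, size=m, replace=False)   # subsampling, not bootstrap
        slopes[b] = np.polyfit(t[idx], y[idx], 1)[0]
    return np.median(slopes), np.percentile(slopes, [2.5, 97.5])

# Hypothetical usage with a groundwater-level series:
# slope, band = subagging_trend(days, water_level)
```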

  19. Metabolic robustness in young roots underpins a predictive model of maize hybrid performance in the field.

    PubMed

    de Abreu E Lima, Francisco; Westhues, Matthias; Cuadros-Inostroza, Álvaro; Willmitzer, Lothar; Melchinger, Albrecht E; Nikoloski, Zoran

    2017-04-01

    Heterosis has been extensively exploited for yield gain in maize (Zea mays L.). Here we conducted a comparative metabolomics-based analysis of young roots from in vitro germinating seedlings and from leaves of field-grown plants in a panel of inbred lines from the Dent and Flint heterotic patterns as well as selected F1 hybrids. We found that metabolite levels in hybrids were more robust than in inbred lines. Using state-of-the-art modeling techniques, the most robust metabolites from roots and leaves explained up to 37 and 44% of the variance in the biomass from plants grown in two distinct field trials. In addition, a correlation-based analysis highlighted the trade-off between defense-related metabolites and hybrid performance. Therefore, our findings demonstrated the potential of metabolic profiles from young maize roots grown under tightly controlled conditions to predict hybrid performance in multiple field trials, thus bridging the greenhouse-field gap. © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.

  20. Application of polynomial control to design a robust oscillation-damping controller in a multimachine power system.

    PubMed

    Hasanvand, Hamed; Mozafari, Babak; Arvan, Mohammad R; Amraee, Turaj

    2015-11-01

    This paper addresses the application of a static Var compensator (SVC) to improve the damping of interarea oscillations. The optimal location and size of the SVC are defined using bifurcation and modal analysis to satisfy its primary application. Furthermore, the best input signal for the damping controller is selected using Hankel singular values and right half-plane zeros. The proposed approach aims to design a robust PI controller based on interval plants and Kharitonov's theorem. The objective here is to determine the stability region that attains robust stability, the desired phase margin, gain margin, and bandwidth. The intersection of the resulting stability regions yields the set of kp-ki parameters. In addition, an optimal multiobjective design of the PI controller using a particle swarm optimization (PSO) algorithm is presented. The effectiveness of the suggested controllers in damping local and interarea oscillation modes of a multimachine power system, over a wide range of loading conditions and system configurations, is confirmed through eigenvalue analysis and nonlinear time domain simulation. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
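    The Kharitonov step can be shown concretely: for an interval polynomial, robust Hurwitz stability of the whole family reduces to stability of four extreme polynomials. The sketch below builds the four polynomials and checks their roots; the coefficient bounds are hypothetical. In a PI design such as the one described, the closed-loop coefficients depend affinely on kp and ki, so each candidate gain pair induces one such interval test.

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """Four Kharitonov polynomials for coefficient bounds given in ascending
    order [a0, a1, ..., an]; the min/max pattern repeats with period 4."""
    patterns = {"K1": (0, 0, 1, 1), "K2": (1, 1, 0, 0),
                "K3": (0, 1, 1, 0), "K4": (1, 0, 0, 1)}
    bounds = np.stack([lo, hi])
    return {name: np.array([bounds[pat[i % 4], i] for i in range(len(lo))])
            for name, pat in patterns.items()}

def hurwitz_stable(asc_coeffs):
    """All roots strictly in the open left half-plane."""
    return bool(np.all(np.roots(asc_coeffs[::-1]).real < 0))  # np.roots: descending

lo = np.array([2.0, 3.0, 2.0, 1.0])   # hypothetical interval plant bounds
hi = np.array([3.0, 4.0, 3.0, 1.5])
print({k: hurwitz_stable(v) for k, v in kharitonov_polys(lo, hi).items()})
```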

  1. A Fully Automated and Robust Method to Incorporate Stamping Data in Crash, NVH and Durability Analysis

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Hariharasudhan; Kanthadai, Narayan; Roy, Subir; Beauchesne, Erwan

    2011-08-01

    Crash, NVH (Noise, Vibration, Harshness), and durability analysis are commonly deployed in structural CAE analysis for mechanical design of components especially in the automotive industry. Components manufactured by stamping constitute a major portion of the automotive structure. In CAE analysis they are modeled at a nominal state with uniform thickness and no residual stresses and strains. However, in reality the stamped components have non-uniformly distributed thickness and residual stresses and strains resulting from stamping. It is essential to consider the stamping information in CAE analysis to accurately model the behavior of the sheet metal structures under different loading conditions. Especially with the current emphasis on weight reduction by replacing conventional steels with aluminum and advanced high strength steels it is imperative to avoid over design. Considering this growing need in industry, a highly automated and robust method has been integrated within Altair Hyperworks® to initialize sheet metal components in CAE models with stamping data. This paper demonstrates this new feature and the influence of stamping data for a full car frontal crash analysis.

  2. Neural integrators for decision making: a favorable tradeoff between robustness and sensitivity

    PubMed Central

    Cain, Nicholas; Barreiro, Andrea K.; Shadlen, Michael

    2013-01-01

    A key step in many perceptual decision tasks is the integration of sensory inputs over time, but fundamental questions remain about how this is accomplished in neural circuits. One possibility is to balance decay modes of membranes and synapses with recurrent excitation. To allow integration over long timescales, however, this balance must be exceedingly precise. The need for fine tuning can be overcome via a “robust integrator” mechanism in which momentary inputs must be above a preset limit to be registered by the circuit. The degree of this limiting embodies a tradeoff between sensitivity to the input stream and robustness against parameter mistuning. Here, we analyze the consequences of this tradeoff for decision-making performance. For concreteness, we focus on the well-studied random dot motion discrimination task and constrain stimulus parameters by experimental data. We show that mistuning feedback in an integrator circuit decreases decision performance but that the robust integrator mechanism can limit this loss. Intriguingly, even for perfectly tuned circuits with no immediate need for a robustness mechanism, including one often does not impose a substantial penalty for decision-making performance. The implication is that robust integrators may be well suited to subserve the basic function of evidence integration in many cognitive tasks. We develop these ideas using simulations of coupled neural units and the mathematics of sequential analysis. PMID:23446688
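    The tradeoff can be caricatured in a few lines: integrate noisy momentary evidence, ignore sub-threshold inputs (the registration limit), and, as a stylized stand-in for the underlying discrete-attractor mechanism, leave the state unchanged when nothing registers. All parameter values are hypothetical, and this toy model is far simpler than the paper's coupled-unit simulations.

```python
import numpy as np

rng = np.random.default_rng(7)

def accuracy(mu=0.05, gain_error=0.0, theta=0.0, steps=200, trials=20000):
    """Fraction of correct sign decisions from a (mis)tuned integrator.
    Sub-threshold evidence is not registered and leaves the state frozen."""
    e = mu + rng.normal(size=(trials, steps))        # momentary evidence
    x = np.zeros(trials)
    for t in range(steps):
        registered = np.abs(e[:, t]) >= theta        # registration limit
        x = np.where(registered, x + gain_error * x + e[:, t], x)
    return float(np.mean(x > 0))

print(accuracy())                                # tuned, no limiting
print(accuracy(gain_error=0.02))                 # mistuned feedback hurts
print(accuracy(gain_error=0.02, theta=0.5))      # limiting recovers part of the loss
```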

  3. A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.

    PubMed

    Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C

    2018-05-03

    The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation. © 2018 Society for Risk Analysis.
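    The inverse use of the SSD can be illustrated with a log-normal SSD: estimate location and scale from the log toxicity values and invert the fitted CDF at p. Swapping mean/SD for median/MAD is one simple robust variant in the spirit of the paper; the authors study more refined estimators, and the EC50 values below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def hcp(log10_tox, p=5, robust=True):
    """HCp from a log-normal SSD: the concentration affecting p% of species."""
    x = np.asarray(log10_tox, dtype=float)
    if robust:
        mu = np.median(x)
        sigma = 1.4826 * np.median(np.abs(x - mu))   # MAD, Gaussian-consistent
    else:
        mu, sigma = x.mean(), x.std(ddof=1)
    return 10 ** (mu + norm.ppf(p / 100.0) * sigma)

# Hypothetical species EC50s (mg/L) with one outlying value
ec50 = np.array([1.2, 2.5, 3.1, 4.8, 5.0, 7.9, 12.0, 150.0])
print(hcp(np.log10(ec50), robust=False))   # classical HC5, dragged down by the outlier
print(hcp(np.log10(ec50), robust=True))    # median/MAD version resists it
```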

  4. Robust Crossfeed Design for Hovering Rotorcraft

    NASA Technical Reports Server (NTRS)

    Catapang, David R.

    1993-01-01

    Control law design for rotorcraft fly-by-wire systems normally attempts to decouple angular responses using fixed-gain crossfeeds. This approach can lead to poor decoupling over the frequency range of pilot inputs and increase the load on the feedback loops. In order to improve the decoupling performance, dynamic crossfeeds may be adopted. Moreover, because of the large changes that occur in rotorcraft dynamics due to small changes about the nominal design condition, especially for near-hovering flight, the crossfeed design must be 'robust'. A new low-order matching method is presented here to design robust crossfeed compensators for multi-input, multi-output (MIMO) systems. The technique identifies degrees-of-freedom that can be decoupled using crossfeeds, given an anticipated set of parameter variations for the range of flight conditions of concern. Cross-coupling is then reduced for degrees-of-freedom that can use crossfeed compensation by minimizing off-axis response magnitude average and variance. Results are presented for the analysis of pitch, roll, yaw and heave coupling of the UH-60 Black Hawk helicopter in near-hovering flight. Robust crossfeeds are designed that show significant improvement in decoupling performance and robustness over nominal, single design point, compensators. The design method and results are presented in an easily used graphical format that lends significant physical insight to the design procedure. This plant pre-compensation technique is an appropriate preliminary step to the design of robust feedback control laws for rotorcraft.

  5. Temporal assessment of radiomic features on clinical mammography in a high-risk population

    NASA Astrophysics Data System (ADS)

    Mendel, Kayla R.; Li, Hui; Lan, Li; Chan, Chun-Wai; King, Lauren M.; Tayob, Nabihah; Whitman, Gary; El-Zein, Randa; Bedrosian, Isabelle; Giger, Maryellen L.

    2018-02-01

    Extraction of high-dimensional quantitative data from medical images has become necessary in disease risk assessment, diagnostics and prognostics. Radiomic workflows for mammography typically involve a single medical image for each patient although medical images may exist for multiple imaging exams, especially in screening protocols. Our study takes advantage of the availability of mammograms acquired over multiple years for the prediction of cancer onset. This study included 841 images from 328 patients who developed subsequent mammographic abnormalities, which were confirmed as either cancer (n=173) or non-cancer (n=155) through diagnostic core needle biopsy. Quantitative radiomic analysis was conducted on antecedent FFDMs acquired a year or more prior to diagnostic biopsy. Analysis was limited to the breast contralateral to that in which the abnormality arose. Novel metrics were used to identify robust radiomic features. The most robust features were evaluated in the task of predicting future malignancies on a subset of 72 subjects (23 cancer cases and 49 non-cancer controls) with mammograms over multiple years. Using linear discriminant analysis, the robust radiomic features were merged into predictive signatures by: (i) using features from only the most recent contralateral mammogram, (ii) change in feature values between mammograms, and (iii) ratio of feature values over time, yielding AUCs of 0.57 (SE=0.07), 0.63 (SE=0.06), and 0.66 (SE=0.06), respectively. The AUCs for temporal radiomics (ratio) statistically differed from chance, suggesting that changes in radiomics over time may be critical for risk assessment. Overall, we found that our two-stage process of robustness assessment followed by performance evaluation served well in our investigation on the role of temporal radiomics in risk assessment.

  6. Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.

    PubMed

    Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying

    2016-04-01

    As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as sparsity, have seldom been added to ICA algorithms as constraints. In this study, we proposed a SparseFastICA method that adds source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and made a performance comparison between SparseFastICA, FastICA, and Infomax ICA. Results on the simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for source separation in fMRI data. Both the simulated and real fMRI experimental results showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not differ significantly, SparseFastICA had a faster computation speed than Infomax. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify more accurate brain networks than the FastICA algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. An index-based robust decision making framework for watershed management in a changing climate.

    PubMed

    Kim, Yeonjoo; Chung, Eun-Sung

    2014-03-01

    This study developed an index-based robust decision making framework for watershed management dealing with water quantity and quality issues in a changing climate. It consists of two parts: management alternative development and alternative analysis. The first part, alternative development, consists of six steps: 1) understand the watershed components and processes using the HSPF model, 2) identify the spatial vulnerability ranking using two indices, potential streamflow depletion (PSD) and potential water quality deterioration (PWQD), 3) quantify the residents' preferences on water management demands and calculate the watershed evaluation index, which is the weighted combination of PSD and PWQD, 4) set quantitative targets for water quantity and quality, 5) develop a list of feasible alternatives and 6) eliminate the unacceptable alternatives. The second part, alternative analysis, has three steps: 7) analyze all selected alternatives with a hydrologic simulation model considering various climate change scenarios, 8) quantify the alternative evaluation index, which combines social and hydrologic criteria, utilizing multi-criteria decision analysis methods and 9) prioritize all options based on a minimax regret strategy for robust decision making. This framework addresses the uncertainty inherent in climate models and climate change scenarios by utilizing the minimax regret strategy, a decision making strategy under deep uncertainty, and thus derives a robust prioritization based on the multiple utilities of alternatives from various scenarios. In this study, the proposed procedure was applied to a Korean urban watershed, which has suffered from streamflow depletion and water quality deterioration. Our application shows that the framework provides a useful watershed management tool for incorporating quantitative and qualitative information into the evaluation of various policies with regard to water resource planning and management. Copyright © 2013 Elsevier B.V. All rights reserved.
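    The robust prioritization step (step 9) reduces to a small computation that can be shown directly: build a regret matrix from the alternative evaluation index across scenarios and pick the alternative whose worst-case regret is smallest. The index values below are hypothetical.

```python
import numpy as np

# Hypothetical alternative evaluation index: rows are management
# alternatives, columns are climate change scenarios (higher is better).
index = np.array([
    [0.80, 0.50, 0.60],
    [0.70, 0.70, 0.55],
    [0.60, 0.65, 0.70],
])
regret = index.max(axis=0) - index      # shortfall vs the best per scenario
worst = regret.max(axis=1)              # each alternative's worst-case regret
print(regret)
print("minimax-regret choice: alternative", int(np.argmin(worst)))
```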

  8. A Statistical Analysis of Brain Morphology Using Wild Bootstrapping

    PubMed Central

    Ibrahim, Joseph G.; Tang, Niansheng; Rowe, Daniel B.; Hao, Xuejun; Bansal, Ravi; Peterson, Bradley S.

    2008-01-01

    Methods for the analysis of brain morphology, including voxel-based morphometry and surface-based morphometries, have been used to detect associations between brain structure and covariates of interest, such as diagnosis, severity of disease, age, IQ, and genotype. The statistical analysis of morphometric measures usually involves two statistical procedures: 1) invoking a statistical model at each voxel (or point) on the surface of the brain or brain subregion, followed by mapping test statistics (e.g., t test) or their associated p values at each of those voxels; 2) correcting for the multiple statistical tests conducted across all voxels on the surface of the brain region under investigation. We propose the use of new statistical methods for each of these procedures. We first use a heteroscedastic linear model to test the associations between the morphological measures at each voxel on the surface of the specified subregion (e.g., cortical or subcortical surfaces) and the covariates of interest. Moreover, we develop a robust test procedure that is based on a resampling method, called wild bootstrapping. This procedure assesses the statistical significance of the associations between a measure of given brain structure and the covariates of interest. The value of this robust test procedure lies in its computational simplicity and in its applicability to a wide range of imaging data, including data from both anatomical and functional magnetic resonance imaging (fMRI). Simulation studies demonstrate that this robust test procedure can accurately control the family-wise error rate. We demonstrate the application of this robust test procedure to the detection of statistically significant differences in the morphology of the hippocampus over time across gender groups in a large sample of healthy subjects. PMID:17649909
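    The wild bootstrap itself is compact enough to sketch for a single voxel: fit the model under the null, flip the restricted residuals with Rademacher multipliers, and compare heteroscedasticity-robust t statistics. This generic sketch (using an HC0 sandwich) illustrates the shape of the procedure, not the paper's exact test.

```python
import numpy as np

rng = np.random.default_rng(3)

def wild_bootstrap_pvalue(X, y, j, B=2000):
    """Wild-bootstrap p-value for H0: beta_j = 0 in y = X beta + e with
    heteroscedastic errors, using Rademacher multipliers on null residuals."""
    X0 = np.delete(X, j, axis=1)                   # restricted design under H0
    b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    r0 = y - X0 @ b0
    bread = np.linalg.inv(X.T @ X)

    def t_stat(yy):
        b, *_ = np.linalg.lstsq(X, yy, rcond=None)
        res = yy - X @ b
        V = bread @ (X.T * res**2) @ X @ bread     # HC0 sandwich covariance
        return b[j] / np.sqrt(V[j, j])

    t_obs = t_stat(y)
    flips = rng.choice([-1.0, 1.0], size=(B, len(y)))
    t_null = np.array([t_stat(X0 @ b0 + r0 * s) for s in flips])
    return float(np.mean(np.abs(t_null) >= abs(t_obs)))
```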

  9. Using expert judgments to explore robust alternatives for forest management under climate change.

    PubMed

    McDaniels, Timothy; Mills, Tamsin; Gregory, Robin; Ohlson, Dan

    2012-12-01

    We develop and apply a judgment-based approach to selecting robust alternatives, which are defined here as reasonably likely to achieve objectives, over a range of uncertainties. The intent is to develop an approach that is more practical in terms of data and analysis requirements than current approaches, informed by the literature and experience with probability elicitation and judgmental forecasting. The context involves decisions about managing forest lands that have been severely affected by mountain pine beetles in British Columbia, a pest infestation that is climate-exacerbated. A forest management decision was developed as the basis for the context, objectives, and alternatives for land management actions, to frame and condition the judgments. A wide range of climate forecasts, taken to represent the 10-90% levels on cumulative distributions for future climate, were developed to condition judgments. An elicitation instrument was developed, tested, and revised to serve as the basis for eliciting probabilistic three-point distributions regarding the performance of selected alternatives, over a set of relevant objectives, in the short and long term. The elicitations were conducted in a workshop comprising 14 regional forest management specialists. We employed the concept of stochastic dominance to help identify robust alternatives. We used extensive sensitivity analysis to explore the patterns in the judgments, and also considered the preferred alternatives for each individual expert. The results show that two alternatives that are more flexible than the current policies are judged more likely to perform better than the current alternatives on average in terms of stochastic dominance. The results suggest judgmental approaches to robust decision making deserve greater attention and testing. © 2012 Society for Risk Analysis.
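    The stochastic dominance screen used here is straightforward to make concrete: from each elicited three-point distribution, compare empirical CDFs; alternative A first-order dominates B when its CDF lies everywhere at or below B's. Weighting the 10/50/90 values as 0.25/0.50/0.25 is a common discretization convention assumed here, and the outcome values are hypothetical.

```python
import numpy as np

def fosd(vals_a, probs_a, vals_b, probs_b):
    """First-order stochastic dominance of A over B (higher outcomes better):
    F_A(x) <= F_B(x) everywhere, with strict inequality somewhere."""
    va, pa = np.asarray(vals_a, float), np.asarray(probs_a, float)
    vb, pb = np.asarray(vals_b, float), np.asarray(probs_b, float)
    grid = np.union1d(va, vb)
    Fa = np.array([pa[va <= x].sum() for x in grid])
    Fb = np.array([pb[vb <= x].sum() for x in grid])
    return bool(np.all(Fa <= Fb + 1e-12) and np.any(Fa < Fb - 1e-12))

# Hypothetical elicited 10/50/90 points for one objective
print(fosd([40, 55, 70], [0.25, 0.5, 0.25],    # flexible alternative
           [35, 50, 65], [0.25, 0.5, 0.25]))   # status quo -> True
```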

  10. Real-time PCR machine system modeling and a systematic approach for the robust design of a real-time PCR-on-a-chip system.

    PubMed

    Lee, Da-Sheng

    2010-01-01

    Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, this proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design.

  11. Synthetic biology and regulatory networks: where metabolic systems biology meets control engineering

    PubMed Central

    He, Fei; Murabito, Ettore; Westerhoff, Hans V.

    2016-01-01

    Metabolic pathways can be engineered to maximize the synthesis of various products of interest. With the advent of computational systems biology, this endeavour is usually carried out through in silico theoretical studies with the aim to guide and complement further in vitro and in vivo experimental efforts. Clearly, what counts is the result in vivo, not only in terms of maximal productivity but also robustness against environmental perturbations. Engineering an organism towards an increased production flux, however, often compromises that robustness. In this contribution, we review and investigate how various analytical approaches used in metabolic engineering and synthetic biology are related to concepts developed by systems and control engineering. While trade-offs between production optimality and cellular robustness have already been studied diagnostically and statically, the dynamics also matter. Integration of the dynamic design aspects of control engineering with the more diagnostic aspects of metabolic, hierarchical control and regulation analysis is leading to the new, conceptual and operational framework required for the design of robust and productive dynamic pathways. PMID:27075000

  12. Adaptive integral robust control and application to electromechanical servo systems.

    PubMed

    Deng, Wenxiang; Yao, Jianyong

    2017-03-01

    This paper proposes a continuous adaptive integral robust control with robust integral of the sign of the error (RISE) feedback for a class of uncertain nonlinear systems, in which the RISE feedback gain is adapted online to ensure robustness against disturbances without prior knowledge of the bound of the additive disturbances. In addition, an adaptive compensation integrated with the proposed adaptive RISE feedback term is constructed to further reduce design conservatism when the system is also subject to parametric uncertainties. Lyapunov analysis reveals that the proposed controllers guarantee that the tracking errors converge asymptotically to zero with continuous control efforts. To illustrate the high-performance nature of the developed controllers, numerical simulations are provided. Finally, an application case of an actual electromechanical servo system driven by a motor is also studied, with some specific design considerations, and comparative experimental results are obtained to verify the effectiveness of the proposed controllers. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM.

    PubMed

    Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna

    2014-03-01

    This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM) that trades space for speed of processing to create an intragroup communication approach that is firing rate independent and offers more flexibility in connectivity than cross-bar architectures and 2) a wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable a robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating large-scale spiking neural network architecture. Analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. Through combining STM with MIMO-OFDM techniques, the resulting system offers a flexible and scalable connectivity as well as a power and area efficient solution for the implementation of very large-scale spiking neural architectures in hardware.

  14. Stochastic Control Synthesis of Systems with Structured Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L. (Technical Monitor); Crespo, Luis G.

    2003-01-01

    This paper presents a study on the design of robust controllers by using random variables to model structured uncertainty for both SISO and MIMO feedback systems. Once the parameter uncertainty is prescribed with probability density functions, its effects are propagated through the analysis, leading to stochastic metrics for the system's output. Control designs that aim for satisfactory performance while guaranteeing robust closed-loop stability are attained by solving constrained non-linear optimization problems in the frequency domain. This approach permits one not only to quantify the probability of having unstable and unfavorable responses for a particular control design but also to search for controls while favoring the values of the parameters with a higher chance of occurrence. In this manner, robust optimality is achieved while the characteristic conservatism of conventional robust control methods is eliminated. Examples that admit closed-form expressions for the probabilistic metrics of the output are used to elucidate the nature of the problem at hand and validate the proposed formulations.
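    The probabilistic metric at the heart of this approach can be estimated by Monte Carlo: draw the uncertain parameters from their assumed densities, form the closed-loop characteristic polynomial, and count unstable draws. The plant, controller structure, and densities below are hypothetical stand-ins; a design search would then minimize a performance cost subject to this probability staying below a tolerance.

```python
import numpy as np

rng = np.random.default_rng(5)

def p_unstable(kp, ki, n=5000):
    """P(closed loop unstable) for the hypothetical plant 1/(s^2 + a s + b)
    under PI control C(s) = kp + ki/s, with structured uncertainty
    a ~ N(1, 0.3^2) and b ~ U(0.5, 1.5).
    Characteristic polynomial: s^3 + a s^2 + (b + kp) s + ki."""
    a = rng.normal(1.0, 0.3, n)
    b = rng.uniform(0.5, 1.5, n)
    bad = sum(np.any(np.roots([1.0, ai, bi + kp, ki]).real >= 0.0)
              for ai, bi in zip(a, b))
    return bad / n

print(p_unstable(kp=2.0, ki=0.5))   # stochastic stability metric for one design
```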

  15. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has advanced greatly in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, is learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  16. Robust QRS peak detection by multimodal information fusion of ECG and blood pressure signals.

    PubMed

    Ding, Quan; Bai, Yong; Erol, Yusuf Bugra; Salas-Boni, Rebeca; Zhang, Xiaorong; Hu, Xiao

    2016-11-01

    QRS peak detection is a challenging problem when the ECG signal is corrupted. However, additional physiological signals may also provide information about the QRS position. In this study, we focus on a unique benchmark provided by the PhysioNet/Computing in Cardiology Challenge 2014 and the Physiological Measurement focus issue on robust detection of heart beats in multimodal data, which aimed to explore robust methods for QRS detection in multimodal physiological signals. A dataset of 200 training and 210 testing records is used, where the testing records are hidden and used for evaluating performance only. An information fusion framework for robust QRS detection is proposed by leveraging existing ECG and ABP analysis tools and combining heart beats derived from different sources. Results show that our approach achieves an overall accuracy of 90.94% and 88.66% on the training and testing datasets, respectively. Furthermore, we observe the expected performance at each step of the proposed approach, as evidence of its effectiveness. Discussion of the limitations of our approach is also provided.

  17. Robustness against parametric noise of nonideal holonomic gates

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Aniello, Paolo; Napolitano, Mario; Florio, Giuseppe

    2007-07-01

    Holonomic gates for quantum computation are commonly considered to be robust against certain kinds of parametric noise, the cause of this robustness being the geometric character of the transformation achieved in the adiabatic limit. On the other hand, the effects of decoherence are expected to become more and more relevant when the adiabatic limit is approached. Starting from the system described by Florio [Phys. Rev. A 73, 022327 (2006)], here we discuss the behavior of nonideal holonomic gates at finite operational time, i.e., long before the adiabatic limit is reached. We have considered several models of parametric noise and studied the robustness of finite-time gates. The results obtained suggest that the finite-time gates present some effects of cancellation of the perturbations introduced by the noise which mimic the geometrical cancellation effect of standard holonomic gates. Nevertheless, a careful analysis of the results leads to the conclusion that these effects are related to a dynamical instead of a geometrical feature.

  18. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    PubMed

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  19. Exploiting structure: Introduction and motivation

    NASA Technical Reports Server (NTRS)

    Xu, Zhong Ling

    1993-01-01

    Research activities performed during the period of 29 June 1993 through 31 August 1993 are summarized. Work on the robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest proceeded in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach that reduces the computational burden of checking the robust stability of systems with multilinear uncertainty was found. This technique, called 'stability by linear process,' in effect yields an algorithm. In the analysis direction, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, and we also obtained a result for the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design, and we provide a framework for solving ACS. Finally, copies of the outline of our results are provided in the appendix; an administrative issue is also addressed there.

  20. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed by using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data in MATLAB, where the inventory level must be controlled as closely as possible to a chosen set point. From the results, the robust predictive control model provides the optimal strategy, i.e., the optimal product volume that should be purchased, and the inventory level followed the given set point.
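    The receding-horizon logic can be sketched with a certainty-equivalence simplification (the paper's robust formulation with an additive random parameter is more involved): at each period, optimize the order sequence against mean demand, apply only the first order, observe the realized random demand, and re-plan. All numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

def mpc_order(x0, setpoint, horizon=5, mean_demand=20.0, q=1.0, r=0.05):
    """First move of a predictive controller for x[t+1] = x[t] + u[t] - d[t],
    with the random demand replaced by its mean over the prediction horizon."""
    def cost(u):
        x, total = x0, 0.0
        for uk in u:
            x = x + uk - mean_demand
            total += q * (x - setpoint) ** 2 + r * uk ** 2
        return total
    res = minimize(cost, np.full(horizon, mean_demand),
                   bounds=[(0.0, None)] * horizon)
    return float(res.x[0])                       # apply only the first order

x, setpoint = 50.0, 60.0
for t in range(8):
    u = mpc_order(x, setpoint)
    x = x + u - rng.poisson(20.0)                # realized random demand
    print(f"t={t}  order={u:5.1f}  inventory={x:6.1f}")
```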
